Due to quirks in how I speak, most dictation apps fail me miserably, and I don’t like giving apps a lot of microphone access. I think dictation would be great, but it’s hard to dictate at the speed one can think and type. I’m hopeful for future technologies that help disabled people gain more access, whether that’s sci-fi stuff like Neuralink or web protocols friendlier to screen readers and the like. It seems to me LLMs slot into some of those layers, somewhere. One of the only great uses I get out of them is that they greatly cut down on my typing via code completion + vim. Fewer keystrokes are good for anyone’s hands, and the model seems good at predicting what you want to do rather than what it thinks you should do.
I would love to understand your thinking around not wanting to give apps microphone access while simultaneously being excited about the prospect of giving apps direct access to your literal brain.
If my choice is between two invasive technologies, I’d probably end up choosing the one that works better. I do think it will be done irresponsibly, but if it means retaining access to the world around me, it’s difficult to see any real choice other than slowly becoming locked in one’s own body. I have a degenerative muscle disease and only about 20 years left before my voice muscles leave me, if my heart does not give out first. So, that is my thinking. Right now these tools simply don’t work well enough for me to adopt them and give up that privacy; that’s the real barrier to adoption (for me). If I regress further and they’re all I can reach for, I would continue to try. They’re just very frustrating to use.
I’m truly sorry to hear about the challenges you’re going through, and I genuinely hope that, with each coming year, improvements in the technological landscape grant you more access to the world than is taken away.