What to do when you're feeling AI anxiety as a tech writer (passo.uno)
1 point by theletterf on Feb 12, 2024 | 4 comments



This article appears to assume LLMs will not evolve beyond what they are now. Some basic extrapolation will show that many of the current limitations can and almost certainly will be overcome.

I do agree with the main premise: for now we definitely still need humans at the wheel. We just need a bit more time to add stuff like live learning, persistent overarching priority sorting and curiosity towards potentially beneficial information.


Author here: Yeah, for now that's indeed the case. I don't think things will substantially change with the improvements you mention. I'm genuinely interested in knowing which of the points, in your opinion, would no longer be valid, though.


First off, I am not a technical writer and have little experience in writing documentation. Second, I'm a big fan of all transformer-based AI developments, so you will see a pretty strong bias in my arguments. That being said:

Humans must feed the machine.

This is a temporary phase. It will be difficult, but as soon as we figure out how to create live-learning models, they will begin to suck up information (and behavioral styles) at a pace that is currently hard to imagine.

Once a model has been involved in the full creation process of a product, it should be able to spit out the full documentation in minutes, in many cases better than documents we have now, because there will not be any margin for human error.

Saying "you can be one of them" to comfort human writers honestly feels like telling them they can be a battery in the Matrix, until it has completely absorbed all of their added value. You'll need to do better than be a resource for training data. Original thought will become even more valuable than grunt work soon.

Liability is for humans.

Until it isn't. When models are good enough, companies will jump on the "AI did it" escape route as soon as they can get away with it. It's much easier to deflect blame onto a piece of data that you can claim to "update and fix" if something comes out wrong.

Besides that, checking any documentation with a paranoid level of "care" for any potential human harm should be trivial a few models down the road. It's just a setting.

Context matters more than content.

Admittedly, the "uniquely human capacity to take a step back and see the bigger picture" is a big one, but it will eventually be mitigated to a large extent once LLMs learn how to absorb internal information (all the data available inside a company or system) and external information (the internet, requests for information made to relevant parties). Proofreading will still be required, but it is a lot less work than writing.

One cannot augment what isn’t there.

Again, this is just a temporary phase. Current models rely on high-level language skills from the user to trigger higher-quality output. As someone who is hobby-developing a system that uses a few tricks to force improved output quality by injecting input-quality upgrades, I can tell you this is solvable.
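
To make that concrete, one way to read "injecting input-quality upgrades" is a two-stage pass in which the model first rewrites a rough request into a more precise one and then answers the rewritten version. The sketch below is only an illustration of that idea, not the commenter's actual system; call_model is a hypothetical placeholder for whatever LLM client you use.

    # A minimal sketch of input-quality injection: upgrade the prompt first,
    # then answer the upgraded prompt. Purely illustrative.

    def call_model(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM call (API request, local model, etc.).
        raise NotImplementedError("wire this up to your model of choice")

    REWRITE_TEMPLATE = (
        "Rewrite the following request so it is specific and unambiguous, and "
        "states the audience, format, and constraints explicitly. Return only "
        "the rewritten request.\n\nRequest: {request}"
    )

    def ask_with_upgraded_input(rough_request: str) -> str:
        # Stage 1: have the model upgrade the quality of the input itself.
        improved_request = call_model(REWRITE_TEMPLATE.format(request=rough_request))
        # Stage 2: answer the upgraded request instead of the original one.
        return call_model(improved_request)

The point of the pattern is that the burden of phrasing a good question shifts from the writer to the tooling around the model.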

The quality of the output you get is not primarily a function of the quality of your questions and inputs; it's a function of the model, the environment hosting it, and, to a continuously decreasing degree, the quality of your input.

Currently the models and environments are in an early stage of development. I'd argue that they already allow any writer to be more efficient. Coming improvements will almost certainly allow writers to create better output than they could on their own as well. Just look at the fake TikTok singers to get a feel for what that will be like [1]; experts will still know what is going on, especially in the early stages, but many of the uninformed will not.

"Questions matter more than answers"

That's just an interface problem, solved as soon as all the required information is incorporated.

To clarify, I think we are far from fully replacing humans in most jobs. But I also believe we are close to eliminating a large share of many of them, because in my opinion up to 80% of the workforce could be replaced by the 20% that embraces working with AI.

In short, I'd argue that yes, especially as a technical writer you should be worried, and you should definitely get some experience in working *with* language models asap, which is the same advice I give to coders and visual artists. Reinitiate shivering. You have some time, but eventually you either assimilate or go look for a different job.

To end on a positive note, I also believe being able to outsource so many essentially menial tasks will free up space for more interesting work for all of us.

[1] https://youtu.be/7ZWcbQys6sw


Thanks. I guess you'd probably agree more with a previous post I wrote here: https://passo.uno/openai-tech-writing-howto/

Technical writing is not just about writing. When done well, it requires extracting information from humans, managing projects, etc. I wrote about that here: https://passo.uno/hiring-tech-writers-chatgpt/

I can't really say I share your optimism (or pessimism) with regard to the future. Training (shepherding) AI will still be necessary, as will using it, because it lacks initiative. I thought about companies blaming AIs for issues, but they won't get away with that, and you know it.

I think perhaps 20% of jobs will go away as 80% of workers take up AI augmentation. Then again, it's just a guesstimate. I'm still not shivering.



