> comprehensive documentation

Documentation is an interesting use case. There are various kinds of documentation (reference, tutorial, architecture, etc.) and LLMs might be useful for things like

- repetitive formatting and summarization of APIs for reference

- tutorials, which repeat the same information verbosely in an additive, logical sequence (though a human would probably do this better)

- sample code (though human-written would probably be better)

The tasks that I expect might work well involve repetitive reformatting, repetitive expansion, and reduction.
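For instance, the reference-formatting case can be as mechanical as feeding extracted signatures through a fixed template prompt and post-editing the output. A rough sketch in TypeScript, assuming the OpenAI Node SDK and an OPENAI_API_KEY in the environment; the model name and prompt are placeholders, not recommendations:

    // gen-reference-stubs.ts: turn extracted signatures into uniformly formatted reference stubs
    import OpenAI from "openai";

    const client = new OpenAI(); // picks up OPENAI_API_KEY from the environment

    async function referenceStub(signature: string): Promise<string> {
      const res = await client.chat.completions.create({
        model: "gpt-4o-mini", // placeholder model name
        messages: [
          {
            role: "system",
            content:
              "Write a reference entry for the given function signature with exactly these " +
              "sections: Summary, Parameters, Returns, Errors. Do not invent behaviour; mark " +
              "anything not inferable from the signature as TODO.",
          },
          { role: "user", content: signature },
        ],
      });
      return res.choices[0].message.content ?? "";
    }

    // Every stub still needs a human pass, but the formatting drudgery is gone.
    referenceStub("export function parseConfig(path: string, opts?: { strict?: boolean }): Config")
      .then(console.log);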

I think they also might be useful for systems analysis, boiling down a large code base into various kinds of summaries and diagrams to describe data flow, computational structure, signaling, etc.
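The "boiling down" doesn't have to start with the model, either: a dumb script can produce the condensed view (say, an import graph as a Mermaid diagram) that then goes into the docs directly, or into the model's context for a prose summary. A rough sketch (TypeScript/Node, relative imports only; paths and heuristics are illustrative):

    // module-graph.ts: emit a crude import graph of a TypeScript source tree as Mermaid
    import { readdirSync, readFileSync } from "node:fs";
    import { join, relative } from "node:path";

    function tsFiles(dir: string): string[] {
      return readdirSync(dir, { withFileTypes: true }).flatMap((e) =>
        e.isDirectory() ? tsFiles(join(dir, e.name))
        : e.name.endsWith(".ts") ? [join(dir, e.name)]
        : []
      );
    }

    const root = process.argv[2] ?? "src";
    const id = (s: string) => s.replace(/[^A-Za-z0-9]/g, "_"); // Mermaid-safe node ids
    const edges = new Set<string>();
    for (const file of tsFiles(root)) {
      const from = relative(root, file).replace(/\.ts$/, "");
      // only relative imports: import ... from "./foo" or "../bar/baz"
      for (const m of readFileSync(file, "utf8").matchAll(/from\s+["'](\.[^"']+)["']/g)) {
        edges.add(`  ${id(from)}["${from}"] --> ${id(m[1])}["${m[1]}"]`);
      }
    }
    console.log(["graph TD", ...edges].join("\n"));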

Still, there is probably no substitute for a Caroline Rose[1] type tech writer who carefully thinks about each API call and uses that understanding to identify design flaws.

[1] https://folklore.org/Inside_Macintosh.html?sort=date

Yes, but none of the current LLMs are even remotely useful for that kind of work on even something moderately complex. I have a 2k LOC project that no LLM even "understands".* They can't grasp what it is: a mostly React-compatible implementation of "hooks" meant for use in a non-DOM application. Every code assistant thinks it's a project using React.
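To give a sense of the shape (a minimal sketch, not the real project's code), the core is presumably something like a hook store plus a re-run scheduler, with no React and no DOM anywhere; which is probably exactly why assistants pattern-match it to React:

    // React-style hooks driving an arbitrary re-run loop: no React, no DOM
    type Cell = { value: unknown };
    const cells: Cell[] = [];
    let cursor = 0;
    let current: () => void = () => {};

    export function useState<T>(initial: T): [T, (next: T) => void] {
      const i = cursor++;
      if (cells[i] === undefined) cells[i] = { value: initial };
      const set = (next: T) => { cells[i].value = next; schedule(); };
      return [cells[i].value as T, set];
    }

    function schedule() {
      // re-run whatever "component" function is registered: a game tick, a CLI frame, ...
      queueMicrotask(() => { cursor = 0; current(); });
    }

    export function run(fn: () => void) { current = fn; cursor = 0; fn(); }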

Any documentation they write at best restates what is immediately obvious from the surrounding code (useless: I need it to explain the why), or is some hallucination that pretends the project is a React app.

To their credit, they've slowly gotten better now that a lot of documentation already exists, but that was me doing the work for them. What I needed was for them to understand the project from the existing code and then write the documentation for me.

Though I guess once we're at the point where AI is that good, we won't need to write any documentation anymore, since every dev can just generate it for themselves with their favorite AI, in whatever form they prefer to consume it.

* They'll pretend they understand by re-stating what is written in the README, then proceed to produce nonsense.


I've found "Claude 3.7 Sonnet (Thinking)" to be pretty good at moderately complex code bases, after going through the effort to get it to be thorough.

Without that effort it's a useless sycophant and functionally extremely lazy (i.e. it takes shortcuts all the time).

Don't suppose you've tried that particular model, after getting it to be thorough?


Delivering a library with an LLM to explain the API and idiomatic usage seems like an interesting use case.
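A thin version of that could just ship the docs with a helper that stuffs them into the model's context at question time. Rough sketch, assuming the Anthropic TypeScript SDK, a hypothetical bundled API.md, and a placeholder model id:

    // ask-docs.ts: shipped with the library; answers usage questions from the bundled docs only
    import Anthropic from "@anthropic-ai/sdk";
    import { readFileSync } from "node:fs";

    const docs = readFileSync(new URL("./API.md", import.meta.url), "utf8"); // hypothetical bundled docs
    const client = new Anthropic(); // picks up ANTHROPIC_API_KEY from the environment

    export async function ask(question: string): Promise<string> {
      const msg = await client.messages.create({
        model: "claude-3-7-sonnet-latest", // placeholder model id
        max_tokens: 1024,
        system:
          "Answer questions about this library using only the documentation below. " +
          "If something is not documented, say so.\n\n" + docs,
        messages: [{ role: "user", content: question }],
      });
      const block = msg.content[0];
      return block.type === "text" ? block.text : "";
    }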


