
To get the most out of them you have to provide context. Treat these models like an eager-beaver junior engineer who wants to jump in and write code without asking questions. Force it to ask questions (e.g. “do not write code yet; please restate my requirements to make sure we are in alignment. Are there any extra bits of context or information that would help? I will tell you when to write code”).

If your model / chat app has the ability to always inject some kind of pre-prompt, make sure to add something like “please do not jump to writing code. If this was a coding interview and you jumped to writing code without asking questions and clarifying requirements, you’d fail”.
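If your tooling doesn’t have a built-in pre-prompt setting, you can do the same thing yourself when calling a chat API directly. A minimal sketch with an OpenAI-style messages list (the exact wording of the pre-prompt is just a placeholder, not a recommended incantation):

```python
# Hypothetical pre-prompt that discourages the model from jumping straight to code.
PRE_PROMPT = (
    "Please do not jump to writing code. "
    "First restate my requirements and ask any clarifying questions. "
    "I will tell you when to write code."
)

def build_messages(user_request: str) -> list[dict]:
    """Prepend the pre-prompt as a system message to every new chat."""
    return [
        {"role": "system", "content": PRE_PROMPT},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Add pagination to the /users endpoint.")
```

The resulting `messages` list is what you would pass as the conversation to whatever chat-completion endpoint you use; the point is simply that the system message rides along in every fresh session, since the model forgets it otherwise.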

At the top of all your source files, include a comment with the file name and path. If you have a project on one of these services, add an artifact that is the directory tree (`tree --gitignore` is my go-to). This helps “unaided” chats get a sense of what documents they are looking at.
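Adding those path comments by hand gets tedious, so it’s easy to script. A rough sketch (the helper name `ensure_path_header` is mine, and it assumes `#`-style comments; adjust for other languages):

```python
from pathlib import Path

def ensure_path_header(root: Path, file: Path) -> str:
    """Return the file's text with a '# relative/path' comment as the first line.

    If the header is already present, the text is returned unchanged,
    so the function is safe to run repeatedly.
    """
    rel = file.relative_to(root).as_posix()
    header = f"# {rel}"
    text = file.read_text()
    if text.splitlines()[:1] == [header]:
        return text  # header already present
    return f"{header}\n{text}"
```

You’d loop this over the files you plan to paste into a chat; the model then sees `# pkg/mod.py` at the top of each snippet and can relate it back to the directory-tree artifact.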

And also, it’s a professional bullshitter, so don’t trust it with large-scale code changes that rely on some language / library feature you don’t have personal experience with. It can send you down a path where the entire assumption that something was possible turns out to be false.

Does it seem like a lot of work? Yes. Am I actually more productive with the tool than without? Probably. But it sure as shit isn’t “free” in terms of time spent providing context. I think the more I use these models, the more I get a sense of what they are good at and what is going to be a waste of time.

Long story short, prompting is everything. These things aren’t mind readers (and worse, they forget everything in each new session).




You are right, but doing all that is incredibly cumbersome, at least to some people, which is why they don’t like working with LLMs.


That was one of the themes of my article: LLMs are power-user tools, mis-sold as "easy to use". To get great results out of them you need to invest a whole lot of under-documented and under-appreciated effort. https://simonwillison.net/2024/Dec/31/llms-in-2024/#llms-som...


It’s not just that you need to be a power user (I certainly am), you also need to be fine with nondeterminism and typing a lot of prose, instead of doing everything with keyboard shortcuts and CLI commands, with reproducible outcomes. It’s a different mode of operation and interaction, requiring a different predisposition to some degree.


Exactly! I don’t like talking or writing or explaining.

My mind generally uses language as little as possible, I have no inner monologue running in the background.

Greatly prefer something deterministic to random bs popping up without the ability to recognize it.

I don’t like LLMs but sometimes use them as autocomplete or to generate words, like a template for a letter or boilerplate scripts, never for actual information (à la Google).


Unless you can type faster than you can talk (which some people can), stop typing and start dictating. aider has a /voice command for a reason.

I don't use it exclusively, but damn does it help in the right places.


Can you elaborate, or give some examples? I am having trouble imagining in which situations that would be useful because I tend to put a lot of thought into defining the right prompt before sending it over.



