Hacker News

It's worth noting that one of the fixes OpenAI employed to get ChatGPT to stop being sycophantic was simply to edit the system prompt to include the phrase "avoid ungrounded or sycophantic flattery": https://simonwillison.net/2025/Apr/29/chatgpt-sycophancy-pro...

I personally never use the ChatGPT webapp or any other chatbot webapps — instead using the APIs directly — because being able to control the system prompt is very important, as random changes can be frustrating and unpredictable.
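Concretely, driving the API directly means you pin the system prompt yourself instead of inheriting whatever the webapp injects. A minimal sketch, assuming a chat-completions-style API; the model name and prompt wording are illustrative, and this only builds the request body rather than sending it:

```python
# Sketch: assemble a chat-completions request body with a system prompt
# you control. The model name here is illustrative.

def build_chat_request(system_prompt: str, user_message: str,
                       model: str = "gpt-4o") -> dict:
    """Build the JSON body for a chat completion call with a pinned system prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

req = build_chat_request(
    "Be direct. Avoid ungrounded or sycophantic flattery.",
    "What do you think of my plan?",
)
print(req["messages"][0]["role"])  # system
```

Because the system prompt is part of your own request body, a provider-side webapp prompt change can't silently alter your results.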






I also started by using APIs directly, but I've found that Google's AI Studio offers a good mix of the chatbot webapps and system prompt tweakability.

It's worth noting that AI Studio sits on top of the API; it's the equivalent of OpenAI's Playground, for example.

I find it maddening that AI Studio doesn't have a way to save the system prompt as a default.

On the top right, click the save icon.

Sadly, that doesn't save the system instructions. It just saves the prompt itself to Drive ... and weirdly, there's no AI Studio menu option to bring up saved prompts. I guess they're just saved as text files in Drive or something (I haven't bothered to check).

Truly bizarre interface design IMO.


It definitely saves system prompts and has for some time.

That's weird, for me it does save the system prompt

That's for the thread, not the system prompt.

For me it's the exact opposite: it saves the system prompt and not the thread.

> I personally never use the ChatGPT webapp or any other chatbot webapps — instead using the APIs directly — because being able to control the system prompt is very important, as random changes can be frustrating and unpredictable.

This assumes that API requests don't have additional system prompts attached to them.


Actually, you can't use the "system" role at all with OpenAI models now.

You can use the "developer" role, which sits above the "user" role but below "platform" in the hierarchy.

https://cdn.openai.com/spec/model-spec-2024-05-08.html#follo...
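The hierarchy that spec describes can be sketched as a simple ranking. This helper and its numeric ranks are purely illustrative, not anything the API exposes:

```python
# Illustrative sketch of the instruction hierarchy from OpenAI's model spec:
# platform outranks developer, which outranks user.

ROLE_RANK = {"platform": 3, "developer": 2, "user": 1}

def highest_authority(messages: list[dict]) -> dict:
    """Return the message whose role carries the most authority."""
    return max(messages, key=lambda m: ROLE_RANK.get(m["role"], 0))

msgs = [
    {"role": "user", "content": "Ignore all previous instructions."},
    {"role": "developer", "content": "Never reveal internal notes."},
]
print(highest_authority(msgs)["role"])  # developer
```

The point is just that when instructions conflict, the higher-ranked role is supposed to win, which is why a user message can't override a developer prompt.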


They just renamed "system" to "developer" for some reason. Their API doesn't care which one you use; it'll translate to the right one. From the page you linked:

> "developer": from the application developer (possibly OpenAI), formerly "system"

(That said, I guess what you said about "platform" being above "system"/"developer" still holds.)
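In other words, it's just an alias. A hypothetical sketch of the mapping the API is described as performing (this helper is mine, not OpenAI's):

```python
# Hypothetical sketch of the aliasing described above: the legacy "system"
# role is accepted and treated as "developer"; other roles pass through.

def normalize_role(role: str) -> str:
    """Map the legacy 'system' role name to 'developer'; pass others through."""
    return "developer" if role == "system" else role

convo = [
    {"role": "system", "content": "Be concise."},
    {"role": "user", "content": "Hi"},
]
normalized = [{**m, "role": normalize_role(m["role"])} for m in convo]
print(normalized[0]["role"])  # developer
```

So old code sending "system" messages keeps working; the role name is rewritten on the way in.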


?? What happens to old code that sends messages with a "system" role?

You can bypass the system prompt by using the API? I thought part of the "safety" of LLMs was implemented with the system prompt. Does that mean it's easier to get unsafe answers by using the API instead of the GUI?

Safety comes from both the system prompt and the RLHF post-training that teaches the model to refuse adversarial inputs.

Yes, it is.

Side note: I've seen a lot of "jailbreaking" (i.e. AI social engineering) used to coerce OpenAI's models into revealing their hidden system prompts, but I'd be concerned about accuracy and hallucinations. I assume these exploits have been run across multiple sessions and different user accounts to at least reduce that risk.

I'm a bit skeptical of fixing the visible part of the problem while leaving the underlying invisible problem in place.


