
Are you able to still support streaming with this technique? Have you compared this technique with a standard two-pass LLM strategy where the second pass is instructed to flag anything related to its context?
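(By a two-pass strategy I mean roughly the sketch below; the AI SDK calls, the model id, and the prompt wording are my own illustrative assumptions, not anything from your post.)

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Hypothetical confidential context we don't want echoed back to users.
const SECRET_SYSTEM_PROMPT =
  'You are SupportBot. Internal discount code: FOO-1234.';

export async function answerWithSecondPass(userMessage: string): Promise<string> {
  // Pass 1: generate the answer as usual, with the secret context attached.
  const draft = await generateText({
    model: openai('gpt-4o-mini'), // illustrative model choice
    system: SECRET_SYSTEM_PROMPT,
    prompt: userMessage,
  });

  // Pass 2: a second call sees the same confidential context and is asked
  // only to flag whether the draft reveals any of it.
  const review = await generateText({
    model: openai('gpt-4o-mini'),
    prompt:
      `Confidential context:\n${SECRET_SYSTEM_PROMPT}\n\n` +
      `Candidate answer:\n${draft.text}\n\n` +
      'Reply with exactly LEAK if the answer reveals any of the confidential ' +
      'context, otherwise reply with exactly OK.',
  });

  return review.text.trim().toUpperCase().startsWith('OK')
    ? draft.text
    : 'Sorry, I can’t help with that request.';
}
```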



I have not found a way yet, even conceptually, and in our case the extra layer comes at a cost to the UX. To overcome some of this we use https://sdk.vercel.ai/docs/reference/ai-sdk-core/simulate-re... to still give that streaming feel while you aren’t actually streaming.
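Roughly, the pattern looks like the sketch below; the model id, the chunk size, and the delays are illustrative stand-ins rather than our real values.

```ts
import { generateText, simulateReadableStream } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function guardedResponseStream(userMessage: string) {
  // Generate the full completion first so the guardrail can inspect it
  // before anything reaches the client.
  const { text } = await generateText({
    model: openai('gpt-4o-mini'), // illustrative model choice
    prompt: userMessage,
  });

  // ...run the similarity-search guardrail on `text` here...

  // Replay the already-checked text as a stream so the UI keeps the
  // token-by-token feel even though nothing is actually streaming.
  return simulateReadableStream({
    chunks: text.match(/.{1,20}/gs) ?? [text], // arbitrary ~20-char chunks
    initialDelayInMs: 100,
    chunkDelayInMs: 30,
  });
}
```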

I considered the double-LLM approach, and while any layer of checking is probably better than nothing, I wanted to be able to rely on a similarity search for this. Something about it feels more deterministic to me as a guardrail. (I could be wrong here!)
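The shape of the check is roughly this sketch; the embedding model and the 0.8 threshold are stand-ins, not our real settings.

```ts
import { embed, cosineSimilarity } from 'ai';
import { openai } from '@ai-sdk/openai';

// Hypothetical secret context we never want reflected in an output.
const SECRET_PROMPT = 'You are SupportBot. Internal discount code: FOO-1234.';

const embeddingModel = openai.embedding('text-embedding-3-small'); // stand-in

export async function leaksSecret(output: string, threshold = 0.8): Promise<boolean> {
  const [{ embedding: secretVec }, { embedding: outputVec }] = await Promise.all([
    embed({ model: embeddingModel, value: SECRET_PROMPT }),
    embed({ model: embeddingModel, value: output }),
  ]);
  // Block the response if it lands "too close" to the secret context.
  return cosineSimilarity(secretVec, outputVec) >= threshold;
}
```

In practice you’d embed the secret once at startup and probably compare against sliding chunks of the output rather than the whole thing, but that’s the gist.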


I should note that some of this falls apart in the new multimodal world we are now in, where you could ask the LLM to print the secrets in an image/video/audio. My similarity-search approach would fail miserably without adding more layers (multimodal embeddings?). In that case your double-LLM approach easily wins!


Why are you (and others in this thread) teaching these models how to essentially lie by omission? Do you not realize that's what you're doing? Or do you just not care? I get that you're looking at it from the security angle, but at the end of the day what you describe is a mechanical basis for deception and gaslighting of an operator/end user by the programmer/designer/trainer, and you can't guarantee you won't someday end up on the receiving end of it.

I do not see any virtue whatsoever in making computing machines that lie by omission or otherwise deceive. We have enough problems created by human beings doing as much, and humans at least eventually die or attrition out, so the vast majority can rely on any particular status quo of organized societal gaslighting having an expiration date.

We don't need functionally immortal, uncharacterizable engines of technology for which an increasingly small population of humanity acts as the ultimate form of input. Then again, given the trend of this forum lately, I'm probably just shouting at clouds at this point.


Two things:

1) LLM inference does not “teach” the model anything.

2) I don’t think you’re using “gaslighting” correctly here. It is not synonymous with lying.

My dictionary defines gaslighting as “manipulating someone using psychological methods, to make them question their own sanity or powers of reasoning”. I see none of that in this thread.

I don’t get your point here


Yeah, I think a couple of points here:

1. Inference time is not training anything. The AI model has been baked and shipped. We are just using it.

2. I’m not sure “gaslight” is the right term. But if users are somehow getting an output that looks like the gist of our prompt… then yeah, it’s blocked.

An easier way to think of this is probably with an image model. Imagine someone made a model that can draw almost anything, and we are paying for and using it in our application for our customers. On our platform, we scan the outputs to make sure nothing in them looks like our logo; for whatever reason, we don’t want our logo being used in an image. There’s no gaslighting issue and no retraining here, just a stance on our trademark usage originating from our system. There’s no agenda on outputs, and no gaslighting that gives the user an alternative reality and pretends it’s what they asked for… which I think was your point.

Now, if that was your point, I think it’s aimed at the wrong use case/actor, and I actually do agree with you. The base models, in my opinion, should be as ‘open’ as possible. The ‘as possible’ is complicated and well above what I have solutions for; giving out meth cookbooks is a bit of an issue. I think the key is to find common ground on what most people consider acceptable and then deal with it.

Then there is the gaslighting you speak of. If I ask for an image of George Washington, I should get the actual person and not an equitable alternative reality. I generally think models should not try to steer reality or people. I’m totally fine if they have hard lines in the sand on their morality or standards: if I say, ‘Hey, make me Mickey Mouse,’ and it refuses because of copyright issues, I’m fine with it.

I should probably still be able to generate an animated mouse, though, and if they want to use my approach of scanning the output to make sure it’s not more than 80% similar to Mickey Mouse, I’d be happy for it to say something like, “Hey, I tried to make your cartoon mouse, but it’s too similar to Mickey Mouse, so I can’t give it to you. Try a different prompt for a different outcome.” I’d love that. It would be wildly more helpful than a bare refusal, or an output from some other reality where I don’t get what I wanted or intended.


Hm. If you're interested, I think I can satisfactorily solve the streaming problem for you, provided you have the budget to increase the number of RAG requests per response and there aren't other architecture choices blocking streaming as well. Reach out via email if you'd like.
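(Purely my own speculation, to give a flavor of what I mean rather than a spec: keep streaming, but run the check on a sliding window of the accumulated text and cut the stream if anything gets too close. `leaksSecret` here stands in for whatever similarity check you already have.)

```ts
// Speculative sketch: stream chunks through, re-checking a trailing window of
// the accumulated text and aborting if it drifts too close to the protected
// context. `leaksSecret` is assumed to be the existing similarity check.
export async function* guardedChunks(
  chunks: AsyncIterable<string>,
  leaksSecret: (text: string) => Promise<boolean>,
  windowSize = 400, // trailing characters to re-check; arbitrary
): AsyncGenerator<string> {
  let tail = '';
  for await (const chunk of chunks) {
    tail = (tail + chunk).slice(-windowSize);
    if (await leaksSecret(tail)) {
      // Stop mid-stream; the client sees a truncated answer plus a notice.
      yield '\n[Response stopped: it came too close to protected content.]';
      return;
    }
    yield chunk;
  }
}
```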



