
I've been playing with o1 on known kernel LPEs (a drum I've been beating is how good OpenAI's models are with Linux kernel stuff, which I assume is because there's such a wealth of context to pull from places like LKML) and it's been very hit-or-miss. Its results are actually pretty SAST†-like: some handwavy general concerns up front, with extra prompting needed to get to the real stuff.

The training datasets here also seem pretty small, by comparison? "Hundreds of closed source projects we own"?

It'd be interesting to see if it works well. This is an easy product to prove: just generate a bunch of CVEs from open source code.

† SAST is enterprise security dork code for "security linter"




Unfortunately, I realized the sentence reads weirdly. It's meant to say we use hundreds of repositories: closed-source projects we own + open-source projects that are vulnerable by design + other open-source projects. I've updated the language in the post.

It's very true. SAST is really enterprise security dork code for "security linter"! I might start using that with some of our developer-facing content.

We recently launched a project that combines LLMs with static code analysis to detect more sophisticated business-logic and code-logic findings, i.e. more of the real stuff. We wanted to follow industry naming enough to feel familiar while still differentiating this category, so we called it BLAST (Business Logic Application Security Testing).
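Not our actual implementation, but the general shape of the pattern is roughly this: take the raw findings a conventional SAST pass already produces, then make a second pass where a model judges whether the surrounding business logic actually makes the finding reachable. Everything below (the JSON keys, call_llm) is a stand-in, not a real API:

    import json

    def load_sast_findings(path: str) -> list[dict]:
        # Load findings exported by whatever SAST tool you already run
        # (most can emit JSON). The "results"/"check_id" keys below are
        # illustrative, not any specific tool's schema.
        with open(path) as f:
            return json.load(f).get("results", [])

    def call_llm(prompt: str) -> str:
        # Stand-in for whichever model endpoint you use; not a real API.
        raise NotImplementedError

    def triage(finding: dict, surrounding_code: str) -> str:
        # Second pass: ask the model whether the application's logic
        # actually lets hostile input reach the flagged code, which is
        # the part a pattern-matching rule can't judge on its own.
        prompt = (
            "Static-analysis finding under review.\n"
            f"Rule: {finding.get('check_id')}\n"
            f"Code context:\n{surrounding_code}\n\n"
            "Given the surrounding business logic, is this reachable with "
            "attacker-controlled input? Answer 'real' or 'noise', then explain."
        )
        return call_llm(prompt)

The static pass keeps the search space small and line-accurate; the model only has to answer the reachability/intent question it's comparatively good at.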


Cross-file flow analysis is linting?
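To make that concrete, here's a toy example (made-up file names and code, nothing from any particular product): a per-file linter sees nothing suspicious in either file, because the user-controlled input and the dangerous sink never meet in the same file. Only an analysis that follows the call across files connects them. A rough sketch of that cross-file check:

    import ast

    # Two toy "files". Neither looks wrong on its own: handler.py never touches
    # a shell, and shell.py has no idea its argument is user input.
    HANDLER_PY = """
    from shell import run_command

    def handle(request):
        run_command(request["cmd"])  # user-controlled value leaves this file
    """

    SHELL_PY = """
    import os

    def run_command(cmd):
        os.system(cmd)  # dangerous sink; nothing here says cmd is hostile
    """

    def callers_of(source: str, callee: str) -> set[str]:
        # Names of functions in `source` whose body calls `callee`; matches both
        # bare names (run_command) and attribute calls (os.system).
        callers = set()
        for fn in ast.walk(ast.parse(source)):
            if not isinstance(fn, ast.FunctionDef):
                continue
            for node in ast.walk(fn):
                if isinstance(node, ast.Call):
                    f = node.func
                    name = f.id if isinstance(f, ast.Name) else getattr(f, "attr", None)
                    if name == callee:
                        callers.add(fn.name)
        return callers

    # Step 1: which functions in shell.py wrap the raw sink?      -> {"run_command"}
    # Step 2: which functions in handler.py feed those wrappers?  -> {"handle"}
    for wrapper in callers_of(SHELL_PY, "system"):
        for entry in callers_of(HANDLER_PY, wrapper):
            print(f"cross-file flow: {entry}() -> {wrapper}() -> os.system")

Real SAST engines do this with proper interprocedural data-flow, aliasing, and sanitizer modeling, which is well past what "linter" usually implies.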



