We fine-tuned an LLM to triage and fix insecure code (corgea.com)
75 points by asadeddin 8 months ago | 63 comments



I've been playing with o1 on known kernel LPEs (a drum I've been beating is how good OpenAI's models are with Linux kernel stuff, which I assume is because there is such a wealth of context to pull from places like LKML) and it's been very hit-or-miss. In fact, it's actually pretty SAST†-like in its results: some handwavy general concerns, but needs extra prompting for the real stuff.

The training datasets here also seem pretty small, by comparison? "Hundreds of closed source projects we own"?

It'd be interesting to see if it works well. This is an easy product to prove: just generate a bunch of CVEs from open source code.

† SAST is enterprise security dork code for "security linter"


Unfortunately, I realized the sentence reads weirdly. It's meant to say we use hundreds of repositories: closed-source projects we own + open-source projects that are vulnerable by design + other open-source projects. I've updated the language in the post.

It's very true. SAST is really enterprise security dork code for "security linter"! I might start using that with some of our developer-facing content.

We launched a recent project that combines LLMs + static code analysis to detect more sophisticated business- and code-logic findings, i.e. more of the real stuff. We wanted to stay close to industry naming to create familiarity while still differentiating, so we called it BLAST (Business Logic Application Security Testing).


Cross-file flow analysis is linting?


I'm Ahmad, the founder of Corgea. We're building an AI AppSec engineer to help developers automatically triage and fix insecure code. We help reduce SAST findings by ~30% with our false positive detection and accelerate remediation by ~80%. To do this for large enterprises, we had to fine-tune a model that we can deploy securely and privately.

We're very proud of the work we recently did, and wanted to share it with the greater HN community. We'd love to hear your feedback and thoughts. Let me know if I can clarify anything in particular.


It sounds like you are training multiple low rank adapters?


Yes


Finding SQL injection is pretty trivial for SAST tools. The difficulty is what happens next. After whatever tool finds several thousand SQLi vulns in a ColdFusion application from 2001 that hasn't been touched in over a decade, someone must be identified to take responsibility for changing the code, testing it, and deploying it. Even if the tool can change the code, no one will want to take responsibility for changes to an application that has been quietly running correctly since before most of their department was at the company, built on an ancient technology that no one has experience deploying into production. This is where so many vulns live.

Shift-left and modern development patterns can catch a very large share of known vulns, so in newer applications things become mostly about fixing newly discovered vulns within an active development cycle. It's the older code that's the real scary monster, and identifying the vulns is the least scary part of the process of getting them remediated and put into production.

Anything that reduces false positives is good, especially if it does so without also significantly reducing identified true positives, but none of that changes the fact that detection is the low-hanging fruit of the system.


Totally agree. We have a term for it: "dev confidence". Devs really don't want to touch something that's been working for a long time, especially in a codebase they're not familiar with. The more removed the dev is from the code they're working on, and the longer it's been running, the lower their confidence. We built in mechanisms that run a number of checks on our fixes to make sure, to the best of our ability, that nothing breaks.

On false positives, we introduced false positive detection using AI & static analysis because of the exact issue you're highlighting.


What an awesome way of finding companies who suspect their code is insecure, and then having them give you their source code. And _charging_ them for it, presumably to make it an easier sell to CXOs: "Nah, it's not those free software hippy communists, they're gonna make you pay through the nose for this, like a _proper_ compliance-checkbox-ticking outsourced vendor!"

I wonder if this is an NSA front? Or Palantir maybe? Or NSO?


The best companies to hit would be those foolish enough not to suspect their code is insecure, because all software development produces vulns. Off-prem scanning is a big issue in the AppSec space, and vendors handle it in various ways, mostly through promises and documented processes, neither of which means much if the vendor is a front for an intelligence agency or has otherwise been captured.

There are some free tools out there but most do lag behind the industry as a whole by quite a bit. There's also lots of abandoned free tools out there cluttering up the space. Plenty started with good intentions that now give a false sense of security. There's also lots of snake oil in the paid space. Doing one's homework really helps here and you'd be surprised how many tools fail miserably during a simple proof of concept test, which is probably why more and more vendors try to avoid them.


Whose code do you think is secure?


> an SQL injection vulnerability

I simply do not understand why the SQL API even allows injection vulnerability. Adam Ruppe and Steven Schweighoffer have done excellent work in writing a shell API over it (in D) that makes such injections far more difficult to inadvertently write.

On airplanes, when a bad user interface leads to an accident, the user interface gets fixed. There's no reason to put up with this in programming languages, either.


> why the SQL API even allows injection vulnerability

How would one implement this?

"SQL APIs" use prepared statements. Meaning you have a string for SQL and some dynamic variables that inject into that string via $1, $2 etc.

BUT if the developer makes that string itself dynamic via a variable, then you have SQL injection again.
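A minimal Python sketch of the distinction, using the standard library's sqlite3 module (the table and input are made up for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

    name = "alice'; DROP TABLE users; --"  # attacker-controlled input

    # Unsafe: the input becomes part of the SQL text itself.
    # conn.execute(f"SELECT id FROM users WHERE name = '{name}'")

    # Safe: the SQL text stays fixed; the value is bound as a parameter.
    rows = conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()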


> How would one implement this?

The low-level API could simply not allow SQL statements as strings, and instead provide separate functions to build the queries and statements.

It would provide entry points which could be used to ensure proper escaping and such, and would still allow for easily generating queries dynamically in the cases where that is needed.

Of course, it doesn't completely guard against Bobby Tables[1]; one could imagine someone including a run-time code generator and feeding it unprotected SQL as input.

But it should make it a lot more difficult, as injecting unprotected user data would be much more "unnatural" and require going against the grain. Also, the "query_execute" function could raise an error if there's more than one statement, requiring one to use a different function for batch execution.

Pseudo-codish example off the top of my head, for the sake of illustration:

   is_active = str_to_bool(args['active']); // from user
   qry = new_query(ctx);
   users_alias = new_table_alias(qry, 't');
   query_select_column(users_alias, 'id');
   query_select_column(users_alias, 'username');
   query_from_table(users_alias, 'users');
   filter_active = query_column_eq_clause(users_alias, 'active', is_active);
   where = query_where(qry);
   query_where_append(where, filter_active);   
   cursor = query_execute(qry);
[1]: https://xkcd.com/327/


"Gee, this new programming language / API makes it hard to copy my SQL queries across. Better use something else."


If that's all the database drivers supported...


Easy. Don’t write queries in a language (SQL) which interpolates content without escaping it for the enclosing structure.

Go one level up.

For example, prepared statements should not allow strings in the SQL, but rather variables, and then bind them to values, like PDO does.


It would be a bit annoying to have to prepare outside and pass in every SQL literal you need to use in your query.

I'd rather have the SQL API take not strings but a special type that a string can't be directly converted into without escaping (by default).

In C++, user-defined literals could be used to create this special type easily. Similar constructs exist in some other languages.


A library can literally generate SQL statements and compile them.

JS and PHP have tagged literals.

But they have to be “escaped” properly before being interpolated!


That's the whole point of having a separate type for queries. Whenever you try to glue a string to a query the string gets escaped.
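A toy Python sketch of that idea (the class and its behavior are made up for illustration; real drivers should still prefer bound parameters over escaping):

    class Query:
        """SQL text is only built from other Query objects; plain strings get escaped."""
        def __init__(self, sql=""):
            self.sql = sql

        def __add__(self, other):
            if isinstance(other, Query):
                return Query(self.sql + other.sql)  # trusted fragment, spliced as-is
            escaped = str(other).replace("'", "''")  # plain strings are treated as values
            return Query(self.sql + "'" + escaped + "'")

    def raw(sql):
        return Query(sql)  # explicit escape hatch for trusted SQL text

    user_input = "x' OR '1'='1"
    q = raw("SELECT * FROM users WHERE name = ") + user_input
    print(q.sql)  # SELECT * FROM users WHERE name = 'x'' OR ''1''=''1'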


I agree. It would be nice if most SQL APIs were secure by default to prevent SQLI. It's really something the DB connectors in programming languages should handle with more grace; most ORMs today handle it pretty well.

I believe it's largely due to how SQL is designed to allow multiple queries to be concatenated with each other, and to poor logic design when writing such queries.


SQL is not designed to allow multiple queries to be concatenated. That is a feature of certain databases, not SQL itself.


In virtually every dev environment, the overwhelming majority of queries are most straightforwardly written in a way that doesn't admit to SQLI. It's not really a programming language thing.


In my university one of the intro-to-CS courses spent some time on cybersecurity and SQL injections. It seemed like using prepared statements was less effort than concatenating queries together, so I asked why people would write vulnerable code anyway. The instructor wasn't sure; I'm not sure if she knew the uni taught SQL by concatenation in the prior semester.


Prepared statements are limited in what you can do with them. A common stumbling block is sorting the results on a user-specified column.

If you look at the level of the discussion around this, it's not surprising SQL injections are still a thing.

https://stackoverflow.com/questions/12430208/using-a-prepare...
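A common workaround, sketched in Python (the table, columns, and helper are made up for illustration): keep bound parameters for values and map the user's sort choice through an allowlist of identifiers rather than splicing it in directly.

    # Only identifiers from this allowlist ever reach the SQL text.
    ALLOWED_SORT_COLUMNS = {"name": "name", "created": "created_at"}

    def list_users(conn, sort_key, direction):
        column = ALLOWED_SORT_COLUMNS.get(sort_key, "name")  # user input picks, but never supplies, the identifier
        order = "DESC" if direction == "desc" else "ASC"     # constrained to two known keywords
        sql = f"SELECT id, name FROM users ORDER BY {column} {order}"
        return conn.execute(sql).fetchall()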


Even something as seemingly straightforward as selecting all entities whose ids belong to an array, a query that you'll find everywhere in most GraphQL apps, isn't easy to do without string concatenation.
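It can still be done with bound parameters by generating placeholders rather than splicing values; a Python sketch assuming a DB-API connection with ? placeholders (psycopg users can instead pass a list to = ANY(%s)):

    def fetch_by_ids(conn, ids):
        if not ids:
            return []
        placeholders = ",".join("?" * len(ids))  # e.g. "?,?,?" - the SQL text never contains values
        sql = f"SELECT id, name FROM users WHERE id IN ({placeholders})"
        return conn.execute(sql, list(ids)).fetchall()  # values are still bound, not concatenated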


It feels to me like there’s a level of abstraction missing when it comes to SQL and how it’s used.

Instead of just having :userId as a parameter that gets safely put into a query, it feels like there should be something like SORT_EXPRESSION(:orderBy), and equivalents for other common use cases, like the one in the sibling comment.

I have no idea whether this would fit in better as something handled by an ORM or the RDBMSes, but it probably doesn’t belong as the responsibility of the average developer, judging by the code I’ve seen.

I think the argument about needing to fix mechanisms that are commonly misused is a really good one, but there are no very clear solutions; I'm sure plenty could be found wrong and overly trivialized about the suggestion above.


Curricula lag the industry by many years; in the early aughts concatenated SQL queries were the norm for database APIs, but prepared statements have been the default (or at least easily afforded by the defaults) for most APIs for a pretty long time now.


Yup.

In the mid aughts, one of my lecturers insisted that motion capture was limited to a few minutes because "several megabytes" was "too much" to store.


For some use cases, dynamically constructing the query is a requirement, for example if you're building a data warehouse query interface, or have a user interface that allows selecting columns or similar.


Most programming languages have easy and well-known string concatenation and the simplest querying function typically takes just a string - it's easy to see why people naturally reach for string concatenation.


The vulnerability class is hardly unique to SQL. Any program that constructs content to be processed by another program or subroutine, where an attacker can control the content, has the potential to exhibit such a vulnerability. A good example is format strings in C, or CGI scripts that call each other or run OS commands.


> A good example is format strings in C

The D programming language allows direct use of C printf. However, D checks the arguments against the format specifiers in the format string to make it memory safe.

The constant stream of bugs due to format/argument mismatches is now history.

There is no reason why C and C++ compilers cannot do this, too.


For static specifiers, I can see that. But for dynamically constructed format specifiers, especially where arrays of pointers/varargs are in use, is it possible to have a mitigation for that?

This pseudo-code as an example:

    snprintf(fmt, sizeof fmt, userinputstring, args);
    printf(fmt, somearray);


Your suspicion is correct: the checks only work when the format string is a literal.


Like any LLM


> the SQL API

No such thing.


ISO 9075-3


Yeah, that's one of those "standards" that only ever existed on paper.


Let me introduce you to the much better and more reliable world of: static analysis


I feel we're going to have a hard time over the next months with a stream of these "magic tools" that solve already-solved problems and try to milk some money out of managers who have no clue.


Static analysis paired with AI is the middle ground that makes sense to me (working in a similar security space). But the hard part needs to be regular computer science and the AI comes second.


> But the hard part needs to be regular computer science and the AI comes second.

Yes, indeed. The AI could be used to prefilter the list of warnings generated by static analysis to reduce the number of false positives. To achieve that, an AI could use the history of the project's static analysis results to find likely false positives. Or an AI could propose a patch to avoid a warning. If the patch is automatically compiled, passed to the test suite and the whole CI pipeline, it could reduce the manual effort of dealing with the findings of static analysis tools.

But leaving out the static analysis tools would lose so much value.
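A rough Python sketch of that loop (every helper named here is hypothetical, just to show the shape of the idea):

    def triage(findings, repo):
        """Prefilter static-analysis findings with an LLM and gate proposed fixes on the CI pipeline."""
        kept = []
        for finding in findings:
            history = past_verdicts(repo, finding.rule_id)        # hypothetical: prior triage outcomes
            if classify_with_llm(finding, history) == "false_positive":
                continue                                          # drop likely noise
            patch = propose_patch_with_llm(finding)               # hypothetical LLM call
            if patch and compiles(repo, patch) and tests_pass(repo, patch):
                finding.suggested_patch = patch                   # only surface validated fixes
            kept.append(finding)
        return kept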


We completely agree. I would redefine it a bit.

We combine static analysis + LLMs to do better detection, triaging and auto-fixing because static analysis alone is broken in many ways.

We've been able to reduce tickets for customers by ~30% with false positive detection, and can now detect classes of vulnerabilities in business and code logic that were previously undetectable.


That strategy has been working for the past 6 or so years.


I would redefine it a bit.

Reliable = deterministic

Accurate? Not at all. Studies show that ~30% of findings are false positives. We've also seen that with the companies we work with, because we built a false positive detection feature in Corgea. There's another ~60% of real issues that are missed (false negatives). https://personal.utdallas.edu/~lxz144130/publications/icst20...

We combine static analysis + LLMs to do better detection, triaging and auto-fixing because static analysis alone is broken in many ways.


I was ready to sign up after I read the article. But when I click the button at the bottom ("Ready to fix with a click?"), nothing happens. After opening dev tools, I can see it registers the click with a LinkedIn ad-tracker network event, but nothing else happens. Maybe Firefox is blocking it?


Maybe. I've had more and more issues with Firefox under Linux lately.


These small incremental AI tools seem in isolation to be helpful things for human coders. But over a period of decades, these iterations will eventually become mostly autonomous, writing code by themselves and without much human intervention compared to now. And that could be a very dangerous thing for humanity, but most people working on this stuff don't care because by the time that happens, they will be retired with a nice piece of private property that will isolate them from the suffering of those who have not yet obtained their private property.


If the danger is a high degree of inequality among humans on Earth, we are already there.


Inequality though isn't on/off, and there are degrees. The current existence of inequality isn't a logical dismissal of attempts to prevent it worsening.

And of course, the danger of AI is much greater than just inequality: it is the further reduction of all human beings to cogs in a machine, and that is bad even if we all end up being relatively equal cogs.


Every time it’s the same pattern:

“Autonomous AI is dangerous”

“pfft, are you worried about X outcome? We already had it”


Because it's true? We already had a world war between autonomous AIs called national militaries before they (mostly) learned that total conflict doesn't result in them getting more resources. And autonomous AIs called corporations exploit our planet constantly in paper-clip maximizer fashion. The fact that they are running on meatware doesn't help at all.


And we see those as problems. But they were constrained by being executed by humans. Now the AI fans want to make more and more actually autonomous ones executed by machines? The problems would be orders of magnitude bigger. They can do far more at scale. They can perfectly recall, process and copy all information they're exposed to. And they don't have a self-preservation instinct like people with bodies do.


> And we see those as problems. But they were constrained by being executed by humans. Now the AI fans want to make more and more actually autonomous ones executed by machines?

Those things that we see as problems are exactly the things that our civilization relies on. Every time you make a purchase you rely on the fact that meatware AI corporations exploit the environment and employees ruthlessly.

Every time you enjoy safety you rely on the fact that meatware military AIs are hellbent on acquiring the most dangerous hardware for themselves and have assessed that not using that hardware in any serious manner is more beneficial to them.

All the development of humanity comes from doing those problematic and horrible things more efficiently. That's why automating it with silicon AI is nothing new and nothing wrong.

I'm afraid that to evolve away from those problems we'd need paradigm shift in what humanity actually is. Because as it is now any AI, meatware or hardware will eventually get aligned with what humans want regardless of how problematic and horrible humans find the stuff they want.

It's a bit like with veganism. Killing animals is horrible but humanity largely remains dependent on that for its protein intake. And any strategic improvements in animal welfare came from new technologies applied to raising and killing animals at scale. In the absence of those technologies, the welfare of animals that could feed a growing human population would be far worse.

There's always of course the danger of a brief period of misalignment as new technologies come into existence. We paid for the industrial revolution with two world wars until the meatware AIs learned. Surprisingly, they managed to learn things about nuclear technology with relatively minor loss of life (<1 million). But the overarching motif is that learning faster is better. So silicon AIs are not some new dangerous technology but rather a tool for already existing and entrenched AIs to learn faster what doesn't serve their goals.


> They can perfectly recall, process and copy all information they’re exposed to.

I'm not sure if it's better or worse that the computers can do that while the AI running on them get confused and mix things up.

> And they don’t have a self preservation instict like people with bodies do.

Not so sure about that; self-preservation is an instrumental goal for almost anything else. Even a system that doesn't have any self-awareness, but is subject to a genetic algorithm, would probably end up with that behaviour.


If we are still talking about AI-enhanced companies, it's not that companies evolve. It's that those companies that are unfit die off. Paul Graham put it humorously in a very old speech I can't find...


I was responding to (what I thought was) a point about AI themselves rather than specifically attached to corporations.

Corporations (and bureaucracies) don't follow the same maths as evolution — although they do mutate, merge, split, share memes, etc., the difference is that "success" isn't measured in number of descendants.

But even then, organisations that last, generally have their own survival encoded into their structure, which may or may not look like any particular individual within also wanting the organisation to continue.


If you are okay with more of it, then it is clear which side of the gap you are on.


Inequality has always had a breaking point where people revolt. There are no sides, only mechanisms.


Exactly. And it won’t isolate them btw. The AI will affect them too.


We find the idea of fine-tuning an LLM to triage and fix insecure code intriguing. However, we have concerns about the limitations posed by the size of the training dataset. As @tptacek mentioned, relying on "hundreds of closed source projects" might not provide the diversity needed to effectively identify a wide range of vulnerabilities, especially in complex systems like the Linux kernel. Incorporating open-source projects could enrich the model's understanding and improve its accuracy. Additionally, benchmarking the model by attempting to generate CVEs from open-source code seems like a practical way to assess its real-world effectiveness. Has anyone experimented with expanding the training data or testing the model against known vulnerabilities in open-source repositories?


That's what we've done. Unfortunately, I realized the sentence reads weirdly. It's meant to say we use hundreds of repositories: closed-source projects we own + open-source projects that are vulnerable by design + other open-source projects. I've updated the language in the post.

Doing so, we've been able to capture a very wide range of vulnerabilities, mainly in web applications. We've done this across projects ranging from small to very large.




