How these are used to split applications to reduce risk:

http://os.inf.tu-dresden.de/papers_ps/nizza.pdf

A 2017 survey of most separation kernels and their verification methods:

https://arxiv.org/pdf/1701.01535.pdf

This one is a medium-assurance kernel that tries to knock out most errors or injections using SPARK Ada.

https://en.m.wikipedia.org/wiki/SPARK_(programming_language)

In practice, a medium- or high-assurance evaluation would also require testing of about everything, covert-channel analysis to spot leaks, verification that the object code matches the source, and an independent team re-running all of that with a pentest on top. INTEGRITY-178B and VxWorks MILS are separation kernels that have finished, or are currently in, evaluation with all that data. seL4 has the strongest formal methods on code and assembly, but the other stuff is still being done at the R&D lab with no evaluation. Both it and Muen were integrated into the Genode OS Framework, with all three needing more review, pentesting, and improvements.




Regarding the arxiv link on verification methods, I'd be careful reading too much into the summary tables. The authors don't appear to have done their homework there (they overstate the C&A accomplishments of the company I work at and our competitors).

On an unrelated note, I'm pleasantly surprised to see a tech report I contributed to as an undergrad cited - how on earth did they find it?


It's worth noting. However, I can't be too harsh about it, since a number of vendors were constantly bullshitting me when I was trying to get to the bottom of what they did. I had to dig deep into protection profiles, BS marketing, comments online, interviews, and so on. I also called out Green Hills through proxies about a fake, "independent" blog they set up basically to bash Lynx while talking like INTEGRITY (not INTEGRITY-178B) was EAL6+. I ended up telling people to watch out for politics with the evaluators of the proprietary companies, then go straight to the research reports or source + docs for the rest. It ain't quite what it was under TCSEC, which had more technical substance and less politics; technical requirements were the main form of political argument back then.

Far as your paper, I'd guess they did what I often do: typing common keywords into search engines. "Formal," "security," and "policy" were three that appeared in about any certified project. On the policy side, the words "policy" and "model[ing]" were used interchangeably here and there. I got a lot of results with these in a few quick Googles, although not yours. I imagine your paper came up in one of those searches or as a citation of something with similar terms. Just speculating, though. Cool that you got to see your work surface on the HN front page. :)


Yeah, dealing with vendors can be frustrating, for sure.

I'm still a little sad about the I-178 eval, since a lot of the work I saw done by RC was quite good, if limited in scope. Seeing the vendor (imo) misrepresent that effort to sell other software was rather disappointing to someone who had (at the time) just spent a couple years jumping on the formal methods bandwagon.

I'm also not particularly happy with the trend towards process-based, rather than results-based, evals (DO-178 and CSFC, I'm looking at you), but I can see how we got here after the EAL7 well was poisoned.


"I'm still a little sad about the I-178 eval, since a lot of the work I saw done by RC was quite good, if limited in scope."

I still give them credit for doing what they could to get a robust, usable piece of software through some kind of evaluation. Hopefully they're fixing whatever problems were noted. Their page on it is also something I've shared plenty of times because some of the design elements are really good. I especially like how each process donates its own CPU and memory to accomplish kernel calls. Maybe that's bypassable, but it seems like a good default. The list of certification artifacts on the bottom right also [mostly] details the assurance activities required in secure and certified systems. I poke at FOSS projects that claim security, asking, "Which of these did you not do and why?" Usually they say "a covert-what analysis?" ;)
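
On the CPU/memory donation point, here's a toy Python illustration of why it's a nice default. This is just the accounting idea, not how INTEGRITY actually implements it: per-call kernel state comes out of the caller's own quota, so a greedy process only starves itself.

    # Toy model of "caller donates resources for its own kernel calls": kernel
    # bookkeeping for a call is charged to the calling process's quota instead
    # of a shared kernel pool, so one process can't exhaust resources that
    # other processes' kernel calls depend on.
    class Process:
        def __init__(self, name, quota):
            self.name = name
            self.quota = quota              # bytes this process may donate

        def donate(self, nbytes):
            if nbytes > self.quota:
                raise MemoryError(f"{self.name} exhausted its own quota")
            self.quota -= nbytes
            return bytearray(nbytes)        # region handed to the kernel

    def kernel_send(caller, message):
        # Per-call kernel state lives entirely in memory donated by the caller.
        workspace = caller.donate(len(message) + 64)
        workspace[64:] = message
        return len(message)

    a = Process("partition_a", quota=256)
    b = Process("partition_b", quota=4096)

    kernel_send(b, b"hello")                # billed to b's quota
    try:
        while True:
            kernel_send(a, b"x" * 128)      # a hammers the kernel...
    except MemoryError as err:
        print(err)                          # ...and only a runs dry
    print("partition_b quota left:", b.quota)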

The evaluations have become too political. Under Walker's initiative, they didn't play around or make toothless threats. IBM themselves were the first to be shown the door when they made a C1-class product instead of C2. STOP OS (in XTS-400), GEMSOS, SNS Server, and KeyKOS were some of the most secure OSes the market ever produced. That was partly due to features (the current focus) but mostly due to the assurance activities the predecessors of EAL6/7 mandated to systematically add quality or security to every aspect of the lifecycle. Although the modern stuff is political, the assurance methods in both EAL6/7 and DO-178B do measurably improve safety/security if applied for real. I wrote an essay on how to improve security certification by learning from prior regimes, with a new one focusing on assurance activities for the various components (e.g., TOEs), a compositional rating added on top, and no politics allowed. Here it is:

https://pastebin.com/dACA5gPh

Far as formal methods, there are recent developments that make them a lot easier for people to apply on the low end. For one, TLA+ is making waves after the Amazon report on internal use. hwayne is making a nice tutorial using the PlusCal subset here:

http://learntla.com/
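
If it helps to see what the model checker is actually doing, here's a rough Python sketch of the core idea: brute-force exploration of every reachable state of a toy protocol (a hypothetical, racy two-process lock I made up), checking a mutual-exclusion invariant along the way. TLC does this far better and adds liveness, fairness, and so on; this just shows the shape of it.

    # Rough sketch of exhaustive state-space search, the kind of check TLC
    # automates. The "protocol" is a made-up, racy test-then-set lock, so the
    # search should find a state where both processes are in the critical section.
    from collections import deque

    def _set(pcs, i, v):
        new = list(pcs)
        new[i] = v
        return tuple(new)

    def successors(state):
        """Yield every state reachable by letting one process take one step."""
        pcs, lock = state
        for i in (0, 1):
            if pcs[i] == "idle" and not lock:      # reads the flag as free...
                yield _set(pcs, i, "saw_free"), lock
            elif pcs[i] == "saw_free":             # ...sets it later (the race)
                yield _set(pcs, i, "in_cs"), True
            elif pcs[i] == "in_cs":                # leave and release the lock
                yield _set(pcs, i, "idle"), False

    def check():
        init = (("idle", "idle"), False)
        seen, frontier = {init}, deque([init])
        while frontier:
            state = frontier.popleft()
            if state[0] == ("in_cs", "in_cs"):     # invariant: mutual exclusion
                return state                       # counterexample found
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return None                                # invariant holds everywhere

    print("violation:", check())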

One problem with full verification is that projects often try to do everything in one tool. I've long encouraged splitting the work among tools, picking the best one for each job. DeepSpec w/ CertiKOS is doing that with what looks like a DSL style of formal methods. COGENT let one team do an ext2 filesystem certified down to low-level code. One I found interesting, given all the work with the B method, was E-SPARK, where specs were formally verified in Event-B and the verification conditions fed into SPARK to prove them at the code level. We need more of that. Far as SPARK itself, Carlisle built IRONSIDES DNS in SPARK.

https://journal.ub.tu-berlin.de/eceasst/article/view/785/782

http://ironsides.martincarlisle.com/

My concept is using a macro-supporting, imperative language to write the application with formal specs for everything. Then this is automatically translated to Rust, C, Ada2012/SPARK2014, and SPIN/TLA+ to let the code get hit by all the tooling I can find for each of them: static/dynamic analysis, SPARK's prover, and test generation. Any problems found are reinterpreted in terms of the original code for the developer to check whether they're false alarms or actual problems. The idea is that high-assurance labor is expensive, but push-button tools that just point to severe flaws might go a long way. Developers could even build plenty of features during the day with minor tests while background processes or a build cluster run the verification incrementally, with a tracker notifying them when results are ready to look at.
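
To make the loop concrete, here's a minimal Python sketch of the orchestration side: run whatever analyzers are available over the generated targets, scrape their diagnostics, and map them back to the original source. The tool choices, paths, and line-map format are just assumptions for illustration, not a fixed toolchain.

    # Minimal sketch of the translate-analyze-map-back loop. Paths, tools, and
    # the line-map format are placeholders; swap in whatever the translator and
    # analyzers actually produce.
    import json
    import subprocess
    from pathlib import Path

    # Each entry: (generated file, analyzer command for it).
    ANALYZERS = [
        ("build/gen/app.c", ["gcc", "-fanalyzer", "-c", "build/gen/app.c", "-o", "/dev/null"]),
        ("build/gen/app.c", ["cppcheck", "--quiet", "build/gen/app.c"]),
        # ("build/gen/app.adb", [...]),  # SPARK prover run, model checker, etc.
    ]

    def load_line_map(path="build/gen/linemap.json"):
        # Assumes the translator emitted {"build/gen/app.c": {"17": 42, ...}},
        # i.e. generated line -> line in the original source.
        return json.loads(Path(path).read_text())

    def run_analyzers():
        findings = []
        for target, cmd in ANALYZERS:
            try:
                proc = subprocess.run(cmd, capture_output=True, text=True)
            except FileNotFoundError:
                continue                            # analyzer not installed; skip
            for line in proc.stderr.splitlines():   # most emit "file:line:...: msg"
                parts = line.split(":", 2)
                if len(parts) == 3 and parts[1].isdigit():
                    findings.append((target, parts[1], parts[2].strip()))
        return findings

    def report(findings, line_map):
        for target, line, message in findings:
            orig = line_map.get(target, {}).get(line, "?")
            print(f"original line {orig} (via {target}:{line}): {message}")

    report(run_analyzers(), load_line_map())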

I call it Brute-Force Assurance since it basically throws cheap labor and CPUs at the problem. What do you think of this concept for medium-assurance security, leaning toward high at least for stopping code injections?


Thanks for the references - I'll look into them. I'm definitely interested in multi-tool workflows given my experience (systems programmers aren't necessarily mathematicians). If only the systems programmer could be pointed at tools that don't have horrible false positive rates ... ;)


On that note, have you or anyone else you know gotten to use Astrée Analyzer? I've been wanting to hear from field users whether it's as good as they claim on bug elimination and false positives.



