We need ways to run "antivirus" software with fewer privileges. One way to do this is what some DoD high-security systems call "guards" and "sanitizers". When files come in from the outside, they're diverted to a jail, where something has to examine them and decide whether they can get through, and what changes have to be made to them. The guard and sanitization software runs jailed or on a separate machine - it has few privileges. All it can do is look at files and say yes or no, or remove something from the file.
There's a need for a division of labor here. The downloading function in a browser shouldn't be allowed to look at the contents. The guard/sanitizer function shouldn't be allowed to do anything other than say yes or no, or modify the downloaded file. After processing each file, the guard/sanitizer function is flushed and reloaded, so that if it was corrupted, it can't affect other files.
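To make the flush-and-reload idea concrete, here's a minimal sketch (my own illustration; the scanner binary and the "av-sandbox" account are hypothetical). Each file gets a brand-new, low-privilege scanner process whose only output is a yes/no exit code:

    import subprocess

    SCANNER = ["/usr/local/bin/scan-one"]  # hypothetical jailed scanner binary

    def guard(path: str) -> bool:
        """Spawn a fresh, unprivileged scanner for a single file.

        The scanner can only read the file and exit 0 (pass) or nonzero
        (reject). Because it's restarted per file, a corrupted scanner
        instance can't carry state over to the next file.
        """
        try:
            result = subprocess.run(
                SCANNER + [path],
                stdin=subprocess.DEVNULL,  # nothing to interact with
                capture_output=True,       # verdict only, no side channels
                timeout=30,                # a wedged scanner counts as a reject
                user="av-sandbox",         # low-privilege account (Python 3.9+, needs root)
            )
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0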
Before going that far, we need a more effective way to get the vendors to care.
Symantec could have chosen to ship only the minimal filesystem interface code in the kernel and run the huge, complex inspection code in an isolated, low-privilege process, just like the Windows NT guides recommended in 1993.
Symantec could have performed basic diligence and updated their dependencies when security updates were released.
Symantec could have followed recommended practice for code auditing, fuzzing, etc.
In each case they chose not to spend the money it'd take to be minimally competent, correctly realizing that most of their customers will never check and are unlikely to change their buying habits. Based on my experience running their enterprise management tools and dealing with their support, I'm pretty sure someone just made the business decision not to spend the money because most of their customers have audit requirements to buy something and nobody else in the industry is significantly better.
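To be clear about how low the bar is on the fuzzing point: a coverage-guided harness is a few lines. A minimal sketch using Google's atheris, with a stand-in parser since I obviously don't have their unpacker code:

    import sys

    import atheris  # pip install atheris; Google's coverage-guided Python fuzzer

    def parse_packed_file(data: bytes) -> None:
        """Stand-in for the parser under test (e.g. an unpacker)."""
        if data[:2] == b"MZ" and len(data) > 64:
            _ = data[64]  # toy parsing logic; the real target goes here

    def test_one_input(data: bytes) -> None:
        parse_packed_file(data)  # crashes and hangs are findings

    if __name__ == "__main__":
        atheris.instrument_all()  # enable coverage feedback
        atheris.Setup(sys.argv, test_one_input)
        atheris.Fuzz()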
If they had done things the way you say, their price point would have been higher, while everyone else who skipped that work could price lower.
I've been in many budget meetings where Product X does A, and Product Y does A too. Both products let you put a check in the audit box. Product Y is 20% cheaper than Product X. Without a glaring fault in Product Y, nobody will spend the extra money for Product X.
The audit question is often worded: "Is a centrally managed antivirus product installed on every PC? Are the definitions for the product kept up-to-date?" Good! Pay us for verifying that (among other similarly useless questions) and here's your SAS70/SSAE16/SOC1 papers.
Customer says "Are you audited? Can I see your papers? Good." -- due diligence is considered done.
The people performing the audits rarely have a clue other than knowing that there's supposed to be a check in that box.
It's all a game of CYA and security theater, with very little real security being practiced.
> If they had done things the way you say, their price point would have been higher, while everyone else who skipped that work could price lower.
This statement is too strong unless they're currently running on a razor-thin profit margin and have cut every other possible source of waste in the company. We're talking about a company with 6+ billion dollars in revenue hiring a modest number of good security engineers – even if they paid them twice the industry rate, it seems unlikely that it'd need to affect the price at all, especially when you consider the long-term reduction in emergency patch releases.
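To put rough numbers on it (mine, purely illustrative): fifty senior security engineers at a fully loaded $400k/year is $20M, about 0.3% of $6B in revenue.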
You're right that it's usually theater, but consider how different it could be if security really were a top priority. A major vendor could turn it into a selling point by shining a spotlight on the gaps in audit standards. All they'd need would be one big sector to shift – finance, health care, .gov, etc. – and it'd suddenly be a selling point for them for a product release cycle, in addition to being the right thing to do for their customers.
> This statement is too strong unless they're currently running on a razor-thin profit margin and have cut every other possible source of waste in the company.
These days, profit margins tend to be maximised. If they can make a little more profit by cutting a few more corners, they will probably do it. It takes some ethics not to maximize profits at the expense of everything else.
If you want better security, you'll have to find a way to make it more profitable for them -- if not as an individual, then as a community, or even as a society. If better products aren't more profitable, you'll inevitably end up with a market for lemons.
The other path, questioning the profits paradigm altogether, is not easy.
Agreed - I don't know whether that will be insurance requirements, regulators, etc., but we have little reason to think simple market forces will work better in the future.
Their response is very lackluster as well. I work in an enterprise environment where they run Symantec. I found out about this massive vulnerability through Hacker News. No email from them, nothing. Go to their website now and there's not a mention of "Hey, you have to update right this second", just marketing speak about how they are the best. Mark my words: somebody is going to worm this and it's going to be bad.
The ASPack emulator that was vulnerable was one of the few not inside a virtual machine, which is why this overflow could so easily be used to get code execution.
Have you heard any sign of them making a big push to clean up their legacy code similar to what Microsoft did with their Trustworthy Computing effort? I know Symantec employs good people so I've been assuming that the problem is getting time dedicated to something which doesn't deliver a new feature or marketing bullet-point.
And this is where you've already failed. Antivirus software was pretty much snake oil by ~2010, and it's still snake oil in 2016. Running antivirus as your main security policy is how you recognise a technically clueless CIO/CSO (not that you will find many competent ones; all the education leans towards compliance over real security).
Bromium in one line: "Bromium Endpoint Protection is built on the Bromium Microvisor, a Xen-based, security-focused hypervisor designed to automatically isolate each vulnerable user task, such as visiting a website, opening a document, or accessing a USB drive."
That's not hard to do, but it's very restrictive. You can open a USB drive, but can't copy files from it. So, having achieved isolation, then what?
Whitelisted programs can be dangerous too - Word macros, PowerShell scripts, etc. The code that's running (Word, or PowerShell) is signed and kosher, but it's being scripted or co-opted into behaving maliciously.
Also, for the common non-developer case (Word/Excel macros), the OS could sandbox the applications the way Mac OS X does. That limits what macros can do, but IMO it would still allow 99+% of actual macro executions.
https://github.com/google/santa seems like it takes this approach. Gives an enterprise the ability to decide which signed certs it wants to trust and then block everything else unless it gets vetted.
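A toy version of that decision logic (the fingerprints and the helper function are mine, purely illustrative -- santa's actual rule engine is more involved and lives in a system extension, but this is the shape of the policy):

    import hashlib
    from typing import Optional

    # Hypothetical allowlist of SHA-256 fingerprints of signing certificates
    # the enterprise has vetted (santa calls this a certificate rule).
    TRUSTED_CERT_FINGERPRINTS = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def allow_execution(signing_cert_der: Optional[bytes]) -> bool:
        """Default-deny: run a binary only if its signing cert is vetted."""
        if signing_cert_der is None:
            return False  # unsigned code is blocked outright
        fingerprint = hashlib.sha256(signing_cert_der).hexdigest()
        return fingerprint in TRUSTED_CERT_FINGERPRINTS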
Cryptographic solutions aren't Majickal Pixyie Duste. They can, however, within a social and administrative context, help tremendously in assuring proper results.
An independent authority (or authorities) of trusted (and untrusted) signing certificates.
And presumably those certificates are revoked, or will be revoked soon.
Nothing is 100% accurate in security. But code-signing is still far more protective than virus scanners. Between evading a virus scanner and evading code-signing, one is far easier than the other.
> And presumably those certificates are revoked, or will be revoked soon.
The way code signing works means this doesn't matter. Signatures carry a trusted timestamp, so as long as the certificate wasn't revoked when the file was signed, the signature will be indefinitely valid.
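My mental model of the rule, as I understand Authenticode-style timestamping (illustrative sketch only, not any vendor's actual API):

    from datetime import datetime
    from typing import Optional

    def signature_still_valid(signed_at: datetime,
                              revoked_at: Optional[datetime]) -> bool:
        """A countersigned timestamp proves the file was signed while the
        certificate was still good, so later revocation doesn't kill it."""
        return revoked_at is None or signed_at < revoked_at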
Ignoring the fact that good AVs are difficult to evade because of things like behavior blocking and heuristics, you also won't be able to protect yourself against adware, because it's borderline legal and almost always signed.
Of course code signing could allow revocation; I'm not sure what you mean by it not being done currently.
Antivirus doesn't stop adware either, does it? If you're going to start disallowing certain software it's going to be far easier to do it based on certificates than it will be on heuristics.
Doing revocation checks on every executable whenever it's launched would introduce non-trivial latency when starting applications and a _lot_ of load on revocation servers. It probably wouldn't be feasible.
And yes, AVs stop adware ("potentially unwanted programs") unless you tell them not to.
Not sure why you'd check revocation servers on every launch... Check when launched the first time, and then the system checks for new revocations periodically; let's say as frequently as AV software checks for definition updates.
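Something like this, roughly (my sketch; the cache path, refresh interval, and bulk-fetch function are all made up for illustration):

    import json
    import time

    CACHE_PATH = "revocation_cache.json"
    REFRESH_INTERVAL = 4 * 60 * 60  # refresh roughly as often as AV definitions

    def fetch_revocation_list() -> set:
        """Hypothetical bulk download of revoked-certificate fingerprints."""
        return set()  # stand-in; a real client would fetch a CRL here

    def is_revoked(cert_fingerprint: str) -> bool:
        """Consult a local cache on every launch; hit the network only when
        the cache is stale, not once per process start."""
        try:
            with open(CACHE_PATH) as f:
                cache = json.load(f)
        except (FileNotFoundError, ValueError):
            cache = {"fetched_at": 0, "revoked": []}
        if time.time() - cache["fetched_at"] > REFRESH_INTERVAL:
            cache = {"fetched_at": time.time(),
                     "revoked": sorted(fetch_revocation_list())}
            with open(CACHE_PATH, "w") as f:
                json.dump(cache, f)
        return cert_fingerprint in set(cache["revoked"])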
As for adware, if AV can stop it, code-signing methods can do it more efficiently and cheaper.
When the iPad first came out, a few executives publicly said "this is so awesome, I can do all my work on my iPad!" Sounds good to me, leave the power tools to the grown-ups who can resist punching the monkey, and can install security updates every couple of days (without being forced to reboot at the whim of the OS).
The actual price is nearly always much higher than that. Over-centralization and breaches of privacy have insidious downsides that are hard to assess. Plus, they create systemic risks we fail to take into account because of the Tragedy of the Commons.
You should be afraid of having the social graph of half of the Western world on Facebook. You should be afraid of storing most people's email unencrypted, in only a handful of data-centres.
Then there is the progressive lockdown of the whole internet, where it becomes increasingly difficult to speak your mind, simply because it's harder and harder to have your own blog. I mean your blog, on your ___domain, hosted by yourself (or at the very least on a virtual hosting provider). These days, it's Facebook, Medium… places where an overseer has the right to block whatever you write. A good thing in most cases, perhaps, but we should at least be able to speak elsewhere.
What in the world does a locked down internet have to do with being able to control the code that your own devices are running? It's a tenuous connection at best, but if people really believe these are somehow connected then it would explain some of the odd (to me) resistance I'm seeing in this thread.
Take package signing in Linux: package managers check the provenance of the code before installing it. If this were extended to executables, users would have the benefit of a GPG trust chain, or have to indicate to the system that they want to trust new code before running it.
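A toy sketch of what "check provenance before run" could look like for arbitrary executables (Ed25519 via the `cryptography` package standing in for the real packaging machinery; the function and the key-distribution story are mine):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_before_run(executable: bytes, signature: bytes,
                          publisher_key: bytes) -> bool:
        """Refuse to run code whose signature doesn't verify against a key
        the user has explicitly chosen to trust."""
        key = Ed25519PublicKey.from_public_bytes(publisher_key)
        try:
            key.verify(signature, executable)
        except InvalidSignature:
            return False
        return True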
This is not related at all to the walled gardens. Even though technology could potentially be used to build walled gardens, it doesn't have to be. Just because technology could be used for bad purposes as well as good purposes does not mean that the good purposes should be avoided, does it?
Walled gardens are a general trend, whether on the web or on devices such as the iPad and game consoles.
Such control has advantages, but it facilitates abuses. It even requires some of them: if Apple allowed interpreters to run 3rd-party code, that loophole could be used to build the equivalent of a competing store. Massive online sandbox games face a similar problem with the ability to build arbitrary stuff: they generally don't want their players to build or draw… inappropriate depictions.
All this is not really a problem as long as it stays confined to a specific purpose (such as playing games) or a specific community (such as here). Inevitably, curation means control, enforcement, and lockdown. We need that.
What scares me is the gradual disappearance of free, uncontrolled space, both online and on devices. Walled gardens used to be confined to game consoles. Now they are creeping up on general-purpose computers. The iPhone and iPad were a significant step up. And now even the desktop isn't safe. The iStuff made it acceptable to lock everything down in the name of safety and convenience.
We are losing the war on general computation.
---
Besides, there is a link between the freedom of the network and the freedom of the devices. Without free software, you can't have a free internet: people will just use the (proprietary) software that's available to them, and that software will naturally steer them to "safe", controlled online spaces. You quickly get a cyberpunk dystopia where you have to break the law, perhaps even risk your physical safety, to run programs not approved by some corporation on your device (if they're still officially yours by then, and not "leased" forever or until you look at them funny).
And of course, without a free internet, you'll have a hard time organizing and implementing free software -- especially software that could make the network free again; it would be shut down as "unlawfully facilitating circumvention" or something.
The simultaneity of the gradual locking down of both devices and network is no coincidence.
Is that really a given? Most users put up with incredibly locked-down corporate IT systems – would requiring those IT departments to publish whitelists really change that experience much?
Is this an explanatory simplification, or is it really only files that get scrutinized?
I would be equally concerned, if not more concerned, about network traffic (vulnerable daemons, clients vulnerable to maliciously crafted responses, etc) vs. files.
This makes very good sense and it reminds me of proper business internal controls.
E.g., at least two employees must be involved in any decision-making process, and separation of duties, as per your suggestion, for a start.
I love the (military, not Zombieland) "double tap" nomenclature for follow-up phishing emails that pretend to be warnings about recent phishing emails. It's a pattern in social engineering that I've seen used a bunch, particularly in "vishing" phone calls [1], but never had a good buzzword for until now.
Is there any reason for a Windows user to use anything other than Microsoft Security Essentials (or Defender as it's been called since Windows 8)? It's free and everything I've seen and read indicates it works just as well if not better than commercial antivirus suites.
I have not found a good reason to use anything else. There are things that MSE will not catch, and I was derided by our "computer" company for using MSE instead of a Symantec product that he would have charged us for, but I then showed him an online virus scanner which revealed that the particular piece of malware (ransomware) we picked up was not detected by either Symantec or MSE. He then conceded, "yeah, well I guess any piece of software can't catch EVERYthing."
I have found that MSE is very lightweight and catches almost everything. I have not found a reason to use anything else.
It doesn't satisfy my auditors. Who can't seem to explain why ("it's not on the list of boxes I can tick"), other than to direct us to the big four of McAfee/Sophos/Kaspersky/Symantec.
I'm guessing that's due to a combination of tradition (Security Essentials hasn't been around as long as the rest) and possibly business connections with those antivirus companies.
No other reason than that it systematically sucks at stopping malware campaigns compared to pretty much everyone else in independent third-party comparisons such as https://www.av-test.org/en/antivirus/home-windows/
From SecurityWeek's writeup on the same topic [1]:
> No interaction is required to trigger the exploit. In fact, when Ormandy sent his PoC to Symantec, the security firm’s mail server crashed after its product unpacked the file.
> I think Symantec's mail server guessed the password "infected" and crashed (this password is commonly used among antivirus vendors to exchange samples), because they asked if they had missed a report I sent.
> They had missed the report, so I sent it again with a randomly generated password.
I'm not 100% sure I buy it. The follow-up comment is about how he had mistakenly sent them a wrong testcase, and he had sent them similar exploits in a zip with the password "infected" before (see https://bugs.chromium.org/p/project-zero/issues/detail?id=81... from April 28th).
It would be incredible for Symantec to guess the password "infected" for ZIP files. It's possible though!
GMail does some really interesting analysis on e-mails. It even somehow managed to detect and block a file I wanted to send demonstrating a dcraw parser bug in a really obscure camera RAW image format.
If it didn't, malware devs could just send malware zipped in a password-protected archive with that password and tell users to enter that password to unzip.
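For what it's worth, trying the convention is trivial for a scanner. A sketch with Python's zipfile (the password list is just the convention discussed above; the function name is mine):

    import zipfile

    SAMPLE_EXCHANGE_PASSWORDS = [None, b"infected"]  # the AV-industry convention

    def extract_for_scanning(path):
        """Try well-known passwords on a zip, the way a mail scanner might,
        so the payload inside can be scanned too."""
        with zipfile.ZipFile(path) as zf:
            for pwd in SAMPLE_EXCHANGE_PASSWORDS:
                try:
                    return [zf.read(name, pwd=pwd) for name in zf.namelist()]
                except RuntimeError:  # wrong or missing password
                    continue
        return None  # unknown password; flag or quarantine instead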
>"These vulnerabilities reminded me of phishing and the Double Tap for two reasons. First, every one of these vulns can be exploited by just sending an email. Since the product is an antivirus, so it’s going to scan every file that touches your disk and every email you get for viruses. You don’t have to get your target to click a link or even open the message you sent — Symantec will happily try to parse every email you receive."
Another reason not to run any "antivirus" on your personal PC.
Wow. I've always been very against antivirus but never actually thought of this aspect. The antivirus itself is a huge attack surface, probably with higher risk than the risks it can mitigate. Sigh.
So, do the "stop what you're doing and upgrade" links in the article actually go to Symantec's site, or are they phishing links? Because that would be a perfect example of the type of highly effective phishing the article is talking about.
Footnote 1 is very interesting (and so is the rest of the post):
> You know, it’s interesting that before I became the CEO of a startup, the only time I thought about “conversion rates” of emails in my career was when I was involved in phishing campaigns.
Edit: It's interesting to me that phishers are evil, bad, etc., and yet are more interested in responding well to the rhetorical situation than people with careers are.