Symantec's Bad Week (appcanary.com)
182 points by ontoillogical on July 8, 2016 | 67 comments



We need ways to run "antivirus" software with fewer privileges. One way to do this is what some DoD high-security systems call "guards" and "sanitizers". When files come in from the outside, they're diverted to a jail, where something has to examine them and decide whether they can get through, and what changes have to be made to them. The guard and sanitization software runs jailed or on a separate machine - it has few privileges. All it can do is look at files and say yes or no, or remove something from the file.

There's a need for a division of labor here. The downloading function in a browser shouldn't be allowed to look at the contents. The guard/sanitizer function shouldn't be allowed to do anything other than say yes or no, or modify the downloaded file. After processing each file, the guard/sanitizer function is flushed and reloaded, so that if it was corrupted, it can't affect other files.
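A minimal sketch of that division of labor in Python, assuming a hypothetical sandboxed worker script called inspect.py (a real deployment would also drop privileges or run the worker in a jail before it touches anything):

    # guard.py -- sketch of the guard/sanitizer pattern, not a hardened design.
    import shutil
    import subprocess

    QUARANTINE = "/var/quarantine"   # assumed paths, for illustration only
    RELEASED = "/var/released"

    def guard(filename):
        src = QUARANTINE + "/" + filename
        # Fresh worker process per file: if a malicious file corrupts the
        # inspector, the damage dies with that process and can't taint the
        # next file -- the "flushed and reloaded" property described above.
        try:
            result = subprocess.run(
                ["python3", "inspect.py", src],  # hypothetical jailed worker
                capture_output=True, timeout=30,
            )
        except subprocess.TimeoutExpired:
            return False  # a hung inspector counts as a "no"
        if result.returncode == 0 and result.stdout.strip() == b"ALLOW":
            shutil.move(src, RELEASED + "/" + filename)  # let it through
            return True
        return False  # anything else stays in quarantine

The guard itself never parses the file; all it can do is relay the worker's yes-or-no verdict.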


Before going that far, we need a more effective way to get the vendors to care.

Symantec could have chosen to ship only the minimal filesystem-interface code in the kernel and run the huge, complex inspection code in an isolated low-privilege process, just as the Windows NT guides recommended in 1993.

Symantec could have performed basic diligence and updated their dependencies when security updates were released.

Symantec could have followed recommended practice for code auditing, fuzzing, etc.

In each case they chose not to spend the money it'd take to be minimally competent, correctly realizing that most of their customers will never check and are unlikely to change their buying habits. Based on my experience running their enterprise management tools and dealing with their support, I'm pretty sure someone just made the business decision not to spend the money because most of their customers have audit requirements to buy something and nobody else in the industry is significantly better.


There's always a better way to do things.

The problem is selling that better way.

If they had done things the way you say, their price point would have been higher, while everyone else who didn't do it would be lower.

I've been in many budget meetings where Product X does A, and Product Y does A too. Both products let you put a check in the audit box. Product Y is 20% cheaper than Product X. Without a glaring fault in Product Y, nobody will spend the extra money for Product X.

The audit question is often worded: Is a centrally managed antivirus product installed on every PC? Are the definitions for the product kept up to date? Good! Pay us for verifying that (among other similarly useless questions) and here are your SAS70/SSAE16/SOC1 papers.

Customer says "Are you audited? Can I see your papers? Good." -- and due diligence is considered done.

The people performing the audits rarely have a clue other than knowing that there's supposed to be a check in that box.

It's all a game of CYA and security theater, with very little real security being practiced.


> If they had done things the way you say, their price point would have been higher, while everyone else who didn't do it is lower.

This statement is too strong unless they're currently running on a razor-thin profit margin and have cut every other possible source of waste in the company. We're talking about a company with 6+ billion dollars in revenue hiring a modest number of good security engineers – even if they paid them twice the industry rate, it seems unlikely that it'd need to affect the price at all, especially when you consider the long-term reduction in emergency patch releases.
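Back-of-envelope, with assumed figures purely for illustration:

    50 senior security engineers x $400k/yr (double the going rate) = $20M/yr
    $20M / $6,000M revenue = ~0.3% of revenue

A rounding error on the income statement, even before counting the avoided emergency-patch cycles.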

You're right that it's usually theater, but consider how different it could be if security really were a top priority. A major vendor could turn it into a selling point by putting the spotlight on gaps in audit standards. All they'd need would be one big sector to shift – finance, health care, .gov, etc. – and it'd suddenly be a selling point for them for a product release cycle, in addition to being the right thing to do for their customers.


> This statement is too strong unless they're currently running on a razor-thin profit margin and have cut every other possible source of waste in the company.

These days, profit margins tend to be maximized. If they can make a little more profit by cutting a few more corners, they will probably do it. It takes some ethics not to maximize profits at the expense of everything else.

If you want better security, you'll have to find a way to make it more profitable for them -- if not as an individual, then as a community, or even as a society. If better products aren't more profitable, you'll inevitably end up with lemons.

The other path, questioning the profits paradigm altogether, is not easy.


Agreed - I don't know whether that will be insurance requirements, regulators, etc. but we have little reason to think simple market forces will work better in the future.


Their response has been very lackluster as well. I work in an enterprise environment that runs Symantec. I found out about this massive vulnerability through Hacker News. No email from them, nothing. Go to their website now: not a mention of "Hey, you have to update right this second", just marketing speak about how they're the best. Mark my words: somebody is going to worm this, and it's going to be bad.


The ASPack emulator that was vulnerable was one of the few not inside a virtual machine, which is why this overflow could so easily be used to get code execution.

[former Symantec employee]


Have you heard any sign of them making a big push to clean up their legacy code similar to what Microsoft did with their Trustworthy Computing effort? I know Symantec employs good people so I've been assuming that the problem is getting time dedicated to something which doesn't deliver a new feature or marketing bullet-point.


>We need ways to run "antivirus"

And this is where you've already failed. Antivirus software is pretty much snake oil in 2016; it was already snake oil in ~2010. Running antivirus as your main security policy is how you recognise a technically clueless CIO/CSO (not that you'll find many competent ones; all the education leans towards compliance over real security).

https://www.youtube.com/watch?v=DzC8jJ0ESJ0
https://www.youtube.com/watch?v=8Z7L498dNB0
https://www.youtube.com/watch?v=XdgDr1CIoqU


You could take the Bromium approach and just sandbox applications entirely in virtual containers that self destruct.

https://www.bromium.com

Their sales program is poor, but it's a great way to boost security, and I'd wager it's more effective than antivirus.


Bromium in one line: "Bromium Endpoint Protection is built on the Bromium Microvisor, a Xen-based, security-focused hypervisor designed to automatically isolate each vulnerable user task, such as visiting a website, opening a document, or accessing a USB drive."

That's not hard to do, but it's very restrictive. You can open a USB drive, but can't copy files from it. So, having achieved isolation, then what?


I think the Bromium approach will survive the longest, with VM technology involved. Microsoft will have to build it into the OS eventually.

Tools like SandboxIE and other wrapper/shims are in an arms race that they will eventually lose.


Do we need virus scanners? Wouldn't it make much more sense to whitelist instead of blacklist? Far more efficient, far more secure.


Whitelisted programs can be dangerous too: Word macros, PowerShell scripts, etc. The code running (Word, or PowerShell) is signed and kosher, but it's being scripted or co-opted into behaving maliciously.


You should whitelist scripts, too.

Also, for the common case for non-developers (word/excel macros), the OS could sandbox the applications in the way Mac OS X does. That limits what macros can do, but IMO would allow 99+% of actual macro executions.
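For the script half of that, a minimal sketch of hash-based whitelisting (the allowlist path and wrapper are my assumptions, not any particular product):

    # runscript.py -- sketch: only run scripts whose SHA-256 is allowlisted.
    import hashlib
    import sys

    ALLOWLIST = "/etc/script-allowlist.sha256"  # hypothetical admin-owned file

    def is_whitelisted(path):
        with open(ALLOWLIST) as f:
            allowed = {line.strip() for line in f if line.strip()}
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return digest in allowed

    if __name__ == "__main__":
        if not is_whitelisted(sys.argv[1]):
            sys.exit("refusing to run unlisted script: " + sys.argv[1])
        # only now hand the script off to the real interpreter

Any edit to a whitelisted script changes its hash, so a macro that rewrites itself falls off the list automatically.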


Far more annoying and productivity-impeding for the users to have to put up with it.


By "white-list" I mean to include code-signing, which has been shown not to impede much of anything, honestly.

OS X's half-white-list mode of refusing to run unsigned code unless you invoke it from the right-click menu seems to be incredibly effective.

Between code-signing and sand-boxing, I see virus scanners as failed legacies of the past. They have stopped little, and cost everyone greatly.


https://github.com/google/santa seems like it takes this approach. Gives an enterprise the ability to decide which signing certs it wants to trust and then block everything else unless it gets vetted.
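Not Santa's actual code, but a sketch of the same idea using Apple's codesign(1) tool to read a binary's signing Team ID and compare it against an approved set (the Team ID below is a made-up placeholder):

    # certgate.py -- sketch of cert-based allow/deny, in the spirit of Santa.
    import subprocess

    APPROVED_TEAM_IDS = {"ABCDE12345"}  # placeholder enterprise allowlist

    def is_approved(binary_path):
        # A real gate would first run `codesign --verify` to confirm the
        # signature is intact; here we only read the signing identity.
        out = subprocess.run(
            ["codesign", "-dv", "--verbose=2", binary_path],
            capture_output=True, text=True,
        )
        if out.returncode != 0:
            return False  # unsigned binaries get blocked by default
        for line in out.stderr.splitlines():  # codesign reports on stderr
            if line.startswith("TeamIdentifier="):
                return line.split("=", 1)[1] in APPROVED_TEAM_IDS
        return False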


Unfortunately there are literally websites that will sign any file you upload using certificates they stole. It's not a great approach.


Cryptographic solutions aren't Majickal Pixyie Duste. They can, however, within a social and administrative context, help tremendously in assuring proper results.

An independent authority (or authorities) maintaining lists of trusted (and untrusted) signing certificates.

In a word: reputation.


And presumably those certificates are revoked, or will be revoked soon.

Nothing is 100% accurate in security. But code-signing is still far more protective than virus scanners. Between evading a virus scanner and evading code-signing, one is far easier than the other.


> And presumably those certificates are revoked, or will be revoked soon.

The way code signing works means this doesn't matter. So long as the certificate wasn't revoked when the file was signed, the signature will be indefinitely valid.

Ignoring the fact that good AVs are difficult to evade because of things like behavior blocking and heuristics, you also won't be able to protect yourself against adware, because adware is borderline legal and almost always signed.
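The rule being described, as a few lines of illustrative Python -- the signature's validity hinges on the state of the cert at signing time, not at launch time:

    from datetime import datetime
    from typing import Optional

    def signature_still_valid(signed_at: datetime,
                              revoked_at: Optional[datetime]) -> bool:
        # Timestamped code signatures: revoking the cert *later* doesn't
        # invalidate files that were signed while it was still good.
        if revoked_at is None:
            return True
        return signed_at < revoked_at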


Of course code signing could allow revocation; I'm not sure what you mean by it not being done currently.

Antivirus doesn't stop adware either, does it? If you're going to start disallowing certain software it's going to be far easier to do it based on certificates than it will be on heuristics.


Doing revocation checks on every executable whenever it's launched would introduce non-trivial latency when starting applications and a _lot_ of load on revocation servers. It probably wouldn't be feasible.

And yes, AVs stop adware ("potentially unwanted programs") unless you tell them not to.


Not sure why you'd check revocation servers on every launch... Check when launched the first time, and then the system checks for new revocations periodically; let's say as frequently as AV software checks for definition updates.
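That caching scheme, sketched (the revocation lookup is a stand-in callable, not a real service):

    # revcache.py -- sketch: check on first launch, refresh periodically.
    import time

    REFRESH_SECS = 4 * 3600   # refresh about as often as AV definitions
    _cache = {}               # cert fingerprint -> (revoked?, checked_at)

    def is_revoked(fingerprint, query_server):
        now = time.time()
        hit = _cache.get(fingerprint)
        if hit is not None and now - hit[1] < REFRESH_SECS:
            return hit[0]                    # recent cached verdict, no network
        revoked = query_server(fingerprint)  # e.g. an OCSP-style lookup
        _cache[fingerprint] = (revoked, now)
        return revoked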

As for adware, if AV can stop it, code-signing methods can do it more efficiently and cheaper.


Many already do - it's called iOS.

When the iPad first came out, a few executives publicly said "this is so awesome, I can do all my work on my iPad!" Sounds good to me, leave the power tools to the grown-ups who can resist punching the monkey, and can install security updates every couple of days (without being forced to reboot at the whim of the OS).


The price for stuff like iOS is too high. I want to own my computer.


I hate iOS; I own no iOS devices. But many people love it.

If many people really need to be protected from themselves (and as a side effect save the world from botnets), iOS is the way to do it.


or to be truly locked-down, ChromeOS.


It's too high but only for a tiny minority of people.


The known price is okay for nearly everyone.

The actual price is nearly always much higher than that. Over-centralization and breaches of privacy have insidious downsides that are hard to assess. Plus, they create systemic risks we fail to take into account because of the Tragedy of the Commons.


Seriously, that's exactly equal to FUD.

In any case I am not saying that everyone should use iOS.


FUD, you say? Well, you should be afraid.

You should be afraid of having the social graph of half of the Western world on Facebook. You should be afraid of storing most people's email unencrypted, in only a handful of data-centres.

Then there is the progressive lockdown of the whole internet, where it becomes increasingly difficult to speak your mind, simply because it's harder and harder to get your own blog. I mean your blog, on your ___domain, hosted by yourself (or at the very least a virtual hosting provider). These days, it's Facebook, Medium… places where an overseer has the right to block whatever you write. A good thing in most cases, but at least we should be able to speak elsewhere.


What in the world does a locked down internet have to do with being able to control the code that your own devices are running? It's a tenuous connection at best, but if people really believe these are somehow connected then it would explain some of the odd (to me) resistance I'm seeing in this thread.

Take package signing in Linux: package managers check the provenance of the code before installing it. If this were extended to executables, users would have the benefit of a GPG trust chain, or have to indicate to the system that they want to trust new code before running it.
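Extending that to ad-hoc executables might look like this sketch, which shells out to gpg to verify a detached signature before anything runs (assumes the file ships with a <file>.sig and that the signer's key is already in the local keyring):

    # trustrun.py -- sketch: refuse to run a file without a valid signature.
    import subprocess
    import sys

    def gpg_verified(path):
        result = subprocess.run(
            ["gpg", "--verify", path + ".sig", path],
            capture_output=True,
        )
        # Exit code 0 means the signature is good; a stricter gate would
        # also check the key's trust level in the web of trust.
        return result.returncode == 0

    if __name__ == "__main__":
        if not gpg_verified(sys.argv[1]):
            sys.exit("untrusted: no valid signature for " + sys.argv[1])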

This is not related at all to the walled gardens. Even though technology could potentially be used to build walled gardens, it doesn't have to be. Just because technology could be used for bad purposes as well as good purposes does not mean that the good purposes should be avoided, does it?


Walled gardens are a general trend, whether on the web or on devices such as the iPad and game consoles.

Such control has advantages, but it facilitates abuses. It even requires some of them: if Apple allowed interpreters to run third-party code, that could open a loophole enabling an equivalent competing store. Massive online sandbox games face a similar problem with the ability to build arbitrary stuff: they generally don't want their players to build or draw… inappropriate depictions.

All this is not really a problem as long as it stays confined to a specific purpose (such as playing games), or a specific community (such as here). Inevitably, curation means control, enforcement, and lock down. We need that.

What scares me is the gradual disappearance of free, uncontrolled space, both online and on devices. Walled gardens used to be confined to game consoles. Now they're creeping up on general-purpose computers. The iPhone and iPad were a significant step up. And now even the desktop isn't safe. The iStuff made it acceptable to lock everything down in the name of safety and convenience.

We are losing the war on general computation.

---

Besides, there is a link between the freedom of the network and the freedom of the devices. Without free software, you can't have a free internet: people will just use the (proprietary) software that's available to them, and that software will naturally steer them to "safe", controlled online spaces. You quickly get a cyberpunk dystopia where you have to break the law, perhaps even risk your physical safety, to run programs not approved by some corporation on your device (if they're still officially yours by then, and not "leased" forever or until you look at them funny).

And of course, without a free internet, you'll have a hard time organizing and implementing free software, especially software capable of setting the network free again; it will be shut down as "unlawfully facilitating circumvention" or something.

The simultaneity of the gradual locking down of both devices and network is no coincidence.


Is that really a given? Most users put up with incredibly locked-down corporate IT systems – would requiring those IT departments to publish whitelists really change that experience much?


everything is annoying and productivity-impeding compared to a tiger-repellent rock


> files

Is this an explanatory simplification, or is it really only files that get scrutinized?

I would be equally concerned, if not more concerned, about network traffic (vulnerable daemons, clients vulnerable to maliciously crafted responses, etc) vs. files.


This makes very good sense, and it reminds me of proper internal business controls.

E.g., at least two employees must be involved in any decision-making process, plus separation of duties as per your suggestion, for a start.


I love the (military, not Zombieland) "double tap" nomenclature for follow-up phishing emails that pretend to be warnings about recent phishing emails. It's a pattern in social engineering that I've seen used a bunch, particularly in "vishing" phone calls [1], but never had a good buzzword for until now.

[1] https://youtu.be/h8kWcggio5A


Is there any reason for a Windows user to use anything other than Microsoft Security Essentials (or Defender as it's been called since Windows 8)? It's free and everything I've seen and read indicates it works just as well if not better than commercial antivirus suites.


I have not found a good reason to use anything else. There are things that MSE will not catch, and I was derided by our "computer" company for using MSE instead of a Symantec product that he would have charged us for, but I then showed him an online virus scanner which showed that the particular piece of malware (ransomware) we picked up was not detected by either Symantec or MSE. He then conceded, "Yeah, well, I guess any piece of software can't catch EVERYthing."

I have found that MSE is very lightweight and catches almost everything. I have not found a reason to use anything else.

(I manage about 10 Windows PCs at work.)


It doesn't satisfy my auditors. Who can't seem to explain why ("it's not on the list of boxes I can tick"), other than to direct us to the big four of McAfee/Sophos/Kaspersky/Symantec.


Also, some SSL VPNs scan your computer, and I'm not sure whether all of them accept the built-in MS antivirus.

(Protip: run such connections in a VM. This has a number of benefits:

* when they require a full scan, it takes next to no time as the VM is almost empty

* when they cut off the rest of your network except for their site, you can still reach Google or your company server from the host machine.

* who honestly wants to run all those weird drivers on their day-to-day dev or sysadmin machine?

)


I'm guessing that's due to a combination of tradition (Security Essentials hasn't been around as long as the rest) and possibly business connections with those antivirus companies.

Disappointing either way.


No other reason than that it systematically sucks at stopping malware campaigns compared to pretty much everyone else in independent third-party comparisons such as https://www.av-test.org/en/antivirus/home-windows/

But at least it's free!


From SecurityWeek's writeup on the same topic [1]:

> No interaction is required to trigger the exploit. In fact, when Ormandy sent his PoC to Symantec, the security firm's mail server crashed after its product unpacked the file.

[1]: http://www.securityweek.com/critical-vulnerability-symantec-...


The source for that detail is here: https://bugs.chromium.org/p/project-zero/issues/detail?id=82...

> I think Symantec's mail server guessed the password "infected" and crashed (this password is commonly used among antivirus vendors to exchange samples), because they asked if they had missed a report I sent.

> They had missed the report, so I sent it again with a randomly generated password.

I'm not 100% sure I buy it. The follow up comment is about how he had mistakenly sent them a wrong testcase, and he had sent them similar exploits in a zip with the password infected before (see https://bugs.chromium.org/p/project-zero/issues/detail?id=81... from April 28th).

It would be incredible for Symantec to guess the password "infected" for ZIP files. It's possible though!


"infected" is the industry standard. IIRC (could be making this up) gmail knows to try "infected" in password protected zips.


GMail does some really interesting analysis on e-mails. It even somehow managed to detect and block a file I wanted to send demonstrating a dcraw parser bug in a really obscure camera RAW image format.


Yeah, I'm just surprised it's going to be scanning zips with the industry-standard malware password for ... malware.


If it didn't, malware devs could just send malware in a password-protected zip with that password and tell users to enter it to unzip.
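Which is presumably why scanners bother trying it. A sketch of that step with Python's zipfile (standard-library ZipCrypto only; AES-encrypted zips would still resist and should be flagged instead):

    # scanzip.py -- sketch: unpack with the well-known password for scanning.
    import zipfile

    def extract_for_scanning(zip_path):
        contents = {}
        with zipfile.ZipFile(zip_path) as zf:
            for name in zf.namelist():
                try:
                    contents[name] = zf.read(name, pwd=b"infected")
                except (RuntimeError, NotImplementedError):
                    contents[name] = None  # wrong password or unsupported
                                           # crypto: mark as unscannable
        return contents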


Correct in regard to "infected" being used industry-wide for malware-infected zips.


>"These vulnerabilities reminded me of phishing and the Double Tap for two reasons. First, every one of these vulns can be exploited by just sending an email. Since the product is an antivirus, so it’s going to scan every file that touches your disk and every email you get for viruses. You don’t have to get your target to click a link or even open the message you sent — Symantec will happily try to parse every email you receive."

Another reason not to run any "antivirus" on your personal PC


Wow. I've always been very against antivirus but never actually thought of this aspect. The antivirus itself is a huge attack surface, probably with higher risk than the risks it can mitigate. Sigh.


So, do the "stop what you're doing and upgrade" links in the article actually go to Symantec's site, or are they phishing links? Because that would be a perfect example of the type of highly effective phishing the article is talking about.


My finger hovered over the reply button for a second as I weighed up the possibility that HN was part of an elaborate long-game attack vector!


I would laugh if he put in a link like that just to go "lol, got you again. Is this triple tapping?"


For anyone who doesn't know, the title is a pun on https://en.m.wikipedia.org/wiki/PiHKAL


Next thing you know they will be self administering their own 0-day exploits.


Footnote 1 is very interesting (and so is the rest of the post):

> You know, it’s interesting that before I became the CEO of a startup, the only time I thought about “conversion rates” of emails in my career was when I was involved in phishing campaigns.

Edit: It's interesting to me that phishers are evil, bad, etc., and yet are more interested in responding well to the rhetorical situation than people with careers.


Perhaps malware vendors will soon begin content-marketing to increase engagement with their target markets!


Symantec has been doing that very successfully it seems


"begin"


fixed that:

"tl;dr: If you use software with “Symantec” or “Norton” somewhere in its name, stop what you’re doing and remove it completely."


I think everyone is confused because they don't understand Symantec's business model.

They're primarily a rent-collecting entity that leverages compliance requirements like PCI in regulated industries as a way to tax businesses.

That's why all these simple, logical steps to make their product better aren't (and won't be) implemented.




