Am an admirer of Mudge & co. from back in the day, though my initial reaction was: doesn't he seem a bit overpowered for Twitter? Their recent security issues with that hack were devops management and governance problems, no new science involved, they just dropped the ball on some solved problems.
Speculating on what kind of problems Twitter would need that calibre of person for, maybe detecting inevitable AI driven botnets and deepfakes?
Could also see how they need leadership with that tangible cred as a decider function, e.g. the personality cult around Jack might make delegation and scaling their management team a challenge if everyone is trying to find a way to compete for his attention, so they need someone with enough weight that he can legitimately defer to on security issues.
For some reason, high level managers need very expensive consultants to tell them the obvious truths.
As a very expensive consultant I am frequently appalled at how management teams completely ignore their own employees.
Forget about wasteful, these companies can afford consultants. It is just disrespectful to their own staff. Developers and operations guys already know the solution but can't get through layers upon layers of management. And so it takes a guy with a salary high enough that he is allowed to talk to a high-level manager directly to get the message through.
I was once in a uni class where the prof was a very expensive consultant. He said that a large % of his jobs for companies like GM or Ford was to grab a notepad and walk the floor. He would ask line employees why they did things the way they did and what they would change. He got reams of suggestions, typed them up, submitted them and collected a giant paycheck.
Sometimes they were blindingly obvious things. One employee would drive two bolts, quickly change heads, then drive a third bolt. While the carcass advanced to the next station, the employee would change back to the original head and be ready for the next carcass. That employee's suggestion: make all three bolts the same size so I don't have to change heads.
Of course, that is just a fun anecdote. All such efficiencies have been implemented long ago... for auto assembly. But the point is still valid. Your employees know a ton about your business. Even the ones who do the most mundane tasks. Listen to them. Or... pay a very expensive consultant to listen to them.
I'm reminded of Robert Anton Wilson's SNAFU Principle: "Communication is only possible between equals." [0] Formal hierarchies can create perverse incentives that discourage passing vital information up the chain, lest you make someone in power look bad, or stick your neck out as a troublemaker / "not a team player" / etc.
While a good manager can mitigate the problem by cultivating mutual trust, individually and culturally, the problem gets compounded when hierarchies become nested, and the org scales beyond the Dunbar Number.
Third generation expensive consultant here. Clients don't pay you for your time, they pay you for theirs. There's a level of abstraction in orgs that you need to be able to navigate and influence. The Twitter hire I get, it's strategic. Man, we all got old.
One of my friends runs a market research firm that works for big retailers. She says 90% of their work is about producing evidence for the decisions that have already been taken - so they can be implemented without causing complaints.
I mean, if you talk to a bunch of developers about what they would change in a project you will get valuable feedback. You will also get a lot of stuff that's just wrong, stuff that ignores the big picture and business goals. If you can recognise which is which, that's worth a lot of money, because acting on a bad idea can be far more costly than not acting on a good one.
For senior management, it is usually your job to hire people who know, or who help you know (a.k.a. consultants).
It is not humanly possible to be knowledgeable in all the fields of the people working directly or indirectly under you. Good managers are not always technically knowledgeable. It is a useful skill to have, but it is neither the most important one nor even essential.
While bad managers tend not to have the knowledge, the worst managers are strong technically but don't have other critical skills needed for the job.
Today, those three bolts are done by robots. Perhaps a modern very expensive consultant builds the AI to "interrogate" the robots for useful suggestions.
Is that actually true? My understanding is that there is still a lot of human labor necessarily baked in, and attempts to robotize everything have largely failed for a multitude of reasons (Tesla most famously so), the most unsolvable being the flexibility to change something fundamental on a dime.
I work in manufacturing automation so I may be biased, but attempts to robotize the manufacturing of a product that's being iterated on continuously (Tesla) or automating the assembly of lots of low-volume products is always going to go poorly.
Large car companies like GM or Ford hire automation companies to build automation equipment that meets a very specific spec for assembly steps, tolerances, rates, etc. They design components with ease of manufacturing as a significant factor. How something will be assembled is a constant consideration while designing. The product being run on these lines never really changes, and humans are only used when the operations are so complex that the equipment doesn't pay for itself in (typically) 2 years. This is very rare; mostly people are only there to keep the machines filled with parts.
In the sequence of
design -> automate -> manufacture
manufacture needs to be much larger than the others to have good margins. You can't iterate on design without iterating on automate, so if you change the design a lot it's probably best to avoid trying to automate.
Elon Musk said the thing that was really hard to automate was putting in wiring. Tesla has now developed a whole new type of wiring system that is much more amenable to automation.
Tesla uses more workers per car in assembly than any other major manufacturer. By a lot. They are not the people to look to for auto assembly automation anecdotes, despite their PR efforts.
Elon Musk spends a lot of money/effort not to get stuck in local maxima. I guess time will tell if Tesla's trying lots of new ideas works out for them.
Bolts may need to be different sizes due to the forces on them. Another solution might have been to give the employee two wrenches, one of each proper size.
Fictional employee. Fictional scenario. But you still assume that the employee isn't smart enough to know why bolts are different sizes and whether a bigger bolt could work in the place of the smaller bolts.
You assume you know more about this fictional car than the fictional employee actually (um, I mean fictionally) building it.
There's really no reason to think the person turning the bolt knows what size the bolt needs to be. It isn't insulting to the employee, it just isn't their job. In the same way a web dev doesn't know how Docker works, or how their OS's memory allocator works, or how a C programmer doesn't know how the hardware works, or how an EE can't write readable code to save their life.
Sure, but there's no reason to believe the person turning the bolt necessarily doesn't know what size it needs to be.
Example using the same logic as the post we're discussing: Someone flips a coin in another room. It doesn't necessarily have to be heads, so we should assume it's tails.
> But you still assume that the employee isn't smart enough to know why bolts are different sizes and whether a bigger bolt could work in the place of the smaller bolts.
It sounds like you are not familiar with this disaster:
Uh... what? The comment I replied to argued that a random employee might understand why a particular bolt was chosen. The walkway collapse is literally about bolts/nuts being changed around. That design change by a third party resulted in a disaster that killed over 100 people... Seems like appropriate evidence that a line worker would not actually understand design details. That's not to say they couldn't make a suggestion and feedback their experience. Of course they should do that.
While that may be true, it is equally possible that the engineer did the engineering thing without giving a crap about the assembly thing; this happens a lot...
Back in the mid-1990s, I did a lot of web work for traditional media. That often meant figuring out what the client was already doing on the web, and how it was going, so I’d find the techies in the company, and ask them what they were doing, and how it was going. Then I’d tell management what I’d learned. This always struck me as a waste of my time and their money; I was like an overpaid bike messenger, moving information from one part of the firm to another. I didn’t understand the job I was doing until one meeting at a magazine company.
The thing that made this meeting unusual was that one of their programmers had been invited to attend, so management could outline their web strategy to him. After the executives thanked me for explaining what I’d learned from log files given me by their own employees just days before, the programmer leaned forward and said “You know, we have all that information downstairs, but nobody’s ever asked us for it.”
I remember thinking “Oh, finally!” I figured the executives would be relieved this information was in-house, delighted that their own people were on it, maybe even mad at me for charging an exorbitant markup on local knowledge. Then I saw the look on their faces as they considered the programmer’s offer. The look wasn’t delight, or even relief, but contempt. The situation suddenly came clear: I was getting paid to save management from the distasteful act of listening to their own employees.
In the early days of print, you had to understand the tech to run the organization. (Ben Franklin, the man who made America a media hothouse, called himself Printer.) But in the 19th century, the printing press became domesticated. Printers were no longer senior figures—they became blue-collar workers. And the executive suite no longer interacted with them much, except during contract negotiations.
This might have been nothing more than a previously hard job becoming easier, Hallelujah. But most print companies took it further. Talking to the people who understood the technology became demeaning, something to be avoided. Information was to move from management to workers, not vice-versa (a pattern that later came to other kinds of media businesses as well.) By the time the web came around and understanding the technology mattered again, many media executives hadn’t just lost the habit of talking with their own technically adept employees, they’d actively suppressed it.
Whereas I appreciate the (terribly unfortunate) point made in the excerpt, the link didn't work for me. If anyone else wanted to read more, I found a working archive.org link:
> I remember thinking “Oh, finally!” I figured the executives would be relieved this information was in-house, delighted that their own people were on it, maybe even mad at me for charging an exorbitant markup on local knowledge. Then I saw the look on their faces as they considered the programmer’s offer. The look wasn’t delight, or even relief, but contempt. The situation suddenly came clear: I was getting paid to save management from the distasteful act of listening to their own employees.
I wonder if some parallels to this story (listening, without contempt) exist within the sphere of political polarization.
I think your perception of what a high level person like Mudge does is a bit off. At Stripe he did amazing work, but it was mostly organizational. He built a world class team, defined a set of short, medium and long term goals that enabled the company to reduce risks over time, had his team build tools to measure that risk and held them accountable for moving the needle.
That is to say, he did executive level work. I imagine he’ll do similarly great work at Twitter, setting them up for long term success, just as he did at Stripe
A lot of the consultants commenting here seem to think what they do is simple; it really is not.
They have long experience of asking the right questions of the right people, filtering out useless inputs, and strategically creating the right abstractions from tactical inputs. These are very niche skills; it may be easy for us, but for the non-technical manager it is black magic.
> For some reason, high level managers need very expensive consultants to tell them the obvious truths.
As a very expensive consultant I'm surprised the reason is not more clear to you. Most managers like to pay consultancy firms so that they can cover their asses.
If they just followed the guidelines of what the McKinsey team said, then surely they weren't in the wrong. Whereas if they followed the advice of a subordinate or, god forbid, made a decision themselves, it's on them.
The problem is usually in-fighting between middle managers protecting their turf. Why can't the incentives for middle managers be fixed? Seems like if a company took anonymous performance reviews of Directors and VPs from lower-level employees it would prevent a lot of the bullshit.
Leaders that make decisions that have ramifications that are trivially worth 9+ figures are willing to pay the extra 500k to take their trust in the word of a consultant from 95% to 99%.
> but can't get through layers upon layers of management
i think you hit the nail on the head here. the problem is that upper management usually aren't clueless, they are just better leaders than they are doers...
I used to think middle management was stupid and clueless. Now I understand that we are simply playing different games. I am trying to make a good product. They are trying to get promoted.
I think you may be making a mistake tying those together the way you are.
I've had highly effective managers before, who were trying to get promoted. Just like I've met highly effective developers, who were trying to get promoted. That's orthogonal to making a good product.
They wanted to get promoted because they wanted to be able to be more effective; everything they had control of went very, very well (we were empowered to make decisions, he helped us navigate organizational obstacles, and was the sacrificial lamb for useless meetings). All the issues we ran into were from outside his circle of influence; product had its own reporting hierarchy and they were a mess. Integrations with the rest of the org were more bureaucratic and less capable, and getting them to do anything required getting VP level support on our side, etc. Had he had more sway, we could have executed faster. We directly benefited when he got a title bump.
I think what you mean are those who are putting individual success over team success, and yes, those managers are either evil or incompetent.
> As a very expensive consultant I am frequently appalled at how management teams completely ignore their own employees.
Tell me about it. At one of my previous jobs I would constantly speak out about how what we were doing would cause people to lose hundreds of thousands of USD, and every time I was laughed out of the meeting for "overthinking" things (I guess that's what wanting to do things right is called these days). When one of our customers lost US$200k and nobody had any idea why or how or when, I knew that company would be the death of me, and coincidentally I was in the same meeting I had planned to use to quit my job.
If anything it taught me that working in finance is not for me :p
I would imagine nobody takes Twitter security seriously (quite evidently including themselves), which probably creates a recruiting impediment. If Jack feels he needs some gravitas to attract a good team, hiring Mudge isn't a bad idea. (Might also provide some legitimacy to an off-menu Twitter threat intel product as well.)
There's probably a little bit of housekeeping for infrastructure and operations security. I imagine all of the fun stuff is customer-facing.
A watering hole as large as Twitter will be under constant attack of every sort and from every source: individual white/grey/black-hats trying to poke holes in the service itself, spammers who want to figure out how to create as many automated accounts and tweets as possible without getting blocked, government-sponsored hackers trying to get personal information on dissidents or defectors, etc.
> Their recent security issues with that hack were devops management and governance problems
If those are the hard business/technology management problems (and they are), then try solving those while also securing them.
Security by itself is relatively straightforward: 0days, red teaming, scanning for known vulns, etc. Just like every other aspect of the business, you hire someone to do one specific thing and you've got lots of options.
But then try to infuse security into every single aspect of a business and product, in a way that increases both velocity & quality of product dev, without sacrificing efficient organizational management. There are already a ton of inscrutable complexities between all facets of the organization and its products. Trying to add security to all that is like teaching a juggling elephant to ice skate.
So you need someone who's very good at managing security in the context of all those other problems. That's hard to find. It helps to have people who've seen problems in the same general space from a lot of different angles.
> Their recent security issues with that hack were devops management and governance problems, no new science involved, they just dropped the ball on some solved problems.
You don't hire security personnel to address previous breaches.
When it comes to security, better to err on the side of too much than too little. I'm sure there's enough work to keep him busy, question might be whether it's intellectually stimulating enough for him.
Hypothesis: large numbers of state actors run influence networks on Twitter and have no clue where to start now that the actors have had years to adapt to the weak response from Twitter.
Twitter has global geo-political implications. Literally, a war could have broken out had Trump's account been hijacked. He just left Stripe, where he was responsible for a PCI environment. If anything, he was "overpowered" for his last job relative to Twitter.
Twitter hired Mudge because Twitter needs a security engineer with a friendly ear in the US Government. Mudge has been building goodwill with Congress since the 90s and Twitter is seeking to leverage that.
They have more money than they know what to do with. Let's face it, this is a company that runs what's essentially a very simple web site, yet they have a 34B market cap.
The fact that you don't often see announcements for big tech head-of-security appointments on the front page of hackernews says a lot about why they hired him.
Mudge's handle does ring a bell, mainly because he was featured and interviewed in arguably the most famous issue of Phrack magazine, issue 49, which contained the popular article "Smashing The Stack For Fun and Profit" [1]. In terms of popularity and profound effect, that article is at least as significant in computer security circles as Dijkstra's seminal "Go To Statement Considered Harmful" is in the computer science community.
I wouldn't call that a "seminal paper". However, "Smashing The Stack For Fun and Profit" joins a couple of classic papers such as "The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on x86)" (ROP) and the Spectre/Meltdown papers as being fundamental to application security.
Possibly of interest to the HN crowd, just before this Mudge was (and I think still is) involved in a non-profit named CITL that focuses on performing static and dynamic analysis (eg fuzzing) to evaluate hardening measures in consumer software. For example, their work on browsers: https://cyber-itl.org/comparisons/comparison.html?comparison...
It's very interesting work but I haven't seen wide discussion on it so far.
Very progressive move by Twitter. Meanwhile I tried for 6 months to warn a car parts shop about their massive problem [1] (remote JS execution on their site that handled payments), and even after it was in the media they didn't answer me or fix the issue.
Wrote them on Facebook one day and they tried to sue me
As an Austrian myself, I'm not at all surprised at their reaction. To them, their website is "just IT stuff" and they simply don't have a notion that it would involve any security.
To them you'll likely seem like an overzealous geek that shouldn't mess with their business website. I've experienced this before myself and it's not particularly a good position to be in.
Their site has most likely been technically abandoned, i.e. no one capable is in charge anymore.
It'd be best to talk to the owner, show them your "hack" (change it to cats on your phone and let them verify in their browser) and offer them to fix it for free.
That's how one does these things in our small country. ;-)
that's actually what I tried to do. In my mind it would go like "hey your site has included a script from a ___domain I own" - "haha right, that's some legacy stuff, thanks for notifying us" - "no worries"
but obviously that's not what happened
1. Telling them in person (they didn't understand)
2. Asking for the IT persons Phone number (they didn't give it to me)
3. Leaving my phone number and email (they never contacted me)
4. Notifying the austrian CERT (they never got an answer from the owner)
5. Notifying the press (standard.at posted an article about it, they didn't respond)
6. Writing them on Facebook (oh boy did they respond :D)
But since my first police raid I don't publish anything before letting my lawyer read it. He said if they do press charges they haven't got a chance since I have a paper trail of everything I did and didn't harm them or their site in any way
The way I see it is, say, you own a house. And you're having someone telling you that it is not properly secured.
If I come up to you, the owner, and kindly warn you that your doors can easily be unlocked, your reaction would probably be a big thank you.
But, I also understand that you are free to answer me to get the hell out of your lawn, because it's none of my business.
Sure I am doing it for your safety, for the safety of your kids, your wife, and your valuables. I have no ulterior motives.
But, you have the right to not want to listen to all the ways an intruder can come to your house and steal all your stuff. You should have the right to find that information useless, and I don't have any say in that.
Now, warning the whole town that your house is not secure enough, to try to provoke an answer from you? What do you think about that? Really curious.
well in my article I never mentioned their name and I pixelated the screenshots. The local paper did the same; it was more of a story about "if an IT security person tells you you have a problem, you should listen"
In the end it turned out that they didn't "not want to listen"; the information just never got to the right person (internally). I talked quite a bit with the owner (after understanding everything he even thanked me) and he said he never got contacted by CERT, but I asked them and they said they wrote to them twice.
Delete all profiles with five or more digits in the username. Joking aside it will be interesting to see what they can come up with for solutions to platform integrity. I find myself using Twitter more and more and the only SoMe I really "follow".
“Zatko said he appreciated Twitter’s openness to unconventional security approaches, such as his proposal for confusing bad actors by manipulating the data they receive from Twitter about how people interact with their posts.”
Would this be like offensive cyber security? Or active security?
I figured it out. Beto O’Rourke was in Cult of the Dead Cow hahaha. He also used to be in a post-hardcore band with a dude from At the Drive-In. How awesome can one person get?!
> the biggest problem with Healthcare.gov was not timeline or budget. The biggest problem was that the site did not work ..
If my memory serves me, a bunch of editable PDFs and drop-down selection boxes that gave no clue as to the correct content. Some serial number, depending on your insurance provider, AS234 BD568 and so on. A total mess.
It's interesting that there are two famous Mudges - I initially thought this was about Raphael Mudge, who works on Cobalt Strike and Armitage among other things, but he is not related to Peiter Zatko 'Mudge'.
Three: Craig Mudge, of Bell, Mudge and McNamara, a highly influential computer architect based in South Australia. He did critical work in VLSI. And he worked at this place you may have heard of called... Xerox PARC (he was the director..)
I only met Mudge once (a few decades back in the days of VA Linux, he visited the campus....this would have been maybe right before the @Stake days). The guy was wickedly sharp.
I was lucky enough to work with Mudge directly a bit at my last job. Wickedly sharp, but also just a great, kind person to work with, and he truly cares about what he does.
Yeah, in the one conversation I had with him he was a little bit sardonic but very thoughtful, keenly aware of the skills he had and feeling no need to posture about them... just sincerely curious and probing.
The Wikipedia article says he is an open source contributor, but doesn't mention any projects. Google doesn't bring up anything either. What projects has he worked on?
> A year ago, the U.S. government accused two men of spying for Saudi Arabia when they worked at Twitter years earlier,
Sounds more like some solid counter-intelligence people to hire would be good here, rather than straight infosec. But still good hire obviously and I'm sure this sort of thing has been thought out already.
Side note: Getting to hand out grants at DARPA sounds like a fun job.
If you're cynical about cDc --- I certainly am --- you should know that talking like this about them just amplifies them. In reality, most people in the scene would associate Mudge with L0pht much more quickly than with cDc --- whatever it is cDc means, or meant, or whatever.
Really though, Peiter has just been in the bullpen for roles like this for a while now. There are several people like him ready to be tapped for high-profile roles (Stamos is the best example, though he seems happy where he is).
Nothing, a lot of lovely people are affiliated, many of them pretty hardcore, but cDc itself is really just a clique that had a newsletter for a while.
Indeed. It even swept across our college with non-technical users getting hold of it and causing mischief. Many CD trays were opened and questionable messages displayed.
>Capture the Flag (CTF) is a special kind of information security competition. There are three common types of CTFs: Jeopardy, Attack-Defence and mixed.
>Jeopardy-style CTFs have a set of questions (tasks) across a range of categories, for example Web, Forensics, Crypto, Binary or something else. A team gains points for every solved task, usually with more points for more complicated tasks. The next task in a chain can be opened only after some team solves the previous task. When the game time is over, the sum of points determines the CTF winner. A famous example of such a CTF is the DEF CON CTF quals.
>Attack-defence is another interesting kind of competition. Here every team has its own network (or only one host) with vulnerable services. Your team usually has time for patching its services and developing exploits. Then the organizers connect the participants and the wargame starts: you protect your own services for defence points and hack opponents for attack points. Historically this is the first type of CTF; everybody knows about the DEF CON CTF - something like a World Cup of all other competitions.
>CTF games often touch on many other aspects of information security: cryptography, stego, binary analysis, reverse engineering, mobile security and others. Good teams generally have strong skills and experience in all these areas.
I disagree that it's harder. Sure, being an expert security engineer is harder than being an average programmer, but so is being an expert at algorithms.
I also don't think CTFs are a good model for security questions during interviews. Knowing how to do something securely is not the same as knowing how to exploit a vuln.
On the other hand, I'd far rather hire someone who can explain why TLS works and what the weaknesses and tradeoffs are than someone who can reverse a binary tree on a whiteboard, and I'd far rather work somewhere where that was standard of questioning.
That doesn't make any sense whatsoever. You do not hire someone because he knows about a certain technology. You hire people because they will provide long-term value to your company and are able to adapt to a rapidly changing technology space.
Reversing a binary tree on a whiteboard is certainly a bad question to ask. But I would argue it is, for all intents and purposes, still a miles better indicator of future potential than whether someone knows how/why TLS works. You can read that in a book. I can google it. Useless for interviews.
If you are hiring for a position that requires you to implement TLS, sure go for it. But that is not the rule. And what are you going to do after he has implemented TLS? Will he be able to work on something completely different?
If you hire based on algorithms, there's a good chance you'll end up with a bunch of people who are good at algorithms, or at least willing to put in the effort to study the algorithms and be able to recite them back with some variance.
If you hire based on knowing how the internet works (TLS, HTTP, BGP, whatever), then you'll be working with a bunch of people who understand how the internet works.
I guess the idea is that TLS is sufficiently complicated that you can take tangents during the interview and establish if the candidate can understand and communicate complex concepts
When I'm looking to hire a software developer, I would personally rather hire someone who can write well-structured and maintainable code, can describe a system architecture they worked on in a way that is accessible to people not familiar with the ___domain space, and is able to say "I don't know that, but I know how to Google it".
Now, if I'm hiring a sysop / devop / security engineer, it's going to be differently focused to some extent, but the same principles apply - core knowledge, communication, humility, ability to research.
These are not even close to equivalent. I agree the algos interview process is broken in many ways and can be gamed to some degree by grinding Leetcode or whatever, but implementing an algorithm is generally much more difficult than recalling a fact.
(Admittedly, candidates are likely to have memorized the algorithm for reversing a binary tree since that is such a common interview question)
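(For reference, the whole "algorithm" in question fits in a few lines; here's a minimal recursive sketch in Python, where the Node class is just a stand-in for whatever tree representation the interviewer hands you:)

    class Node:
        def __init__(self, value, left=None, right=None):
            self.value, self.left, self.right = value, left, right

    def invert(node):
        # Swap the left and right children at every node; returns the same tree.
        if node is not None:
            node.left, node.right = invert(node.right), invert(node.left)
        return node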
You're testing recall only when asking about TLS. You may be testing recall for "how to implement a known algorithm", but there's still plenty of room for testing actual problem solving too.
If you are the interviewer, even if you are asking a standard "how to invert a binary tree" type algorithm question, you hold the cards to keep pushing the bounds for problem solving by extending the question.
Well, if you're willing to push boundaries, you can certainly do the same with TLS. Ask why the TLS spec does X instead of Y, for various subtle design decisions. This is probably actually much easier to do than with a binary tree, as TLS is a lot more complex and a lot more subtle. It would certainly be an appropriate line of questioning if you were hiring a crypto engineer; not sure it would be relevant to a security engineer or software engineer.
You can ask algorithmic questions that are not easily googleable or standard problems (basically where they have to invent the solution themselves). Although it takes some effort to find the right difficulty problems (tip: the right difficulty is pretty low) so you don't end up wasting time on stuff nobody can solve or stuff everyone easily solves. You can ask people you know to take a stab at a problem to gauge its difficulty. Or you may even be able to eyeball it.
Is fast really the goal? Or is it just that most engineers don't want to spend time interviewing so that's what we've optimized for?
Some of the absolute best engineers that I've worked with take their time to wrap their heads completely around a problem before diving in. They aren't slow thinkers, but they aren't people who excel at these kinds of interviews either.
I've been doing interviews for a long time now and I find it more effective to surface strong opinions about things they've worked on -- good and bad.
I'm not hiring into a feature factory -- I don't care about fast cogs. I'm hiring people who care about what they do and giving them an environment to thrive in.
I guess it depends, but at the internship level an intern who's fast vs an intern who's slow is sometimes like 10x the things done. I work in a math-heavy environment though... so I can understand your point of view.
I see yours also, but we definitely shouldn't be skewing the hiring process toward interns unless your company is mostly hiring interns (I'd think that's uncommon?).
The vast majority of web developers will never touch things like TLS configurations. We have stuff like nginx and WSGI for a reason!
Meanwhile there's a decent chance of them running into a problem that looks like binary tree manipulation.
Despite all the handwringing about btree reversal, not being able to improvise a solution to that is a much more useful indicator than not being able to describe even the basics of TLS.
Security engineers don't actually recite trivia facts about TLS in their job. Their job is more like, know when you should use TLS and when you shouldn't and know what it does and does not protect against.
A security engineer is just as likely to derive a tls cipher config from first principles instead of googling/looking at ssllabs as a programmer is likely to reverse a binary tree from scratch instead of using a library or searching stack overflow.
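To make the comparison concrete: the "cipher config" in question usually comes down to a handful of lines once you know which guidance to follow. A rough sketch using Python's ssl module (the certificate paths are hypothetical, and the actual policy should come from current recommendations such as Mozilla's, not from a comment):

    import ssl

    # Server-side context: refuse legacy protocol versions and prefer
    # forward-secret, AEAD cipher suites.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
    ctx.load_cert_chain("server.crt", "server.key")  # hypothetical file paths

The hard part of the job isn't typing those lines, it's knowing why each one is there and what threat it addresses.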
Engineers (of any stripe) aren't hired to recite random knowledge, they're hired to know what knowledge is appropriate to apply to a particular situation.
I've been doing web development for a long time, and I've never had to do anything like btree reversal in a web development context.
It's probably not even that hard, but I doubt I'd be able to improvise a solution in an interview setting that's nothing like a real work environment. (Memories of trying to work out some kind of graph traversal while someone was basically just staring at me.)
It seems to me that people who are doing this kind of work behind the scenes of a web app aren't really doing web development.
uhh...setting SSL parameters is part of defining an HTTP block in nginx.
As someone in a public company with actual customers, we have to deal with TLS configuration all the time (as customers come with requirements and being public comes with compliance/security) and it's important to know what we're doing there...
I'm responsible for millions of dollars in infrastructure and the way that I got here was being a web developer who knows how the internet works. And in an engineering organization with hundreds of engineers, most of them tend to come to me first with questions.
I have never once in my career had to reverse a btree.
Setting up your security test lab takes way more effort than opening an IDE, LeetCode, informatics olympiad tasks, or maybe some book/pdf/write-up/wiki.
When developing algos you're in your own world, while in security you often have to mess with other things like software, standards and so on.
e.g. if you're interested in web sec/hacking, then besides understanding the standards (what they allow and what they don't, etc.) you have various implementations to care about, like web browsers - Chromium, Gecko, IE, Safari and so on. It's a lot of effort!
Let's say you want to find vulns in PDF parsers/renderers by checking their source code - I think it'd take a lot of effort to check and understand (let alone exploit) those implementations in two major browsers (I suppose they're different, but I've never checked).
>I also don't think ctf's are a good model for security questions during interviews
I didn't mean that, sorry if I made it sound as if I expected people to solve CTF tasks during SE interviews
Just wanted to encourage people to do cool stuff :)
The core doesn't really move that fast. 90% of everything is still an injection vulnerability of one form or another, and if you go above the level of vulns, most organizations are really looking for people to do risk analysis, in which case, the actual vulns somewhat become less central (however there are lots of different types of security engineers, and lots of them have very different responsibilities)
It depends a lot on what you're doing, of course. I do web application security stuff (glorified XSS detector); I've personally felt that the most useful tools by far are the browser dev console and curl, but YMMV.
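For anyone curious, here's roughly what that dev-console-and-curl workflow looks like when you script it. This is only a sketch of the idea (the canary string, URL and parameter name are made up), nowhere near a real scanner, which also has to deal with encodings, DOM sinks, CSP, stored payloads and so on:

    import requests  # any HTTP client works; requests is just convenient

    CANARY = "xss_canary_7f3a"  # made-up marker unlikely to occur naturally

    def reflects_unescaped(url, param):
        # Crude check: does the page echo the parameter back with the
        # markup intact (i.e. without HTML-escaping it)?
        probe = "<i>%s</i>" % CANARY
        body = requests.get(url, params={param: probe}, timeout=10).text
        return probe in body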
> I didn't mean that, sorry if I made it sound as if I expected people to solve CTF tasks during SE interviews
>Just wanted to encourage people to do cool stuff :)
"Maybe it's time to ask security questions during SE interviews instead of algos?"
I don't want to hear the answers. My faith in humanity is already hanging by threads.
I know this is my problem. It's really hard for me to not flip the Bozo Bit. To take both the good along with the bad every person has to offer.
Story time.
I worked near "Chris", a tech lead senior architecture fella. Same product group, different code bases, thank god. I disagreed with pretty much everything Chris said, but I accepted that we're all opinionated and it's more important to be consistent than to be right (correct). So long as his team kept shipping, Chris could do whatever he wanted.
Until.
Because Chris was loud, large, older, and a boor -- an exemplar of the mansplainer -- he was influential. He convinced most of the people within earshot (half of the floor) that password wallets were a terrible idea. I kept listening, to hear the rationale. Maybe he had a legit reason and I was the clueless noob.
Nope. "Single point of failure. If someone cracks your wallet, you're fully pwned."
Chris is why that product group could not, would not adopt any credentials (secrets) management strategy. It was all wikis and PostIt notes.
Reflecting back, it now occurs to me that hoarding the credentials may have been Chris' gatekeeping power move. Hmmm.
> Maybe it's time to ask security questions during SE interviews instead of algos?
Please no, though I agree that security questions could be a very good addition. Let's just not replace something already used as a subpar proxy for the ability to program with something likely even worse.
Maybe it's time to accept that sufficiently scaled companies need to hire more specialized roles, rather than having a single job title responsible for an array of technical specializations?
Unless you work as a security engineer building security tools, security should be about following the basics as specified in common security standards (PCI DSS, ISO 27001, etc.). It's comprehensive and not that hard to learn. Twitter are not remotely close to operating like that, given their recent hacks.
> security should be following basics as specified in common security standards (PCI DSS, ISO 27001 etc)
Would be nice but no, not really. The standards you mentioned are mostly compliance requirements. To be honest, a good chunk of the industry considers them as kind of a joke from a security perspective.
> It's comprehensive and not that hard to learn
Have you even read these standards? I mean, they might be "not hard to learn" but they are far from comprehensive (or specific, depending on which one you are looking at)
> Twitter are not remotely close to operating like that given their recent hacks
Ask literally any security professional you trust: companies compliant with PCI DSS and ISO27k1 get security incidents and breaches all the time just like everyone else, and possibly more (given that if they need compliance with these standards they are probably big enough to have very wide/heterogeneous infrastructure/application portfolios/administration practices/etc). If they claim they don't, that's most likely because their telemetry sucks (so it still happens, they just don't know about it).
My point isn't that these standards will protect you from any incidents. My point is that these standards are the low hanging fruit and Twitter hasn't even picked it yet so it's a bit redundant to get super technical. Twitter is suffering from governance failures.
No. Engineers introduce vulnerabilities through their normal day to day duties.
The thing to do is familiarize yourself with the OWASP Cheat Sheets directly related to the things you do day-to-day...and to check if the thing you're working on relates to one you're less familiar with.
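To give one concrete example of the kind of pitfall those cheat sheets cover: injection. A minimal Python/sqlite3 sketch of the difference (the table and column names are made up for illustration):

    import sqlite3

    def find_user(conn: sqlite3.Connection, username: str):
        # Vulnerable: attacker-controlled input is spliced into the SQL text.
        #   conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
        # Safe: the value is sent as a bound parameter and never parsed as SQL.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchall()

You don't need deep security expertise to internalize that pattern, which is exactly the point of the cheat sheets.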
You don't need to know much about security to write safe programs. Most of the work is done by others. You need to know how to avoid common pitfalls (i.e. yeah, if your service uses 3rd party code to parse customer provided data and your colleague suggests using this awesome C++ library for it, maybe some alarm bells should go off).
Advanced security knowledge is not needed for developing software. What you need is a security team that reviews the work your developers produce. That's it.
> bad algo can almost always be rewritten, leak cannot be reverted.
And while a bad algo can be rewritten, bad software often cannot. Bad programmers are a disaster for scalable software projects. A leak cannot be reverted, but neither can the product you didn't ship because you hired the wrong people. Or the company who goes bankrupt because you weren't able to ship a product.
Building software with security engineers instead of software engineers is like trying to win a Formula One race with Fighter Jet pilots.
You also don't need to know much about many algorithms to write workaday programs. Most of the work is done by others. /head nod to Timsort.
You need to know how to avoid common pitfalls (i.e., yeah, if there are nested for-loops with millions of elements per level of iteration, maybe some alarm bells should go off)
Advanced algorithm knowledge is not needed for developing typical business software.
Bad security is a disaster for any public-facing software projects. A product you didn't ship because you hired the wrong people can't be reverted, but neither can a data leak.
Or the company who goes bankrupt because you leaked highly sensitive data.
Congrats to all involved. Interesting times.