My years working on black programs (thespacereview.com)
441 points by pinewurst on Nov 20, 2019 | 164 comments



"My supervisor observed that I was anxious to learn new things. He assigned me to the Discoverer project to develop system schematics, interconnection diagrams, and wire harness definition. I was working for the Electrical Systems engineer. His was a very responsible and highly visible position. And like most engineers working in this new field, he was in his early 20’s, just three years older than me. There were senior engineers in management positions, but there were no senior spacecraft engineers. Everyone was learning on the job."

Parallels to Silicon Valley are easy to find. It's a valid career path: if you want to progress fast, find a hot young area where there are no senior experts and work hard to keep up.


Wow, if I'd had a situation like this when I was in defense I might still be there. Instead, I had managers old enough to be my father, no upward path until I "put in my time", and an attitude that wanting to move around within the company was "disloyal" to my current management.


I hope people don't fixate on this and see this as the issue:

> Instead, I had managers old enough to be my father...

This...

> ... an attitude that wanting to move around within the company was "disloyal" to my current management.

... is bad management and bad company culture, and it is the reason you get older managers who are inflexible and just riding out a paycheck. Those are the areas to fixate on, and if you see them, leave. Nobody should have to work under those conditions. Working for an older manager shouldn't be (and rarely is), in and of itself, a red flag.


I think it's slightly more complicated. Having an older manager/supervisor is not a bad thing. Working in a department where you need to have put in 12 years to become a supervisor is a problem.

The old automotive company I work for released a "strongly recommended" schedule of the different responsibilities an engineer should have held, and for how long, to be considered a strong candidate for promotion. There was a similar document I heard about for going from supervisor to manager. Combined, these schedules said that no one would ever work their way up from entry level to department head. Ever. Therefore the only way to get there is outside of the official system via office politics, and not being honest about that is pretty dysfunctional on its own. That said, a symptom of that is the number of supervisors and managers who have 30+ years in...


> I think it's slightly more complicated. Having an older manager/supervisor is not a bad thing. Working in a department where you need to have put in 12 years to become a supervisor is a problem.

Exactly. I was too brief with that point, but the issue is first-line managers, in some cases, having more experience than I had years of life at that point. It's an environment that seems to condemn people to life as a minor cog in a huge machine, where your opinions aren't respected and you have no hope of making it into any sort of management responsibility.


Yeah, I realize now I should have committed more words to explaining that part. An older manager per se was not a problem. It was the fact that every line manager had 15+ years of experience, and thus this level of tenure appeared to be a requirement to actually advance. It crushed any feelings of ambition, because there was literally nowhere to go even if the other management problems didn't exist.


I'm sorry to hear that. The defense companies I've worked for, big and small, have fought to retain employees. Whether it be the end of a contract or just wanting to change the type of work you are doing, these companies fought to find you another position rather than let you take your clearance to a competitor.


The Silver Tsunami is not kind to DoD employment and to hiring managers. The young-uns look around, see nothing but old white dudes, and then head to anything resembling a FAANG. Livermore has had a heck of a problem with it. Even the 35+ year olds are trying to make like a banana.


I sort of started my career in finance like this. I never sought or intended to be in finance, but I could identify and solve problems for the firm(s) I worked for that they didn't even know they had. I used to quip, "When I'm doing my job well, no one knows that I exist. The moment I fuck up, everyone knows I exist."


How does billing / incentive / value based compensation work when you're invisible?


You often can get a lot of money if you threaten to leave. But in my experience, it's a lot more profitable to work in the front office and have people know exactly how much money you have made for the firm.


I've always felt the closer you are to revenue, the more money you can make. That's why good sales guys can make >$1M/year.


Exactly. For the same reason that the oyster farmer gets to eat more oysters than anyone else!


Poorly.

Unless you game the system, that is. If you ever see a "hero", they're either incompetent but putting in extra effort or setting up the events.


Witnessed a "hero" in action, always evaluating what everyone in the room didn't know so he could pretend to be the expert.


Apropos username.


This is sort of parallel to very young network engineers and NOC staff, in like 1996, discovering for the first time what an ASN is, what BGP4 is, etc. And the commercial growth of the modern Internet.


What would those be in today's context?


Areas in cyber security. Ex: Automating simple but highly repetitive tasks for incident responders will win you lots of friends and generate lots more opportunities if you want it.


I think applications of AI to creating 3d modeling assets for videogames. Gaming is a >100 billion dollar industry and art can be as much as half of costs for AAA games. It seems to me like it's the lowest hanging fruit of AI in terms of difficulty vs potential revenue.



MS Flight Simulator 2020 seems to be doing this - creating 3D maps of all structures/trees/etc out of normal Bing world flat maps. Not sure how much human touch is required afterwards, but if those early videos are representative then damn that's amazing and brings realism never seen before


That's one example, another one is for smaller assets - trees and vegetation, for example (SpeedTree). There's definitely a market for creating 3rd party tooling to help with developing a game. There's actually quite a few companies and technologies that specialize in one aspect of a game or game's world.

There's a few other interesting technologies that add a lot of content to a game for relatively small investments; think physics and destruction engines, animation technology (iirc there is something developed by EA's sports games branch that can generate intermediate animations for smooth animation transitions), world generation (height maps), etc.


Deep learning is the obvious answer: 10 years ago it was largely confined to a small number of research groups and niche conferences.


Machine learning's hot moment was back in 2012; is there still low-hanging fruit that Google can't wipe out?


Enterprises. Too many people doing this in startups already. But there's lots of money to be made with training and consulting in the corporate world.


The behemoths don't (and can't) care about markets with a value below a certain threshold.


I'd assume nowadays it could be Cloud, AI? But thinking through, they do have some history.


From where I’m sitting, it seems like most of the money to be made in the next 10 years won’t come from high tech, but rather from applying bang average tech (Python, React, Go, and standard AWS infra) to industries where the most experienced specialists have no concept of what those words mean (e.g. healthcare, agriculture, finance). There are huge opportunities to make these industries more efficient, and make money along the way.


This.

There are so many industries that aren't going to be 'disrupted' by some fancy new tech. Because they have much more urgent low tech problems to solve first.

I've lost count of the number of startups that want to 'disrupt' agriculture with blockchain, because, you know, who wouldn't want to have their potatoes tracked in a decentralized proof-of-stake ledger? PotatoCoin! But the reality is that to store data in a blockchain, you first need data to begin with. Much of the business in agriculture is conducted on a handshake; usually there isn't even paperwork. Also, in many countries agriculture relies on operating in a grey area of tax just to turn a profit. I bet a lot of farmers don't even want a transparent ledger for everyone to see ;-)

Most of us developers don't have the real-life industry experience to remotely understand where the opportunities lie. And we are way too naive to think that we can solve any problem with software.


I've been a farmer for most of my life in conjunction with my day job, and this is correct.

There was an article recently in a modern farming magazine about how restaurants and downstream consumer organizations and businesses want to use blockchain technology to track where a cow was raised, where it was processed and how, and when it became the hamburger or whatever you're eating so that customers can scan it and feel better about themselves and their choices.

Cool. Super cool technology. That I will never use on my farm because it provides me with absolutely 0 incentive at the start of the stream other than increased overhead. After looking into the technology, there are ways to use that data to track cattle, and ways to use that data to track yield and feed ratios and that sort of thing, but it's not the purpose, and therefore almost impossible to get out of the system. It also would be super easy to game by the large 'family owned' farms (sort of like how they do it now anyway).

And I already do what I need with excel, it's just time consuming. So it's another example of 'BUT IT'S TECHNOLOGY' that just doesn't appeal to me.

For other examples see: automated weed control, most automated irrigation systems, most automated weather and climate tracking systems, and definitely any/all automated feed and livestock management systems.

The one thing you said that I want to push back on - from a very US-centered perspective - is that very little farming is done via handshake anymore. Any farmer who is actually able to survive does so by playing the markets, by pushing formal contracts, and by being smarter than that. Nothing on the outgoing side is left to chance anymore, because all of our inputs are based on chance already (weather, performance, yield).


Ha! I help out the NHS and they're amazed that we are able to autopopulate forms using ActiveDirectory. They're used to paper...


Yes, for example in warehouse management. Most companies still use outdated apps on Windows ME.

The problem is that stakes are so high that nobody dares to change.

You cannot turn off a running warehouse for some days.

I believe data migration is mostly where the opportunities are. Because then you can one day flip the switch and start working with new software while having the same 'state' as the old software.


"The Third Wave" by the AOL guy :)


Please do expand - I agree with you but as usual am missing the extra connection?


I work at a company that intersects finance and healthcare.

Our competitors grew to dominance through smart acquisitions, but this came at a fixed linear or super-linear operating cost per unit of growth. We are growing by identifying abstractions, and incorporating the new markets into existing ones.

Their method was the best way to quickly grow a company to a national scale in the 60s, but it comes at the monotonically-increasing costs of internal coordination and duplicate work. Software allows you to grow gracefully, while avoiding those problems as they appear.

As an example, we analyze inbound calls by volume every quarter to identify how we can anticipate user needs, and solve them ahead of time, without requiring a phone call (thereby improving the client's UX and saving operating cost). Every year, we reduce our call volume while simultaneously increasing our client headcount and improving the tools that are at our team's disposal. That kind of iterative optimization isn't possible at the big conglomerates.

Look for industries that haven't figured this out yet, those are the ones where the money will be made; Stripe is a good example of this playing out in finance.


Hello,

Thank you for your explanation! I would love to hear more about your approach. I am looking for what to do next in my professional life, and these kinds of concepts sound very exciting!

Would you perhaps be willing to have a short conference call, or perhaps, an email exchange?

My email address can be found in my profile.

Cheers!


I can't speak for this person, but it appears they're making the case that a lot of value comes from giving some computer love to industries that get no love but could benefit from a little tlc.


That was 5-10 years ago.

It needs to be something I've never/barely heard of.


Damn, so it's basically luck... I mean, even Bitcoin was 10 years ago, I think? Also, nowadays you probably need a PhD to work on something really interesting.


Not necessarily luck. Just go to the professional forums of the people in those "cutting-edge to the masses" fields, and see what they're nerding out over as a "cool thing they'd like to get into when it gets a bit more mature."

(It works for programming languages, too. Hang out in a forum for a relatively-new language like, say, Elixir, and you'll hear gossip about a lot of languages whose communities would describe Elixir as a staid legacy language, that'd never merit a mention here.)


I don't see how you could actually come up with new stuff in PLT without a very heavy math background, specifically in logic. I mean it's not impossible just extremely unlikely.


New programming languages aren’t usually examples of novel advances in type theory. They’re usually just novel combinations of existing features. They’re still “bleeding edge” in the sense that they’re what the industry will be using in 20 years.


A lot of well-known, early programmers had PhDs. Not all, though.


Makes sense. But computers have a lot of interesting applications, so maybe something will come out. For example, gaming in the beginning, in the '70s and '80s, still attracted a lot of teenagers.


Maybe AR. I mean, look at Jeri Ellsworth, self taught and probably has the most plausibly commercial AR solution (Tilt Five) so far.


That's clever. Nobody can draw dark in AR, which means overlays on the world have to be too bright and shiny. Ellsworth came up with a use case where the AR images always have a grey background, the blank game board. So this is much more workable than the more elaborate AR schemes. Cheaper, too, at $299.


I'm just amazed that Jaron Lanier has found new-found relevance.

The first I read of him was ~1988 or so in Mondo 2000...

Now he is popping up all over...


PhDs are overrated

Especially in lower-barrier-to-entry areas.

Being able to learn things yourself, then learning on the job is more important.


Quantum information science, but I think this field is predicated on having a PhD.


Quantum information science seems like something that will take many many years before it amounts to something.

I remember reading that a quantum revolution in computing was just around the corner 12 years ago (in 2007), and only a month ago did any sort of "breakthrough" happen (referring to Google's "quantum supremacy" breakthrough [0]).

Considering this glacial track record, I'm very skeptical.

[0] https://www.vox.com/recode/2019/10/29/20937930/google-quantu...


This is becoming less true. The field is still very PhD heavy, but some employers definitely hire people right out of an undergraduate Physics or ECE degree. They end up effectively completing a PhD program on the job, but don’t get the fancy title. On the bright side, they get paid probably 3-4x more than the average grad student stipend.

I think you’ll start to see more demand for non-PhDs as the various competing QIS players begin to standardize their architectures and practices.


Quantum computers seem to be barely out of the "we just invented a transistor" stage of development. No non-toy general purpose examples exist.


No, there are plenty of experts in those fields. Think things more like bitcoin.


During the Iraq War, everyone marveled at the great new technology behind the Predator drone to conduct surveillance and deliver ordnance during strike operations. Then it leaked that the Predator had been developed 15 years earlier and used successfully in Desert Storm. Cut to 2011 when we learned that the SEAL team that killed Osama Bin Laden used a stealth helicopter, something that no one in the world had any public success toward achieving, and we not only achieved it but were running classified missions with it.

Based on those two examples alone, I estimate that the most secretive programs are 15 years more technologically advanced than we are in the public ___domain.

AI and Quantum Computing come to mind as likely focus areas in addition to those already mentioned.


There is also a great deal of fetishizing of these black programs and post-hoc creation myths that assume a level of advancement that simply does not exist and did not exist at the time. As a simple example, you are confusing the Pioneer drone (developed in the late 80s, deployed in Desert Storm, basically an RC plane with a video camera) with the Predator. The Predator was not ever used in Desert Storm and in fact the program was only started just prior to that conflict. Given that drones have existed since the 30s and the initial base that was used to develop the Predator started as a cruise missile, which have been widely known since the early 80s, I am not quite sure why you think drone development was a surprise to anyone other than the casual, ill-informed public.


Also, fetishizing instruments of death because of the "cool tech."


The two are mutually exclusive, so the "cool tech" can always be appreciated for what it is.


Yes. The British Army was actively using drones from 1999 [0]. The unarmed Phoenix system was developed to support artillery forward observers, and used in the Balkan wars and Gulf War 1. It was operated by the artillery regiments, not TLA hoods.

[0] https://en.wikipedia.org/wiki/BAE_Systems_Phoenix


Israel pioneered UAVs as early as 1982 [1], and Firebees regularly flew over Vietnam [2].

[1] https://en.wikipedia.org/wiki/IAI_Scout [2] https://en.wikipedia.org/wiki/Ryan_Model_147


This is all true. Plus, the military, like aerospace, is inherently technologically conservative: it doesn't fly unless it's flown before.

On the other hand, governments can throw a lot of money at an interesting topic. See John Clark's Ignition! for rocket fuels.


> AI and Quantum Computing come to mind as likely focus areas in addition to those already mentioned.

Um, yeah, no. Pretty much all experimental QC students capable of doing the work are accounted for. The theorists who don't hustle mostly can't get jobs, with good reason: it's mostly malarkey.

You'd think if there were some blackops AI center of genius out there, at least one of them would have a job at Facebook making $2m a year or at a hedge fund gobbling up the order books. There are none.

Neither the Predator nor stealth choppers were particularly surprising. You can watch documentaries on YouTube about stealth choppers like the RAH-66 Comanche, and drones were in wide, boasted-of use in Vietnam.

FWIW, I did a little consulting work for SBIR places; most of the procurement officers got their ideas from shitty science fiction. I'm pretty sure we'll eventually get all the dumb things from Aliens, no matter how impractical and useless in actual combat they are.


> You'd think if there were some blackops AI center of genius out there, at least one of them would have a job at Facebook making $2m a year or at a hedge fund gobbling up the order books. There are none.

Perhaps they were TOO successful, and their AI escaped, eating the researchers on its way out. There's no way for us to know..


Alan Kay has some talks about how research used to be at Xerox PARC during the glory days of Smalltalk, Interlisp-D, Mesa/Cedar.

His takeaway is exactly that: if you aren't budget bound, think of what the future would be 15 years from now and start implementing that.


To implement a prototype of the future, this works well, but to sell something to consumers you have to implement something "cheap", which Xerox PARC didn't manage to do.


The timeline checks out though. The Macintosh came 12 years after the Alto was conceived, or 16 years after Engelbart's NLS.


And yet their innovations paved the way towards HCI paradigms we take for granted today. So although they weren't practical when they were invented, I think there's still value in these sorts of labs continuing to pave the way for the practical creators of 2034.


That's the way I (try to) think, sort of, but honestly the successful projects are the ones with money behind them. Java was just terrible, but IIRC Sun put approximately $0.5 billion into advertising it, and it was jumped all over by execs because of advertising and not any technical, or far more importantly business, merit.

Tech is fad driven - look at all the big data projects by companies that don't need them.


> Java was just terrible [...]

Java was released in 1995. Compared to the state of the art back then, I believe it was a significant advancement. There was plain C with all its shortcomings, C++, but in a much worse state than today, and then the dynamic bunch (Python, Tcl/Tk?).

There's no doubt Java had major issues, many of which were only ameliorated with Java 2 and later, and some remain today. But on the other hand it was memory safe, statically typed, with much simpler semantics, machine independence, Unicode support, etc. I believe Java overall meant progress.

I'm not sure the advertising budget perspective is that salient. The main competitor for Sun at the time was Microsoft (with C++ and their Windows platform), who had a very well working marketing / developer relations / etc operation going as well.


> I believe Java overall meant progress.

I'm not so sure, you're comparing it only to C,C++ or Tcl/Tk but what about Ada? Smalltalk? Lisp? Objective-C?

Plus Java's standard library was full of bugs for a looonnng time: they focused on new features instead of fixing it.. I've experienced "Java can't print" myself while Java's hype was going full force..


Objective-C was going nowhere before Apple's acquisition; my thesis was rewriting a particle system from Objective-C/NeXT into C++/Windows as a means for the department to keep research in OSes that everyone else could use.

Ada compilers were bloody expensive.

Modula-2, Delphi, VB, Oberon, Modula-3,..., were great, but they were either only available commercially, or OS specific.

Java was given away as free beer, with a batteries-included standard library and with fewer hurdles than trying to write portable C or pre-C++98 across all the platforms that existed in the 90's.

That alone made us try it out.

At my university, the computing department changed all the compiler design and distributed computing classes to Java, without Sun paying anything to a tiny Portuguese university.


The Java performance complaints are also widely overstated... Mostly, when running on Windows, it took a few seconds to start a Java program if the JVM wasn't yet loaded into memory (as it had to be loaded first).

But a well written java server-side application (so something that doesn't need to be started very often but generally just idles until it's contacted by clients) is as fast or faster than anything you were likely to be using for a couple decades after its introduction.


"are"? Yes, perhaps.

"were"? I don't know.


Ada, until Gnat came out, was similar to C++, minus templates and "real" object orientation but plus generics and threading. It was also hand crafted by military contractors out of pure unobtanium. The true interesting bit, Spark, didn't come until later. (I'm not sure who did the Ada I used in a class in the late '80s.)

Smalltalk was neat, but never caught on, possibly due to the lack of free implementations and the difficulty of integrating with anything else. (The original prototype of the OS/2 3.0/Warp UI was done in Digi-something Smalltalk. (Waves feebly from IBM.))

Lisp, in particular Common Lisp, was very popular in certain areas, but again died out. Possibly another combination of lack of free implementations, integration difficulty, and sheer difference from the alternatives. (Scheme. Sigh. The cleanest language design ever. But no single OO design when that hype train was at full speed. Plus, parentheses, which, well, Common Lisp had already poisoned.)

Guy Steele said that Java dragged the industry halfway to Lisp, if that means anything.


Java was dire. It could have been much better when released - though granted it was designed for embedded originally, but it was then arguably mis-sold.

Performance was avoidably bad in some parts of the libraries.

I recall a major rewrite of some product being started then killed because performance sucked (surely much better now, but things took far too long to change).

Memory behaviour was not fixed until java 5:

"The original Java memory model, developed in 1995, was widely perceived as broken, preventing many runtime optimizations and not providing strong enough guarantees for code safety"

(https://en.wikipedia.org/wiki/Java_memory_model)

I'd also like to add Brinch Hansen's heartfelt lament

"If programmers no longer see the need for interference control [in java's threading] then I have apparently wasted my most creative years developing rigorous concepts which have now been compromised or abandoned by programmers"

Per Brinch Hansen, Java's Insecure Parallelism - SIGPLAN Notices, April 1999

> statically typed

with type holes for arrays

> with much simpler semantics

yep, too simple. IMO it relied too much on the programmer. Generics took too long to appear also.

> Unicode support

Broken cos they didn't read the spec properly. It has a problem with the BOM on UTF-8, which has bitten me (can't remember the details, think it was this https://stackoverflow.com/questions/1835430/byte-order-mark-...)

The advertising budget was what drove things, not common sense. Java could have been good but it was designed badly and badly oversold.


Sometimes I wonder what would have happened if Sather had caught on.

(forked from Eiffel in 1990, unlike Eiffel it was free and open-source)

Authors claimed it was as fast as C++.

https://en.wikipedia.org/wiki/Sather


I think it never had Eiffel's level of tooling and libraries, and it focused on UNIX platforms as a CLI compiler, hence it never took off outside its research university.

I learned about it on a Dr. Dobbs article and that was about it.

As for performance, Eiffel was already quite fast, because while it used a VM for development (MELT), AOT compilation was done via system compilers, with support for generics and value types from the beginning.

It was also one of the first OOP languages to support non-nullable references.

EDIT: This was the Sather article I referred to. https://www.drdobbs.com/tools/the-sather-programming-languag...


I subcontracted to a guy who was sold on the Eiffel hype. I got and installed it too. I managed to crash the compiler in the first few minutes of use. The other guy ended up entirely distrusting the incremental compilation after losing a day or so to that.

I don't remember any particular tooling, and the language was crap (Meyer was better at hype than language/compiler design).

IIRC Sather was Eiffel with the subtyping holes fixed. Bertrand Meyer allowed holes in Eiffel subtyping (which Sather disallowed) that he was sure he could deal with, but eventually they proved intractable, IIRC.

My boss eventually abandoned Eiffel, and let the Eiffel company know why. It was not an enjoyable experience for either me or him.


They’re probably advanced in achieving some strange specific technological goal like making rotor blades sound like goats. But they probably do it with floppy drives.


Focusing on the task at hand, using what they know works with well understood failure modes. It's smart.


It's just interesting how the US military is simultaneously advanced beyond belief and behind the times.


It's like in those sci-fi movies, where people live in an ancient Greek kind of environment yet have lasers, teleports, etc.


Multifrequency, spread-spectrum, unjammable, secure communications that sound like an ancient POTS line over a sub-oceanic cable and have the connection resilience of two cans and a piece of string.


A Blackhawk with five blades.... Shhhhh!


Does it also have a moisturizing strip?


15 years ahead is a good rule of thumb for DoD / Aerospace. 3D printing and additive manufacturing used to carry monikers like Rapid Prototyping and Stereolithography but have been in use for decades. Another metric is cost, which is typically many orders of magnitude ahead of the inevitable consumer tech version.

I love reading about space history. And the part of the article that hit home was the floor with 200+ engineers working as a team. That engineering-led culture is giving way to financial dominance.

The Long-Forgotten Flight That Sent Boeing Off Course: A company once driven by engineers became driven by finance

https://www.theatlantic.com/ideas/archive/2019/11/how-boeing...


I was struck, though, by this quote: "my Electrical System Engineer Al Gross threw me a curve. He said, 'Bob, we need to connect two new circuits with a couple of diodes. I don't know where you are going to put them, but I don't want to add them to the Recovery Programmer as this will trigger a requal program.' That would require a lot of work and paperwork to requalify the programmer."

I don't know enough about aerospace to really follow this (or even to know what a "programmer" is in this context), but it certainly sounds strikingly similar to Boeing's use of ad-hoc engineering solutions to avoid expensive retraining of pilots.

Perhaps technical corporate culture has always been a complicated relationship between finance and engineering. Triumphantly successful projects tend to make everyone forget about the tension, while miserable failures lead to more introspection.


Both bombing drones and stealth helicopters seem to be relatively straightforward concepts that were not publicly developed simply because outside of the military there is no motivation to pursue those things.

On the other hand AI and quantum computing are generally useful and require novel breakthroughs, so I would expect a smaller gap or no gap.


For my part, as no one special in particular, I've seen information and intelligence tools available to line unit leadership which were well ahead of their time. As well, I have seen technology at work in the field during missions which I did not recognize at the time (and at least one thing I still cannot explain but which certainly must have been some kind of weapon or recon platform). I also have family that works tangentially with skunkworks engineers who is quick to say exactly this, that the bleeding edge the defense contractors are working on is easily a decade or two ahead of the curve.


I have heard first-hand accounts of people who saw the SR-71 in operation during Vietnam. Blew their minds, considering it still looks ‘futuristic.’


Always interested in ufo stories. Care to share any details?


Sure. To sort of preface this here by ruining my credibility, I was once previously witness to an "actual" UFO, before I was in the military (in the company of several others including a police officer, however still unexplained). Anyway, the thing I was talking about was much less "that kind" of UFO than something with components I may be able to explain but which I did not then and still do not today completely recognize.

What I and others in my (infantry) unit saw were vehicles with a kind of very low frequency pulsejet propulsion system in the night sky above a certain large Iraqi city, flying in from the south (ish) and making a big U-turn before flying from whence they came (ish). You'll have to trust me when I tell you these were yellowish flashes from some kind of fuel detonations and not flight lights, given the flashes would appear bright, expand quickly, and then fade out over 1-3 seconds while maintaining a static or near-static position in the sky. However, unlike any pulsejet system I've ever heard of, these flashes would occur only about once every 5-10 seconds, albeit at very regular intervals. They were also completely silent. There were no other flight lights of any kind, and they only ever flew at night.

Through night vision, we could very briefly see the fuselage streaking forward after each explosion, but it was otherwise not visible. I suspect they were painted black, as were many other kinds of aircraft we were accustomed to seeing (including of course some otherwise completely normal-looking civilian-ish aircraft whose operators would rather everyone simply forgot existed). The objects through thermal optics looked almost exactly the same, with the explosions visible (as hot), and what must have been the fuselage briefly visible streaking away from the explosion (also hot, though nowhere near as hot as the explosions).

I never saw these things drop or fire munitions. But neither were they ever seen to have been fired at, which I mention only because that's what we first thought it may have been, however incredibly unlikely, until we noted the interval regularity and these sightings became a not-uncommon occurrence in the night sky. However, I did once witness some "normal" jet aircraft (flashing white flight lights) vaguely shadow one of these objects through its U-turn and continue on at a slower speed after it. Just to be clear: I can't be certain the flashes were actually propelling the fuselage forward, so I don't mean to imply the "streaking" was any kind of discernible acceleration, but rather that it was just the only other visible component of this thing at any time.


when we learned that the SEAL team that killed Osama Bin Laden used a stealth helicopter, something that no one in the world had any public success toward achieving

It’s more interesting that it leaked out that they crashed one of said helicopters because they ignored the mission planning advice they’d been given. How embarrassing!


> ... because they ignored the mission planning advice they’d been given.

Who is "they"? (The pilots?) Which advice was ignored?

> The crash of the Blackhawk may have been, at least in part, caused by the aerodynamic deficiencies introduced to the airframe by the stealth technology add-ons.[321]

* https://en.wikipedia.org/wiki/Death_of_Osama_bin_Laden#Helic...


They were told if you hover low over the compound walls you will lose lift and crash, but they did it anyway because they’re the Navy Seals(tm)


Could you expand on that? I've never heard about this.


The crash happened during the bin Laden raid: https://en.wikipedia.org/wiki/Death_of_Osama_bin_Laden


The most interesting thing for me is: how do you keep all those bright minds with enough knowledge and experience from striking out on their own either in academia or in business? At the end of the day creating new tech is hard, replicating 'easy'. Or is it that the "15 yr ahead" simultaneously suggests it's about 15 yrs of work on your own to replicate? Perhaps they work with great compartmentalization? But compartmentalization suggests you don't need great minds, but just very good minds with good structure. That goes against what I think I know about innovation. It's quite fun fantasizing about a world within a world with better technology.


You are ignoring a very important part of it - some people genuinely want to "help their country" and are willing to sacrifice personal success for this. For example, it's not exactly a secret that 3-letter agencies don't pay software engineers anywhere near as well as other IT companies would, but the promise of doing something "for your country" is worth more than money (just like game developers sacrifice money to work on something they really like).


Since graduating from college with an EE degree, I've worked for 3 employers, all of which had or have very deep roots in the intel community. At my current firm, we've always had success through a large investment in new technology under IRaD. As one of our founders has said, we provide solutions to problems that our customers don't yet know they have. Pay scales for engineers are consistent with commercial companies here in Silicon Valley. In my 40+ year career, I can honestly say that I've never sacrificed personal success for this job, and the work that I've done has contributed, in a hugely important way, to our ability to collect and analyze data from the "bad guys". I have absolutely no regrets for my career path.


>For example it's not exactly a secret that 3-letter agencies don't pay software engineers anywhere near as well as other IT companies would, but the promise of doing something "for your country" is worth more than money(just like game developers sacrifice money to work on something they really like).

You hit the nail on the head in explaining this dynamic, but I hate the fact it exists in the first place, in game dev or IC. It also exists with US special operations, as another way to sort out anyone who isn't a true believer.

Fair, you might say. I just think it's dumb that a Google/Netflix/Facebook engineer working on some boring cog in their company's infrastructure gets paid a top-of-market $250k+, while an engine programmer working on an AAA title makes half that, or a firmware engineer working on offensive cyber payloads makes half that, or a special operations soldier makes half that.

Maybe it's just market demand: boring stuff is paid better since everyone wants to do something exciting. But I think it's also a filter, and a lot of these places know that.


On this subject, there is a very interesting new book about Jim Simons, the founder of Renaissance Technologies, who used to work as a codebreaker in the US. I think for many of the people in these programs, the attraction is getting to solve problems beyond anyone else's reach, with tools beyond anyone else's reach.


> getting to solve problems beyond anyone else's reach

more this, and less "tools beyond anyone else's reach".

A fair number of people go to work with these groups to "check the box" that they've done it. They know who's at NSA, or CIA, or DARPA. Because they spent 2-5 years with them inside the box. There's a tremendous amount of self-confidence that comes out of a successful turn at one of these places. You know you know.


For the same reason people do research, which also "pays" a pittance compared to what those people could make in the market. And actually, it at least pays better than academia. Most of the time you will move to some smaller, cheaper town, sometimes even living inside a base. You can actually buy a house and have a family on defense wages. Try doing the same while doing research in academia.


Knowing how to do some highly advanced tech thing and building a successful, sustainable business are orthogonal skills. It is very common for somebody in one camp to overlook this.


Definitely with quantum there seems to be a huge advantage to having it while convincing everyone that you don't. I think we can move towards quantum-proof crypto, but it won't happen until we are convinced someone can spy on us if we don't.


I thought it was pretty widely accepted (if not outright known) that military applications, particularly hardware surrounding stealth or the ability to kill people, was about a decade ahead of anything comparable in the private sector. It's not surprising to imagine that gap widening over time as well.


Relevant McDonnell Douglas helicopter ad: (timelink) https://www.youtube.com/watch?v=M0R117k9tdA&feature=youtu.be...

Yes, this is from a video about cattle mutilation, but whatever.


The distinction is really about scale.

Yes generally, there are people in the government using technology well more advanced than anything available to the general public. However it's only a handful of people that are using it. Much of the high cost comes from these systems being at such a small scale.


Also earlier this year when Trump tweeted a classified satellite picture of some Iranian factory and everyone marvelled at the details and quality...

This is not surprising, though, considering that the US is the most technologically and scientifically advanced country and that it has been outspending everyone else by orders of magnitude in military (and related domains) research for about 80 years.


Worth noting that one of the choppers crashed because (I think?) of the bad aerodynamics.


In fact, it crashed because it was hovering over the (walled) compound, and those walls were modifying the airflow and causing a vortex ring state. Most helicopters would have suffered the same fate.

https://en.wikipedia.org/wiki/Vortex_ring_state


All it takes is the will to see so many go without. A price most Western nations won't pay.


If the US thinks it is valuable to keep some of its developments secret, why would you assume other governments don't do the same?


I see no such assumption.


Considering that some individual NRO launches have had budgetary costs of $900m to $1.1bn for a single satellite and launch vehicle, it would be fascinating to know what kind of technology they're dealing with today. Lots of rumors about KH-11s and Misty, not much concrete info, for obvious reasons (it's all TS and codeword and higher).

https://en.wikipedia.org/wiki/KH-11_Kennen

A few years back, an unused mirror assembly that was near identical to Hubble's was declassified.

https://www.americaspace.com/2012/06/06/top-secret-kh-11-spy...


https://en.wikipedia.org/wiki/2012_National_Reconnaissance_O...

Consider that the NRO very likely has at least a dozen spy satellites superior to Hubble orbiting as we speak.


This must be one of the purest apples-to-pears comparisons I've seen: comparing two similar-looking things intended to do completely different things and declaring one superior to the other without any hesitation. Surely this is the very ur-form of the expression?


What capabilities would those (supposed) satellites have? Reading the small handwriting on the shopping list I made, through the apartment window at night?


Keep in mind the Airy disk limit 1.22λ/d. Unless you're doing optical interferometry between satellites, 100mm resolution from 100km altitude, 1 microradian, requires an aperture about 800'000 wavelengths across. For blue light, that's a 300-mm-wide telescope; doable, although getting the optics to be that good is a serious challenge, motion blur is another issue, and such small sensor pixels don't get many photons and can be noisy. Now if you want to get to 10mm resolution, you need a 3-meter aperture; 1mm, like you're suggesting, requires 30 meters. These are much more challenging to launch.

There are some interesting things you can do with indirect light, but so far they don't help with this.
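
A rough sketch of that arithmetic (a back-of-the-envelope only; the exact numbers depend on the wavelength you assume, and this ignores atmospheric seeing, motion blur, and sensor noise):

  import math

  def required_aperture(ground_res_m, altitude_m, wavelength_m=450e-9):
      # Rayleigh/Airy criterion: theta ~ 1.22 * lambda / d
      # => d = 1.22 * lambda * altitude / ground_resolution
      theta = ground_res_m / altitude_m        # required angular resolution, rad
      return 1.22 * wavelength_m / theta       # aperture diameter, m

  for res in (0.1, 0.01, 0.001):               # 100 mm, 10 mm, 1 mm
      d = required_aperture(res, 100e3)
      print(f"{res*1000:6.0f} mm from 100 km -> ~{d:.1f} m aperture")

For blue light this gives roughly half a meter, 5 m, and 55 m; the rounder 300 mm / 3 m / 30 m figures above rest on slightly different assumptions, but the scaling with desired resolution is the point.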


I saw something once that explained how the powerful magnification seems silly until you account for the angles that things are at.

The example they used was that license plates are vertical, so to read them the satellite can't be directly above the car. It's actually at a much greater distance, and greater magnification is necessary because of the atmosphere and the distortion of reading something at an angle.


More that the satellites are in fixed orbits that are relatively low (250 km) and moving at thousands of km per second. Even if the target is directly below them, they need to rotate quite a lot to see it for more than a few seconds at a time. If the target isn't directly below, they need to look much farther sideways than down.


Slight typo there: "thousands of km per second", you probably meant "thousands of meters per second". Low orbit is on the order of 7-8 km/s. I did find the typo humorous.
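
As a sanity check on that 7-8 km/s figure, a minimal sketch (constants rounded):

  import math

  MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
  R_EARTH = 6.371e6     # mean Earth radius, m

  def circular_orbit_speed(altitude_m):
      # Circular-orbit speed: v = sqrt(mu / r)
      return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

  print(f"~{circular_orbit_speed(250e3)/1000:.1f} km/s at 250 km")   # about 7.8 km/s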


most spy satellites aren't geostationary?


No.


Perhaps SAR with very high resolution too - allowing them to image at night and through cloud cover.


The picture Trump tweeted was from a satellite with a resolution of at least 4 inches. They can definitely see your cell phone, but they can't read it- at least not from space.


Donald Trump already tweeted a supposedly classified spy satellite image taken from what is apparently one of those KH-11 sats:

https://www.wired.com/story/trump-tweeted-a-sensitive-photo-...


That was one of the more interesting black programs. Some of them are really boring.

There are times when you get through all the security checks and see the thing, and think, "they still have one of those running?" The famous Blue Cube in Sunnyvale, the USAF's satellite control center, was running 1960s technology well into the late 1980s.

Now it's a parking lot for Google buses.


  The famous Blue Cube in Sunnyvale, the USAF's satellite control center
I just got off westbound 237 at Mathilda an hour ago for the first time in years. It's so strange to see that entire side of the road leveled in the background.

Blue Cube: gone. Lockheed building 101: gone. Lockheed building 102: gone. Five years of my career without a single physical artifact left.


> "One of my early assignments was to lay out a printed circuit board. My electrical engineer gave me a schematic and identified each part type, be it a TO5 transistor, carbon resistor, tantalum foil capacitor, et cetera. My task was to fit all the components on a fixed size board and connect them according to the schematic with copper circuit runs on the back of the board."

This sounds like it would be an incredibly satisfying job, especially given that there was constant variation in what he had to build.


It’s interesting that things weren’t so siloed then, that a draftsman could be an ad hoc PCB designer.


TBF a lot of what he did sounds like work that would be done by the autorouter on most electronics CAD today.


Autorouting is still not generally used very much in PCB design. I suspect it could be used more than it currently is, but generally you spend a lot more time specifying design rules and the results are often quite poor.


s/siloed/high speed

At today's circuit layer counts, speeds, and frequencies, it's impossible to do it by hand.

Sure, you could train your average engineer to use Cadence/Mentor, etc but it's usually the design engineer that does capture and PCB routing.


Slow != impossible.


Draftsman was always a skilled job; it was never about merely producing a neat version of an engineer's notes, the way a typist would.


Board layout is still an in-demand field.


In demand if you already have the skill. Not in demand if you're a draftsman or have another unrelated skill and want to learn, or no?


>>> But what about those people in the monkey lab? They believed this spacecraft was for their payloads. And they will never find out they are just a cover for the CORONA program.

That as much as anything here seemed ... wasteful? cruel? Unlikely to fool anyone?


Context: secret US aerospace programs in the late 1950s and early 60s


I got into a conversation with my brother-in-law's uncle a few years back, and found out he spent a lot of time "back when" working out how to de-orbit the old Skylab station: calculating drag based on how it was oriented (and how to orient it) to encourage de-orbiting just so. Still kind of amazing to think about how he did that. It's like trying to line up just the right bowling ball shot on a 1000' long lane. Remotely.
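
For a flavor of the kind of arithmetic involved, a minimal sketch assuming a crude flat-plate drag model (the real analysis used attitude-dependent drag coefficients and proper atmospheric density models; all numbers below are rough illustrative guesses, not Skylab's actual figures):

  def drag_deceleration(rho, speed, drag_coeff, frontal_area, mass):
      # Atmospheric drag deceleration: a = 0.5 * rho * v^2 * Cd * A / m
      return 0.5 * rho * speed**2 * drag_coeff * frontal_area / mass

  RHO = 4e-12       # kg/m^3, very rough density near Skylab's altitude; varies hugely with solar activity
  V = 7700.0        # m/s, orbital speed
  MASS = 77000.0    # kg, approximate Skylab mass

  # Orientation changes the frontal area, and with it the decay rate:
  for area in (35.0, 160.0):   # very rough end-on vs. broadside cross-sections
      a = drag_deceleration(RHO, V, 2.2, area, MASS)
      print(f"A = {area:5.0f} m^2 -> ~{a:.1e} m/s^2 of drag deceleration")

Tiny numbers, but applied continuously for months they determine roughly where and when the thing comes down, which is why the orientation mattered.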


> He told me that what I was about to learn, I could not discuss with anyone who was not also cleared for this information. I could not talk about this with family, friends, or other GE employees forever.

I wonder if forever still applies...


Yes. If you haven't been debriefed on the topic, you haven't been told which parts of what you know are now public. I know a few ex-top-secret people who will admit they worked on projects that we now know a lot of information about - but they refuse to say more than that they worked on the project. They haven't been debriefed, so they won't even read a newspaper out loud if it touches their old project, because they legally cannot talk about it, even public information, lest they leak something that hasn't been made public yet.


As a person who has held various clearances through my career I wonder about this guy telling his story.

It is interesting to read, for sure, but he does say in it that he promised not to talk about it "forever".

I have been given similar talks.

Even if part of what you work on is declassified, it does not release you from the "forever" clause.

I assume he has requested and been given permission to talk freely about it.


2019, seen from the late 1950s, was approximately "forever". Remember, 2001: A Space Odyssey, filmed 10 years later, was taking place in 2001, already 18 years ago. We are that far in the future.


That is absolutely not how the United States government thinks or operates.


The programs he worked on are declassified, so it absolutely does.


That has nothing to do with your comment, but regardless it absolutely doesn't. Just because other people may be allowed to talk about it doesn't mean that you are.


Not every element of a declassified program is declassified, as there can be technologies shared with extant programs.


> I assume he has requested and been given permisson to talk freely about it.

I don't assume he had to, and it's good that people who are that old aren't afraid to talk about what they did then.

It's 50 years ago. For comparison, General Groves, the military leader of the whole Manhattan Project, published a book with a comparable amount of detail not even 20 years after the project, in 1962: "Now It Can Be Told":

https://en.wikipedia.org/wiki/Leslie_Groves


A while back, I watched and enjoyed a documentary about the Corona program mentioned in the article.

I believe this is it: https://www.c-span.org/video/?321255-1/discussion-cias-coron...


I love stories like this and have read several books around these topics, but I always love to ask for recommendations when something like this gets posted on HN. What is your favorite book on this topic?


"After my clearance was granted, I moved into the main drafting room with about sixty other draftsmen, designers, and lofts men. (A loftsman develops full scale patterns, or drawings, of complex shapes.)"

What does "develop full scale patterns, or drawings, of complex shapes" mean in the context that this is different from drawing blueprints like the draftsmen?


I believe loftsmen prepare full size drawings or patterns at 1 to 1 scale, whereas a blueprint may be at a reduced scale. I.e. a blueprint fits on a drafting table; the pattern created by a loftsman would go on a factory floor.


Comes from the shipbuilding industry: https://en.wikipedia.org/wiki/Lofting


Now it is common to use a laser tracker to take measurements and create full-size drawings. Spatial Analyzer really changed the game. NNSY has over 90 API laser trackers that they use.


The patterns would be for things like copy milling or cutting out complex pieces of sheet metal. Blueprints are schematic representations of parts/assemblies.


If you liked this, here's a talk by an engineer who worked on the camera systems of the next-generation black program after Corona, Hexagon: https://youtu.be/GtmtYlcPYYA


That was an awesome read. Thanks for sharing!


For more recent information on a similar topic: https://www.americaspace.com/2012/06/06/top-secret-kh-11-spy...


This is a good, lite technical read for those interested in stealth.

https://www.amazon.com/Radar-Man-Personal-History-Stealth/dp...


That looks interesting; adding that to my to-read list. Related to stealth, but less technical, is Ben Rich's "Skunk Works":

https://www.amazon.com/dp/0751515035/


The photograph: dat sliderule tho.

Good tutorial on using one from a channel that focuses on the subject:

* https://www.youtube.com/watch?v=uAGCDTtIahY



