The substance of this news release is incredible, but its style is also admirable. The authors managed to convey in seven paragraphs a concise and comprehensible explanation of how the team resolved the technical issue.
Ha! Thank you for mentioning it. I thought the same while reading it: terse, but not skimping on details, and with a certain style.
I really wish most other websites were like this. SEO and Google have really made the world a worse place.
That's what happens when you're freed from "SEO-optimized content".
It's also a culture thing I'd probably put under military philosophy.
I've worked with ex-military engineers, and you can tell from how they communicate. Writing technical reports and memos is a skill.
There's something very beautiful about Voyager's journey so far.
I hope one day when we're a true interstellar species we'll still keep tabs on it. The data may not be useful anymore, but it would be cool to imagine a year-3000 society with a little "Look at where Voyager is now :)" tool where you can see its path and where humans have colonized by comparison.
I picture a 19-year-old yobbo whose dad is a successful interstellar logistics businessman and, out of a guilty conscience for never having had time for his son, has bought him an overpowered spacecraft. The kid will either spray graffiti on the probe or misjudge his afterburner and half-burn it while trying to fly a very tight corner around it, in order to impress the two girls he has on board.
I’ve imagined a scene playing out, in sci-fi or for real in the distant future, where astronauts test out a new propulsion system by flying out towards Voyager 1 and catch up to it with ease. As they approach, they see the ancient probe grow larger and larger in their window until…
If my math is correct, we would already need to build a spaceship that can travel at 1/10th the speed of light to reach Voyager 1 within one week. It will be quite an engineering challenge for the future.
$ qalc '24.4e9 km / 1 week / c to %'
(((24.4 * (10^9)) kilometers) / (1 week)) / SpeedOfLight = approx. 13.4572816%
So this is using "billions of kilometers per week" as a speed unit (like km/h but bigger), then dividing by the speed of light (c) to get the fraction of c, and finally asking it to format the result as a percentage.
Edit: found a potentially easier (more intuitive) query giving the same result, as well as how to ask it what fraction of c this is (one part in how many). I'm still learning all the tricks of this tool :)
$ qalc '24.4e9 km / 1 week to c%'
[...] approx. 13.4572816(%c)
$ qalc '24.4e9 km / 1 week to 1/c'
[...] approx. 7.43092125 c^-1
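The same arithmetic in Python, for anyone without qalc handy (24.4 billion km being the approximate current distance):

  # What fraction of c do you need to cover 24.4e9 km in one week?
  c = 299_792_458              # speed of light, m/s
  distance_m = 24.4e9 * 1e3    # ~24.4 billion km, in meters
  week_s = 7 * 24 * 3600       # one week, in seconds
  speed = distance_m / week_s  # ~4.03e7 m/s
  print(speed / c)             # ~0.1346, i.e. about 13.5% of c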
Ah yes, the first Michael Crichton novel I read - and also the book that first introduced me to binary code (https://imgur.com/N4IjIYq)! And after watching (and then reading) Jurassic Park some 10 years later, it took a while until I realized that it was from the same author...
Yes, I too had the same disparity in recognizing the author as a young 'un, reading CONGO and SPHERE and so on .. Mr. Crichton sure had his finger on the pulse of the technological world we live in. What a wild series of stories he has created .. he was my favourite author until Messrs. Stephenson and Gibson came along ..
Staying on subject, I wonder if we will see a new adaptation of ANDROMEDA STRAIN some time. As a story it seems topical and relevant.
There was a Star Trek: Voyager episode in which the crew finds an ancient Earth probe on a planet. The episode is called Friendship One [0].
The plutonium-238 decays according to a curve, and the thermocouples are degraded as well by heat and radiation according to a curve of their own. So the power output drops rather predictably: "The radioisotope thermoelectric generator on each spacecraft puts out 4 watts less each year." [1]
The Voyagers will soon no longer have enough power to operate any of their instruments, but they'll have enough power to continue operating the transmitter (which serves as a science experiment of its own) into the 2030s. The power of the signal will drop before the electronics and control brown out (if it works as designed), and the signal might become too weak to detect before the probe completely stops operating. Such a fate befell Pioneer 11, which may yet still be warbling away at low power, no longer pointed at Earth; its carrier was last detected in 2003.
Even if science data won't likely be collected after 2025, engineering data could continue to be returned for several more years. The two Voyager spacecraft could remain in the range of the Deep Space Network through about 2036, depending on how much power the spacecraft still have to transmit a signal back to Earth.
That FAQ covers a lot of interesting ground (though it talks about 2020 in the future tense).
After Voyager 1 took its last image (the "Solar System Family Portrait" in 1990), the cameras were turned off to save power and memory ...
I didn't realize that was the last image.
... it is very dark where the Voyagers are now. While you could still see some brighter stars and some of the planets with the cameras, you can actually see these stars and planets better with amateur telescopes on Earth.
> the cameras were turned off to save power and memory
Since it’s powered by an RTG, how does the power get “used up”? I assume that this refers to the available power budget at a given moment versus some sort of expendable power reserve.
It's the first question at the top of that ^ FAQ page. One of their comments is:
"Mission managers removed the software from both spacecraft that controls the camera." Makes me wonder if that unused RAM came in handy lately!
It's radioactive, so the half-life has a serious effect. The half-life is 87 years, so it hasn't even been through one half-life yet. I guess it wasn't really all that overdimensioned - but then, it wasn't meant to last this long.
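A rough sketch of that math, assuming the oft-cited ~470 W of combined electrical output at launch (my ballpark, not an official figure); the FAQ's 4 W/year folds thermocouple degradation on top of the pure Pu-238 decay:

  # Pu-238 decay alone vs. the observed ~4 W/year decline.
  # 470 W at launch is an assumed ballpark, not an official figure.
  half_life = 87.7  # Pu-238 half-life, years
  p0 = 470.0        # watts (electrical) at launch, 1977
  for years in (10, 25, 47):
      decay_only = p0 * 0.5 ** (years / half_life)
      observed = p0 - 4.0 * years  # rule-of-thumb linear decline
      print(years, round(decay_only), round(observed))
  # At 47 years: decay alone would leave ~325 W, but thermocouple
  # degradation brings the estimate closer to ~280 W.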
> the transmitter (which serves as a science experiment of its own) into the 2030s.
One of the longest-running scientific experiments, too. It's already about as old as the Queensland pitch drop experiment was when Voyager 1 was launched.
Even if you construe this phrase as unequivocally negative, you can't scrub negative emotions and intentions. If you were able to actually scrub some word or phrase from the language, another would take its place, like with all the racial and intellectual disability slurs.
Yes; I'm basically saying one should reconsider before ever using this phrase, as it drips with condescension, and therefore is not conducive to productive discourse.
If we can harness all of the energy and mass available in our solar system, we [1] can likely compute more than several galaxies full of classical humans. We might even begin to test the edges of physics.
Maybe we don't need to go anywhere at all. Maybe we [1] have all we need right here to become literal gods.
[1] Our digital descendants. Humans are very much fit to the gas exchange and metabolism envelope of our gravity well.
> compute more than several galaxies full of classical humans
Unfortunately the simulated classical humans in your Matrioshka Brain will want to mine Bitcoin, which means that our digital descendants will have to become a true interstellar species anyway in order to convert the Laniakea Supercluster into coin-mining computronium.
> If we can harness all of the energy and mass available in our solar system, we [1] can likely compute more than several galaxies full of classical humans. We might even begin to test the edges of physics.
I grant that. But why would that keep us from pressing on?
If we have more resources in general, we will also have more resources for interstellar adventures.
I'm sure we will send interstellar probes of some sort. But I think that assuming we'll want to send lots of entities vast distances in space and time might be a lot like thinking we'll have flying cars.
We already have quite a lot to exploit in the region around us.
We assume we understand our future wants, needs, costs, and capabilities. (Granted, any future thinking is also subject to this, including my own.)
> We already have quite a lot to exploit in the region around us.
I don't disagree.
> We assume we understand our future wants, needs, costs, and capabilities. (Granted, any future thinking is also subject to this, including my own.)
Even if 99.99% of people are content with what the solar system has to offer, there will still be billions left to go and explore and settle outside for whatever needs and reasons they have. Even if those needs are imagined, or the reasons might be purely ideological or religious.
> But I think that assuming we'll want to send lots of entities vast distances in space and time might be a lot like thinking we'll have flying cars.
If only a tiny fraction of people will _want_ this, that's enough.
The way they manage to squeeze every last resource out of the probe and keep it working year after year is an astounding achievement, pleasantly mind-blowing.
It's important that the know-how behind this type of maintenance never disappears. I hope the electronics features this team wished they had available on the probe make it into new designs.
I should see whether there's documentation of what they moved and what they replaced. I imagine there's "plenty" of room to do that (in the sense that there are probably programs that are no longer mission-relevant because they controlled systems that have been shut down), but I'd love to know what got tossed.
Voyager has been an inspiration for generations of engineers.
Bless you all that worked on it. Thank you.
Recently I designed a Voyager-inspired secret Easter egg into the surgical robot I worked on: a gold-plated plate with everyone's signature engraved on it. I gave everyone one as a surprise Christmas gift.
It's the Maestro from Moon Surgical. It's done over 200 surgeries, all successful; so far (no whammy) it's had 100% reliability across the first 6 systems we built. We designed it (hardware-wise) with only 3 engineers, including me, and we hand-built the first ones right here in San Carlos, CA. The company is based in Paris and has a whole interesting history there as well.
No, but I worked on one of those too: J&J's Ottava, which is designed to compete with the one doing the grape surgery, Intuitive's da Vinci. I've had the pleasure of playing around with a da Vinci Xi - it's truly amazing.
With some amount of luck, Voyager might last ten more years beyond that:
"Even if science data won't likely be collected after 2025, engineering data could continue to be returned for several more years. The two Voyager spacecraft could remain in the range of the Deep Space Network through about 2036, depending on how much power the spacecraft still have to transmit a signal back to Earth."
What a sweet film. Thank you for the link. This whole time when I heard about work on the Voyager mission I assumed there was a larger team, with fewer single points of failure.
> The team started by singling out the code responsible for packaging the spacecraft’s engineering data. They sent it to its new ___location in the FDS memory on April 18. A radio signal takes about 22 ½ hours to reach Voyager 1, which is over 15 billion miles (24 billion kilometers) from Earth, and another 22 ½ hours for a signal to come back to Earth.
Talk about a slow feedback loop! And I get frustrated when I need to push code to a repo to test things in CI...
Downlink from Voyager to Earth is currently 40 bit/s but can be up to 160 bit/s. The signal is received at about -160 dBm, or around 100 _zeptowatts_ (1e-19 W).
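(dBm is decibels referenced to one milliwatt, so the conversion is a one-liner:)

  # Convert received power from dBm to watts (0 dBm = 1 mW).
  def dbm_to_watts(dbm):
      return 10 ** (dbm / 10) / 1000

  print(dbm_to_watts(-160))  # 1e-19 W, i.e. 100 zeptowatts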
Have you investigated this or are you just asking? I imagine if you wish to learn the answer it is a few simple searches away. And by “imagine” I mean, it is.
Posting a question in a forum has its benefits though. A bunch of drive by folks end up picking up information they would never have gone through the trouble of researching themselves.
> A radio signal takes about 22 ½ hours to reach Voyager 1, which is over 15 billion miles (24 billion kilometers) from Earth, and another 22 ½ hours for a signal to come back to Earth.
It has taken 46 years to get 22.5 light hours away from Earth!
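Which works out to a humbling average speed - a back-of-the-envelope sketch using the article's ~24 billion km:

  # Average speed and one-way light time over the mission so far.
  c = 299_792_458                       # m/s
  distance_m = 24e9 * 1e3               # ~24 billion km, in meters
  mission_s = 46 * 365.25 * 86400       # ~46 years since the 1977 launch
  print(distance_m / mission_s / 1000)  # ~16.5 km/s average
  print(distance_m / c / 3600)          # ~22.2 hours one-way light time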
Yes, this is why I always liked 'planet walks' like this [1]
If you see images of the planets or the solar system, they're mostly not to scale. Walking such a path gives you a bit of a feel for how large everything really is, especially if the planets along the walk are to scale too.
Even our solar system is mostly empty, and the distance to the Moon is microscopic by comparison.
Walking such a path is not the same, but it's a bit like [3], if one is patient (which is difficult on the internet) and doesn't scroll manually but only via the little "c" in the lower right corner, which scrolls at light speed.
Yep, 45-year-old hardware, still getting software updates. Hey Apple, JPL is close to you - can you get someone to bike over and see how they do it? Thanks!
Planned obsolescence [1] is the problem. Apple could certainly create modular devices designed to be used and piecemeal modified/updated indefinitely. But it's way more profitable to sell you a brand-new phone every couple of years than a modular device that you could update as desired, especially if the parts/connections are standardized, meaning a [normalized] third-party market for batteries, screens, etc. could flourish.
Devices would last way longer, prices would be much lower, and it'd be unimaginably better for the environment. But Apple wouldn't make as much money, and the government's GDP growth figures wouldn't be as high. So clearly it's a terrible idea.
Modular devices aren't free in terms of engineering tradeoffs. The interconnects that support modularity add their own unreliability (connectors can fail more easily), weight, and size. Additionally, the connectors themselves become obsolete anyway, and at some point really expensive components start to be bottlenecks (e.g. upgrading the display requires a new connector, or more processing power/bandwidth than the CPU/RAM/GPU can provide). You see this in desktops, where modularity is still fairly limited in extending the lifespan of your machine - most people buy a new computer after 5 years anyway rather than upgrading components piecemeal, because the new computer is cheaper per unit of performance.
According to https://en.wikipedia.org/wiki/Modular_smartphone, the modularity of the FairPhone helps it live 5-7 years. Notably, that's not much different from the 6-8 years an iPhone is supported for.
It's easy to blame planned obsolescence on a single party instead of viewing it as a consequence of natural marketplace dynamics. You may of course adjust those dynamics by having the government impose a different set of constraints to produce a different outcome. Politically that can be difficult, though, especially in America - and not primarily because of GDP, but because people here generally distrust the government.
Right to Repair is a reasonable proxy for this entire topic, and it's overwhelmingly supported - 74.5% of people being in favor, 1.9% opposing. [1] The reason the government is distrusted is specifically because of issues like this. It's good for society, good for the environment, extremely widely supported, but companies like Apple lobby actively and aggressively against such progress. And they generally win. There's been some glacially slow progress at the state level, but Federal support is unsurprisingly nowhere to be seen.
And you're dramatically understating the benefits of modularity. PCs are indeed the perfect example, and no - people are not replacing them every 5 years. You can see a Steam hardware survey here [2]. And that's for gamers, who are going to iterate their hardware faster than average! Even for video cards, one of the most likely to fail and most likely to be upgraded parts, the GTX 1060 (released in 2016) is the 6th most popular card, making up about 4% of all players (contrasted against 7% for the #1 card)! Without planned obsolescence, most things can last for extremely long periods of time. And those that do have, more or less, inescapably high failure rates (like old hard disk drives) can and should be standardized and able to be replaced/upgraded.
I agree, repairability is a more interesting requirement than modularity. You can get the former without requiring the latter, and the latter without achieving the former (e.g. proprietary connectors for modularity that don't have any broader adoption).
As for lack of trust in government, that's an overbroad generalization as to its root. That's one class of reasons for a specific population group. There are others who don't share those reasons (e.g. the Bundy/BLM standoff had nothing to do with the government failing to implement common-sense ideas, and the people involved share zero ideological ground with those who are skeptical of the government for the reasons you mentioned).
Repair and modularity tend to go together, especially in the face of bad actors. Otherwise you get this scenario where companies work to intentionally create hardware that is likely to be damaged during repair/replacement efforts. The hardware equivalent of Microsoft intentionally trying to 'integrate' Internet Explorer into Windows.
As for government trust, I think it can almost entirely be explained by the government behaving in an ever more opaque fashion, and constantly lying on top of that. Fool me once, and all that. In the Kennedy era, all the way up to his assassination, trust remained extremely high - well above 70%. Since then it's been spiraling downward to the pitiful levels we're at today - about to enter the single digits. [1] It's practically an achievement in itself to be that distrusted. I mean, can you imagine a time when somebody could say, non-ironically, "Ask not what your country can do for you, ask what you can do for your country"? Anyhow, I digress. It's just a topic I feel strongly about.
I agree about the opaqueness likely being the root motivating factor regardless of ideological bent.
I'm not sold on modularity, and the EU seems to be dealing effectively with malicious compliance. Even the details of repairability are tricky, because you're internalizing an externality - and it can be debated whether that externality is a better tradeoff for a society that benefits from a faster pace of innovation when companies don't have to support products forever (support costs for older products are very real in engineering and supply chain orgs). Think about how very different phones today are from the 1st gen. That being said, now that things have stabilized, maybe forcing phones to standardize a bit would be OK, provided manufacturers are still allowed to innovate (e.g. such requirements wouldn't be relevant to VR, which hasn't reached an innovation plateau the way phones and laptops have).
I'm not entirely sure what you mean here at all. The EU seems to be doing basically nothing about malicious compliance. Apple's DMA 'compliance' is actively hostile and ensuring it's all but impossible for anybody to actually take advantage of the freedoms the DMA is supposed to mandate. This page [1] offers some of the fee schedules developers releasing on alternative marketplaces would face. If anything it'll just turn into another GDPR where they'll occasionally fine Apple some token amount (relative to their revenue), which Apple will drag on through the courts as long as possible, and ultimately just shrug and write off as a business cost.
First, they are tackling repairability, and it seems fairly reasonable, as most EU legislation seems to be [0]. Repairability is a tough topic, and not only because there's opposition for BS reasons. Think about a CPU - are you arguing that you need to be able to repair a broken pin on the socket? I hope not, because you recognize the physical and economic challenges that entails, and that making it repairable likely drives up the cost of the CPU in the first place (if someone wants to try, they should be able to, but is the manufacturer on the hook for offering repairs if a non-standard repair attempt fails?). And think about how much more integrated the SoC has become, with the main CPU package absorbing almost all peripheral chips - in that context, "repairing" often effectively means buying a whole new, super expensive component to begin with (and in my experience third-party repairs often skimp by using substandard components and don't clearly communicate it). So basically the number of repairable parts in electronics is shrinking due to improvements in manufacturing.
As for DMA malicious compliance, it's still waaaay too early to say they're doing nothing. There's clearly a lot of hubbub about it and it's on their radar. Meta and Microsoft have gotten into the game, lobbying the EU to do something, so there's countervailing pressure [1].
> According to a new report from the Financial Times, two of the biggest critics of Apple's new App Store rules are officially lobbying the EU to reject the iPhone maker's crafty new App Store terms.
> The EU could potentially fine Apple for non-compliance when the law goes into effect if it determines that these policy updates do not embody the spirit of the DMA. Or the DMA could reject Apple's App Store proposal entirely and force the company to come up with a new DMA-compliant policy.
But basically, expecting a regulatory body to start enforcing a fairly complex piece of legislation less than one month after compliance became mandatory (it took effect March 2024) is asking a bit much. It takes time to investigate this stuff so they can get ready to bring a court case, and they'll negotiate with Apple beforehand to see if a resolution can be reached without a lawsuit in the first place.
But the EU already hit Apple with a massive 2B antitrust fine:
Yes, I'm sure they'll fine them, and Apple will laugh. That 'massive' $2 billion fine is the world's most gentle slap on the wrist for a company of Apple's scale. Their European revenue was $89 billion in 2021 and had been growing steadily, so it's easily upwards of $100 billion by now. [1] And their global revenue is $383 billion. They likely earned vastly more than $2 billion from the bad behavior alone.
Imagine you earn $100k a year: the equivalent would be a $520 fine - a speeding ticket. And it won't even be the full $520. Apple will appeal and drag it through the courts for as long as possible, both eroding the cost through inflation and potentially getting it reduced outright. So you create a scenario where a company can engage in all the behavior it wants and merely has to deal with a speeding ticket every once in a while, while the bureaucrats pat themselves on the back for really cracking the whip. This is not even close to a deterrent.
It's the ongoing practice of treating massive corporations with kid gloves, while behaving brutally towards small businesses or individuals - when, in an ideal world, it should be the exact opposite. And again, we're back to the trust issue. Because obviously the reason they're being so gentle is because they want to have the headline of really cracking down on e.g. Apple, but don't actually want to risk Apple withdrawing from the EU. And that conflict of interest will not go away, and thus will guide their future actions as well.
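(The $520 figure above is straight proportional scaling, for what it's worth:)

  # Scale a $2B fine on $383B of revenue down to a $100k salary.
  fine, revenue, salary = 2e9, 383e9, 100_000
  print(salary * fine / revenue)  # ~$522, i.e. a speeding ticket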
> Think about a CPU - are you arguing that you need to be able to repair a broken pin on the socket?
No, because a CPU is not a line-replaceable unit on Apple hardware. Nobody - not Louis Rossmann, not Apple-certified repair stores, not street shops in Shenzhen - is doing SoC-level repairs. It is a complete red-herring example that almost makes me question how seriously you take this, while accusing others of using "BS reasons."
> So basically the number of repairable parts in electronics is shrinking due to improvements in manufacturing.
That's really tangential to the point. Integrated components can still be replaced with donor parts (or OEM replacements) if each component isn't DRM-matched to a phone. Apple goes out of their way to deliberately stop their systems from being repaired by others, and it doesn't stop at miniaturization.
> it's still waaaay too early to say they're doing nothing.
If someone is doing nothing, you're never too early to call them out on it. We're not waiting for Apple to get the memo, they have had three years to anticipate this legislation and the only thing they seem interested in doing is pissing and moaning in a PDF while trying to get their last petty jabs in against Spotify and Epic: https://developer.apple.com/security/complying-with-the-dma....
Apple is doing nothing, you can't piss on my back and tell me it's raining.
The average vape has more processing power than Voyager, and the iPhone is orders of magnitude more complex. With that said, it takes skilled engineers to squeeze perfectly crafted code into such a tiny platform from the 70s.
I understand what you're getting at, but the 'average' vape pen is essentially a disposable battery and temperature sensor with no additional inputs or features.
After reading some details about Voyager, I have my doubts that a disposable vape has more computing power [1]. Maybe the higher-end devices with programmable displays and temperature settings?
A Pinecil (digital soldering pen) is probably a better example. Its BL706 MCU is
"a low-power, high-performance IoT chip that supports BLE/Zigbee wireless networking, ... BL702 has built-in RISC-V 32-bit single-core processor with FPU, the clock frequency can reach 144MHz, has 132KB RAM / 192KB ROM / 1Kb eFuse storage resources, supports external Flash, and optional embedded pSRAM."
Either way, it's clear that we (well, JPL) can build extremely powerful and sophisticated systems with relatively small computers, suggesting that resource constraints can sometimes be a source of stability and creativity.
Yep, I had fun watching the Ben Eater videos where he builds a retrocomputer out of them. I even bought some of the chips to build a simple 4-bit counter with up/down buttons. It was a real revelation to understand that concept, and I ended up looking at an Apple I motherboard (https://en.wikipedia.org/wiki/Apple_I#/media/File:CopsonAppl...) and noticing the regular array of 74xx chips connected by elegantly laid-out wires.
This silicon and lithium could be put to much, much better uses (and recycled as well) than a few puffs of a substance only meant to make people addicted.
Safari does open Amazon.com in HTTPS.
But 256MB of RAM is really small for proper browsing.
Anyway, apart from the browser, comics reading, TuneInRadio, old games and digital photoframe apps still work ok.
(In case you have the strange idea of buying an outdated iPad to play with, get at least an iPad 2 - which is indeed stupidly obsolete, but faaaaaar more powerful than the original iPad :)
Good lord could you imagine the meltdown HN would have if Apple had taken this option to solve the old-batteries-support-lower-peak-current physics problem?
“Your device battery no longer supports the camera. Or the backlight on the top third of the screen. But it runs at full speed otherwise!”
The current iOS 17 is compatible with the iPhone XS, which is from 2018. That's 6 years for a piece of tech that the majority of people replace after < 4 years...
Also, to nitpick: Apple is based in Cupertino (Northern California) and JPL is in Pasadena, in the Los Angeles area - so it'd be quite a bike ride lol
I have an iPad Air from 2014 that hasn't been able to get updates since iOS 12.5, so 2019. The electronics are fine; a browser update would be awesome. Would people still replace phones in < 4 years if they could have current software? 4 years on an $800 thing doesn't seem like a good deal for most of us.
Sorry about the distance thing. I’m from Philadelphia, so all of that (vaguely waving west) has got to be bikeable. But I’ll remember the /s for next time.
The Google Pixel 8 series receives 7 years of updates.
And unlike Apple, they are upfront about it, which is important. Maybe your iPhone gets 8 years of updates, maybe 6 - no one knows.
I'm not sure why anyone would take Google seriously when it comes to sticking to their word; Google is merely promising to deliver in the future what Apple has already been doing in the past, since that iPhone SE is already in its eighth year of support.
Remember Google promising to never link your advertising profile with the other data they had about you when they bought DoubleClick?
Google literally testified before Congress that they wouldn't link their user data with advertising profiles when they bought DoubleClick. That didn't stop them from doing it anyway.
I think the fact that Apple started delivering long term support over a decade ago is much more compelling.
I think the fact that Apple has refused to unlock iPhone bootloaders for over a decade suggests that they can't naturally compel hardware upgrades without artificially deprecating software.
In the time since both devices shipped in 2016, refusing to provide support after three years has been a much more effective way to make users toss the original Pixel than Apple's strategy of continuing to offer software support for the original iPhone SE through to the current year.
Pretty incredible feat of engineering (both back when it was launched and now). Does anyone know its current purpose? I’m curious if there’s anything we’re actively using it for or if it’s just a matter of “look for surprises”.
The magnetometers and charged particle detectors are still operating. So they are measuring the galactic magnetic field, and cosmic rays and the gas in interstellar space. The results are more or less what was predicted, though the exact boundary of the sun's influence was only discovered when Voyager 1 and 2 crossed over it in 2012 and 2018 respectively. Beyond that, yes, they're basically assuring us the sun is still there and that space is very empty. I don't think anyone expects the interstellar gas to vary in density on the timeframe that the Voyagers will be able to observe it but, I guess we'll find out!
Yep... days' worth of work to get Python code I wrote 3 years ago working again because of all the 'bitrot.' Can't imagine how much work it must be for them to produce new binaries for these old systems from modern computer hardware.
Although I suppose it could actually be easier, depending on how the code works - perhaps it's just simple bare-metal assembly without the approx. 10^99999 libraries a modern Python stack has.
It's probably a lot easier in a way, because they don't have to worry about external dependencies changing at all. Modern code is a real PITA that way.
What's hard about their work is 1) it's really, really slow to communicate with, so you can't iterate quickly, 2) the tech is really old and unlike today's stuff, so it's very specialized ___domain knowledge.
The lead engineer from the 70's is actually in Voyager 1's fuselage. It's cramped but they seem to be doing ok! They're just taking pictures with an old Canon point and shoot.
Sucker for any news about the voyage of the Golden Record (almost but not quite a CS Lewis title)! For fun, I wrote a short story two years ago about a top-secret "Voyager 3" mission (and the probe's unexpected return to Earth): https://f52.charlieharrington.com/stories/voyager-3/
Nice one!
Voyager carries the hopes, aspirations and fantasies of many of us space romantics.
On the technical side of things, other companies do live patching too - Ericsson telephone exchanges, for example, whose code can be altered "live" while operating, in order to fix or enhance the software with zero downtime ;-)
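A minimal sketch of the idea in Python, assuming a hypothetical module handler.py with a process() function (real exchanges - Erlang's hot code loading, for instance - do this far more robustly):

  # Live-patch a running process by reloading a module in place.
  import importlib
  import handler                  # hypothetical module with process()

  def serve(request):
      return handler.process(request)   # resolved at call time

  # ... later, after fixing handler.py on disk, with zero downtime:
  importlib.reload(handler)       # subsequent serve() calls run the new code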
I wonder if a "modern" probe from 2024 could ever have a similar lifespan.
With systems a hundred times more complex, built by a NASA that is a fraction of what it was 50 years ago, I'd guess a modern-day Voyager wouldn't even make it to the edge, let alone continue to function.
All I can say is: wow! This probe refuses to die, despite being built with almost 50-year-old technology. Amazing engineering by the people who designed and built it - and even more amazing are the people who continue to debug software problems over a round trip of nearly two light-days.
The paper was written by Dijkstra, and even he didn't like how it became a kind of mindless mantra instead of the warning against spaghetti code it was meant to be. He never meant that you should never, ever use GOTO.
Although that's one of the many possible explanations of the Fermi paradox [0], I prefer to think that the real reason we haven't discovered anyone (or been discovered ourselves) is that we're limited by the speed of light.
The distances are so vast, almost unfathomable, that we need Faster Than Light means of traveling. Perhaps I’m being naive or romantic, but I prefer to think this is the real reason :-)
Also as particles spread out, there's less of a chance of interaction, which by extension makes it so there's less of a chance that a system of electrical impulse that fires in a synapse would exist over large distances. This system, of course, would be the catalyst for producing such thoughts as "I wonder if we're alone in the universe."
That's not to say you can't send dense information over larger distances with a shorter wavelength (re: radiation; gamma rays)... it just means the flying saucers with little green men landing on earth are probably out.
The speed of light and great distances are indeed a limiter, but the universe has been around for billions of years. Even with the great distances, an interstellar civilization that's been around for millions of years would have had plenty of time to find us by now.
The Voyager project is an amazing feat for humanity. But I wonder, does NASA take into account the repercussions of sending a probe deep into space? I know space is big but I can only think of the dark forest hypothesis. What's the plan for further space exploration?
Voyager 1 is currently 2.4e10 km from Earth. Trisolaris - sorry, I mean Proxima Centauri - is 4e13 km away [2], so in the (almost) 47 years since its launch, Voyager 1 has covered a whopping 0.06% of the distance. And it's not even headed that way.
Meanwhile, radio, TV and radar have been advertising the presence of a new technological civilization on Earth for more than a century. That means any entity worth worrying about within a 100+ light year radius - a distance 44+ times longer than to Proxima - already knows about us. And that sphere keeps growing in all directions at the speed of light, or roughly 18000 times faster than Voyager 1 is moving.
If you really want to worry about the Dark Forest, it would be more justified to ask if your local radio station takes into account the repercussions of sending commercials and TOS reruns into deep space.
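For anyone who wants to check those numbers (using ~17 km/s, Voyager 1's approximate speed):

  # Voyager 1's progress toward Proxima vs. our expanding radio sphere.
  voyager_km = 2.4e10       # current distance from Earth
  proxima_km = 4e13         # distance to Proxima Centauri
  voyager_kps = 17          # Voyager 1's speed, km/s
  c_kps = 299_792.458
  print(100 * voyager_km / proxima_km)  # ~0.06% of the way there
  print(c_kps / voyager_kps)            # ~17,600x faster ("roughly 18000")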
Inverse square law. With the power we're transmitting at, signals become just background noise relatively quickly. They're nowhere near strong enough to be detectable at e.g. Proxima Centauri. This is what makes the radio signals we do detect, like fast radio bursts, so interesting. So for instance, the furthest signal we've detected is called FRB (fast radio burst) 20220610A, and that millisecond length signal came from a source with output energy equivalent to decades of the Sun's entire output.
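To put a number on that falloff, here's a sketch for an isotropic (non-directional) transmitter - the relevant case for ordinary broadcast leakage. The 1 MW transmitter is an illustrative assumption on my part:

  # Flux from an isotropic transmitter at interstellar distance.
  import math
  p_tx = 1e6               # assume a powerful 1 MW transmitter
  ly = 9.461e15            # one light-year, in meters
  d = 4.25 * ly            # distance to Proxima Centauri
  flux = p_tx / (4 * math.pi * d ** 2)
  print(flux)              # ~5e-29 W/m^2 -- vanishingly faint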
This "law" is only for point (or spherical) sources, i.e. those emitting evenly in all direction - the area is increasing as a square of distance and thus signal power drops accordingly. With lasers, directional antennas, phased array antennas [1] the signal won't decay that fast. For instance with lasers it will be just a matter of alignment of internal elements to obtain a parallel light beam which doesn't lose power over distance (obviously there are other factors - atmosphere, particles in the vacuum, et al which will result in diveregence anyway).
In fact some billionaires [2] invest into using telescopes with fast sampling cameras (in this case IACT [3] telescopes used normally to detect gamma-rays by their interaction with the atmosphere) to detect flashed of extraterrestrial lasers.
>This "law" is only for point (or spherical) sources, i.e. those emitting evenly in all direction
We're not talking about lasers and directional antenna here, we're talking about humans using radio communications to talk to each other on Earth over the last 100 years, and whether that's detectable from great distances.
military radar transmissions set up during the Cold War to detect incoming ballistic missiles have the power and frequency characteristics to be detected over hundreds of light-years – and have already broadcast our existence to any aliens within around 60 light-years of the Earth [1]
But never underestimate the power of television:
The most detectable and useful escaping signals arise in a few ultra-powerful military radar systems and in normal television broadcasting. A model including over 2,000 television transmitters is used to demonstrate the wealth of astronomical and cultural information available from a distant observer's careful monitoring of frequency and intensity variations in individual video carriers (program material is not taken to be detectable). [2][3]
once the Square Kilometer Array is completed in Australia and Africa, it would be able to detect the current TV carrier wave radio leakage at a distance of about fifty lightyears for objects in the southern hemisphere sky [4]
To scale it up, if you're an intergalactic entity, all you'd have to do is make a note of objects pulsing/resonating where they were not before.
"Oh, looks like Earth just got radio waves, which means there's probably life there."
From there you just monitor the frequencies (Hz) the planet puts out, and when they start transmitting at higher frequencies, you know it might be time to meet up.
"Oh they're sending photonic gamma rays as messages now instead of radio waves, I'm gonna go say Hi."
How many Starships would it take to go and bring Voyager 1 back to earth?
If each ship was used like an expendable stage, and we were willing to use 100 ships, how long would it take to catch up to Voyager, stop, and return to earth?
Sorry, I just have a silly question: what would it take to send new probes out there? Voyager 3 and 4, for example, to follow the same path (more or less, sans planet alignment) V1 and V2 followed, but with better hardware of course.
You can't. Voyager's launch date coincided with a planetary alignment allowing for gravitational slingshotting out of the solar system. We'd have to wait for the next alignment.
If you only want a gravity boost from Jupiter and Saturn (like V1), I wonder if you'd have to wait as long - say, every 20 years instead of every 176?
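A quick check of that guess - it's just the Jupiter-Saturn synodic period, i.e. how often the two planets return to the same relative alignment:

  # How often do Jupiter and Saturn realign?
  jupiter, saturn = 11.86, 29.46          # orbital periods, years
  synodic = 1 / (1 / jupiter - 1 / saturn)
  print(synodic)                          # ~19.9 years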
There's basically no point scientifically for doing the four-planet flyby again. Since Voyager, we've already done much better at Jupiter and Saturn with years-long orbiter missions. A quick flyby wouldn't get us anything we don't already know at those. Voyager gets the mindshare because it was first and the four-way slingshot is fun to visualize, but the reality is that Galileo and Cassini and Juno delivered thousands of times more data.
We could use more investigation at Uranus and Neptune, but we'd get much more out of extended orbiter missions to those rather than another quick flyby. A Uranus orbiter is currently one of the higher priority missions in planning, and there's a launch window for a Jupiter-Uranus slingshot in 2034 or 2035.
(What I wonder is, how much planning do these things need? Why can't we just launch another copy of Cassini to Uranus and skip all the expensive design? You'd need some changes to antennas and power supply for a more distant planet, but the scientific instruments and computing platform should just be reusable designs.)
Going out further than Voyager might be interesting just for studying conditions outside the inner solar system, though I'm not sure how much there is to observe unless you go an order of magnitude faster than Voyager and try to reach the Oort cloud.
If you mean just to beat them on distance, we could probably launch something capable of it in a decade or two by getting more serious about nuclear propulsion and using some slingshots to pick up speed further.
This is so cool. These guys are steely-eyed missile men. (Or women but I know what is most likely - lol). Remotely repairing a 45 year old probe 24 billion kilometers away. Wow.
Must be difficult debugging a system with a 45-hour round trip for each step of the way. And here I thought debugging a system on customer premises was tough. Hats off!
Worse than the round-trip is that there's no second chances in some scenarios. If you mess up the wrong part(s) of the system, it's bricked with no way to recover it.
"The team discovered that a single chip responsible for storing a portion of the FDS memory — including some of the FDS computer’s software code — isn’t working. The loss of that code rendered the science and engineering data unusable. Unable to repair the chip, the team decided to place the affected code elsewhere in the FDS memory."
Just more proof that we may have gained a lot but also lost something in the pursuit of modernity: on modern systems, direct memory access is discouraged if not outright prevented by the underlying OSes, and this hack would not have been possible.
The modern equivalent to this would be an embedded system with an RTOS, where you do get full control of memory, because you are the OS. We just have nice abstractions on top of that for the most common use cases, since you very rarely need that precise of control over system timings or memory allocation.
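As a toy illustration of the kind of fix described in the article - relocating a routine out of a failing memory region and repointing whatever called it. Every address, size, and name here is invented; the real FDS layout is nothing like this:

  # Toy model: move code off a failed chip's address range and
  # patch the caller. Purely illustrative -- not the real FDS.
  memory = bytearray(256)                    # stand-in memory image
  BAD, SPARE = 0x10, 0xA0                    # failed vs. known-good regions
  routine = bytes([0xDE, 0xAD, 0xBE, 0xEF])  # rebuilt on the ground, uplinked
  memory[SPARE:SPARE + len(routine)] = routine
  jump_table = {"pack_engineering_data": BAD}   # used to point at the dead chip
  jump_table["pack_engineering_data"] = SPARE   # now points at the relocated copy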
>The team discovered that a single chip responsible for storing a portion of the FDS memory — including some of the FDS computer’s software code — isn’t working.
No one thought of having backup computers on the spacecraft?
Most of the computer hardware is duplicated, but much of it has already failed or failed earlier - remember, this is 50-year-old hardware. A lot of the logic is TTL and discrete components, which are far larger than modern equivalents would be.
It's not just 50-year-old hardware: it's hardware that's been subjected to cosmic radiation for 50 years, part of that outside the solar system (so presumably even higher).