I wrote up the course a few years ago for easy reference. I need to update these notes, as I have about a year's worth of (minor) typos that people have pointed out [which is hugely appreciated], but in general they seem to have been well received.
So I've always had this (entirely crackpot) idea that the Tunguska Event is actually the place and time in space/history where you'd want to test the first time-travel device. From a 'test system' design perspective:
- You'd want it to be somewhere isolated to avoid casualties, especially if there was a radiation risk (which you wouldn't know).
- You wouldn't want people to be able to analyze the immediate wreckage with any kind of high fidelity because there might be signatures of whatever you sent back (or however you sent it) that might change the course of history/technology.
- You'd want there to be fairly detailed recordings (ideally photos), and for it to persist into modern history, not just be a weird anomaly of 'the past'.
- You wouldn't want it to be interpreted as some kind of military action.
To be very clear, I don't actually believe this is what happened, but I quite like the idea...
Another event that took place in Russia and led to a variety of crackpot theories: the Dyatlov Pass incident. [0]
> The incident involved a group of nine experienced ski hikers from the Ural Polytechnical Institute who had set up camp for the night on the slopes of Kholat Syakhl. Investigators later determined that the skiers had torn their tents from the inside out. They fled the campsite inadequately dressed, some of them barefoot, under heavy snowfall and at temperatures below freezing. Six victims were determined to have died from hypothermia, while others showed signs of trauma. One victim had a fractured skull and another was found with brain damage without any sign of distress to the skull. Additionally, one woman's tongue was missing.
> Soviet authorities determined that an "unknown compelling force" had caused the deaths. Access to the region was consequently blocked for hikers and adventurers for three years after the incident. Due to the lack of survivors, the chronology of events remains uncertain.
> Several explanations have been put forward, including an avalanche, infrasound-induced panic, and a military accident. Sensationalist hypotheses include a hostile encounter with a yeti or other unknown creature.
A recent report about this case says that local police finally found evidence that this was a murder. A witness charged with illegal possession of a gun confessed that the gun had belonged to a Mansi hunter who took part in the killings. The Dyatlov group had accidentally discovered a holy place of the Mansi tribe and taken something from it; the Mansi found out and attempted to recover their possessions.
The missing tongue is not the weirdest part. Animals could have eaten it.
Every hypothesis about this event that made any sense involves a physical attack by a group of people, likely armed. The only two groups that would mass murder people just not to be found out are 1) foreign military or 2) escaped murderers.
> To be very clear, I don't actually believe this is what happened, but I quite like the idea...
I think that's the basic difference between a conspiracy crackpot[1] and a novelist.
1) sadly, as I read more history, a lot more of the "crackpot" than I am comfortable with was actually the truth. People do some strange things for some very odd reasons with some seriously poor logistic skills.
The US military sprayed SF bay with bacterial agents during the Cold War, without anyone knowing, as a way to test the potential impact of biological attacks. At least one person died of an unlucky post-operative infection with the sprayed bacteria because the doctors couldn't figure out in time that he had got sick with something that shouldn't even have been there.
My guess is that stories like that are what started the whole "chemtrails" thing.
ECHELON is probably the biggest (for tech) of the "oh, that couldn't be true" conspiracy theories that ended up being true. The release of information on it was a big jolt to the head about what people will actually do.
I guess, if you want to limit yourself to one delivery type. It's amazing how many old newspapers, transcripts of speeches & conversations, letters, and reports are available today that weren't in the past. Heck, scroll translations are also out there.
The world is an amazing place and people have written in many forms for a long time.
While we're on the subject of crackpot ideas - don't forget that Nikola Tesla has occasionally been blamed[1]. Not sure how big your tinfoil hat would need to be to avoid a Tunguska-level event!
I've always wondered why pondering about time travel devices with only the temporal dimension as a setting seemed to neglect the fact that things move about in the universe! Traveling backwards or forwards in time at the same point in space means that all the stuff you're interested in is likely to not be where you think you left it.
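To put rough numbers on that displacement (a back-of-envelope sketch with approximate speeds I'm assuming, not figures from the thread):

```python
# Back-of-envelope: how far the Earth moves while a "fixed point in space"
# time machine jumps back in time. Speeds are rough, commonly quoted values.
ORBITAL_SPEED_KM_S = 30.0     # Earth around the Sun (approx.)
GALACTIC_SPEED_KM_S = 230.0   # Solar System around the galactic centre (approx.)

for label, seconds in [("one second", 1), ("one hour", 3_600), ("one year", 31_557_600)]:
    print(f"{label}: ~{ORBITAL_SPEED_KM_S * seconds:,.0f} km (orbital), "
          f"~{GALACTIC_SPEED_KM_S * seconds:,.0f} km (galactic)")
```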
You would have to retain any momentum you have when traveling through time, so you would move more or less together with the earth.
Interestingly messing with momentum as you travel through time (i.e. take your momentum with you and it's lost from the old time) would be a bigger problem for physics than time travel itself.
On top of that having your gravitational influence just disappear and reappear elsewhere would be a huge problem. (Gravity is never created or destroyed, it just moves - there are never any discontinuities with gravity.) So when you travel through time your gravity would have to also influence things throughout all the time you transit.
The net result of both those things is you end up exactly where you would if you had just sat there and moved through time the natural way.
So time-travel fiction that has the machine end up in exactly the same spatial ___location as it started is more physically accurate than fiction that tries to account for the change in ___location (and has some kind of compensator that moves the machine).
You may be right if you just fast-forward yourself through all of the intermediate 4D coordinates, but what if time travel is not fast-forward? Git merge timelines and oops, one million merge conflicts.
For example, constructing a wormhole might require precise knowledge of the ___location of both endpoints relative to a specific reference frame. Misplacing your destination by as little as 3 meters can cause you to end up in the ground and suffocate to death.
Could a time machine redirect, dissipate, or convert most of that energy into another form, such as a massive explosion in the middle of Siberia, leaving the traveler with very little momentum to worry about?
The solar system's rotation is about the center of the galaxy so that would be the natural pick for an inertial reference frame, or the supermassive black hole believed to be in the center.
The center of the galaxy isn't rotating, it's just a point, no? Picking the criteria for such a point would be difficult and tracking it even more so, but what's the alternative?
A black hole is a point. Its size is zero, regardless of its mass. Its event horizon has a size, but we don't need to care about that.
We do, however, need to care about all the stuff that's revolving around that point, because even a slight imbalance will cause the galactic center of mass (barycenter) to move away from the black hole. Things get pretty wobbly out there.
Since it's well agreed that virtually every galaxy (spiral or elliptical) has a central supermassive black hole, it's fair at this point to call it part of the galaxy.
That reminds me of the novel "Pandora's Star", where two eccentric physicists surprise the first astronauts to land on Mars:
> Wilson was already moving, glide walking as fast as was safe in the low gravity, making for the rear of the Eagle II. He knew they were close, and he could see everything on this side of the spaceplane. As soon as he was past the bell-shaped rocket nozzles he forced himself to a halt. Someone else was standing there, arm held high in an almost apologetic wave. Someone in what looked like a home-made space suit.
> [...] Behind the interloper was a two-metre circle of another place. It hung above the Martian soil like some bizarre superimposed TV image, with a weird rim made up from seething diffraction patterns of light from a grey universe. An opening through space, a gateway into what looked like a rundown physics lab.
> The other side had been sealed off with thick glass. A college geek-type with a wild afro hairstyle was pressed against it, looking out at Mars, laughing and pointing at Wilson. Above him, bright Californian sunlight shone in through the physics lab’s open windows.
There was a short story on starshipsofa[1] with a similar premise. A research team working on time travel is worried about massive energy events, so they plan their test runs to coincide with nuclear tests. I wish I could remember what it was called; it was rather good.
Maybe it was a success. Maybe we didn't send a person back, we just created an explosion in the past for the purpose of verifying that it altered history. Success! Now we just have to find Nelson Mandela and the creator of the Berenstein Bears...
(Okay, I'm done being a crackpot now, it was fun though)
How many timelines are there? Or is there just a single timeline and the act of sending this event back in time has always existed, the timelines conjoined at some quantum level.
It feels paradoxical either way to me, but maybe the multiverse is cool with there being universes that are paradoxical.
Perhaps there's a dimension beyond time, and just as information changes in space across time, the entirety of our timeline changes within this upper dimension. The timeline that we once thought was straight and obvious turns out to be wiggling around and looping through itself in this "time of times".
Wouldn't it be easier and safer, timespace-wise, to just drop an artifact in a remote area and then try to retrieve it? Killing people as a timespace event could cause changes to the world.
And even better: suppose the cause of Tunguska was simply an asteroid. Then it would still be a great place to test the first time-travel device to, because the place is already associated with weird things. No one would be able to distinguish between readings and hypotheses of the original event and anything added by the time-travel device.
Reproducibility is a major issue across science in general, but the difference is that there's no reason why one shouldn't be able to easily re-run a defined analysis on a more recently updated data set to ask whether conclusions drawn previously still hold. I actually published a side-project paper on this (in biological sciences) last year [1] - what was scary was that there was such a lack of discussion surrounding this idea, despite the fact that large databases of biological data are CONSTANTLY changing and updating.
The other difference is that, as far as I know, computer science is the only discipline for which industry has solved the problem of reproducibility; it's one thing to be asked to design a method to run reproducible studies of humans, it's another to ask researchers to run `git remote add origin https://github.com/user/repo && git push --set-upstream origin master`. That's not asking for any support, or other effort on the researcher's part, and I frankly don't understand why the CS academic community doesn't have this as a standard when it'd be so easy to implement.
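As a concrete (hypothetical) illustration of re-running a defined analysis against a changing database, the sketch below just records which release and input file an analysis ran against, so the same run can be repeated and compared later; the file name, database, and release string are made up for the example:

```python
# Hypothetical provenance sketch: hash the input data and note the database
# release so an analysis can be re-run and re-checked after the database updates.
import hashlib
import json
from datetime import date
from pathlib import Path

def snapshot_metadata(data_file: str, db_name: str, db_release: str) -> dict:
    """Record an input-file hash and provenance alongside analysis results."""
    digest = hashlib.sha256(Path(data_file).read_bytes()).hexdigest()
    return {
        "database": db_name,
        "release": db_release,
        "retrieved": date.today().isoformat(),
        "sha256": digest,
    }

if __name__ == "__main__":
    # "sequences.fasta", "UniProt", and "2015_09" are placeholder values.
    meta = snapshot_metadata("sequences.fasta", "UniProt", "2015_09")
    Path("analysis_metadata.json").write_text(json.dumps(meta, indent=2))
```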
To be fair, this doesn't have to have originated via animal-to-animal infection, but could be an example of sporadic CWD.
As far as I know, sporadic CWD is relatively uncommon, but given that CWD is caused by the PrP protein, and there are a number of known mutations in PrP which can increase its likelihood of undergoing prion conversion, I'd hope they're going to sequence this animal's PrP gene to see if it sheds some light on the etiology.
Irrespective, this could still mean that CWD is now endemic in Europe.
>>>
Updated for clarity and extra info/context (thanks pbhjpbhj!)
>>>
CWD: Chronic Wasting Disease (deer-based prion disease - main topic of article)
PrP: The specific prion protein involved here. Note that (confusingly!) 'prion' refers both to a 'class' of proteins and to a specific protein (PrP).
Prions (class) are proteins which can exist in one of two states. In their soluble form they're happy-go-lucky proteins that are monomeric (i.e. exist as a single unit). However, these soluble-form prions can undergo a conformational change (re-arrange their shape) into a different conformation (the infectious form). The infectious form of the prion can do two specific things: 1) Aggregate (so all the previous soluble prion proteins get stuck into a big wad of protein) and 2) Catalyze the conversion of soluble-form prion into the infectious form. Herein lies their infectivity - you get an exponential growth in the number of proteins in the infectious state.
Prions (PrP): PrP is a protein found in many higher-order multicellular organisms, and it is the SPECIFIC protein that causes a range of prion diseases (Creutzfeldt-Jakob Disease (CJD), BSE [mad cow], CWD, Scrapie etc). There are species barriers to these diseases, even though the proteins are pretty similar (i.e. humans cannot catch CWD from deer, even though the PrP protein misfolds in CWD and the same human version misfolds in CJD). These species barriers are convenient (!!) but very poorly understood, which is somewhat concerning.
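To make the 'exponential growth' point above concrete, here's a minimal toy simulation of the autocatalytic conversion (illustrative rate constants, not measured values):

```python
# Toy model: soluble prion (S) converts to the infectious form (I) at a rate
# proportional to S * I, so I grows roughly exponentially while S is plentiful.
def simulate(s0=1.0, i0=1e-6, k=5.0, dt=0.01, steps=2000):
    s, i = s0, i0
    samples = []
    for step in range(steps):
        converted = k * s * i * dt   # conversion catalysed by the infectious form
        s -= converted
        i += converted
        if step % 200 == 0:
            samples.append((step * dt, i))
    return samples

for t, infectious in simulate():
    print(f"t = {t:5.1f}   infectious fraction = {infectious:.6f}")
```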
Finally - it's worth pointing out that prions aren't always bad. Fungi use them as a mechanism to facilitate non-genetic heritability/diversity [1], and we're increasingly finding examples of prion-like mechanisms that facilitate fast and irreversible signalling in cells (e.g. in the inflammation response [2])
[1] True, H. L. & Lindquist, S. L. A yeast prion provides a mechanism for genetic variation and phenotypic diversity. Nature 407, 477–483 (2000).
[2] Cai, X. et al. Prion-like polymerization underlies signal transduction in antiviral immune defense and inflammasome activation. Cell 156, 1207–1222 (2014).
Both this result and the preceding paper are also entirely consistent with the hypotheses that:
1) People who are susceptible to vCJD are more susceptible to AD
2) The (cellular) stress caused by vCJD increases susceptibility for AD
3) People who have had brain surgery (a traumatic and stressful experience in terms of brain swelling) are more susceptible to AD
All three of which would be entirely in line with the field's current way of thinking about AD, proteostasis, CBI etc. Which is not to say I think there's no transmissible element - I just think we should avoid jumping to conclusions...
But what about the CJD controls that didn't receive transplants derived from cadaver tissue? Those didn't show the same rate of Alzheimer's-like plaques, if I understood the report correctly.
The CJD controls had sporadic (sCJD), not variant CJD (vCJD) (whether or not these are distinct diseases is certainly an open question - I'm more trying to make the point that there are a lot of variables up in the air, and while it's a striking relationship there are other equally striking relationships one could draw).
This is very nicely written, and looks like an extremely useful resource for those interested in entering the bioscience field. I would offer one word of caution, which I think is especially pertinent to those coming from a CS background (and, I should add, not something this book particularly does, as far as I have read).
Historically and practically, biology as a field has often represented processes and events in a highly linearized and well-defined fashion. This is extremely attractive for a number of reasons. As one key example, we (as humans) remember through narrative, and so the construction of a Rube Goldberg-type description of a process is often a useful technique for easy recall of complicated information ("A hits B which winds up C and switches on D etc...").
The other reason is that many of the experiments would really imply a linear pathway if that were all one looked for. There is often a clear-cut progression of information signals or metabolic intermediates moving from one state to another through well-defined intermediates, such that if that were what you were looking for, you'd find it.
In this representation, many biological processes are highly analogous to computer programs which perform some task - you have an input, and through functional manipulation generate an output.
The reality, as has been uncovered in the last 15-20 years, is that most of these processes and events are not linear pathways. They are wildly heterogeneous networks that integrate information spanning a range of temporal and spatial scales. The non-equilibrium nature of the cell means that, to a certain extent, everything is coupled to everything else through interactions where the associated coupling coefficients are also dependent on everything else.
The reason I bring this up is that I think it's very tempting to find analogies between CS and biology (DNA = hard drive, RNA = memory etc). The problem with this is that we (again, as humans) implemented the underlying computer architecture, while we're only scratching the surface of biological complexity. By prescribing that some mapping of CS-to-biology exists we risk convincing ourselves that we understand the biology better than we do, or making assumptions regarding how the biology may or may not work.
Clearly, this kind of description can be used early on, but it's important to recognize that these analogies should be viewed as broad-brush stroke descriptors and not functional ones.
+1 This is a nice reminder about how "organic" this ___domain is. I agree, even if I read this cover to cover, I wouldn't be ready to nail a bioinformatics job interview.
As an armchair biologist (among other things), I find this paper a great next level of detail from the pop-sci knowledge that "DNA is the Program." Indeed, Wikipedia's illustration of the workings of a ribosome takes steps towards your point, @alexholehouse. It's jagged and sloppy, and while it appears clock-like in its machinations, one must immediately ask how that could be anything but an oversimplification. Is this the workings of the computer that interprets DNA's "program?"
A final point, it's sobering to watch this and think that every movement of these proteins represents at least one doctoral thesis' worth of work. Although we have amazing ways of seeing these microscopic actions at play, we don't exactly have debuggers, REPLs or profilers that let us observe cells unaffected. Messy stuff.
I don't agree that people have long thought these were linear pathways. My undergraduate education was more than 20 years ago and beyond having the "Biochemical Reaction Pathways" chart on my wall (http://web.expasy.org/pathways/ has been around 40 years) which clearly shows things as a graph structure, it was commonly taught that there were complex networks, regulated at many points.
I'm sure you mean well, but you're just saying the same thing the author of the document has already said.
>we risk convincing ourselves that we understand the biology better than we do, or making assumptions regarding how the biology may or may not work.
Okay, so the risk is non-zero. That doesn't tell us anything though. Our scientific progress hasn't halted because of high school students being convinced that they know all there is to know because they scored high on a test of the simplified model of reality.
Every single formal method of instruction uses simplified models to ground students' understanding of these concepts. All exams are graded on students "knowing" something that is much more complicated in reality.
>Clearly, this kind of description can be used early on, but it's important to recognize that these analogies should be viewed as broad-brush stroke descriptors and not functional ones.
Analogies are always about broad brushes. All analogies break down at some level. Because at that point, you would just describe the complex idea, rather than use the analogy.
Yeah, I completely agree. I did bioinformatics research for a couple of years after grad school, but before I went to industry. It's going to take many, many years before enough basic research is done to prove that you're right (or possibly wrong!) and that it's not just right now that matters, but also the last minute or hour or day or week or month. When signalling concentrations are super low and don't decay instantly, you've just made a "memory" that will persist some time after the initial stimulus.
I think that in order to actually understand truly how things work people are going to have to simulate many different types of cells from first principles (i.e. the physics) and even though the compute time will be 99.99999% wasted checking all kinds of interactions that turn out to be unimportant, that's what will find all the rare interactions that really matter.
The combinatorics that go on inside cells is truly staggering.
To be fair to the book author, on page 22 he writes, "often there will be complex non-linear relationships between the parts of a biological chemical pathway."
Still, I found alex's comment very informative, a good addition to the book on that topic.
I'll also add that studying the brain, and things like this, gets me thinking about general-purpose analog computing. We need an exponential increase in research on that vs what we have now. The brain is analog. The cells are analog. The few GP-analog and digital-analog hybrids that have been built outperform their digital counterparts. So we need to start looking at continuous control and harmonic systems in nature from an analog modeling perspective, as some great circuits and capabilities might come out of it.
Materials science is already way ahead here copying nature left and right. Favorite example that I found just last night:
From a cursory reading of the article, this looks like it's solving an extremely similar problem to Gaussian Process Bayesian Optimization (GPBO [1]) in a fairly similar way. I wonder what a head-to-head comparison between the two would look like (a minimal GPBO loop is sketched below).
[1] J. Gardner, M. Kusner, Z. Xu, K. Weinberger, and J. Cunningham, in Proceedings of the 31st International Conference on Machine Learning (ICML-14) (JMLR Workshop and Conference Proceedings, 2014), p. 937.
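For anyone curious what such a head-to-head would involve, here's a rough GPBO loop on a toy 1-D objective (a sketch of the general technique only, using scikit-learn; the constrained setting in [1] is more involved):

```python
# Minimal Gaussian Process Bayesian Optimization sketch (toy objective).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    """Toy function to minimize -- stands in for an expensive experiment."""
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(4, 1))               # a few initial evaluations
y = objective(X).ravel()
candidates = np.linspace(-3, 3, 500).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)

    # Expected improvement over the best observation so far (minimization).
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    x_next = candidates[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print(f"best x ~ {X[np.argmin(y)][0]:.3f}, best value ~ {y.min():.3f}")
```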
I should (sheepishly) say that I knew Vol. 1 and 2 were available but very recently discovered 3 was now also available. As it turns out, 3 was published a while ago, so while it was new to me, the full set being available was not, in fact, new to the world.
That said, hopefully it's all new to [some] other people!
Posts like this (and there are many of them) scare me.
I'm in the latter half of a PhD. I love it. I work insane hours entirely out of my own choice, because it is the most rewarding and enjoyable thing I've ever been a part of.
The idea that I've found something that I love, that is challenging, that (I hope) I'm relatively good at, and that has a definite net positive for us as a species/society, and yet that I may not be able to pursue it long term because of the immense challenges facing academia as a whole (catalyzed, I would argue, by tragic lack of funding), is really concerning on both a personal and a societal level.
I am almost 2 years into my postdoc now. It's easy to get caught up in the academic bubble, especially as a grad student. Things that are insignificant seem like life-or-death decisions. I know it is very difficult to get away from your work for various reasons, but taking a week off and completely getting away from academia is extremely important. It gives you a better perspective on your work and your motivations.
I think most grad students towards the end go through a self-examination phase where they really think about whether their motivation for wanting to do this is internal, or whether they were just caught up in the popular perception of the monk scientist who nobly advances human knowledge.
It depends a lot on who you're working for. You might think that every professor with interesting publications at a high-ranking institution is smart, interested in their work, and invested in the success of their students. Or at least, this is what I thought when I was an undergraduate, when I did some research with some really great people. Since then, I've realized that some of these professors are not particularly interested in science, but have big egos and are particularly skilled at promoting themselves. They're at top institutions and they are thus able to attract lots of money and smart, hard-working students, but their intellectual contribution to their lab's work is relatively small and the environment they produce within their labs (and institutions) is pretty toxic.
"You might think that every professor is ... smart, interested in their work, and invested in the success of their students. Or ... are not particularly interested in science, but have big egos and are particularly skilled at promoting themselves."
In the first place, those two categories are not necessarily mutually exclusive. In the second, there are other kinds: some of the best scientists I know have essentially set up an assembly line: students are plopped on one end of the conveyor belt, papers are chiselled off of them just before the two major conference seasons of every year, and a skilled researcher is boxed at the end of the line and shipped off to a grateful academic or industrial research institution.
The principal scientists are great people, they do excellent work, their students are good and well taken care of, and no one really has any reason to complain.
The only down side is that the papers coming out are kind of incremental, given the schedule of publishing at conferences twice a year, and a student that goes in at the top of the machine is going to come out the bottom as a researcher on a particular topic. I have no idea how well they're doing, in aggregate, a few years after graduation.
For me it all changed as soon as the PhD ended. Suddenly the cost to employ you skyrockets, and it becomes apparent how few permanent positions there are relative to the applicant pool. My PhD was pretty great, but my experience afterwards was pretty depressing.
I received my Ph.D. a little over a decade ago. I didn't succeed in getting a research job outside of academia, and the bit over a decade that I spent in academia convinced me I didn't want to be a part of it. As a result, my fancy piece of paper sits, completely unused, in a closet.
Whenever someone points out that I wasted numerous years of my life with nothing to show for it but a low salary history, I have to admit that I absolutely do not regret getting a Ph.D. I loved the work, and I loved the process of doing the work. I'd do it again (if I didn't have to pay for it again---I worked full time during grad school, which I recommend no one do).
So, yeah, don't go for a Ph.D. in computer science unless you absolutely cannot not go for a Ph.D. in computer science. And then, don't expect it to pay off in any way.
The system was sufficiently functional to nurture enough knowledge and enthusiasm in you that you were equipped to attend and thrive in a PhD program. On a societal level, I don't think academia's sorry state is much to worry about.
On a personal level, it's pretty terrible. If you do decide to leave academia, there are plenty of opportunities for smart, committed people. I bailed on a PhD program shortly after starting my dissertation, and having Ivy-league graduate work on my resume has opened a lot of doors.
> catalyzed, I would argue, by tragic lack of funding
No, you don't get it. There is tons of funding. But academia is a corrupt old-boys network. Those petty, conniving people rise to the top and ultimately control the system.
- A grad student (who regrets going to grad school)
From experience as a researcher in a country undergoing severe R&D funding cuts, I can tell you that the ones most likely to survive the lack of funding are the "corrupt old boys", who can exploit their contacts and their system-gaming skills to secure the little funding that remains. Promising young scientists who aren't under the wing of one of the old boys are much more likely to suffer the raw end of the deal. Basically my perception is that the funding cuts made everything this article describes much more prevalent, not less.
So in my opinion, "catalyzed by tragic lack of funding" (note "catalyzed", not "caused") is a pretty accurate description.
I'd have to agree with this. With the possible exception of the Cold War era in certain fields, there is more funding now than at any time historically. Before WWII, "professional scientific research" was a pretty rare thing; certain large companies did it (out of which comes the "applied" and "pure" distinction in many older fields, I believe) and certain educational institutions (but most were essentially teaching gigs with a spare closet for lab space).
Now, there's lots of money, but lots of faces feeding at the trough, too. And lots of reasons why breeding more scientists who subsequently starve isn't going to stop.
Does that mean a lack of funding can't catalyze the issue?
Edit: Just for transparency (topical!) I did not downvote you, and totally agree that cronyism is a major (and yeah maybe causative) issue, but I also think the decision on how to fund is basically an entirely separate problem which really doesn't have an obviously good solution (in my mind).
But not all scientific research will benefit companies. Some will even be detrimental to large companies (disruptive technologies). Therefore blue-skies pure science needs funding from some source other than industry.
What I do find disturbing is the trend to push University research towards immediately commercially viable stuff. Industry would/should do this stuff anyway.
What, like the pharmaceutical industry? I would argue that much good knowledge is created that way but it, too, has its issues. See much of Ben Goldacre's work.
So the idea that neurodegeneration is essentially a manifestation of protein-specific prion disease is not new[1][2][3][4][5], but it is one that is steadily gaining traction. That said, from a complexity perspective we're still at the tip of the iceberg.
There's extensive evidence to suggest that the big, fiber-like aggregates that form in these diseases may in fact be neuroprotective, and represent a largely inert thermodynamic end state (basically a big inert crystal which kind of hangs out but doesn't do much damage). More labile, smaller aggregates (basically blobs of protein stuck together) which are formed during the fibril formation process may in fact be more toxic, though again it really depends on who you ask.
How the toxicity is manifest is also totally unclear - is this a loss of function (proteins which aggregate disappear, so they can no longer do their job), gain of function (proteins which aggregate now interact with other things, leading to some new and bad outcomes) or somewhere in between? How does the cellular quality control system interface with all this? Why are certain cell types much more susceptible than others? How does the disease spread in the brain?
There are a lot of very smart people doing some amazing work. If you're interested, off the top of my head I'd take a look at work by Sue Lindquist, Simon Alberti, Marc Diamond, Don Cleveland, Virginia Lee, Eric Ross, Paul Taylor, Rick Morimoto and David Eisenberg (just to get you started!).
The one thing I would say is that I wouldn't treat this as cause for concern for interpersonal spread. If there was a real risk of contamination from (say) surgical instrumentation (as there was with vCJD in the UK) there would be extensive evidence for this (just in terms of a numbers game). Which isn't to say if you put someone's brain in your brain you won't develop the disease, but at that point you probably have bigger issues to worry about...
In my opinion, a much more exciting story relating to ALS was published in Cell a couple of days ago;
Patel, A., Lee, H. O., Jawerth, L., Maharana, S., Jahnel, M., Hein, M. Y., … Alberti, S. (2015). A Liquid-to-Solid Phase Transition of the ALS Protein FUS Accelerated by Disease Mutation. Cell, 162(5), 1066–1077.
http://holehouse.org/mlclass/