I dropped out of a PhD program because I was disillusioned and never found a path from my group's work to anything of value. After reading this I really wonder if what was missing was context and world experience.
So much of what's wrong with a lot of academia is groupthink and an insular publish-or-perish mindset. What would make me want to go back at some point in my future is putting research into a larger context and grounding it in my own experience of what is truly important and valuable.
I would guess the Apple VP has enough money to never have to worry about money, which probably makes the grad school experience quite different than yours.
I think this can be very true of theoretical research, but when I entered grad school I was pretty interested in the applied fields. My problem was I didn't have substantial industry experience to distinguish the labs doing real applied work from the publishing factories making tenuous claims of real-world relevance to get grants.
Not many graduate students have really substantial industry experience when they start. Internships help a lot, provided the goal is to learn (and not to be pressurised into publishing papers even there :)).
It's unfortunate that you didn't like the experience with your group, but not all groups are like that. A lot depends on your advisor.
I'm on the verge of dropping out of bioinformatics, too, after about a decade of postdoc appointments. The corruption of biomedical research by funding imperatives [1] is heartbreaking.
That said, cancer genomics could well be the field that makes bioinformatics clinically relevant.
I should have been more specific. I was mostly thinking of sequence analysis like the PhD candidate in the OP wants to do. Bioinformatics in general covers a lot of ground which is directly clinical in nature, like image analysis.
Sequence analysis has played an important role in genome annotation and comparison, but the recent (extremely expensive) attempts to apply it to the genetics of human disease have been quite disappointing so far. The biggest technical issues are, first, that our tools for measuring genotypes are too imprecise and too expensive to support experiments which will nail down exactly which variants are contributing to a given disease; and second, that almost all such studies of human disease so far are epidemiological in nature, because you can't do experiments on actual humans (and I'm not arguing that you should be able to :-). This means it is very difficult to establish a variant as causative, and hard to design reliably repeatable experiments.
The political issues arise out of health research funding being fundamentally motivated by hope and fear, leading to massive commitments to research strategies which are wishfully ignorant of the above technical issues. This is followed by a lot of wishful ignorance of the strategies' failures by the groups who implemented those commitments. Usually this ignorance comes from moving the goal posts post hoc. The current sunset of GWAS is a good example. (See Peter Visscher's article in the January issue of AJHG for example.)
The technical issues are being worked out. Better genotyping will be developed, and at least some disease biology can be modeled in animals or cultured tissue (cancer might be good for this). But those issues have been ignored for too long, at tremendous cost, because the serious funding rewards such ignorance. For instance, essentially the same strategy as GWAS is now being pursued on a huge scale, just using a different genotyping technology. No one has run an initial small-scale experiment validating this new approach; it's just based on the hope that the extra information provided by the new technology will supply the ingredient that the studies to date have been missing.
Thanks for the excellent response. I can imagine how misguided, over-generous funding can accelerate research in the wrong direction. If I interpreted it correctly, one of the biggest weaknesses is that not enough effort is put into verifying the findings of genome analysis in a real biological setup. Is it because funding for genome sequencing and analysis has grown out of proportion compared to more traditional clinical research, or that the need for this is not recognized in genome-analysis circles, or something else?
Btw, can you elaborate on the following excerpt? Are there specific problems in genome analysis techniques that make it harder to nail down the correct variants?
> our tools for measuring genotypes are too imprecise and too expensive to support experiments which will nail down exactly which variants are contributing to a given disease
I think it's interesting that many believe the next wave of innovation and value creation will come from BioTech, and it's worth thinking about the potential implications that will have for hackers like us.
I don't know about anyone else, but frankly it worries me. The beauty of the internet revolution + Moore's law has been that barriers to innovation have become virtually non-existent. It's possible to dream up and build the first iteration of the next world-beating website with no capital or specific education.
It seems, however, that the barriers to entry for BioTech innovation will be numerous. To even participate in that world you need a PhD. I don't know what this means for the future of hackers like you and me; perhaps nothing. But I wonder whether this wave of startup innovation will look like a blip on the radar of normalcy in 50 years' time.
> But I wonder whether this wave of startup innovation will look like a blip on the radar of normalcy in 50 years' time.
Probably not even a blip really. It's hard to imagine our unborn grandchildren caring about Twitter, Facebook, Dropbox, Heroku etc - and these are the top 0.00something% of startups that actually ended up mattering at all.
The good news is we get to enjoy this era when all you need is luck and a mildly interesting spin on CRUD database operations, solving problems that aren't exciting enough to exist in future decades.
Ah, but the rate at which biotech is becoming accessible isn't just growing, it's accelerating.
The DIYbio movement/subculture/etc. has been growing for the last several years, but recently several serious hackerspaces have sprouted, providing not just knowledge and expertise but also equipment and materials.
I actually have the opposite opinion - I'm more worried by the current situation. I think the reason the current web startup scene goes after such small problems is that folks are coming right out of undergrad (or earlier) and haven't had PhD-type training to learn what hard problems are really out there. It's routing potentially great engineers away from training that would have let them make a serious impact.
Interesting, but I'm not sure I agree. In my experience education - especially post-grad - tends to push people further and further down the funnel of specificity. By that logic, you're much more likely to have a macro startup idea - like Gumroad for instance - without any grad-level education.
There was a time when computers were as big as a building and it took a degree to program them too, until the computer revolution came along.
Can't see why the same thing wouldn't happen with biotech.
The biology revolution is coming, both sooner and later than you think. As one indicator, the price per base pair of DNA synthesis is following a Moore's Law (http://singularity.com/charts/page73.html), but Fredster is right in that there are other factors at play. However, you could say the same with computers, with bandwidth, CPU, and storage all improving simultaneously.
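To make that concrete, here's a back-of-the-envelope sketch; the starting cost and the two-year halving time are assumptions for illustration, not numbers from that chart:

    # Back-of-the-envelope projection of DNA synthesis cost per base pair,
    # assuming a Moore's-Law-style exponential decline. The starting cost
    # and halving time are illustrative assumptions, not measured figures.
    start_cost = 0.10    # dollars per base pair today (assumed)
    halving_years = 2.0  # cost halves every two years (assumed)

    for year in range(0, 21, 4):
        cost = start_cost * 0.5 ** (year / halving_years)
        print(f"year +{year:2d}: ${cost:.6f} per bp")

Under those assumptions the cost falls by about three orders of magnitude over twenty years, which is the kind of curve people mean when they invoke Moore's Law here.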
A story our TA told us in EE40 (Signals and Systems) at Berkeley really stuck with me: in the early days of personal computing, when you went to Radio Shack to buy a set of transistors, you had to test each one individually, because they had a 50% chance of not performing to spec. Biology is currently at that stage, though multiplied by another order of magnitude.
You can't possibly compare biotech to the computing industry in terms of where accessibility to the former is headed. Equipment costs may go down and education may become more accessible in biotech, but the risks to humans and property associated with running an entire lab will ensure access to non-trivial equipment and materials is granted only to those who are capable.
Biological systems are chaotic (in the mathematical sense). The more we want to manipulate them, the more we're going to have to know, and to higher precision, at the very least for the safety of any patients. It's not unreasonable to think that this will be a significant barrier for some time. We sequenced the human genome over a decade ago and, quite honestly, have gotten very little out of it. We can draw some neat graphs and hand-wave a whole bunch, but the reality is we haven't produced much from that information. There are still, what, only a handful of fully sequenced genomes? And alleles are munged together. We can do some SNP correlation studies, but those don't really tell you much. There is a lot of BioTech research to come, just in genomics, and it's going to take some big steps before it becomes accessible to someone in their garage, IMO.
Just to pick you up on one thing: we have sequenced thousands upon thousands of genomes since the Human Genome Project finished.
For example, there is the 1000 Genomes Project [1], and a project I am working on is sequencing about 100 ovarian cancer tumor/normal pairs. Most of this sequencing is complete; it's the bioinformatics that takes the time.
GWAS studies (studies of correlation between disease and common SNPs) have not told us much that is actionable, but they have provided us with "low-hanging fruit" for further study, which is valuable.
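For anyone wondering what a GWAS actually computes at the single-SNP level, here's a minimal sketch: a chi-squared test on allele counts in cases vs. controls. The counts below are made up for illustration, and a real study runs this for on the order of a million SNPs, hence the stringent genome-wide significance threshold (conventionally p < 5e-8):

    # Minimal single-SNP association test, the building block of a GWAS.
    # The allele counts below are hypothetical, for illustration only.
    from scipy.stats import chi2_contingency

    # 2x2 table: rows = cases/controls, columns = risk allele / other allele
    counts = [
        [340, 660],   # cases
        [280, 720],   # controls
    ]

    chi2, p, dof, expected = chi2_contingency(counts)
    print(f"chi2 = {chi2:.2f}, p = {p:.2e}")

Note this only detects correlation; as discussed upthread, it says nothing about which variant is causative.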
It's very early days, but genomics will completely change the diagnosis and treatment of cancer over the next 10 years.
I'm curious what the world will look like when we are smart enough to act on this information. Do we want to live in a world where someone in their garage can come up with a drug idea, send it to a PharmaFoundry to get their drug fabbed, and then sell it to people?
Correct me if I am wrong, but we have sequenced thousands of genomes, just not human ones. We only have a handful (where handful is < 1000) of full human genomes, AFAIK. The 1000 Genomes Project isn't done yet, AFAIK (their page on sequencing progress is down, unfortunately). The human reference genome is a mishmash of multiple genomes. It also contains huge sections of chromosomes that are just N's (blank) because assemblers are unable to determine what goes there. On top of that, an actual human genome is not one genome. At the very least there are alleles which can differ from each other, and in general there can be many genomes (cancer). Haploid sequencing is coming, hopefully soon, if SMRT really pans out, but in my opinion, to say our knowledge of even the sequence data in a human is adequate is a stretch (I'm not saying that you said that).
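Those assembly gaps are easy to see for yourself. Here's a rough sketch that scans a FASTA sequence for long runs of N's; the file name and the minimum run length are assumptions for illustration:

    # Rough sketch: find long runs of N's (assembly gaps) in a FASTA file.
    # The file name and minimum run length are illustrative assumptions.
    import re

    def n_runs(fasta_path, min_len=1000):
        with open(fasta_path) as f:
            seq = "".join(line.strip() for line in f if not line.startswith(">"))
        for m in re.finditer(r"N{%d,}" % min_len, seq.upper()):
            yield m.start(), m.end()

    for start, end in n_runs("chr1.fa"):  # hypothetical file name
        print(f"gap: {start}-{end} ({end - start} bp)")

Run it on a reference chromosome and the gaps (around the centromeres especially) jump right out.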