I didn't say it was caused by the electronics, but it was probably caused by pilot inexperience in manual flying at high altitude (among other things).
There have been other cases, for instance http://en.wikipedia.org/wiki/Qantas_Flight_72 where the autopilot put the plane in a dive because of a malfunctioning sensor and a software bug.
If the pilots on that plane had done nothing, like you suggested the AF447 pilots should have done, what do you think would have happened then?
Which is my entire point: until you have situationally aware computers, you need situationally aware human pilots.
> If the pilots on that plane had done nothing, like you suggested the AF447 pilots should have done, what do you think would have happened then?
The plane would have kept flying. Perhaps it would have flown out of the storm, perhaps it would have descended. Either way, at some point it would have gotten low enough, or warm and dry enough, that the ice in the pitot tube would have melted (the heater in the tube was on, but hadn't had time to melt it yet), and the system would have gone back to normal and all would have been well.
Instead they did pretty much the only thing that could doom it: they ascended. This despite knowing they were near the "coffin corner" - i.e. the highest safe altitude for the jet. The pilot clearly panicked and didn't want to crash into the ocean, so he went up, and it was exactly the wrong thing to do.
Your point about pilot inexperience in manual flying is close. But from what I read, it was actually that the pilot was used to the computer taking over and not letting him fly in a dangerous manner; this caused him to try to ascend to the point of stalling the plane, because he believed the computer would stop him.
So you end up with the worst of both worlds: A human who relies on the computer, taking over in a situation with no computer. You have to pick: Either computer all the time, or no computer. Perhaps computer assistance, but no control? Certainly computer override is no good (i.e. a mode that lets you max out the controls while the computer trims them back to the ideal levels).
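To make the distinction concrete, here's a toy sketch (made-up names and limits, nothing like Airbus's real control laws) of the difference between a mode where the computer clamps what the pilot can command and one where the pilot's input passes straight through:

    # Toy sketch only: hypothetical names and limits, not any real flight control law.
    MAX_SAFE_AOA_DEG = 12.0   # made-up angle-of-attack limit

    def commanded_aoa(pilot_demand_deg, protection_active):
        """Angle of attack the aircraft actually flies for a given stick demand."""
        if protection_active:
            # Protected mode: full back stick still only gets you the limit.
            return min(pilot_demand_deg, MAX_SAFE_AOA_DEG)
        # Unprotected mode: the clamp is gone, the pilot can command a stall.
        return pilot_demand_deg

    print(commanded_aoa(20.0, protection_active=True))   # 12.0 -> clamped
    print(commanded_aoa(20.0, protection_active=False))  # 20.0 -> stall territory

A pilot trained to lean on the first branch who is suddenly handed the second is exactly that worst-of-both-worlds situation.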
> you need situationally aware human pilots.
This was impossible for both of them. The windscreen was completely blank, and (some of) the instruments were disabled. Neither the pilot nor the copilot had any idea what their situation was. What they should have done was remember where they were and assume they were more or less in the same situation.
> The plane would have kept flying. Perhaps it would have flown out of the storm, perhaps it would have descended.
I was talking about Qantas Flight 72, not AF447. In the case of AF447, everything would have been OK if the pilots had done nothing; the airspeed indications returned and were valid after about a minute. But in other scenarios, the plane would likely have crashed if the pilots had done nothing.
> Instead they did pretty much the only thing that could doom it: they ascended. This despite knowing they were near the "coffin corner" - i.e. the highest safe altitude for the jet.
They could also have descended and risked exceeding the critical Mach speed, which could have broken up the plane.
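For a rough sense of why that corner is so tight, here is a back-of-the-envelope calculation (ISA approximations and made-up aircraft numbers, purely for illustration): the stall speed in true-airspeed terms rises with altitude, while the Mach limit comes down with the speed of sound, so the usable band between the two shrinks.

    # Rough illustration with ISA approximations and hypothetical aircraft numbers.
    import math

    RHO0 = 1.225              # kg/m^3, ISA sea-level density
    GAMMA, R = 1.4, 287.05    # constants for the speed of sound

    V_STALL_EAS = 80.0        # m/s, hypothetical clean stall speed (equivalent airspeed)
    MMO = 0.86                # hypothetical maximum operating Mach number

    def isa(alt_m):
        """Crude ISA temperature and density below the tropopause."""
        T = 288.15 - 0.0065 * alt_m
        return T, RHO0 * (T / 288.15) ** 4.256

    for alt_m in (0, 5000, 11000):
        T, rho = isa(alt_m)
        v_stall = V_STALL_EAS / math.sqrt(rho / RHO0)   # stall speed in true airspeed
        v_mmo = MMO * math.sqrt(GAMMA * R * T)          # Mach limit in true airspeed
        print(alt_m, "m: margin %.0f m/s" % (v_mmo - v_stall))

With these made-up numbers the margin roughly halves between sea level and cruise altitude, which is why both pulling up (stall) and diving (overspeed) were genuine risks.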
> The pilot clearly panicked and didn't want to crash into the ocean, so he went up, and it was exactly the wrong thing to do.
Actually, the standard operating procedure called for a slight nose-up pitch (5 degrees) and maximum continuous thrust.
> Your point about pilot inexperience in manual flying is close. But from what I read, it was actually that the pilot was used to the computer taking over and not letting him fly in a dangerous manner; this caused him to try to ascend to the point of stalling the plane, because he believed the computer would stop him.
Exactly (although the pilots did call out "alternate law" and should have known what that meant: no more stall protection).
> So you end up with the worst of both worlds: A human who relies on the computer, taking over in a situation with no computer. You have to pick: Either computer all the time, or no computer.
The current setup does work very well (given the current safety record of the aviation industry), and looking at general aviation statistics, we'd likely have more airliner crashes if pilots were hand-flying all the time.
> This was impossible for both of them.
And it would have been just as impossible for the computer. Maybe some time in the future we'll have a computer that does a better job than humans at dealing with emergencies, and then the autopilots will never need to disconnect for any reason.
But until we're there, the original assumption that more automation means that we can get by with less skilled pilots is just wrong.
> The windscreen was completely blank, and (some of) the instruments were disabled. Neither the pilot nor the copilot had any idea what their situation was. What they should have done was remember where they were and assume they were more or less in the same situation.
This was a reasonable assumption in this instance, but in other cases it would be the wrong assumption.
The big question is what led the pilot to follow the wrong procedure for inconsistent airspeeds, and why they didn't trust the instruments anymore even when they became valid again. It'll be interesting to see what the human factors group of the investigation comes up with.
If they had just done nothing, the plane would not have crashed. Instead the pilot climbed until it stalled.