Space Transportation Technology

Poor Pilot Training Blamed For Virgin Galactic Crash

astroengine writes: SpaceShipTwo co-pilot Michael Alsbury was not properly trained to realize the consequences of unlocking the vehicle's hinged tail section too soon, a mistake that led to his death and the destruction of the ship during a test flight in California last year. Responsibility for the accident falls to SpaceShipTwo manufacturer Scaled Composites, a Mojave, Calif., company owned by Northrop Grumman Corp, the National Transportation Safety Board (NTSB) determined at a webcast hearing on Tuesday (PDF). Poor oversight by the Federal Aviation Administration, which oversees commercial spaceflights in the United States, was also a factor in the accident, the NTSB said.
  • Chrysler got hit, now Northrop?
    • by zamboni1138 ( 308944 ) on Tuesday July 28, 2015 @06:16PM (#50200461)

      No laws were broken. There is no way to levy a fine. The NTSB is not in the business of fining individuals or organizations for violating rules or laws. That's the job of the FAA and the various other agencies that oversee road vehicles, trains, and boats.

      The NTSB does its best to identify the probable cause(s) of the incident, what factors led up to that incident, and, most importantly, what measures to take to prevent any future incidents. It's up to agencies, like the FAA in this case, to implement suggestions from the NTSB.

      In this case, most of the blame appears to fall on the FAA.

      • In this case, most of the blame appears to fall on the FAA.

        I would expect that it's classified as some sort of "Experimental" vehicle at this point, for which the usual rules do not apply. So I doubt the FAA has much to do with it either.

        Even so: given the known design of the craft, how could he possibly NOT know that unlocking the tail section prematurely was dangerous? I mean, seriously. "Oh, sure, let's just let it flap in the breeze at a few thousand miles per hour. No big deal."

        Sheesh.

        • by sh00z ( 206503 )

          I would expect that it's classified as some sort of "Experimental" vehicle at this point, for which the usual rules do not apply. So I doubt the FAA has much to do with it either.

          No, TFS has it correct. It's classified as "Commercial Spaceflight," and the Federal Government deliberately moved jurisdiction from NASA to the FAA [faa.gov].

          • No, TFS has it correct. It's classified as "Commercial Spaceflight," and the Federal Government deliberately moved jurisdiction from NASA to the FAA.

            It must have been relatively recent, then. Even so, I repeat that I doubt the "usual rules" apply here.

            Having said that, FAA would seem the logical organization, given that its mandate is about interstate commerce.

  • by thegarbz ( 1787294 ) on Tuesday July 28, 2015 @04:59PM (#50200033)

    If there was a criterion for safe unlocking of the hinged tail section, then why wasn't it interlocked until the criterion was satisfied?

    A bigger error here is reliance on operator training. It's the least reliable form of ensuring a certain outcome.

    • by hawguy ( 1600213 )

      If there was a criterion for safe unlocking of the hinged tail section, then why wasn't it interlocked until the criterion was satisfied?

      A bigger error here is reliance on operator training. It's the least reliable form of ensuring a certain outcome.

      From TFA:

      Those ships will include an extra mechanical device to prevent pilots from inadvertently unlocking the tail sections, known as “the feather,” early, Virgin Galactic wrote in a report obtained by Discovery News.

      • The cold, hard reality of many a tragedy is that outcomes the developers never foresaw turn out to matter, and only then get engineered out of the realm of probability.

        I can guaranfuckingtee you the engineers considered this event. Their take at the time? "Nobody is going to pull the feather actuator prematurely!"

        • ... " Nobody is going to pull the feather actuator prematurely!"

          And I imagine if they *called* it the "feather actuator", perhaps nobody would have done it. From the story, every reference to the switch is as a feather unlock. To me, unlock doesn't mean actuate.

          I'd be curious as to what the official nomenclature for the switch is.

          • Unlocker... actuator...

            Let's say we agree the "official" name has probably been replaced by something way more crafty in light of the incident.

            The Feather actuation procedure (FAP), or fapping, per conversacronyming.

      • by Ol Olsoc ( 1175323 ) on Tuesday July 28, 2015 @08:47PM (#50201321)

        If there was a criterion for safe unlocking of the hinged tail section, then why wasn't it interlocked until the criterion was satisfied?

        A bigger error here is reliance on operator training. It's the least reliable form of ensuring a certain outcome.

        From TFA:

        Those ships will include an extra mechanical device to prevent pilots from inadvertently unlocking the tail sections, known as “the feather,” early, Virgin Galactic wrote in a report obtained by Discovery News.

        Which, by the way, might be able to fail itself, and keep the pilot from unlocking the tail section when it needs to be unlocked. Killing the pilot and co-pilot. Hellova world, eh?

        • Which, by the way, might be able to fail itself, and keep the pilot from unlocking the tail section when it needs to be unlocked. Killing the pilot and co-pilot. Hellova world, eh?

          True, but engineering is oftentimes about weighing risks against each other. The risk of pilot error in this case has been demonstrated to be a real threat. The mechanical interlock can also be designed with overriding backup systems that can be activated by the pilot in case it fails for some reason. It's far better to have the pilot use a dedicated toggle in case of a rare emergency than have to remember to flip a switch at the correct time on each flight or risk the destruction of the spacecraft.
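
          A minimal sketch of that idea, assuming a Mach-gated interlock plus a dedicated override toggle. The class, the threshold, and the flag are illustrative guesses in Python (borrowing the Mach 1.4 callout mentioned elsewhere in this thread), not details from the NTSB report or the actual SpaceShipTwo design:

            # Illustrative only: names and numbers are assumptions, not the real avionics.
            class FeatherUnlockInterlock:
                """Blocks the unlock command outside a safe envelope, with a pilot override."""

                SAFE_UNLOCK_MACH = 1.4  # assumed callout speed, per other comments in this thread

                def __init__(self):
                    self.override_engaged = False  # dedicated emergency toggle

                def engage_override(self):
                    # Pilot deliberately bypasses the interlock if it fails closed in flight.
                    self.override_engaged = True

                def unlock_permitted(self, mach: float) -> bool:
                    # Normal path: only allow unlock once the vehicle is past the safe speed.
                    return self.override_engaged or mach >= self.SAFE_UNLOCK_MACH

            interlock = FeatherUnlockInterlock()
            print(interlock.unlock_permitted(0.9))   # False: transonic, unlock refused
            print(interlock.unlock_permitted(1.5))   # True: past the assumed callout
            interlock.engage_override()
            print(interlock.unlock_permitted(0.9))   # True: emergency override in use

          The normal path never depends on the pilot remembering the speed, while the override keeps a failed interlock from trapping the crew.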

          • Which, by the way, might be able to fail itself, and keep the pilot from unlocking the tail section when it needs to be unlocked. Killing the pilot and co-pilot. Hellova world, eh?

            True, but engineering is oftentimes about weighing risks against each other. The risk of pilot error in this case has been demonstrated to be a real threat.

            I'm in total agreement there. My point - if there is much of one - is that hindsight is 20/20. Sometimes we talk about these things as if there was some major negligence by someone; it's the old "Something happened, so someone has to go to jail" outlook. There will be more problems, and more fixes.

            In the end though, if we make the thing the mythical "safe enough", it might be too heavy to fly.

        • by Anonymous Coward

          Fun fact: The unlock is done fairly early (Mach 1.4) because if it doesn't work, they have to abort the ascent, dump the rest of the oxidizer and land. Full burn to apogee without an unlockable feather will kill everyone on re-entry (vehicle breakup).

          The design kinda hinges (pun intended) on the feather working right every time. There is no backup.

    • It almost sounds like no one knew that doing it too early would break it.

      • by Dutch Gun ( 899105 ) on Tuesday July 28, 2015 @05:28PM (#50200181)

        I can't imagine the engineers who designed this wouldn't be aware of those consequences. In fact, I'd go so far as to call this a partial failure of the engineering department as well - specifically, the ones who created the cockpit controls. I mean, the spacecraft basically had a single lever in the cockpit which, if pulled at the wrong time, would result in the fiery destruction of the spacecraft and death to all aboard. That's a hell of a consequence for a single mistake in the cockpit.

        Granted, clarity in hindsight and all that, but it just seems surprising to me that this possibility wasn't given more thought, given that this was a major feature of the spacecraft. You can imagine they're probably taking a second look at other systems and trying to figure out what the potential outcomes of human error might be and ways to mitigate those errors. At least, I hope they're doing that.

        • It sure seems like they would at least have had a Ready indicator for unlocking. You want the pilot making as few decisions as possible, particularly during those critical moments of the flight.
        • by metlin ( 258108 )

          As a pilot, I cannot agree more. Some of the cockpit controls out there are downright obnoxious, especially for rotary wing.

          I have a friend who is a Harrier jet pilot, and I have heard some horror stories on landing those on aircraft carriers.

          Usually, we are told what *not* to do, and so unless explicitly forbidden (e.g., do not do X before this time), we will assume it will be alright. This is clearly an engineering and a documentation/training failure.

          It's easy to blame the pilot, but if anything, he's a

        • I can't imagine the engineers who designed this wouldn't be aware of those consequences.

          My line of work deals with exactly this kind of thing. Engineers are in many cases obligated ethically or via business requirements to find the most inherently safe design that fits the design criteria, but one thing we (engineers) fail at most spectacularly is taking into account human factors, especially operator actions.

          It has been seen in some spectacularly bad decisions over the years. The introduction of control systems removed the cost of adding alarms, so they put alarms on everything, overwhelming the o

    • by decep ( 137319 )

      Because in rocket ships and other experimental craft, as much control as possible should be deferred to the human pilot. Even to the possible detriment of the pilot.

      If the pilot needs to purposefully destroy the craft to prevent greater harm or damage, even if it kills him, you do not want the equipment to respond with "I'm sorry, Dave. I can't let you do that."

      • Comment removed based on user account deletion
        • by KGIII ( 973947 )

          Too soon! No, not really. He seems to have gotten the lyrics wrong though, he should have stuck with the original. "Rocky Mountain high..." He went with the opposite as I recall.

    • If there was a criterion for safe unlocking of the hinged tail section, then why wasn't it interlocked until the criterion was satisfied?

      There are problems with interlocks that aren't often appreciated by the armchair engineer. They add weight and complexity to a system. They themselves can fail. They add to the maintenance burden. They add to training. Etc., etc. TANSTAAFL.

      A bigger error here is reliance on operator training. It's the least reliable form of ensuring a certain outcome.

      Yet, for

      • They add weight and complexity to a system. They themselves can fail. They add to the maintenance burden. They add to training, Etc... etc... TANSTAAFL.

        They add weight if they aren't free (i.e. electrical systems). They themselves fail typically when some other system fails too (the logic solver is the most reliable component of any system). They add to the maintenance burden (depending on how they are implemented, mechanical yes, electrical barely if any). They add to training .... Training people on the function of automated systems and the meaning of lights is much easier than training people on having to make the decision themselves. I do agree about the free lunch comment, but this sandwich very likely isn't as expensive as you think, unless the plane is truly purely mechanical (unlikely).

        Yet, for being the least reliable, it's a method that works very well - presuming the operator is properly trained.

        No it doesn't. Not even in the slightest. The single most reliable thing any industry has done is reduce reliance on the most fallible component. We create interlocks because operators are fallible; we install safety systems because primary control systems are fallible. When engineering protection systems, any system that requires operator action can, according to the standards, at best be claimed to work 90% of the time; a well-engineered safety system can achieve four orders of magnitude better reliability. But all of that is neither here nor there, as fundamentally disasters are prevented by layers of protection. You add layers without common causes of failure to improve safety. Improving training is an expensive way to get an almost non-existent increase in safety.
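
        To make the orders-of-magnitude point concrete, here is a back-of-the-envelope sketch in Python using the figures quoted above: an operator at best ~90% reliable on demand, an engineered layer roughly four orders of magnitude better. The numbers are the comment's, not values from any particular standard or from the NTSB report:

          # Illustrative arithmetic only; the failure probabilities come from the comment above.
          def combined_pfd(layer_pfds):
              """Probability that every independent layer fails on the same demand."""
              result = 1.0
              for pfd in layer_pfds:
                  result *= pfd
              return result

          operator_only = combined_pfd([0.1])                  # trained operator alone
          operator_plus_interlock = combined_pfd([0.1, 1e-5])  # add an engineered interlock

          print(f"operator alone:          {operator_only:.0e}")            # ~1e-01
          print(f"operator plus interlock: {operator_plus_interlock:.0e}")  # ~1e-06

        The multiplication only holds when the layers share no common cause of failure, which is exactly the caveat made above.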

        • this sandwich very likely isn't as expensive as you think

          Only because, like most armchair engineers, you've breezily handwaved away issues you quite clearly have no clue about.

          Yet, for being the least reliable, it's a method that works very well - presuming the operator is properly trained.

          No it doesn't. Not even in the slightest.

          Millions (billions?) of man-hours of operation of aircraft, spacecraft, submarines, etc., etc., say just the opposite. Again, you have no fucking clue what you're talkin

          • Nice try, mate, but as a protection systems engineer I have more clue than most, especially about the cost and reliability of various factors. The airline industry is a model upon which many other industries are shaping their systems, precisely because of how carefully control has been taken away from pilots.

            Specifically, you may want to look up the accident statistics and the direction in which the trend is moving. Most notable was the introduction of better control systems for airplanes in the 70s, which has led to a steady d

        • by Viceroy ( 26756 )

          but this sandwich very likely isn't as expensive as you think, unless the plane is truly purely mechanical (unlikely).

          As a former Scaled engineer, I can tell you, the plane is almost purely mechanical. SS1 was completely mechanical, except for an electric trim mechanism for the horizontal stabilizer. SS2 was identical, except the horizontal stabilizer was boosted by a self-contained electromechanical system that took a long time to design with failure modes that were controllable. That means pushrods and cables/pulleys to the control surfaces, large springs for gear extension, and mechanical actuators for the feather an

    • by KGIII ( 973947 )

      Spoken like someone who has never made stuff for users. If you put a big red button on it with a label that says "Do not push!", the first thing I am going to do, after you leave the room, is push that. Imma push it hard too. General Principle (and his army of ants) has told me to do so, and I am not one to disobey orders lawfully given.

  • by Anonymous Coward

    I'm no expert on pilot training... far from it.

    But if I were learning to fly a spaceship, the first question out of my mouth would be "what all could kill me?"

    • But if I were learning to fly a spaceship, the first question out of my mouth would be "what all could kill me?"

      Almost everything. The question I hear astronauts apparently ask is "what is going to kill me next? [lifehack.org]" It seems to be about 90%+ of their training. Trying to figure out all the ways they can die and how to mitigate the chances of it actually happening.

      • by Whiternoise ( 1408981 ) on Tuesday July 28, 2015 @05:36PM (#50200231)

        That's pretty standard for all aviation training. Flying is easy, much easier than driving in a lot of ways. Not killing yourself is a lot harder. That's why pilots have reams and reams of checklists covering pretty much every conceivable problem that can happen. Similarly when training in a simulator, the operators can pretty much throw the book at you to see how you react to losing all your instruments and a wing while flying through a thunderstorm.

        NASA's generic rulebook is over 2000 pages long and is well worth a flick through if you're a space geek: http://www.jsc.nasa.gov/news/c... [nasa.gov]

    • by PPH ( 736903 )

      This switch will kill you ... sometimes. Other times it is essential to save your life.

  • From the report:
    The unlocking of the feather during the transonic region resulted in uncommanded feather operation because the external aerodynamic loads on the feather flap assembly were greater than the capability of the feather actuators to hold the assembly in the unfeathered position with the locks disengaged.

    So maybe the copilot thought that he was preparing for the future feathering operation by unlocking it, and he did not think he was initiating the feathering. Usually an "unlock" switch is only a permissive, and it does not initiate the actual operation.
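
    A minimal sketch of that "permissive" reading, plus the uncommanded-movement path the report describes, in Python. The state names and the load check are assumptions for illustration, not the real avionics logic:

      from enum import Enum

      class FeatherState(Enum):
          LOCKED = "locked"
          UNLOCKED = "unlocked"   # permissive only: nothing is supposed to move yet
          DEPLOYED = "deployed"

      class Feather:
          def __init__(self):
              self.state = FeatherState.LOCKED

          def unlock(self):
              # In this reading, unlock just removes the lock; it is not a deploy command.
              if self.state is FeatherState.LOCKED:
                  self.state = FeatherState.UNLOCKED

          def command_deploy(self):
              # The separate, deliberate actuation step.
              if self.state is FeatherState.UNLOCKED:
                  self.state = FeatherState.DEPLOYED

          def apply_aero_loads(self, loads_exceed_actuators: bool):
              # The failure mode in the report: once unlocked, external loads alone
              # can overpower the actuators and move the feather uncommanded.
              if self.state is FeatherState.UNLOCKED and loads_exceed_actuators:
                  self.state = FeatherState.DEPLOYED

    In this model the unlock switch moves nothing by itself; the accident path is the transonic loads overpowering the actuators once the lock is out of the picture.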

    • Perhaps that's one reason other spacecraft use names that are very specific. I always wondered why they would, say, command the pilot to "disengage the IVVIM (Intra-Vehicular Visual Illumination Mode)" instead of telling them to "turn the light out". If the unlock switch had some god-awful name describing exactly what it did, then maybe the pilot wouldn't have thought "let's unlock this now so we'll be ready".

      • If the unlock switch had some god-awful name describing exactly what it did, then maybe the pilot wouldn't have thought "let's unlock this now so we'll be ready".

        You mean like "self destruct button"?

      • by Anonymous Coward on Tuesday July 28, 2015 @06:16PM (#50200467)

        It appears that deploying the feather was a multi-step operation. The flap covering the feather is unlocked, then the flap is opened, then the feather is deployed. The pilot probably knew that the feather could not be deployed at the speed they were going, but did not know/understand that the flap could not stay closed if unlocked at the speed they were going. Thus, the pilot unlocked the flap; from there, whatever latch made step 2 work gave way, the flap opened, and the feather deployed on its own.

        If the unlock switch had some god-awful name describing exactly what it did

        Maybe the button will be renamed "Remove Restraints Holding Feather Flap Closed During Transonic Region".

    • It appears that unlocking it just allowed dynamic forces outside the craft to move the feather without being commanded to. The external forces simply overcame the mechanical system that was holding it retracted. A transonic slipstream exerts a hell of a force.

      In my view this is a dual failure: a failure by the pilot, who (apparently) wasn't trained on when not to unlock the system, and an engineering failure as well; it seems a common-sense thing to lock out potentially (or positively) fatal mis-operations.
    • Nobody likes to "get behind the airplane". I execute checklists as soon as practical and get things set up for the next phase of flight as soon as possible so we're ready. Exceptions being things in the class of the feather on the ship in question: Flaps, landing gear, mostly airspeed and aerodynamics dependent items.

      - A commercial pilot

  • by Dutch Gun ( 899105 ) on Tuesday July 28, 2015 @05:07PM (#50200077)

    It's interesting, as the unique tail section was actually touted as a "safety feature" by the company. I'm not necessarily saying it can't be the case, but like any feature, even a safety feature (see: exploding airbags), defects or improper use can cause more harm than its absence would.

    The moveable booms are intended to provide a fail-safe mechanism for positioning SpaceShipTwo during the fiery re-entry into the atmosphere. Scaled pilots were well aware of what could happen if they unlocked the feather too late, but training about its early release was ignored, accident investigations found.

    It's a bit strange, as it seems like such a fundamental error - not some obscure feature that could be overlooked. What pilot would say to himself "Hey, I know I'm supposed to unlock the tail at time X, but what the hell, why not just do it now?" It seems really strange that they wouldn't have precise procedures for this, since it's such a critical part of the entire design.

    It's a hard way to learn a lesson like this.

    • >> It's a hard way to learn a lesson like this.

      I don't think he learned the lesson.

      But we did.

    • by Anonymous Coward

      It's no mere coincidence that the attitude change and breakup occurred at Mach 1.0 during deceleration.

      The feathering mechanism operated in much the same way flaps do on a conventional aircraft. Flaps create additional drag and serve to change the angle of attack (to increase forward visibility) during descent. The feather was supposed to stabilize SpaceshipTwo's attitude in much the same fashion.

      Flaps cannot safely be deployed above a defined velocity, and this speed is specifically called out in every air

    • by Rinikusu ( 28164 )

      Agile development. Deploy early, fail quickly.

    • It's interesting, as the unique tail section was actually touted as a "safety feature" by the company. I'm not necessarily saying it can't be the case, but like any feature, even a safety feature (see: exploding airbags), defects or improper use can cause more harm than its absence would.

      An improperly implemented safety feature (emergency ballast blow system) contributed to the loss of USS Thresher... In the same way, the Apollo 1 crew died (in part) because of a system (a well locked down hatch) that had b

      • Thanks for the link. The report's gist seemed to be "Everyone should have realized that you shouldn't rely on the pilot to not flip a switch at the wrong time, the results of which will cause the spaceship to be destroyed", and then describing in detail the procedures that should have been in place to catch those sorts of issues, along with recommendations for future procedures to prevent this from happening.

        As you said, it's a bit morbid, but this is generally how we learn how to make things better and sa

    • by Viceroy ( 26756 )

      They do have precise procedures about this. It's called the flight test card. The test pilot flies the procedures exactly as they are written on the card. They do nothing else unless some other anomaly during the flight requires them to fall back to basic flight training and just fly the airplane. They brief to that card before the test flight. They practice and train to that card on the simulator. If you know that the cockpit environment is going to be busy, you train your muscle memory to follow that

  • You're an experienced test pilot of a rocket powered ship and you have to be specifically trained to anticipate the effects of slamming on the brakes while traveling at supersonic speed?

    I suspect he knew full well the likely outcome but just had a brain fade. Probably what was missing was some kind of hardware interlock so that this couldn't have happened, or else it required both pilots acting at once to enable it.

    • by 0123456 ( 636235 ) on Tuesday July 28, 2015 @06:10PM (#50200425)

      You're an experienced test pilot of a rocket powered ship and you have to be specifically trained to anticipate the effects of slamming on the brakes while traveling at supersonic speed?

      As touched on in a comment above, he didn't deploy them, he unlocked them. As I understand it, he unlocked them too early, so the deployment mechanism was unable to prevent them from deploying under the stress of supersonic flight at relatively low altitude.

      You want to unlock them early, because, if you can't unlock them, you can still cut the engine and glide back. You don't want to unlock them too early, because this happens.

      • This. It makes sense to unlock the feathers early for the reason you stated. It was probably not thought about in the cockpit at the time that unlocking them NOW would cause them to deploy NOW. All aircraft have some engineering compromises and this apparently was one of them. Unfortunate as it is, now we know.
  • If the pilot is dead in the airplane crash, then it is the fault of the pilot.
    ALWAYS.
    You know, the guy who can't defend himself anymore. Put everything on him.
    It's not the maker's fault, the designer's fault, the maintenance's fault, the ergonomics' fault, the company's fault, nothing of these.
    Always the pilot's.
    This is sickening.

    • by KGIII ( 973947 )

      Except this is, you know, "poor pilot training" (as well as other things listed). Perhaps you misunderstood (or I do)? Poor pilot training is, you know, not the pilot's fault but the fault of the trainer and not the trainee. It is as if I teach you how to solder components into a PCB and you follow my instructions and still do it wrong; then it is not your fault but my own fault for having improperly trained you.

      Your rant is nice, I will give credit where it is due, but it does not have much to do with reality.

  • I think this is a case of "Blame the dead guy, because he can't defend himself."

    I cannot believe that an experienced test pilot, in his right mind, would not have thought through the possible consequences of actuating that lever at a higher speed than it was designed for. I simply cannot believe it. Especially given that history is littered with examples of airplanes not being able to pull out of dives due to control surfaces not responding properly (or ripping off) in supersonic or transonic flow. Alsbu

    • Especially given that history is littered with examples of airplanes not being able to pull out of dives due to control surfaces not responding properly (or ripping off) in supersonic or transonic flow. Alsbury would have been intensely aware of these concepts.

      Well it's not exactly like deploying flaps at high speed and having them rip off the plane and damage stuff. He didn't deploy anything, he only unlocked it. He thought the motors would hold the feather in place, and nobody at Virgin or Scaled drilled into his head that prematurely unlocking the feather = DEATH. Quite possibly because they also didn't think the feather would move by itself just from air pressure and vibration. We know it NOW, but before?

      Unlocking the feather while going up was part of the proc

      • by Viceroy ( 26756 )

        Like every test pilot at Scaled, Mike was a competent engineer in his own right, in addition to being a test pilot. I guarantee that everyone knew that if the loads were high enough the feather would move if it was unlocked, including the pilots. Like I said in another comment, I also guarantee that Mike flew the procedures on that test card plenty of times on the simulator and threw the feather unlock at the Mach 1.4 callout correctly every time. But in a high workload environment, no matter how much tr

    • by KGIII ( 973947 )

      Wow... Two in a row?!?

      Nobody, except you and the last guy, is blaming the pilot. Well, you two are specifically not blaming the pilot. If you read then you should be able to pick this up. They are blaming poor pilot training. That is not the pilot's fault and nobody is saying that it is. Perhaps, in your zeal to seek a reason to be displeased, you failed to either read the article (or summary) or failed to comprehend it? The pilot did what they did because they were improperly trained. That is hardly the fa

      • Just want to highlight the quick mention you made of the Navy. The US Navy's safety programs are mature and thorough enough that NASA had several meetings with Navy personnel about their SUBSAFE program after the Columbia disaster to improve the NASA programs. It says a lot about the Navy's SUBSAFE program that the best rating stops at merely "satisfactory."
        • by KGIII ( 973947 )

          I spent a wee bit of time on a ship and guarded a detention facility for a while. I was in the Marines at the time (obviously) and was impressed. "They run a tight ship." They are, hands down, the model for a blue water navy and safety is paramount. Look at their firefighting as well. US Navy, taking shit seriously since the start. They also have a very strict culture about adhering to rules (and have good rules in place for a reason).

  • by tipo159 ( 1151047 ) on Wednesday July 29, 2015 @12:54AM (#50202223)

    I have read the NTSB Executive Summary. As far as I have seen, the full report has not yet been made available.

    The claim made by the report is that the accident was the result of human error: one of the pilots unlocked the feather prematurely, the actuators that control movement of the feather were overcome by aerodynamic forces (while going through trans-sonic speeds), and the feather moved. Deploying the feather is a two-step process: unlocking, which one pilot can do, and commanding it to move, which requires both pilots to take action (see the sketch at the end of this comment).

    What I didn't see in the Executive Summary was whether Scaled Composites expected the actuators to be able to control movement of the feather while the vehicle was going trans-sonic.

    Just after the accident, there were statements attributed to Scaled that the actuators should have been able to hold the feather in position after it was unlocked. If the people working on and with the vehicle thought this, how could it be human error for the feather to be unlocked when it was?

    If it turns out that those earlier statements were incorrect and Scaled knew that it was a bad idea to, say, unlock while going through trans-sonic, then the Executive Summary should have indicated that. I just find it odd that it doesn't say anything about what Scaled had communicated to its pilots about the capabilities of the actuators for the feather once it was unlocked.
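
    A minimal Python sketch of the two-step process as the Executive Summary describes it: unlocking is a single-pilot action, while moving the feather requires both pilots. How the two-pilot requirement is actually implemented in the vehicle is not stated, so the confirmation mechanism here is purely an assumption for illustration:

      class FeatherControls:
          def __init__(self):
              self.unlocked = False
              self.confirmations = set()

          def unlock(self, pilot: str):
              # Step 1: either pilot can unlock on their own.
              self.unlocked = True

          def confirm_move(self, pilot: str):
              # Step 2: record each pilot's intent to move the feather.
              self.confirmations.add(pilot)

          def move_feather(self) -> bool:
              # The move command is honoured only when unlocked and both crew agree.
              return self.unlocked and {"pilot", "copilot"} <= self.confirmations

      controls = FeatherControls()
      controls.unlock("copilot")
      controls.confirm_move("copilot")
      print(controls.move_feather())   # False: still needs the other pilot
      controls.confirm_move("pilot")
      print(controls.move_feather())   # True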

  • From Phys.org [phys.org]:

    In determining the probable cause of the accident, board members were focused on how well officials prepared for the worst. NTSB member Robert Sumwalt said Scaled Composites "put all their eggs in the basket of the pilots doing it correctly."

    "My point is that a single-point human failure has to be anticipated," Sumwalt said. "The system has to be designed to compensate for the error."

    Accusing the test pilot of being untrained and/or incompetent or whining about the risks of interlocks is

    • by Viceroy ( 26756 )

      As a friend of Mike, a former Scaled Engineer, and one that was directly involved in a previous Scaled accident, I have to completely disagree with your statements.

      The culture at Scaled has been and will continue to be focused on nothing but safety. This was the first flight accident that Scaled has ever had since its founding in 1982, with dozens of first flights of new aircraft designs and hundreds of follow-up test flights. There had been engineering mistakes on many flights previously (I certainly made s

      • Good answer, and sorry about the loss of a friend. While I understand his venom towards management of large corporations, I think SC is in a different league.
