If you didn’t already know, we are living in a brave new world where computers fly planes and pilots act as failsafes that make sure these highly intelligent computer systems don’t malfunction. But is watching over a computer system something that pilots, or even human beings, for that matter, are good at? Is it something that they should be tasked with doing?
According to a new study conducted by researchers from the University of California at Santa Barbara (UCSB) and the National Aeronautics and Space Administration (NASA), using pilots as monitors for computer flight systems can be incredibly trying, so much so that it can lead to mistakes.
Automation and the Crash of Asiana Airlines Flight 214
It was late in the morning of July 6, 2013, when Asiana Airlines Flight 214 was making its final descent into San Francisco International Airport (SFO). The roughly 10-hour-and-15-minute flight from Incheon International Airport in Seoul, South Korea, had been uneventful until the flight crew of OZ214 began to prepare for landing.
The Boeing 777-200ER, with 307 people aboard, was flying too low and too slow in the seconds before it was supposed to touch down on runway 28L. The plane’s tail smashed into a seawall just short of the runway and ripped away from the fuselage, which spun and slid along the SFO runway. When the tattered plane finally came to a complete stop, it caught fire, and passengers were forced to evacuate using the emergency chutes.
Prior to the Asiana Airlines Flight 214 crash at SFO, the U.S. hadn’t seen a fatal passenger airline crash since 2008. Three young Chinese students were killed in the OZ214 crash and dozens of other people sustained injuries.
In the investigation that followed the fatal crash, officials from the National Transportation Safety Board (NTSB) determined that pilot error and confusion were the main causes of the air disaster. In the final report on the OZ214 crash, investigators said the pilots were confused about whether one of the Boeing 777-200ER’s key controls was maintaining the plane’s speed. Put simply, by the time they noticed that the approach was too low and too slow, it was too late to prevent the crash.
According to NTSB chair Christopher Hart, the flight crew of Asiana Airlines Flight 214 were overly reliant on automated flight control systems that they didn’t fully understand.
This is the part where you ask: With all this automated technology detailing aircraft position, speed, altitude, and a number of other flight functions, how could this major air disaster have happened? If so much of flying an airliner these days is based on automation, how could these pilots have failed?
Pilots Monitoring Automated Flight Functions Could Be a Recipe for Disaster
The answer is that when pilots are charged with monitoring flight technology while also dealing with all the other variables involved in flying a plane safely, mistakes can be made quite easily. Worse yet, according to the authors of the NASA/UCSB study, there is no clear fix that would make pilots better monitors.
Perhaps the best way to lead into a discussion of this study is to define what is meant by the word ‘monitoring.’ In addition to handling tasks like talking with air traffic controllers, configuring flight systems, and performing other duties in the cockpit, pilots who fly commercial airliners (like the Boeing 777) also act as the monitors, or overseers, of flight automation technology, making sure that systems operate as designed.
Modern planes like the 777 are generally flown by computer autopilot systems that track the aircraft’s position via motion sensors and dead reckoning. Automation technology is also used to land commercial planes. A recent survey of airline pilots showed that Boeing 777 pilots reported spending just seven minutes manually piloting aircraft in a typical flight. Airbus pilots spent even less time than that.
The flight monitoring study paired NASA research psychologist Steve Casner (also a pilot) with Jonathan Schooler, professor of psychological and brain sciences at UCSB. Their paper, entitled “Vigilance Impossible: Diligence, Distraction, and Daydreaming All Lead to Failures in a Practical Monitoring Task,” appears in the latest issue of the journal Consciousness and Cognition.
Casner and Schooler wanted to find out why monitoring failures happen, even among veteran pilots. For the study, the researchers asked 16 commercial airline pilots to monitor the progress of a simulated routine flight where high levels of cockpit automation were used for navigation and steering of the aircraft.
Past experiments outside the cockpit have shown that monitoring a computer can be a tiring process that leads to fatigue and inattention, so the researchers wanted to know specifically how professional airline pilots handle the mundanity of staring at a computer screen.
What Did the Study Show?
Schooler and Casner found that pilots in the cockpit often get sidetracked from monitoring when they perform tasks like speaking to air traffic controllers or configuring the plane’s systems. These distractions can be a net positive in that they break up the tediousness of monitoring. However, they can also cause pilots to miss important events during flight.
Perhaps the most interesting part of the study is what researchers found when pilots weren’t interrupted. Rather than being able to focus solely on monitoring, pilots ended up creating their own distractions by doing what Schooler and Casner refer to as “mind wandering.”
When researchers asked pilots from time to time what was on their minds, pilots admitted to thinking about “task-unrelated thoughts” up to 50 percent of the time. These brief lapses in monitoring attention, the study found, frequently caused missed events mid-flight. In total, the pilots in the study failed to notice 25 percent of the altitude crossings they were supposed to monitor.
The researchers admitted that they were surprised by the number of times the pilots missed the altitude callouts, as well as the high frequency with which the pilots admitted to letting their minds wander.
“We should be very wary of relying on people to serve in a monitoring capacity, especially now, when we do have technology that can fill the monitoring role,” Schooler said of the study.
Casner agreed with this sentiment, adding that watching over a computer system is incredibly trying, if not impossible, for a human being to do well. “You can try paying attention, and you can try taking brief breaks, but sooner or later you’ll miss something important,” Casner said.
Perhaps this study shows that automated systems would be better suited to monitoring flight functions, making technology the failsafe rather than a mind-wandering human being.