Self-Driving Car Accident Lawyer
A self-driving car is no longer a futuristic idea – if you live in
a major city, it is possible that you have already seen one, perhaps without
even realizing it.
Dozens of states have passed legislation related to the operation of autonomous
vehicles, with more to follow. Many of the world’s leading auto manufacturers
and technology companies, including General Motors, Ford, Daimler-Bosch, Tesla,
Waymo, BMW and Volkswagen, among many others, are investing billions of dollars
in autonomous vehicle (AV) research.
Are Self-Driving Cars Safe?
Proponents of self-driving car technology tout it as a game changer in
transportation safety. Human error, after all, is one of the leading causes of
truck accidents,
car accidents and
bus accidents. Automated driving aims to eliminate human error by relying instead
on computers, sensors, artificial intelligence, and algorithms.
However, taking human drivers out of the equation creates the potential
for other safety issues. For example:
- Will driverless vehicles be thoroughly tested before regulators deem them safe?
- Could a rush to get robot-cars on our roads lead to preventable accidents?
- Cyber-attacks affect millions of people each year. Will the computer operating systems for autonomous vehicles be protected from cyber-attacks?
- What about software malfunctions?
- Who is liable when a self-driving vehicle causes an accident?
We hope the following information will be a useful resource for anyone
harmed by a self-driving car accident. If you would like to speak with
an experienced attorney about a driverless vehicle accident, the law firm
of Wisner Baum is here to help you.
Our firm has handled thousands of transportation accident lawsuits on behalf
of victims whose lives were turned upside down by negligence, vehicle
defects and other safety issues. Across all areas of practice, our attorneys
have won over $4 billion on behalf of clients.
Call (855) 948-5098 today for a free case evaluation.
How Do Driverless Cars Work?
To understand how driverless cars work, we need to define what “self-driving”
actually means. SAE International (formerly the Society of Automotive Engineers)
defines six levels of driving automation:
- Level 0 – No automation. The human driver is in control of 100% of the driving.
- Level 1 – Driver assistance. The vehicle can assist the human driver with steering, braking, and accelerating.
- Level 2 – Partial automation. The vehicle can control both steering and braking/accelerating under some circumstances. The human driver must monitor the driving environment at all times and perform the remaining driving tasks.
- Level 3 – Conditional automation. The vehicle can perform all driving tasks under some circumstances, but the human driver must be ready to take back control at the request of the automated system.
- Level 4 – High automation. Vehicle systems can perform all driving tasks and monitor the driving environment in certain circumstances. The human driver does not need to pay attention under those circumstances.
- Level 5 – Full automation. The automated vehicle is in control of 100% of the driving.
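The six levels above lend themselves naturally to software. As a rough illustration only, here is a minimal Python sketch of how this taxonomy might be encoded; the names and the monitoring rule are our own simplification, not any manufacturer’s actual code:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE levels of driving automation (0-5), simplified."""
    NO_AUTOMATION = 0           # Human performs all driving
    DRIVER_ASSISTANCE = 1       # System assists with steering or speed
    PARTIAL_AUTOMATION = 2      # System steers and brakes/accelerates; human monitors
    CONDITIONAL_AUTOMATION = 3  # System drives; human must take over on request
    HIGH_AUTOMATION = 4         # System drives and monitors, in certain circumstances
    FULL_AUTOMATION = 5         # System controls 100% of the driving

def human_must_monitor(level: SAELevel) -> bool:
    """At Levels 0-2, the human driver must watch the road at all times."""
    return level <= SAELevel.PARTIAL_AUTOMATION
```

At Level 3 and above, responsibility begins shifting from the driver to the system, which is precisely where the liability questions discussed below become hardest.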
Driverless cars utilize several systems that work together to direct the
vehicle. Sensors and GPS continuously update information about the driving
environment. A central computing system analyzes and interprets all of
the data, then makes decisions to manipulate the vehicle.
Robocars usually have some combination of four key sensor types:
- Ultrasonic Sensors: Use sound waves to detect the position of curbs and other nearby obstacles. Often mounted near the wheels, ultrasonic sensors already appear in many newer model vehicles, primarily for parking.
- Radar Sensors: Placed around the perimeter of the vehicle, radar sensors use radio waves to track and monitor other vehicles in real time.
- Image Sensors: Using cameras, image sensors read traffic signs and keep track of obstacles, other vehicles, and pedestrians.
- LiDAR Sensors: Use laser beams to detect fine details of the vehicle’s environment, including the edges of roads and lane markings.
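To make the division of labor concrete, here is a deliberately simplified Python sketch of the sense-interpret-act loop described above. Every name here is hypothetical, and a real autonomous driving stack is vastly more complex:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One snapshot of fused sensor data (all fields hypothetical)."""
    ultrasonic_m: float         # distance to nearest obstacle, from sound waves
    radar_closing_mps: float    # closing speed on the vehicle ahead, from radio waves
    camera_sign: str            # traffic sign read by the image sensors
    lidar_lane_offset_m: float  # offset from lane center, from laser returns

def decide(frame: SensorFrame) -> str:
    """Central computer interprets the fused data and picks one action."""
    if frame.ultrasonic_m < 0.5:
        return "emergency_brake"       # obstacle dangerously close
    if frame.camera_sign == "stop":
        return "brake_to_stop"
    if abs(frame.lidar_lane_offset_m) > 0.3:
        return "steer_to_lane_center"  # drifting out of the lane
    if frame.radar_closing_mps > 2.0:
        return "slow_down"             # closing too fast on the car ahead
    return "maintain_speed"
```

The safety questions raised in lawsuits often turn on exactly these steps: was a sensor defective, was the data interpreted correctly, and was the resulting decision reasonable?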
Injuries and Deaths Involving Self-Driving Cars Are “Inevitable”
One of the purported benefits of autonomous vehicle technology is that
it will reduce the number of accidents caused by human error. While we
can only speculate on what impact driverless cars will have on traffic
accidents, one thing is clear: a car, bus or truck accident can be devastating,
regardless of whether it was caused by a computer or a human driver.
Driverless vehicle accidents have already resulted in several deaths. One
fatal crash in 2018 forced Uber to temporarily halt testing of its self-driving
cars. Toyota and graphics chip maker Nvidia have also been forced to halt
testing due to accidents.
According to John Paul MacDuffie, director of the Program on Vehicle and Mobility Innovation at the University
of Pennsylvania’s Wharton School, people are “inevitably”
going to be injured and killed in the testing and improvement of autonomous vehicles.
“Any situation where you’re expecting the human and the computer
algorithms to share control of the car, it is very tricky to hand that
control back and forth,” MacDuffie said.
Constantine Samaras, director of Carnegie Mellon University’s Center
for Engineering and Resilience for Climate Adaptation, shares this view,
noting that early testing of autonomous vehicle technology that involves
differing roles for human drivers will be rife with confusion.
“This is a challenge for this transition to automation, where there’s
this muddled mixture of human responsibility and robot responsibility,”
says Samaras.
As the technology continues to improve, there is no reason to doubt that
driverless vehicles will help reduce traffic collisions caused by human
error. However, it is unrealistic to expect driverless vehicles to operate
perfectly on every single trip, especially in the early stages. Unfortunately,
anytime the technology fails, lives will be put at risk.
Autonomous Vehicles and the Trolley Problem: Programming AI Ethics
The possibilities for self-driving vehicle accidents raise countless ethical
questions. One of the most common is a modern variation of the
trolley problem:
A car carrying three people comes upon three pedestrians crossing the street.
There are two choices – to crash into a concrete barrier, killing
all the occupants of the vehicle, or hit the pedestrians, killing them.
Which tragic scenario should an autonomous car choose?

In 2016,
scientists from the Massachusetts Institute of Technology (MIT) posed this scenario and others to millions of people around the world
in an effort to gauge how humans would respond to decisions made by artificial
intelligence (AI). The Moral Machine experiment did find some consensus: people
generally chose to save humans over animals, save more lives over fewer,
and prioritize the young over the elderly. Other responses varied in
accordance with regional cultural norms and economic status.
The study was the first of many tests that will help inform how companies
like Waymo, Tesla, and many others program the AI for autonomous vehicles.
Will these companies share the same ethical standards? Or will certain
companies create AI that chooses one of the above scenarios while others
make different choices?
If self-driving car companies do not share the same AI ethics standards
worldwide, the outcomes of accidents, and thus liability, will vary widely.
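To see why, consider a toy thought experiment in Python. Two hypothetical manufacturers assign different weights to the same harms, so identical inputs produce opposite decisions. This is an illustration of the liability problem, not real autonomous vehicle software:

```python
def choose(options, weights):
    """Pick the option with the lowest weighted harm score."""
    return min(options, key=lambda o: sum(weights[k] * n for k, n in o["harm"].items()))

# The trolley-problem scenario from above, as data (hypothetical values)
options = [
    {"name": "hit_barrier",     "harm": {"occupants": 3, "pedestrians": 0}},
    {"name": "hit_pedestrians", "harm": {"occupants": 0, "pedestrians": 3}},
]

maker_a = {"occupants": 1.0, "pedestrians": 1.5}  # weights pedestrian harm more
maker_b = {"occupants": 1.5, "pedestrians": 1.0}  # protects occupants first

print(choose(options, maker_a)["name"])  # hit_barrier
print(choose(options, maker_b)["name"])  # hit_pedestrians
```

Same crash, same sensor data, opposite outcomes, simply because the weights differ. Until regulators or courts settle whose weights govern, liability in self-driving car accidents will remain unsettled as well.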