According to the National Highway Traffic Safety Administration (NHTSA), the self-driving system in Google’s autonomous vehicles can be considered the “driver” under federal law. According to ZDNet, Google had asked the agency to clarify how its driverless cars could meet the Federal Motor Vehicle Safety Standards. To be seen as compliant with those standards, Google essentially only needed to change the position of the brake pedal and sensors, after which the vehicles were declared safe enough.

[Embedded video: examples of how one of these automated cars views its surroundings.]
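To give a flavor of what “viewing its surroundings” means in practice, here is a very rough Python sketch, not Google’s actual software, and with all names invented for the example, that buckets simulated LiDAR returns into a 2D occupancy grid. Many perception systems build some representation like this before deciding how to steer or brake.

```python
import numpy as np

# Hypothetical illustration only: turn a cloud of LiDAR points
# (x, y distances in meters from the car) into a 2D occupancy grid.
# Real perception stacks are far more sophisticated than this.

GRID_SIZE = 40      # the grid covers 40 m x 40 m around the car
CELL_SIZE = 0.5     # each cell is 0.5 m x 0.5 m
CELLS = int(GRID_SIZE / CELL_SIZE)

def occupancy_grid(points):
    """Mark each grid cell that contains at least one LiDAR return."""
    grid = np.zeros((CELLS, CELLS), dtype=bool)
    for x, y in points:
        # Shift coordinates so the car sits at the center of the grid.
        col = int((x + GRID_SIZE / 2) / CELL_SIZE)
        row = int((y + GRID_SIZE / 2) / CELL_SIZE)
        if 0 <= row < CELLS and 0 <= col < CELLS:
            grid[row, col] = True
    return grid

# Simulated returns: a wall 10 m ahead and an object off to the right.
points = [(0.0, 10.0), (0.5, 10.0), (1.0, 10.0), (8.0, 2.0)]
grid = occupancy_grid(points)
print(f"{grid.sum()} of {grid.size} cells occupied")
```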

This declaration is a huge step forward for artificial intelligence development, but it raises an intriguing question: who is to blame for an accident stemming from the incompetence of a self-driving vehicle? You can’t exactly sue the vehicle itself, unless you want to blame the manufacturer for creating a faulty product. But what if the manufacturer simply blames the passenger for failing to properly “set up” the vehicle? How would something like this work?

As you might expect, liability is a major concern for any autonomous process, and with autonomous vehicles the line of responsibility is blurry at best. As the feds put it in their letter to Google, “If no human occupant of the vehicle can actually drive the vehicle, it is more reasonable to identify the ‘driver’ as whatever (as opposed to whoever) is doing the driving.” When something goes wrong, people want to know who (or what) is at fault, and a vehicle capable of driving itself makes that much harder to determine.

Another huge issue is whether Google’s autonomous cars fit into the current Federal Motor Vehicle Safety Standards at all. In particular, the regulations describe how a motor vehicle should be controlled in terms of human anatomy, with feet and hands operating specific controls. As reported by WIRED:

The rule regarding the car’s braking system, for example, says it “shall be activated by means of a foot control.” The rules around headlights and turn signals refer to hands. NHTSA can easily change how it interprets those rules, but there’s no reasonable way to define Google’s software—capable as it is—as having body parts. All of which means, the feds “would need to commence a rulemaking to consider how FMVSS No. 135 [the rule governing braking] might be amended in response to ‘changed circumstances,’” the letter says. Getting an exemption to one of these rules is a long and difficult process, Walker Smith says. But “the regular rulemaking process is even more onerous.”

While liability will remain a major problem for autonomous cars, this ruling is still a significant step in the right direction. The approval means that, at least for regulatory purposes, a computer can be considered a driver, a role previously reserved for humans. That acknowledgement should make life easier for developers of artificially intelligent systems, though the road ahead will likely still be filled with legal maneuvering. And although Google has slated its automated cars to be available to the public by 2020, we may have to wait a little longer, even for the most basic forms of AI.

Would you trust an autonomous car to get you from point A to point B safely? Let us know in the comments!
