
Autonomous technology: Ethical dilemmas that have surfaced with self-driving cars

Rebecca Merrett

November 22

Self-driving car ethics require proper study, training, and attention to detail. We must understand the ethical concerns of autonomous technology to minimize risk. 

New technology, new problems

When it comes to autonomous technology of any kind, the first thing that comes to mind is our safety, our well-being, and our survival. What are the ethics of self-driving cars? These concerns are not unreasonable. We should start by asking the hard questions:

Who is responsible should a death result from an edge case accident?

What is an acceptable level of autonomy and what isn’t?

How does this technology come to a decision?

With driverless cars – a prime example of autonomous technology – now being deployed on public roads around the world, we must seek answers to these questions sooner rather than later. The ethical dilemmas driverless cars raise today foreshadow those we will confront with future autonomous systems, so addressing them head-on now gives us a head start as technology becomes ever more capable.

Confronting the ethical issues of self-driving cars

MIT researchers are confronting the ethical dilemmas of driverless cars by playing out hypothetical scenarios. When an autonomous car must weigh the safety of its passengers against that of the people it encounters on the street, how should it choose between the lesser of two evils? In these simulations, viewers judge which decision they would make in each scenario, and their responses are then compared with others' and made publicly available.

Through this work, researchers are gathering many people's views on what counts as acceptable and unacceptable behavior for an autonomous car, and on what should guide the impossible choice of sacrificing one life over another. As alarming as that sounds, this research could help data scientists and engineers.

It gives them a better understanding of what actions might be taken should such a rare accident occur – although avoiding the accident in the first place remains the bigger priority. The research is also a step toward facing the issue head-on rather than assuming that engineering alone will solve the problem.


Decision-making alternatives

Some proposed ideas for minimizing the risk of self-driving car accidents include capping the speed of autonomous cars in certain densely populated areas and designating a dedicated right of way for these vehicles.

More sophisticated mechanisms include using machine learning to continuously assess risk and predict the probability of an accident so that action can be taken preemptively to avoid it. The Center for Automotive Research at Stanford (a name seemingly chosen for its acronym, CARS) is looking into these ideas for “ethical programming.”
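To make the idea concrete, here is a minimal sketch of continuous risk assessment: a toy model trained on synthetic data predicts an accident probability, and the vehicle takes a preemptive action when that probability crosses a threshold. The features, labeling rule, and threshold are assumptions made for this example, not details of any production system.

```python
# Illustrative sketch only: synthetic data and a hypothetical labeling rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Features: [speed_mps, gap_to_lead_vehicle_m, pedestrians_per_100m]
X = rng.uniform([0, 1, 0], [40, 120, 30], size=(5000, 3))
# Hypothetical rule: small gaps at high speed near many pedestrians -> "incident"
y = ((X[:, 0] / np.maximum(X[:, 1], 1) > 0.5) & (X[:, 2] > 10)).astype(int)

model = LogisticRegression().fit(X, y)

def preemptive_action(speed_mps, gap_m, ped_density, threshold=0.3):
    """Map the predicted accident probability to a mitigation decision."""
    p = model.predict_proba([[speed_mps, gap_m, ped_density]])[0, 1]
    if p > threshold:
        return f"risk {p:.2f}: slow down and increase following distance"
    return f"risk {p:.2f}: maintain current plan"

print(preemptive_action(speed_mps=22, gap_m=18, ped_density=15))
print(preemptive_action(speed_mps=12, gap_m=60, ped_density=4))
```

In a real vehicle, such a model would run continuously over live sensor input, and the resulting action would feed into motion planning rather than a print statement.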

Putting in place ethical guidelines for everyone involved in the design, implementation, and deployment of driverless cars is another step toward dealing with these dilemmas. For example, the Federal Ministry of Transport and Digital Infrastructure in Germany released ethical guidelines for driverless cars this year, and plans to enforce them so that driverless cars meet certain behavioral expectations.

One guideline, for example, prohibits classifying people by personal characteristics such as race or gender, so that those attributes cannot influence decision-making should an accident occur.
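One simple way a rule like this might be enforced in a data pipeline is to strip protected attributes from the feature set before any decision model sees them. The column names below are hypothetical, and dropping columns alone does not eliminate indirect proxies; the sketch only illustrates the principle.

```python
# Illustrative sketch: remove protected attributes before model training or inference.
import pandas as pd

PROTECTED_ATTRIBUTES = ["race", "gender"]  # per the guideline described above

def strip_protected(features: pd.DataFrame) -> pd.DataFrame:
    """Drop protected attributes so they cannot directly influence decisions."""
    return features.drop(columns=[c for c in PROTECTED_ATTRIBUTES if c in features.columns])

scene = pd.DataFrame([{"distance_m": 12.0, "speed_mps": 8.5, "gender": "female", "race": "unknown"}])
print(strip_protected(scene).columns.tolist())  # ['distance_m', 'speed_mps']
```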

Transparency in the design of driverless cars, and in how their algorithms reach decisions, also needs attention as we work through the ethical dilemmas of driverless cars and other autonomous technology. Consumers of these cars, and the general public, have a right to contribute to the algorithms and models that make these decisions.


Factors to consider

A child, for example, might carry a greater weight than a full-grown adult when a car decides who gets priority in safety and survival; a pregnant woman might be given priority over a single man. Humans will need to decide what weights are placed on whom, and research like MIT's hypothetical-scenario simulations is one way of letting the public openly engage in the design and development of these vehicles.
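As a purely illustrative sketch, crowd judgments of this kind could be tallied into relative priority weights as follows. The categories and responses are invented for the example; this is not MIT's actual methodology.

```python
# Illustrative sketch: aggregate hypothetical survey judgments into priority weights.
from collections import Counter

# Each entry records which party a respondent chose to protect in a scenario.
survey_responses = [
    "child", "child", "adult", "child", "pregnant_woman",
    "pregnant_woman", "adult", "child", "pregnant_woman", "child",
]

counts = Counter(survey_responses)
total = sum(counts.values())

# Normalize choice frequencies into relative weights.
weights = {group: count / total for group, count in counts.items()}
for group, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{group}: {weight:.2f}")
```

Whether such weights should exist at all, and who gets to decide them, is precisely the ethical question this research is trying to surface.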

Where do we go from here?

As data scientists, we hold great responsibility when building models that directly impact people's lives. The algorithms, rules, and logic we create are not so different from the judgment of an emergency doctor who has to make critical decisions in a short amount of time.

Lastly, understanding the ethical concerns of autonomous technology, implementing ways to minimize risk, and then programming the hard decisions is by no means a trivial task. For this reason, self-driving car ethics require proper study, training, and attention to detail.

Written by Rebecca Merrett