AI Technology: Empowerment Versus Ethics


Artificial intelligence (AI) technology can make complex calculations, solve problems and eliminate inefficiencies much faster than the human mind.

However, many engineers are voicing concerns about this accelerated growth and calling for universal ethical standards that would be built into machine intelligence.

Without built-in ethics as a safeguard, they say, the technology could cause serious harm to humans in the course of completing its intended tasks.

Developing Common Ethical Standards

One of the most widely-discussed applications of new AI technology is self-driving cars.

How should the robot controlling a vehicle respond in the event of an unavoidable crash? What if casualties will occur in every possible outcome? How should the machine decide who will live and who will die?

The MIT Media Lab has explored this issue by polling the public online to gather their views on what ethical decision the machine should make.

Crowdsourcing initiatives like the Media Lab's are meant to develop a common set of ethical standards that reflect the values of society as a whole. Once developed, these standards could be coded into AI to prevent robots from causing undue harm to humans.

The Empowerment Approach

Building ethical standards into AI technology has opponents in the robotics engineering field, however.

Researchers at the University of Hertfordshire believe they have developed a better solution for preventing robots from causing humans harm.

According to researcher Christoph Salge, “Empowerment means being in a state where you have the greatest potential influence on the world you can perceive. So, for a simple robot, this might be getting safely back to its power station, and not getting stuck, which would limit its options for movement.”

In this approach, engineers code the robots to maximize their empowerment as well as human empowerment, resulting in actions within certain parameters designed to keep people safe.
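The idea of maximizing options can be sketched in code. The following is an illustrative toy example, not the Hertfordshire team's implementation: empowerment is approximated by simply counting the distinct grid cells an agent can still reach within a few moves, and the robot picks the move that maximizes its own options plus the human's. The grid size, obstacle layout and two-move horizon are all hypothetical choices for the sake of the sketch.

```python
# Toy sketch of empowerment-style control (assumed setup, not the
# Hertfordshire implementation): empowerment is approximated as the
# number of distinct states an agent can still reach.
from itertools import product

GRID = 5                  # hypothetical 5x5 grid world
OBSTACLES = {(2, 2)}      # a blocked cell that limits movement options
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # includes "stay put"

def step(pos, move, blocked):
    """Apply a move, staying on the grid and out of blocked cells."""
    nxt = (pos[0] + move[0], pos[1] + move[1])
    if 0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID and nxt not in blocked:
        return nxt
    return pos  # an invalid move leaves the agent where it is

def empowerment(pos, blocked, horizon=2):
    """Count distinct cells reachable within `horizon` moves - a crude
    stand-in for the information-theoretic definition of empowerment."""
    reachable = set()
    for seq in product(MOVES, repeat=horizon):
        p = pos
        for m in seq:
            p = step(p, m, blocked)
        reachable.add(p)
    return len(reachable)

def choose_action(robot, human):
    """Pick the robot move that maximizes combined robot + human options.
    Each agent is treated as an obstacle for the other, so a robot that
    boxed the human in would lower the score."""
    def score(move):
        r = step(robot, move, OBSTACLES | {human})
        return (empowerment(r, OBSTACLES | {human})
                + empowerment(human, OBSTACLES | {r}))
    return max(MOVES, key=score)
```

Notice that a robot stuck in a corner has lower empowerment than one in open space, which matches Salge's example of a robot keeping itself from getting stuck; and because the human's options enter the score, moves that crowd or trap the human are penalized without any explicit list of "harmful" behaviors.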


Differences Between Empowerment and Ethics

The University of Hertfordshire team believes the empowerment concept is a better safeguard for AI than developing and implementing common ethical standards.

This is mostly due to the difficulty of coding the many complex standards required to cover every potential ethical question a machine may confront. The empowerment approach also offers a different definition of the nebulous concepts of "good" behavior and "harmful" behavior.

By defining good behavior as that which maximizes options for both robots and humans, the empowerment approach creates possible robot responses that apply to a wide variety of scenarios.

You could view the empowerment approach as an application of machine learning rather than of artificial intelligence in the general sense.

Machine learning refers to the practice of giving a machine access to data and allowing it to use the data to learn for itself as a human would. Christoph Salge says of empowerment, “For a more futuristic, human-like robot this would not just include movement, but could incorporate a variety of parameters, resulting in more human-like drives.”
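That definition of machine learning can be made concrete with a minimal example. The sketch below, using made-up sample data, estimates the slope of a simple relationship from example pairs instead of hard-coding the rule; the "learning" is just a least-squares fit, but it shows the machine inferring a pattern from data rather than following explicit instructions.

```python
# Minimal illustration of "learning from data": estimate the slope w in
# y = w * x from example pairs, rather than hard-coding the rule.
# The data points are hypothetical, drawn near the line y = 2x.

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # (x, y) samples

# Least-squares estimate of w for the no-intercept model y = w * x.
w = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

print(round(w, 2))  # close to the true slope of 2
```

Given more data, the same procedure keeps refining its estimate, which is the sense in which the machine "learns for itself" from experience.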

The continued development of machine learning and “human-like” robots will lead to more exciting advances in automation.

These advances could not only hold the key to a more efficient version of everyday life, but also to solving persistent global challenges such as disease, poverty, traffic and environmental issues.

However, the coming age of AI also raises major ethical concerns. How should humans and robots relate to each other? As new biotech technologies gain popularity, how much should the line between humans and robots blur?

Another contentious issue under debate is who should have the final say in these ethical decisions—Silicon Valley developers, the general public or the government? Tesla and SpaceX CEO Elon Musk has said that AI is “potentially more dangerous than nukes.”

This kind of warning from a leader of the tech industry highlights the importance of finding the right answers to ethical concerns about AI before mass implementation goes further.

 

Kayla Matthews is the editor of Productivity Bytes and a regular contributor to VentureBeat, Motherboard, MakeUseOf and Inc.com. Follow her on Twitter to read her latest posts.