I'm Sure This Will Work Out Great

When we talk about morality, we talk about reason, about the experience of pleasure or pain, and about the virtues. People who make robots appear to think that morality comes down to a combination of culture and guilt.
Rosa views AI as a child, a blank slate onto which basic values can be inscribed, and which will, in time, be able to apply those principles in unforeseen scenarios. The logic is sound. Humans acquire an intuitive sense of what’s ethically acceptable by watching how others behave (albeit with the danger that we may learn bad behaviour when presented with the wrong role models).

GoodAI polices the acquisition of values by providing a digital mentor, and then slowly ramps up the complexity of situations in which the AI must make decisions. Parents don’t just let their children wander into a road, Rosa argues. Instead they introduce them to traffic slowly. “In the same way we expose the AI to increasingly complex environments where it can build upon previously learned knowledge and receive feedback from our team.”...
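What's being described there is, in effect, mentor-guided curriculum training. A minimal sketch of that loop, under my own assumptions, might look like the following; the Scenario class, the mentor_feedback signature, and the SimpleAgent stub are placeholders I've made up, not anything GoodAI has published:

```python
# Hypothetical sketch of mentor-guided curriculum training, roughly what the
# quoted description amounts to. All names here are illustrative placeholders.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Scenario:
    description: str
    complexity: int  # 1 = "don't wander into the road"; higher = messier dilemmas


class SimpleAgent:
    """Stand-in agent: remembers which decisions the mentor scored well."""
    def __init__(self):
        self.memory = []

    def decide(self, scenario: Scenario) -> str:
        return f"default action for: {scenario.description}"

    def update(self, scenario: Scenario, decision: str, score: float) -> None:
        self.memory.append((scenario.description, decision, score))


def train_with_mentor(agent: SimpleAgent,
                      scenarios: List[Scenario],
                      mentor_feedback: Callable[[Scenario, str], float]) -> None:
    # Present situations in order of increasing complexity, so the agent
    # builds on previously learned knowledge before facing harder cases.
    for scenario in sorted(scenarios, key=lambda s: s.complexity):
        decision = agent.decide(scenario)
        score = mentor_feedback(scenario, decision)  # the "digital mentor"
        agent.update(scenario, decision, score)
```

Notice that everything the agent ends up "valuing" is whatever the mentor happened to reward, which is rather the point being made below.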

To help robots and their creators navigate such questions on the battlefield, Arkin has been working on a model that differs from that of GoodAI. The “ethical adapter”, as it’s known, seeks to simulate human emotions, rather than emulate human behaviour, in order to help robots to learn from their mistakes. His system allows a robot to experience something similar to human guilt. “Guilt is a mechanism that discourages us from repeating a particular behaviour,” he explains. It is, therefore, a useful learning tool, not only in humans, but also in robots.

“Imagine an agent is in the field and conducts a battle damage assessment both before and after firing a weapon,” explains Arkin. “If the battle damage has been exceeded by a significant proportion, the agent experiences something analogous to guilt.” The sense of guilt increases each time, for example, there’s more collateral damage than was expected. “At a certain threshold the agent will stop using a particular weapon system. Then, beyond that, it will stop using weapons systems altogether.”
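The description is concrete enough to sketch as a guilt ledger. Here is roughly what it might look like, assuming invented thresholds and an invented guilt increment; none of the numbers or names below come from Arkin's actual ethical adapter:

```python
# Hypothetical sketch of the guilt-accumulation idea described above.
# Thresholds and the guilt increment are made up for illustration only.
class GuiltAdapter:
    PER_WEAPON_THRESHOLD = 3.0   # assumed: lock out one weapon system
    TOTAL_THRESHOLD = 6.0        # assumed: lock out all weapon systems

    def __init__(self):
        self.guilt = {}           # guilt accumulated per weapon system
        self.all_disabled = False

    def assess(self, weapon: str, expected_damage: float, actual_damage: float) -> None:
        """Compare the battle damage assessment before and after firing."""
        if actual_damage > expected_damage:
            # More collateral damage than expected: guilt grows in proportion.
            overshoot = (actual_damage - expected_damage) / max(expected_damage, 1.0)
            self.guilt[weapon] = self.guilt.get(weapon, 0.0) + overshoot
        if sum(self.guilt.values()) >= self.TOTAL_THRESHOLD:
            self.all_disabled = True

    def may_fire(self, weapon: str) -> bool:
        # Past the per-weapon threshold, that system is off limits; past the
        # total threshold, every weapon system is off limits.
        if self.all_disabled:
            return False
        return self.guilt.get(weapon, 0.0) < self.PER_WEAPON_THRESHOLD
```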
I'm sure you'll have a lot of success getting that military contract you're after with a robot that will teach itself to stop using its weapons systems in the middle of combat.

There seems to be a complete lack of awareness that morality isn't just what you're taught plus what you feel. The closest they come to admitting that moral principles exist is to run them down as a source of moral norms, precisely because principles don't change with the culture. Moral relativism isn't just the assumption; it's assumed to be morally good.

If that's true, of course, then there's at least one thing that is good in and of itself. What makes it good? When you AI makers start to grapple with that question, you'll begin to figure out why the games you're playing are not adequate.

2 comments:

Eric Blair said...

Craptastic algorithm there. Microsoft tried something like this already and it failed spectacularly.

https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/

Ymar Sakar said...

Google recently bought up 5 military robotics manufacturers. Good days for cybernetic immortality and the mark of secular power, novus ordo seclorum

2045 is the projected date for that.