Wednesday 13 April 2016

Can artificial informational agents ever have moral authority?

And, while I'm on the topic of Information Ethics (see this morning's post)...
Will Robots Ever Have Moral Authority?

Robots build cars, clean carpets, and answer phones, but would you trust one to decide how you should be treated in a rest home or a hospital?

....Even if we could come up with robots who could write brilliant Supreme Court decisions, there would be a basic problem with putting black robes on a robot and seating it on the bench. As most people will still agree, there is a fundamental difference in kind between humans and robots. To avoid getting into deep philosophical waters at this point, I will simply say that it's a question of authority. Authority, in the sense I'm using it, can only vest in human beings. So while robots and computers might be excellent moral advisers to humans, by the nature of the case it must be humans who will always have moral authority and who make moral decisions.

If someone installs a moral-reasoning robot in a rest home and lets it loose with the patients, you might claim that the robot has authority in the situation. But if you start thinking like a civil trial lawyer and ask who is ultimately responsible for the actions of the robot, you will realize that if anything goes seriously wrong, the cops aren't going to haul the robot off to jail. No, they will come after the robot's operators and owners and programmers—the human beings, in other words, who installed the robot as their tool, but who are still morally responsible for its actions.

People can try to abdicate moral responsibility to machines, but that doesn't make them any less responsible. ...

Kaydee, Engineering Ethics blog post, 11 April 2016

Kaydee argues that a consequence of this is a loss of moral authority:

Turning one's entire decision-making process over to a machine does not mean that the machine has moral authority. It means that you and the machine's makers now share whatever moral authority remains in the situation, which may not be much.

I say not much may remain of moral authority, because moral authority can be destroyed....

As Anglican priest Victor Austin shows in his book Up With Authority, authority inheres only in persons. While we may speak colloquially about the authority of the law or the authority of a book, it is a live lawyer or expert who actually makes moral decisions where moral authority is called for. Patrick Lin, one of the ethics authorities cited in the Quartz article, realizes this and says that robot ethics is really just an exercise in looking at our own ethical attitudes in the mirror of robotics, so to speak. And in saying this, he shows that the dream of relieving ourselves of ethical responsibility by handing over difficult ethical decisions to robots is just that—a dream.
I would suggest that whatever this says about robots applies equally, in the language of Information Ethics, to any artificial informational agent, whether a drone or a corporation.
