Friday, 8 July 2016

Self-driving cars - fatal Tesla car crash

Fatal Tesla Self-Driving Car Crash Reminds Us That Robots Aren't Perfect: The first fatal crash involving Tesla's Autopilot system highlights the contradictory expectations of vehicle autonomy
On 7 May, a Tesla Model S was involved in a fatal accident in Florida. At the time of the accident, the vehicle was driving itself, using its Autopilot system. The system didn’t stop for a tractor-trailer attempting to turn across a divided highway, and the Tesla collided with the trailer. In a statement, Tesla Motors said this is the “first known fatality in just over 130 million miles [210 million km] where Autopilot was activated” and suggested that this ratio makes Autopilot safer than an average vehicle.
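Tesla's implied comparison is just a rate calculation. Here is a minimal sketch of the arithmetic (my own illustration, not Tesla's; the US baseline of roughly one fatality per 94 million vehicle miles is the figure Tesla's statement cited, and should be treated as an assumption here):

```python
# Rough fatality-rate comparison implied by Tesla's statement - illustrative only.
AUTOPILOT_MILES_PER_FATALITY = 130e6  # Tesla's figure: ~130 million Autopilot miles per fatality
US_MILES_PER_FATALITY = 94e6          # assumed baseline (the US average cited in Tesla's statement)

autopilot_rate = 1e8 / AUTOPILOT_MILES_PER_FATALITY  # fatalities per 100 million miles
us_rate = 1e8 / US_MILES_PER_FATALITY

print(f"Autopilot:  {autopilot_rate:.2f} fatalities per 100 million miles")
print(f"US average: {us_rate:.2f} fatalities per 100 million miles")
```

Of course, a rate estimated from a single fatality has a very wide uncertainty, so the headline ratio says rather less than it appears to.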
The crash is also discussed by Kaydee in the Engineering Ethics Blog:
By all accounts, Brown [the 'driver' of the car, Joshua Brown] was a generous, enthusiastic risk-taker (his specialty when he was in the military was disarming weapons, according to a New York Times report), and hands-free driving went against the explicit instructions Tesla provides for the autopilot feature. But Tesla owners do it all the time, apparently, and until May 7, Mr. Brown had gotten away with it. ...
Still, telling drivers how great a self-driving feature is, and then expecting them to pay constant attention as though the car were a driver's ed student and you were the instructor, is sending a mixed message.
Kaydee makes an interesting comparison with the first recorded steam-locomotive railway fatality, which was:
...that of the English politician William Huskisson, who attended the opening ceremonies of the Liverpool and Manchester Railway on Sept. 15, 1830, which featured inventor George Stephenson's locomotive the Rocket. Wanting to shake the hand of his former political enemy the Duke of Wellington, Huskisson walked over to the Duke's railway carriage, then saw that the Rocket was bearing down on him on a parallel track. He panicked, tried to climb onto the carriage, and fell back onto the track, where the locomotive ran over his leg and caused injuries that were ultimately fatal. Passengers had been warned to stay inside the train, but many paid no attention.
If Huskisson's death had been mysterious and incomprehensible, it might have led to a wider fear of railways in general. But everyone who learned of it took away the useful lesson that hanging around in front of oncoming steam locomotives wasn't a good idea, and railways became an essential feature of modern life. Nevertheless, every accident can teach engineers and the rest of us useful lessons in how to prevent the next one, and the same is true in Mr. Brown's sad case.

Huskisson's accident - source: http://www.kidderminstershuttle.co.uk/news/regional/11805260.The_Walk__Under_the_shadow_of_death/
The particular interest for this blog, though, is the information ethics question of the attribution of responsibility for the accident - and whether the fact that the car was self-driving makes any difference. In The Ethics of Information, Floridi draws a distinction between moral accountability and moral responsibility, and maybe in this case the car is accountable but either the driver or Tesla (or both) are responsible, though I'm not sure whether that really contributes anything useful.

Tuesday, 24 May 2016

The difference that [which] makes a difference


The DTMD research group takes its name (The Difference That Makes a Difference) from Gregory Bateson's 'definition' of information, for which we* normally reference "Steps to an Ecology of Mind". (Though actually he calls it a 'difference which makes a difference' in Steps - he does use 'that' elsewhere.)

* 'We' being members of the DTMD group, especially Magnus Ramage who introduced me to Bateson and especially to the DTMD definition.

I was checking a reference just now, and thought it would be useful to record what exactly he says about the definition.  Here, for reference, are all the instances of the phrase in Steps, with some of the surrounding discussion.

Sources: 
Gregory Bateson, Steps to an Ecology of Mind: Collected essays in anthropology, psychiatry, evolution, and epistemology

I've checked the page numbers for two different printings:
 
1972 printing, International Textbook Company Ltd, Aylesbury, UK. ISBN 0700201807. Copyright Chandler Publishing Company, 1972.

1987 reprint, Jason Aronson Inc., Northvale, New Jersey and London. Copyright © 1972, 1987 by Jason Aronson Inc. ISBN 0-87668-950-0. Downloaded from http://www.edtechpost.ca/readings/Gregory%20Bateson%20-%20Ecology%20of%20Mind.pdf on 24/05/2016.


1. Chapter “Double Bind, 1969”

“This paper was given in August, 1969, at a Symposium on the Double Bind; Chairman, Dr. Robert Ryder; sponsored by the American Psychological Association. It was prepared under Career Development Award (MH-21,931) of the National Institute of Mental Health.”

In any case, it is nonsense to say that a man was frightened by a lion, because a lion is not an idea. The man makes an idea of the lion.

The explanatory world of substance can invoke no differences and no ideas but only forces and impacts. And, per contra, the world of form and communication invokes no things, forces, or impacts but only differences and ideas. (A difference which makes a difference is an idea. It is a "bit," a unit of information.)

p276 (1987), p271-2 (1972)
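As an aside (mine, not Bateson's): the "bit" here lines up with Shannon's unit. A minimal sketch in Python of the information carried by a single yes/no difference, assuming equally likely outcomes:

```python
import math

def information_bits(p):
    """Shannon information content, in bits, of an outcome with probability p."""
    return -math.log2(p)

# A binary difference whose two outcomes are equally likely carries exactly one bit.
print(information_bits(0.5))   # 1.0
# An outcome that is certain in advance is a difference that makes no difference:
print(information_bits(1.0))   # -0.0, i.e. zero bits
```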

2. Chapter “The Cybernetics of "Self": A Theory of Alcoholism”

"This article appeared in Psychiatry, Vol. 34, No. 1, pp. 1-18, 1971. Copyright © 1971 by the William Alanson White Psychiatric Foundation. Reprinted by permission of Psychiatry Section headed “The Epistemology of Cybernetics”"

A "bit" of information is definable as a difference which makes a difference.
p321 (1987), p315 (1972)

More correctly, we should spell the matter out as: (differences in tree) - (differences in retina) - (differences in brain) - (differences in muscles) - (differences in movement of axe) - (differences in tree), etc. What is transmitted around the circuit is transforms of differences. And, as noted above, a difference which makes a difference is an idea or unit of information.

p323 (1987), p317-8 (1972)

3. Chapter “A Re-examination of ‘Bateson’s Rule’*”, section “The problem redefined”

*”This essay has been accepted for publication in the Journal of Genetics, and is here reproduced with the permission of that journal”

The technical term "information" may be succinctly defined as any difference which makes a difference in some later event. This definition is fundamental for all analysis of cybernetic systems and organization. The definition links such analysis to the rest of science, where the causes of events are commonly not differences but forces, impacts, and the like. The link is classically exemplified by the heat engine, where available energy (i.e., negative entropy) is a function of a difference between two temperatures. In this classical instance, "information" and "negative entropy" overlap.

p386 (1987), p381 (1972)
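The heat-engine link can be made concrete with the standard Carnot relation (a textbook formula, not Bateson's notation): the work obtainable from heat Q_h drawn from a hot source at temperature T_h and rejected to a sink at T_c is

```latex
W_{\max} = Q_h \left( 1 - \frac{T_c}{T_h} \right)
```

so the available energy depends entirely on the temperature difference, and vanishes when T_h = T_c.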

4. Chapter “Form, Substance, and Difference”.

“This was the Nineteenth Annual Korzybski Memorial Lecture, delivered January 9, 1970, under the auspices of the Institute of General Semantics. It is here reprinted from the General Semantics Bulletin, No. 37, 1970, by permission of the Institute of General Semantics.” 

But what is a difference? A difference is a very peculiar and obscure concept. It is certainly not a thing or an event. This piece of paper is different from the wood of this lectern. There are many differences between them—of color, texture, shape, etc. But if we start to ask about the localization of those differences, we get into trouble. Obviously the difference between the paper and the wood is not in the paper; it is obviously not in the wood; it is obviously not in the space between them, and it is obviously not in the time between them. (Difference which occurs across time is what we call "change.")

A difference, then, is an abstract matter.

p458 (1987), p457-8 (1972)

I suggest that Kant's statement can be modified to say that there is an infinite number of differences around and within the piece of chalk. There are differences between the chalk and the rest of the universe, between the chalk and the sun or the moon. And within the piece of chalk, there is for every molecule an infinite number of differences between its location and the locations in which it might have been. Of this infinitude, we select a very limited number, which become information. In fact, what we mean by information—the elementary unit of information—is a difference which makes a difference, and it is able to make a difference because the neural pathways along which it travels and is continually transformed are themselves provided with energy. The pathways are ready to be triggered. We may even say that the question is already implicit in them. 

p460 (1987), p459 (1972)

[Carl Jung in Septem Sermones ad Mortuos, Seven Sermons to the Dead] points out that there are two worlds. We might call them two worlds of explanation. He names them the pleroma and the creatura, these being Gnostic terms. The pleroma is the world in which events are caused by forces and impacts and in which there are no "distinctions." Or, as I would say, no "differences." In the creatura, effects are brought about precisely by difference. In fact, this is the same old dichotomy between mind and substance. 

We can study and describe the pleroma, but always the distinctions which we draw are attributed by us to the pleroma. The pleroma knows nothing of difference and distinction; it contains no "ideas" in the sense in which I am using the word. When we study and describe the creatura, we must correctly identify those differences which are effective within it.

I suggest that "pleroma" and "creatura" are words which we could usefully adopt, and it is therefore worthwhile to look at the bridges which exist between these two "worlds." It is an oversimplification to say that the "hard sciences" deal only with the pleroma and that the sciences of the mind deal only with the creatura. There is more to it than that. 

First, consider the relation between energy and negative entropy. The classical Carnot heat engine consists of a cylinder of gas with a piston. This cylinder is alternately placed in contact with a container of hot gas and with a container of cold gas. The gas in the cylinder alternately expands and contracts as it is heated or cooled by the hot and cold sources. The piston is thus driven up and down. 

But with each cycle of the engine, the difference between the temperature of the hot source and that of the cold source is reduced. When this difference becomes zero, the engine will stop. 

The physicist, describing the pleroma, will write equations to translate the temperature difference into "available energy," which he will call "negative entropy," and will go on from there.

The analyst of the creatura will note that the whole system is a sense organ which is triggered by temperature difference. He will call this difference which makes a difference "information" or "negative entropy." For him, this is only a special case in which the effective difference happens to be a matter of energetics. He is equally interested in all differences which can activate some sense organ. For him, any such difference is "negative entropy."

p462-3 (1987), p461-3 (1972)
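Bateson's point that the engine runs down as the difference is consumed can be illustrated with a toy simulation (my own sketch, under simplifying assumptions - two finite reservoirs of equal heat capacity, a fixed quantity of heat drawn per cycle, Carnot-limited work - none of which is in Bateson's text):

```python
# Toy model of the Carnot engine Bateson describes: two finite reservoirs drive an
# ideal engine; each cycle extracts some work and reduces the temperature difference,
# and the engine stops when the difference has gone.
T_hot, T_cold = 400.0, 300.0   # reservoir temperatures, kelvin
C = 100.0                      # heat capacity of each reservoir, J/K
Q = 50.0                       # heat drawn from the hot reservoir per cycle, J

cycles, total_work = 0, 0.0
while T_hot - T_cold > 0.5:                  # a coarse stopping threshold
    efficiency = 1.0 - T_cold / T_hot        # Carnot limit for this cycle
    work = Q * efficiency                    # the "available energy" this cycle
    total_work += work
    T_hot -= Q / C                           # the hot reservoir cools...
    T_cold += (Q - work) / C                 # ...and the cold one warms by the rejected heat
    cycles += 1

print(f"Engine stopped after {cycles} cycles; total work extracted ~ {total_work:.0f} J")
```

Each cycle extracts a little work and erodes the very temperature difference that made the work available; when the difference is gone, so is the "information" in Bateson's sense, and the engine stops.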

Wednesday, 18 May 2016

Studentships to study information - deadline extended

The deadline for applications for studentships in the Computing and Communications Department at the Open University has been extended to the end of May. I wrote about these a few weeks back.

Please spread the word. It amazes me that we (the Department in general, not just my area) don't get more applications. A studentship is not a bad deal!



Monday, 18 April 2016

Self-driving cars - information ethics again

From IEEE Spectrum:
Self-Driving Cars Will Be Ready Before Our Laws Are

It is the year 2023, and for the first time, a self-driving car navigating city streets strikes and kills a pedestrian. A lawsuit is sure to follow. But exactly what laws will apply? Nobody knows. [...]

The solution to the lawsuit problem is actually pretty simple. To level the playing field between human drivers and computer drivers, we should simply treat them equally. Instead of applying design-defect laws to computer drivers, use ordinary negligence laws. That is, a computer driver should be held liable only if a human driver who took the same actions in the same circumstances would be held liable. The circumstances include the position and velocity of the vehicles, weather conditions, and so on. The “mind” of the computer driver need not be examined any more than a human’s mind should be. The robo-driver’s private “thoughts” (in the form of computer code) need not be parsed. Only its conduct need be considered. [...]

For example, a computer driver that runs a red light and causes an accident would be found liable. Damages imposed on the carmaker (which is responsible for the computer driver’s actions) would be equal to the damages that would be imposed on a human driver.
My emphasis at the end there.

Wednesday, 13 April 2016

Can artificial informational agents ever have moral authority?

And, while I'm on the topic of Information Ethics (see this morning's post)...
Will Robots Ever Have Moral Authority?

Robots build cars, clean carpets, and answer phones, but would you trust one to decide how you should be treated in a rest home or a hospital?

....Even if we could come up with robots who could write brilliant Supreme Court decisions, there would be a basic problem with putting black robes on a robot and seating it on the bench. As most people will still agree, there is a fundamental difference in kind between humans and robots. To avoid getting into deep philosophical waters at this point, I will simply say that it's a question of authority. Authority, in the sense I'm using it, can only vest in human beings. So while robots and computers might be excellent moral advisers to humans, by the nature of the case it must be humans who will always have moral authority and who make moral decisions.

If someone installs a moral-reasoning robot in a rest home and lets it loose with the patients, you might claim that the robot has authority in the situation. But if you start thinking like a civil trial lawyer and ask who is ultimately responsible for the actions of the robot, you will realize that if anything goes seriously wrong, the cops aren't going to haul the robot off to jail. No, they will come after the robot's operators and owners and programmers—the human beings, in other words, who installed the robot as their tool, but who are still morally responsible for its actions.

People can try to abdicate moral responsibility to machines, but that doesn't make them any less responsible. ...

Kaydee, Engineering Ethics blog post, 11 April 2016
Kaydee argues that a consequence of this is a loss of moral authority:
Turning one's entire decision-making process over to a machine does not mean that the machine has moral authority. It means that you and the machine's makers now share whatever moral authority remains in the situation, which may not be much.

I say not much may remain of moral authority, because moral authority can be destroyed....

As Anglican priest Victor Austin shows in his book Up With Authority, authority inheres only in persons. While we may speak colloquially about the authority of the law or the authority of a book, it is a live lawyer or expert who actually makes moral decisions where moral authority is called for. Patrick Lin, one of the ethics authorities cited in the Quartz article, realizes this and says that robot ethics is really just an exercise in looking at our own ethical attitudes in the mirror of robotics, so to speak. And in saying this, he shows that the dream of relieving ourselves of ethical responsibility by handing over difficult ethical decisions to robots is just that—a dream.
I would suggest that whatever this says about robots applies equally, in the language of Information Ethics, to any artificial informational agent, including a drone but also a corporation.

Information ethics and corporations

In Luciano Floridi's Information Ethics (IE), the basic ethical unit is an informational entity.
[A]ll informational entities have an intrinsic moral value, although possibly quite minimal and overridable, and hence … qualify as moral patients subject to some (possibly equally minimal) degree of moral respect

(Luciano Floridi, “The Ethics of Information”, OUP, Oxford 2013, p109)
Some informational entities are also agents: they are potentially accountable, and they may or may not be responsible. Humans, for example, are informational agents which are both accountable and responsible.

A drone is an (artificial) informational agent which is accountable, though whether it is responsible remains in dispute. Artificial agents are not restricted to technological agents but can also be social agents such as corporations.

And this gets me to the piece in The Washington Post that was the motivation for this blog post:
Corporations are people, except when it comes time to go to jail

Corporations sure like to be people, what with all the rights and privileges that people get in this country. Why, the word corporate practically MEANS body. But sometimes they don’t like it so very much: when it’s PUNISHMENT time!

Here and here are two stories on the recent Goldman deal. “’Today’s settlement is another example of the department’s resolve to hold accountable those whose illegal conduct resulted in the financial crisis of 2008,’ Benjamin C. Mizer, head of the Justice Department’s civil division, said in a statement.”

‘Those who’??? Read these stories as many times as you like, and the ‘those who’ referred to are very hard to identify. All of a sudden, the ‘who’ morphs into ‘it.’ “Goldman did not alert investors who were buying the bonds it was packaging.” “it knew that they were full of mortgages that were likely to fail.” “it sold packages of shoddy mortgages.” “Goldman Sachs repeatedly discovered problems with the mortgages it was selling to investors but didn’t tell investors.” Just try to put a pair of handcuffs on that ‘it.’...

Tom Toles, Washington Post, April 12