Thursday, 15 September 2016

Facts, data and information. But above all, the narrative.

My text for the day:
“You tell me that I need to acknowledge this because it is a fact. But facts only exist within a narrative. The fact you want me to acknowledge exists in your narrative but not in mine, so it is not the facts you want me to acknowledge, but the narrative.” [1]
What are facts? Facts might be data or information (see below), but the word is used to emphasise that something is true: facts as distinguished from fiction; facts that are ‘out there’ and objective, independent of the observer. In some narratives [2] of information, this would make facts the data, and a surrounding narrative would turn them into information.

But, I have previously argued that data needs information. This is what I wrote there:
“It can be useful to distinguish between data and information in order to demonstrate that data is not information and that you need to extract meaning from data to get information. However, data needs information in the same way that information needs data. Data presupposes at least the potential for finding meaning and therefore information. Stuff would not be data if there was no chance of meaning ever being extracted from it. It would just be: stuff. We wouldn’t call it data.”
The insight is that data and information come in a pair. The point is that ‘out there’ is a sea of, of what? Of stuff, of differences. Practically – to all intents and purposes – infinite differences which might make a difference. They are not data. They are nothing to us, literally, nothing, unless they can fit into a narrative. Or rather, unless they can be converted (by one of my trapeziums) into an entity that exists in a narrative.

So, facts-data-information are all of a kind, and only exist insofar as they take their place in a narrative.

I was expressing my scepticism of 'facts' to my family in the car one day a year or two ago, but my elder son was not having it. He is active in the fight against climate change, so deniers denying the facts of climate change are a problem. I'm 100% with him on the importance of tackling climate change and the culpability of deniers, but I think the problem is the narrative, not the facts. However, that's for another time. For the moment, I want to pick up on his illustrative argument: that it was a fact that on 25th May 2015* we drove to Winslow to visit my parents. And if someone says otherwise, they are simply wrong.

*Actually, I can't remember when it was. But take it as then, for the purposes of the argument.

I'm not saying that 'anything goes'. It is not that me saying "on 25th May 2015 we drove to Watford to visit Elton John" is on a par with me saying "on 25th May 2015 we drove to Winslow to visit my parents". The former is not true. The latter really happened. (Well, maybe, might have done, and assuming we agree on the meaning of all the words.) But there are almost infinitely more things that 'really happened'. All the other things we did on that day. All the other places we went to on other days. "At 10.43 and 12 seconds on 25th May 2015 I breathed in" might have 'really happened'. If I say "on 25th May 2015 we drove to Winslow to visit my parents" I am saying it for some purpose, as an entity in a narrative.

As I write this, I have in mind the 'capta' of Sue Holwell and Peter Checkland:
Data are available to us, and capta are the result of consciously selecting some data for attention, or creating some new category – such as ‘the number of golf club members living in Watford’, or becoming aware of some items of data which we begin to pay attention to. [...] Having selected, paid attention to, or created some data, thereby turning it into capta, we attribute meaning to it. ... The attribution of meaning in context converts capta into ... information. [3]
Checkland and Holwell have a hierarchy of data-capta-information (and then knowledge), but I'm taking a more extreme line which does away with the distinction between data and capta. Checkland and Holwell were exploring information in the specific context of information systems, whereas I'm considering a more absolute philosophical question. The moment we acknowledge the existence of (an item of) data, we are paying attention to it, so there is no data that is not also capta. I might also argue that 'knowledge' is the narrative into which information is embedded.

So here's my argument. Conventionally (figure (a) below), we envisage a limited number of 'facts', around which we build a narrative. Dispute is around specific facts. You and I disagree over a fact, and that fact changes our narratives. The facts are the objective entities out there in the world, and there is a limited number of them, so our narrative has to fit in with this valuable resource.

Instead (figure (b)), I'm arguing that the narrative determines the facts. Not whether we went to Watford or Winslow on 25th May 2015, but whether we went to Winslow or I breathed in and out at 10.43 and 12 seconds on 25th May 2015. Facts are not, in this narrative, the fixed framework around which we build a narrative. The narrative provides the framework which determines the entities which can exist within the narrative.

In figure (b), I've drawn the facts as fuzzy to indicate that they are, sort of, subservient to the narrative, but actually in a sense they aren't really any fuzzier than in (a). They are still entities within the narrative.

Finally, remember what I say about this blog: it is always work-in-progress!

1. Source: me, 14/9/2016, in conversation with Magnus Ramage. But Magnus is not the ‘you’ above: far from it. The ‘you’ was a hypothetical third party. It was a discussion with Magnus about the nature of facts, information and the role of narrative, and the ideas presented here owe a lot to Magnus’s insights.
2. This gets dangerously self-referential. I am writing a narrative about narratives and information.
3. Holwell, Sue (2011) ‘Fundamentals of Information: purposeful activity, meaning and conceptualisation’, in Magnus Ramage and David Chapman (eds), Perspectives on Information, Routledge, New York, pp. 65–76.

Friday, 8 July 2016

Self-driving cars - fatal Tesla car crash

Fatal Tesla Self-Driving Car Crash Reminds Us That Robots Aren't Perfect: The first fatal crash involving Tesla's Autopilot system highlights the contradictory expectations of vehicle autonomy
On 7 May, a Tesla Model S was involved in a fatal accident in Florida. At the time of the accident, the vehicle was driving itself, using its Autopilot system. The system didn’t stop for a tractor-trailer attempting to turn across a divided highway, and the Tesla collided with the trailer. In a statement, Tesla Motors said this is the “first known fatality in just over 130 million miles [210 million km] where Autopilot was activated” and suggested that this ratio makes the Autopilot safer than an average vehicle
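The ratio Tesla is suggesting can be sketched numerically. Tesla's statement compared Autopilot's record against the average rate for all vehicles, roughly one fatality per 94 million miles in the US; that figure is an assumption here, taken from Tesla's own statement rather than from this post:

```python
# Rough sketch of the comparison Tesla implied.
# The US average figure is an assumption (approx. one road fatality
# per 94 million vehicle miles, as cited in Tesla's statement).
autopilot_miles_per_fatality = 130e6  # miles on Autopilot before the first fatality
us_miles_per_fatality = 94e6          # approx. US average, all vehicles

ratio = autopilot_miles_per_fatality / us_miles_per_fatality
print(f"Autopilot logged about {ratio:.2f}x the average miles per fatality")
# prints: Autopilot logged about 1.38x the average miles per fatality
```

Of course, a single fatality is a very small sample from which to draw such a ratio, which is part of what makes the claim contentious.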
The accident is discussed by Kaydee in the Engineering Ethics Blog:
By all accounts, Brown [the 'driver' of the car, Joshua Brown] was a generous, enthusiastic risk-taker (his specialty when he was in the military was disarming weapons, according to a New York Times report), and hands-free driving went against the explicit instructions Tesla provides for the autopilot feature. But Tesla owners do it all the time, apparently, and until May 7, Mr. Brown had gotten away with it. ...
Still, telling drivers how great a self-driving feature is, and then expecting them to pay constant attention as though the car were a driver's ed student and you were the instructor, is sending a mixed message.
Kaydee makes an interesting comparison with the first recorded steam-locomotive railway fatality which was:
...that of the English politician William Huskisson, who attended the opening ceremonies of the Liverpool and Manchester Railway on Sept. 15, 1830, which featured inventor George Stephenson's locomotive the Rocket. Wanting to shake the hand of his former political enemy the Duke of Wellington, Huskisson walked over to the Duke's railway carriage, then saw that the Rocket was bearing down on him on a parallel track. He panicked, tried to climb onto the carriage, and fell back onto the track, where the locomotive ran over his leg and caused injuries that were ultimately fatal. Passengers had been warned to stay inside the train, but many paid no attention.
If Huskisson's death had been mysterious and incomprehensible, it might have led to a wider fear of railways in general. But everyone who learned of it took away the useful lesson that hanging around in front of oncoming steam locomotives wasn't a good idea, and railways became an essential feature of modern life. Nevertheless, every accident can teach engineers and the rest of us useful lessons in how to prevent the next one, and the same is true in Mr. Brown's sad case.

Huskisson's accident
The particular interest for this blog, though, is the information ethics question of the attribution of responsibility for the accident - and whether the fact that it was self-driving makes any difference. In The Ethics of Information, Floridi uses the distinction between moral accountability and moral responsibility, and maybe in this case the car is accountable but either the driver or Tesla (or both) are responsible, though I'm not sure whether that really contributes anything useful.

Tuesday, 24 May 2016

The difference that [which] makes a difference

The DTMD research group takes its name (The Difference That Makes a Difference) from Gregory Bateson's 'definition' of information, for which we* normally reference Steps to an Ecology of Mind. (Though actually he calls it 'difference which makes a difference' in Steps - he does use 'that' elsewhere.)

* 'We' being members of the DTMD group, especially Magnus Ramage who introduced me to Bateson and especially to the DTMD definition.

I was checking a reference just now, and thought it would be useful to record what exactly he says about the definition.  Here, for reference, are all the instances of the phrase in Steps, with some of the surrounding discussion.

Gregory Bateson, Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology

I've checked the page numbers for two different printings:
1972 International Textbook Company Ltd, Aylesbury, UK. ISBN 0700201807. Copyright Chandler Publishing Company 1972

1987 reprint, Jason Aronson Inc., Northvale, New Jersey, London. Copyright © 1972, 1987 by Jason Aronson Inc. ISBN 0-87668-950-0. Downloaded 24/05/2016

1. Chapter “Double Bind, 1969”

“This paper was given in August, 1969, at a Symposium on the Double Bind; Chairman, Dr. Robert Ryder; sponsored by the American Psychological Association. It was prepared under Career Development Award (MH-21,931) of the National Institute of Mental Health.”

In any case, it is nonsense to say that a man was frightened by a lion, because a lion is not an idea. The man makes an idea of the lion.

The explanatory world of substance can invoke no differences and no ideas but only forces and impacts. And, per contra, the world of form and communication invokes no things, forces, or impacts but only differences and ideas. (A difference which makes a difference is an idea. It is a "bit," a unit of information.)

p276 (1987), p271-2 (1972)

2. Chapter “The Cybernetics of ‘Self’: A Theory of Alcoholism”

“This article appeared in Psychiatry, Vol. 34, No. 1, pp. 1-18, 1971. Copyright © 1971 by the William Alanson White Psychiatric Foundation. Reprinted by permission of Psychiatry.” Section headed “The Epistemology of Cybernetics”.

A "bit" of information is definable as a difference which makes a difference.
p321 (1987), p315 (1972)

More correctly, we should spell the matter out as: (differences in tree) - (differences in retina) - (differences in brain) - (differences in muscles) -(differences in movement of axe) -(differences in tree), etc. What is transmitted around the circuit is transforms of differences. And, as noted above, a difference which makes a difference is an idea or unit of information.

p323 (1987), p317-8 (1972)

3. Chapter “A Re-examination of ‘Bateson’s Rule’”*, section “The problem redefined”

*”This essay has been accepted for publication in the Journal of Genetics, and is here reproduced with the permission of that journal”

The technical term "information" may be succinctly defined as any difference which makes a difference in some later event. This definition is fundamental for all analysis of cybernetic systems and organization. The definition links such analysis to the rest of science, where the causes of events are commonly not differences but forces, impacts, and the like. The link is classically exemplified by the heat engine, where available energy (i.e., negative entropy) is a function of a difference between two temperatures. In this classical instance, "information" and "negative entropy" overlap.

p386 (1987), p381 (1972)

4. Chapter “Form, Substance, and Difference”

“This was the Nineteenth Annual Korzybski Memorial Lecture, delivered January 9, 1970, under the auspices of the Institute of General Semantics. It is here reprinted from the General Semantics Bulletin, No. 37, 1970, by permission of the Institute of General Semantics.”

But what is a difference? A difference is a very peculiar and obscure concept. It is certainly not a thing or an event. This piece of paper is different from the wood of this lectern. There are many differences between them—of color, texture, shape, etc. But if we start to ask about the localization of those differences, we get into trouble. Obviously the difference between the paper and the wood is not in the paper; it is obviously not in the wood; it is obviously not in the space between them, and it is obviously not in the time between them. (Difference which occurs across time is what we call "change.")

A difference, then, is an abstract matter.

p458 (1987), p457-8 (1972)

I suggest that Kant's statement can be modified to say that there is an infinite number of differences around and within the piece of chalk. There are differences between the chalk and the rest of the universe, between the chalk and the sun or the moon. And within the piece of chalk, there is for every molecule an infinite number of differences between its location and the locations in which it might have been. Of this infinitude, we select a very limited number, which become information. In fact, what we mean by information—the elementary unit of information—is a difference which makes a difference, and it is able to make a difference because the neural pathways along which it travels and is continually transformed are themselves provided with energy. The pathways are ready to be triggered. We may even say that the question is already implicit in them.

p460 (1987), p459 (1972)

[Carl Jung in Septem Sermones ad Mortuos, Seven Sermons to the Dead] points out that there are two worlds. We might call them two worlds of explanation. He names them the pleroma and the creatura, these being Gnostic terms. The pleroma is the world in which events are caused by forces and impacts and in which there are no "distinctions." Or, as I would say, no "differences." In the creatura, effects are brought about precisely by difference. In fact, this is the same old dichotomy between mind and substance. 

We can study and describe the pleroma, but always the distinctions which we draw are attributed by us to the pleroma. The pleroma knows nothing of difference and distinction; it contains no "ideas" in the sense in which I am using the word. When we study and describe the creatura, we must correctly identify those differences which are effective within it.

I suggest that "pleroma" and "creatura" are words which we could usefully adopt, and it is therefore worthwhile to look at the bridges which exist between these two "worlds." It is an oversimplification to say that the "hard sciences" deal only with the pleroma and that the sciences of the mind deal only with the creatura. There is more to it than that. 

First, consider the relation between energy and negative entropy. The classical Carnot heat engine consists of a cylinder of gas with a piston. This cylinder is alternately placed in contact with a container of hot gas and with a container of cold gas. The gas in the cylinder alternately expands and contracts as it is heated or cooled by the hot and cold sources. The piston is thus driven up and down. 

But with each cycle of the engine, the difference between the temperature of the hot source and that of the cold source is reduced. When this difference becomes zero, the engine will stop. 

The physicist, describing the pleroma, will write equations to translate the temperature difference into "available energy," which he will call "negative entropy," and will go on from there.

The analyst of the creatura will note that the whole system is a sense organ which is triggered by temperature difference. He will call this difference which makes a difference "information" or "negative entropy." For him, this is only a special case in which the effective difference happens to be a matter of energetics. He is equally interested in all differences which can activate some sense organ. For him, any such difference is "negative entropy."

p462-3 (1987), p461-3 (1972)
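Bateson's point that the engine stops when the difference becomes zero can be stated with the classical Carnot relation (standard thermodynamics, not from the quoted text): the efficiency of an ideal engine working between a hot source at temperature $T_h$ and a cold source at $T_c$ is

```latex
\eta = \frac{W}{Q_h} = 1 - \frac{T_c}{T_h}
```

so the available work $W$ per cycle is a function purely of the temperature difference. As the engine runs, $T_h$ and $T_c$ converge; when $T_h = T_c$, $\eta = 0$ and no work is available - no difference, no negative entropy, and, in Bateson's sense, no information.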

Wednesday, 18 May 2016

Studentships to study information - deadline extended

The deadline for applications for studentships in the Computing and Communications Department at the Open University has been extended to the end of May. I wrote about these a few weeks back.

Please spread the word. It amazes me that we (the Department in general, not just my area) don't get more applications. A studentship is not a bad deal!

Monday, 18 April 2016

Self-driving cars - information ethics again

From IEEE Spectrum:
Self-Driving Cars Will Be Ready Before Our Laws Are

It is the year 2023, and for the first time, a self-driving car navigating city streets strikes and kills a pedestrian. A lawsuit is sure to follow. But exactly what laws will apply? Nobody knows. [...]

The solution to the lawsuit problem is actually pretty simple. To level the playing field between human drivers and computer drivers, we should simply treat them equally. Instead of applying design-defect laws to computer drivers, use ordinary negligence laws. That is, a computer driver should be held liable only if a human driver who took the same actions in the same circumstances would be held liable. The circumstances include the position and velocity of the vehicles, weather conditions, and so on. The “mind” of the computer driver need not be examined any more than a human’s mind should be. The robo-driver’s private “thoughts” (in the form of computer code) need not be parsed. Only its conduct need be considered. [...]

For example, a computer driver that runs a red light and causes an accident would be found liable. Damages imposed on the carmaker (which is responsible for the computer driver’s actions) would be equal to the damages that would be imposed on a human driver.
My emphasis at the end there.

Wednesday, 13 April 2016

Can artificial informational agents ever have moral authority?

And, while I'm on the topic of Information Ethics (see this morning's post)...
Will Robots Ever Have Moral Authority?

Robots build cars, clean carpets, and answer phones, but would you trust one to decide how you should be treated in a rest home or a hospital?

....Even if we could come up with robots who could write brilliant Supreme Court decisions, there would be a basic problem with putting black robes on a robot and seating it on the bench. As most people will still agree, there is a fundamental difference in kind between humans and robots. To avoid getting into deep philosophical waters at this point, I will simply say that it's a question of authority. Authority, in the sense I'm using it, can only vest in human beings. So while robots and computers might be excellent moral advisers to humans, by the nature of the case it must be humans who will always have moral authority and who make moral decisions.

If someone installs a moral-reasoning robot in a rest home and lets it loose with the patients, you might claim that the robot has authority in the situation. But if you start thinking like a civil trial lawyer and ask who is ultimately responsible for the actions of the robot, you will realize that if anything goes seriously wrong, the cops aren't going to haul the robot off to jail. No, they will come after the robot's operators and owners and programmers—the human beings, in other words, who installed the robot as their tool, but who are still morally responsible for its actions.

People can try to abdicate moral responsibility to machines, but that doesn't make them any less responsible. ...

Kaydee, Engineering Ethics blog post, 11 April 2016
Kaydee argues that a consequence of this is a loss of moral authority:
Turning one's entire decision-making process over to a machine does not mean that the machine has moral authority. It means that you and the machine's makers now share whatever moral authority remains in the situation, which may not be much.

I say not much may remain of moral authority, because moral authority can be destroyed....

As Anglican priest Victor Austin shows in his book Up With Authority, authority inheres only in persons. While we may speak colloquially about the authority of the law or the authority of a book, it is a live lawyer or expert who actually makes moral decisions where moral authority is called for. Patrick Lin, one of the ethics authorities cited in the Quartz article, realizes this and says that robot ethics is really just an exercise in looking at our own ethical attitudes in the mirror of robotics, so to speak. And in saying this, he shows that the dream of relieving ourselves of ethical responsibility by handing over difficult ethical decisions to robots is just that—a dream.
I would suggest that whatever this says about robots applies equally, in the language of Information Ethics, to any artificial informational agent, including a drone but also a corporation.