Tuesday, June 14, 2016

Is AI Helpful, or Is It Merely the Opinion of Millennials in California?


McCarthy defines AI, Artificial Intelligence, thus:

Q. What is artificial intelligence?

A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

Q. Yes, but what is intelligence?

A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.

Q. Isn't there a solid definition of intelligence that doesn't depend on relating it to human intelligence?

A. Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.

Now in contrast my definition of AI is as follows:

If A = B
    Then C
    Else D
End

That is it, no more, no less. But the key question is: what are A, B, C, and D? We are dealing with computers, so we must use numbers, namely bits. Ultimately, each is some binary number in a memory location, and those numbers are meant to reflect some reality. Thus the issues are as follows (a minimal sketch follows the list):

1. Who selects the reality?
2. What measures of the reality are used?
3. What weights on the measures are employed?
4. How does one relate the measure of one's perceived reality with the actual reality, whatever that is?
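
As a minimal sketch of these four questions in code (every name and number below is invented purely for illustration, not taken from any real system), the rule above might be instantiated as:

# Every constant below is a human choice: what to measure, how to weight it,
# and what reference B the result is compared against.
x = 1.4                 # question 1: someone selected this slice of reality to measure
y = 0.75                # question 3: a weight chosen by the programmer
z = y * x               # question 2: the measure of reality actually used
A = round(z)            # the binary number that ends up in a memory location
B = 1                   # question 4: the reference against which perceived reality is judged

if A == B:
    outcome = "C"       # e.g., flag the case
else:
    outcome = "D"       # e.g., ignore it
print(outcome)

Every line that assigns a number is a value judgement; nothing in the rule itself tells us whether those judgements were good ones.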

One can see from the above that a great deal of human judgement is involved. That is the case even if we "adaptively" change the weights and measures. For example, one can use a first-level system, one in which the programmer assigns the weights and measures: we measure x and weight it by y to generate z, which we call A. Or we could use an adaptive system. We all like adaptive systems because they allegedly adapt to reality, but they are ultimately just the first-level system pushed down one level. The adaptive system adapts its weight, but by another selected measure, call it w. That is, we look at x and y over some data space and weight them adaptively by w to get z, which we now call A. This can be carried on forever, but there is still some human making some value judgement somewhere. That human value judgement stays with us forever; it can become immortal.
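
A sketch of the adaptive version follows, under the same caveat that every number is invented: the weight is no longer set by hand, but the data space, the update rule, and the learning rate w still are.

# The weight y is now "learned", yet the selected measure w (a learning rate)
# and the update rule itself remain human value judgements pushed down one level.
data = [(1.0, 1.0), (2.0, 2.1), (3.0, 2.9)]   # an invented data space of (x, target) pairs
w = 0.05                                      # the additional selected measure
y = 0.0                                       # the adaptive weight

for x, target in data:
    error = target - y * x
    y += w * error * x                        # one gradient step; the judgement now lives here

x_new = 1.5
z = y * x_new                                 # the value we now call A
print(y, z)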


Consider a case from the NEJM a few years ago (Greer et al.). The answer was rabies. From a Bayesian perspective, its prior probability would have been effectively zero, and the diagnosis is not definitive until autopsy, with the identification of Negri bodies in the brain. How one develops an algorithm, an AI procedure if you will, to identify something that has essentially zero probability until after death is problematic. Obviously there are many A, B, C, and D, and they may operate sequentially or in parallel. Furthermore, they may also adapt; namely, the weights that map diagnostic variables into some binary number may change.

Consider the initial presentation in the NEJM article:

The patient had been well until 4 days before admission, when aching developed in the left elbow, which improved with ibuprofen. The next day, right-elbow discomfort occurred, and he had decreased appetite. Two days before admission, he noted difficulty forming words, mild light-headedness, and mild recurrent pain in both elbows. An attempt to drink a glass of water precipitated a gagging sensation. He had difficulty breathing and could not swallow the water. The choking sensation resolved when he spat out the water, but it recurred with subsequent attempts. He stopped drinking liquids and became increasingly anxious. One day before admission, he was unable to shower because of increased anxiety and noted intermittent decreased fluency in his speech and pruritus at the nape of his neck. He was concerned that he was having a stroke, and he drove to the emergency department at a local hospital.

Now, if one were in a region where rabies is endemic, one would immediately think of rabies. But in Massachusetts, where there had not been a case in 80 or more years, that would be the last thought. How, then, would one "program" this decision?
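
As a minimal sketch of the difficulty (the prior probabilities and likelihood ratios below are invented for illustration, not clinical estimates), one can treat regional prevalence as a Bayesian prior and each finding as a likelihood ratio; even strongly suggestive findings barely move a near-zero prior.

def posterior(prior_prob, likelihood_ratios):
    """Naive Bayesian update: combine a prior probability with likelihood ratios."""
    odds = prior_prob / (1.0 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

findings = [20.0, 10.0]              # hypothetical LRs: gagging on liquids, paresthesias
print(posterior(1e-3, findings))     # endemic region: roughly 0.17, rabies is on the list
print(posterior(1e-8, findings))     # Massachusetts: roughly 2e-6, still the "last thought"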

The added results were as follows:

On examination, the patient appeared anxious, with dry mucous membranes. The blood pressure was 171/80 mm Hg, the pulse 86 beats per minute, the temperature 36.4°C, the respiratory rate 16 breaths per minute, and the oxygen saturation 98% while he was breathing ambient air. Other findings included ptosis of the right eyelid, mild facial twitching, postural hand tremors, and dysmetria on finger–nose–finger and heel-to-shin testing, without truncal ataxia. Deep tendon reflexes were symmetrically hyperactive throughout; plantar reflexes were flexor. There was mild difficulty with tandem walking. The patient’s speech was rushed and fluent, except for occasional slurred words and pauses for word finding; the remainder of the general and neurologic examination was normal. The hematocrit, platelet count, erythrocyte sedimentation rate, and levels of hemoglobin, C-reactive protein, and troponin T were normal, as were tests of renal and liver function

Each result has to be folded into some metric and compared against some decision point.
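
In the terms of the rule above, a sketch might look like the following, where every weight and the threshold are themselves human choices (all values are invented for illustration):

# Hypothetical weights assigned by a developer to the examination findings;
# each number, and the threshold itself, is a value judgement.
WEIGHTS = {
    "ptosis": 0.2,
    "facial_twitching": 0.3,
    "hyperreflexia": 0.2,
    "gagging_on_liquids": 1.5,
    "normal_labs": -0.5,
}
THRESHOLD = 1.0   # the decision point

def flag(findings):
    """Fold each result into one metric and compare it to the decision point."""
    score = sum(WEIGHTS.get(f, 0.0) for f in findings)
    return score >= THRESHOLD     # the A = B test: C if true, D otherwise

print(flag(["ptosis", "facial_twitching", "gagging_on_liquids", "normal_labs"]))  # True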
 
One of the biggest problems in any AI is the set of judgements made when developing the primal metrics. The recent discussion regarding the alleged Facebook news bias is a prime example. AI is not politically neutral. It can be, and can remain, highly biased by the very means through which selection is made. This is the case even if adaptive learning is used, because even then the learning algorithms are themselves elements of bias, perforce of their developers' input.

To quote Drucker, who paraphrased McLuhan:

 "Did I hear you right," asked one of the professors in the audience, "that you think that printing influenced the courses that the university taught and the role of university all together." "No sir," said McLuhan, "it did not influence; printing determined both, indeed printing determined what henceforth was going to be considered knowledge."

This led to McLuhan's famous phrase that the medium is the message. Specifically, as we developed a new medium for human communications, we dramatically altered the nature of the information that was transferred and the way in which humans perceived what was "truth" and what was not. The television generation of the 1960s was a clear example of the impact of television versus film in portraying the war in Vietnam as compared to the Second World War. The perception of these two events was determined by the difference between the two media that displayed them to the public masses. Television allowed a portrayal that hewed more closely to the events' impact on the individual, as compared to film's overview of the groups involved. Both media address the same senses, but they are different enough to have determined two different outcomes for the two wars. This is a McLuhanesque conclusion, but it is consistent with the changes McLuhan was recounting in his publications of the 1960s.

A corollary to McLuhan and the medium is the use of putative AI techniques to present certain facts to humans. The AI becomes the new medium; it is the filter between the facts, whatever they are, and the perceived reality. For example, if one were tracking news on politics, the AI presenter, the new medium if you will, might present only negative facts about, say, Trump and only positive facts about Clinton. The result is strong medium reinforcement. We know that is the case in print; we can clearly see it in, say, The New York Times, hardly a Trump fan, but there we know the source and can weight it accordingly. However, if there is some AI engine in the background, hidden behind some curtain, written and architected according to the feelings of some unknown person or persons, then how do we interpret that?
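
As a sketch of that concern (topic names, sentiment scores, and thresholds are placeholders, not claims about any actual system), the "curtain" can be as small as one table of numbers chosen by someone we never see.

# Hypothetical per-topic thresholds set by an unseen developer:
# items on topic_a must be very positive to be shown; items on topic_b barely need to be.
SHOW_IF_SENTIMENT_ABOVE = {"topic_a": 0.8, "topic_b": -0.5}

def present(stories):
    """Filter stories by a hidden, developer-chosen sentiment threshold per topic."""
    return [s for s in stories
            if s["sentiment"] >= SHOW_IF_SENTIMENT_ABOVE.get(s["topic"], 0.0)]

stories = [
    {"topic": "topic_a", "sentiment": 0.3},   # suppressed
    {"topic": "topic_b", "sentiment": -0.3},  # shown
]
print(present(stories))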

For example, we are now all told that Silicon Valley is the hub of the new entrepreneurial gestalt. However, this reality is a reality of apps, software manipulations, and social networking. In contrast, we have a massive entrepreneurial presence in Cambridge, where we have genomic engineering and lifesaving technology. We seem to weight a new app or social network well above a new pathway inhibitor or monoclonal antibody. Why? Perhaps because the social networks are self-reinforcing.

Thus our concern is that when humans develop AI algorithms, say for medical diagnosis or news presentation, the algorithms are inherently biased. Again, in medical diagnosis, does a machine respond as a human does when a misdiagnosis results in the patient's death? For the machine it is just another data point. For the human it can be mind-altering.

Machines do not make data mistakes; humans do. Yet machines do not weight their mistakes as drastically as humans do. Also, humans are always inserting their value judgements. The result may then, as Drucker noted, be perceived as the new truth. Thus, if we have some group of Millennials in California writing algorithms called AI, the output must be understood as nothing more than a complex, multi-layer opinion piece, and one that may have a long life. Do we really want their value judgements telling us what reality is? One would hardly think so. Thus AI poses a significant danger as nothing more than propaganda from privileged protagonists.

Reference

Greer et al., Case 1-2013: A 63-Year-Old Man with Paresthesias and Difficulty Swallowing, NEJM, 2013;368:172-80.

Drucker, Peter F., Adventures of a Bystander, Harper & Row (New York), 1979.