Tuesday, April 17, 2012

Mathematics, Models, Reality and Gnostics


Using mathematical models to understand and predict physical phenomena has been common practice for a few centuries. The physical sciences are based upon them, and engineering cannot function without them. However, there are other applications where certain limitations may exist. Let me first start with a quote from Cassirer, The Philosophy of the Enlightenment, pp. 108-109:

A survey of the special problems of eighteenth century epistemology and psychology shows that in all their variety and inner diversity they are grouped around a common center. The investigation of individual problems in all their abundance and apparent dispersion comes back again and again to a general theoretical problem in which all the threads of the study unite. This is the problem which Molyneux first formulated in his Optics, and which soon awakened the greatest philosophical interest.

Is the experience derived from one field of sense perception a sufficient basis on which to construct another field of perception that is of qualitatively different content and of a specifically different structure? Is there an inner connection which permits us to make a direct transition from one such field to another, from the world of touch, for instance, to that of vision? Will a person born blind, who has acquired an exact knowledge of certain corporeal forms by means of experience and so can distinguish accurately among them, have the same power to distinguish objects if, as a result of a successful operation, he gains possession of his visual faculties, and is required to judge concerning these forms on the basis of purely optical data without the aid of the sense of touch?

The point: perhaps one should be cautious in employing techniques that work well in one field but may have limits in others. Let me consider two recent efforts.

In Science there is a recent article on the use of models in understanding the operation of genetic pathways. Working in the field at this time, I can fully understand the attraction. Yet I also understand the complexity and potential for misuse. The author states:

Four hundred years ago, Galileo observed that “Nature’s great book is written in mathematical language.” Since that time, physical phenomena have been described by mathematical equations, yet biology has remained qualitative. A possible explanation is that complex behavior in physics emerges from relatively simple interactions between many copies of few elements, whereas biological complexity results from nonlinear interactions of many heterogeneous species. In this sense, biological systems are similar to engineered machines: Inventories of both airplane parts and animal cell proteins consist of tens of thousands entries; cell interactomes look similar to machine blueprints; and performances of both engineering and biological structures are characterized by robustness and noise resistance. This analogy has limitations: Biological systems are built from stochastic and unreliable parts; are evolved rather than designed; and are subject to reverse, not direct, engineering. Nevertheless, in the last two decades, the mathematics usually applied to engineering and physics has been often used in cell biological studies where quantitative models serve as a guide for failing intuition.

Here I would agree and disagree. Mathematical models are embodiments of intuition, of understanding, not surrogates for them. The problem is a genuinely difficult one. In addition, there is "noise" in these systems which creates uncertainty. Do we model the noise as random, or is it necessary to understand its dynamics as well and push the noise level lower? Is miRNA, for example, a noise element or an element we must account for in detail?
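
To make the question concrete, here is a minimal sketch, entirely my own and not from the article, of a toy birth-death model of protein expression simulated with and without an added random term (Euler-Maruyama). All names and parameter values are hypothetical.

```python
import numpy as np

# A toy birth-death model of protein expression:
#   dx/dt = k_prod - k_deg * x                  (deterministic)
#   dx = (k_prod - k_deg * x) dt + sigma dW     (with "noise")
# All parameter values are hypothetical illustrations.

def simulate(k_prod=10.0, k_deg=0.5, sigma=0.0, x0=0.0,
             dt=0.01, t_end=20.0, seed=0):
    """Euler-Maruyama integration; sigma=0 recovers the deterministic ODE."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = k_prod - k_deg * x[i]
        x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

deterministic = simulate(sigma=0.0)   # smooth approach to k_prod / k_deg = 20
noisy = simulate(sigma=2.0)           # fluctuates around the same mean
print(deterministic[-1], noisy[-1])
```

Whether a single sigma term like this suffices, or whether the fluctuations, miRNA among them, have dynamics that must themselves be modeled, is exactly the open question.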

The author continues:

The foundation for this surge was laid by two seminal papers that appeared 60 years ago. One was the biologically abstract and mathematically simple manuscript by Alan Turing proposing that a pattern can emerge in an initially homogeneous mixture of two chemicals. Turing used two linear partial differential equations (PDEs) with few parameters to demonstrate that two chemicals, a slowly diffusing “activator” and a rapidly diffusing “inhibitor,” could concentrate in different regions of space. Untested and unsubstantiated at the time, this conceptual model has served as a basis for many studies of polarity, chemotaxis, and development. Another work by Hodgkin and Huxley was mathematically complex, grounded in experimental data and very detailed: Many ordinary differential equations (ODEs) with many parameters and nonlinearities were used to describe ion currents through voltage-gated channels in the axon membrane. The parameters and nonlinearities were measured, and the model reproduced the observed electric bursts in nerve cells, which revolutionized our understanding of excitable systems.
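
For concreteness, the linearized two-species system Turing analyzed can be sketched as follows; the notation here is mine, not the article's:

```latex
% Linearized activator-inhibitor system: u is the slowly diffusing
% activator, v the rapidly diffusing inhibitor, a_{ij} the linearized
% reaction coefficients, and D_u << D_v (notation mine).
\begin{align*}
\frac{\partial u}{\partial t} &= a_{11}\,u + a_{12}\,v + D_u \nabla^2 u\\
\frac{\partial v}{\partial t} &= a_{21}\,u + a_{22}\,v + D_v \nabla^2 v
\end{align*}
% Diffusion-driven instability requires a steady state that is stable
% without diffusion but destabilized with it:
\[
a_{11}+a_{22}<0,\qquad
a_{11}a_{22}-a_{12}a_{21}>0,\qquad
D_v a_{11}+D_u a_{22} > 2\sqrt{D_u D_v\,(a_{11}a_{22}-a_{12}a_{21})}.
\]
```

Patterns then grow only for wave numbers in a band set by the two diffusion coefficients, which is what makes the slow activator, fast inhibitor picture so evocative.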

Here I have a real concern. The Turing paper is a true classic; it was intuition taken to the extreme. I have used it in modeling plant color patterning, and I am currently using it as a means to understand stem cells in melanoma. The problem is that, despite the power of Turing's metaphor, it may or may not be the correct mechanism, and the understanding needed to assure ourselves is still some distance away. Let me consider cancer at a high level. It is characterized by:

1. Loss of Control of the Cell Cycle: Namely, we have ligands, then receptors, then pathways, then promoters, the cyclins, and then the cycle, and so on. Control of the cell cycle will stop the multiplication of the malignant cell.

2. Loss of Location: Loss of E-cadherin attachment capacity results in melanocytes going where they should not. Why, and is this a Turing effect?

3. Stem Cells: Is there some collection of control cells, the stem cells, which send out control signals in space in a Turing-like fashion? (See the sketch after this list.) If one removes the stem cell, do the others die off? I have observed some of this in prostate neoplasia, but it is at best anecdotal.

4. Mutations: How do they occur, and why?
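
On point 3, here is a minimal sketch of what a spatial control signal might look like: a 1-D field of a diffusible signal secreted by one source cell (the putative stem cell), which is then removed. It is a simplified diffusion-decay caricature, not a full two-species Turing system, and every parameter is hypothetical.

```python
import numpy as np

# 1-D field of a diffusible "control signal" secreted by a single source
# cell (the putative stem cell). Purely illustrative: hypothetical
# parameters, not a validated model of any tissue.

n, dx, dt = 200, 1.0, 0.1
D, decay, secretion = 1.0, 0.05, 5.0   # diffusion, degradation, source rate
source = n // 2                         # index of the "stem cell"
c = np.zeros(n)                         # signal concentration along the line

def step(c, source_on):
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2  # periodic Laplacian
    c = c + dt * (D * lap - decay * c)
    if source_on:
        c[source] += secretion * dt
    return c

for t in range(5000):            # signal builds to a steady profile
    c = step(c, source_on=True)
peak_with_source = c.max()

for t in range(5000):            # "remove" the stem cell: source off
    c = step(c, source_on=False)
peak_without_source = c.max()

print(peak_with_source, peak_without_source)  # the field collapses
```

If the surrounding cells need that field to survive, removing the source kills them off; whether anything like this happens in a real tumor is precisely what remains to be shown.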

The author continues:

These two papers symbolize the opposite ends of “modeling space”. It is tempting to pronounce that we will be describing cells in ever more accurate terms and minute detail, moving from focused and conceptual (like ODEs describing three-node motifs in regulatory networks) to accurate and broad models, perhaps ending with a “whole-cell model” that completely recapitulates cell behavior on a computer, substitutes for wet laboratory experiments and makes personalized medicine possible. This is an appealing, if distant, goal. Meanwhile, this view subtly puts broad models above focused ones and suggests that there is a modeling “Road to Valhalla.”

I doubt that we are near that "Road" yet, but there is much superb work being done. If I had to bet, I would bet on Turing. I have seen it function in plant patterning with secondary pathways, and perhaps it functions in cancer cells as well.

But the key question is: are these models and methods reliable for this domain of knowledge? Have we managed to answer the question Cassirer posed? I think they are worthwhile. I think they will function quite well, but not as simply as many think. After all, the control system for a B-2 bomber may be as complex as the pathways of a single-cell organism; we just do not yet know enough. But it is worth a try.

Now to the other extreme: macroeconomics. A recent book, The Assumptions Economists Make by Schlefer, is an interesting contribution to understanding the world of macroeconomists from the perspective of an outsider. It also demonstrates the use, and gross misuse, of models, in contrast to the discussion above.

Let me start by commenting on a paper by Mankiw and Weinzierl, An Exploration of Optimal Stabilization Policy, which states in an opening set of assumptions (modified):

The economy is populated by a large number of identical households. The representative household has the following objective function:

$$\max\;\{\,u(C_1)+v(G_1)+b\,[\,u(C_2)+v(G_2)\,]\,\}$$

where $C_t$ is consumption in period $t$, $G_t$ is government purchases, and $b$ is the discount factor. Households choose consumption but take government purchases as given.

Households derive all their income from their ownership of firms. Each household's consumption choices are limited by a present-value budget constraint:

$$P_1\,(I_1-T_1-C_1)+\frac{P_2\,(I_2-T_2-C_2)}{1+i_1}=0$$

where $P_t$ is the price level, $I_t$ is profits of the firm, $T_t$ is tax payments, and $i_1$ is the nominal interest rate between the first and second periods. Implicit in this budget constraint is the assumption of a bond market in which households can borrow or lend at the market interest rate.
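
To see what this machinery actually computes, here is a minimal sketch under assumptions of my own choosing: log utility and concrete numbers for profits, taxes, prices, and the interest rate, none of which come from the paper. Since households take government purchases as given, the $v(G_t)$ terms drop out of the choice problem.

```python
# Two-period consumption choice for the representative household, with
# log utility u(C) = ln(C); all numbers below are hypothetical.
# Budget constraint: P1*(I1 - T1 - C1) + P2*(I2 - T2 - C2)/(1 + i1) = 0.

b = 0.96                     # discount factor
P1, P2 = 1.00, 1.02          # price levels in periods 1 and 2
I1, I2 = 100.0, 100.0        # firm profits paid to the household
T1, T2 = 20.0, 20.0          # tax payments
i1 = 0.05                    # nominal interest rate

# Real gross return between the periods:
R = (1.0 + i1) * P1 / P2

# Present value (in period-1 goods) of after-tax income:
W = (I1 - T1) + (I2 - T2) / R

# With log utility the Euler equation u'(C1) = b * R * u'(C2) gives
# C2 = b * R * C1, and the budget constraint then yields:
C1 = W / (1.0 + b)
C2 = b * R * C1

print(f"C1 = {C1:.2f}, C2 = {C2:.2f}")
```

Every identical household solves exactly this problem with exactly these numbers, which is precisely the difficulty discussed below.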

Just what does this model have to do with reality? Why are households identical? Is not the real issue that they differ, and that their differences change over time? And what objective function? Is there not a psychological element as well, and are the choices not often illogical or inconsistent? And what household derives all its income from owning firms? Very few. Thus this statement, typical of almost all, assumes a world not in evidence. In contrast to my biological pathways, which we struggle so hard to understand with facts, the economists can easily say, "assume a spherical elephant." Yet none exists.

Unlike the problems in the world of genomics, where we do not assume but base our models on facts, as in science and engineering generally, this paper and this statement are a typical example of what Schlefer discusses: macroeconomists using equations to justify a total lack of reality. Schlefer goes through many of the absurd assumptions economists make in their models, and he correctly articulates the arrogance many have in claiming knowledge that others lack. They have become the Gnostics of the twenty-first century.