

									Transcript of Realism, Anti-Realism and Non-Realism 29/6/04

This week I want to try to make sense of what was discussed in the
previous weeks, while looking a little closer at what the underlying reality
behind our scientific descriptions might be, assuming such a thing exists.

In the first week I briefly explored the metaphysical basis of concepts
such as spacetime, matter and energy, and in the second looked more
closely at causation, law and determinism. One thing immediately
noticeable about these was the many current difficulties concerning their
exact definition, whether ontological or analytical, and also their
conceptual interdependency. Energy and force, for instance, are defined
in terms of matter undergoing change, while matter is definable in terms
of force or energy. Similarly, space, traditionally described as holding or
dividing material objects and so defined in terms of matter, is today
almost seen as the precursor of matter, which is itself definable as a series
of events in spacetime. This interdefinition, with no reference to a
foundational reality, is taken by many (myself included) to indicate that
we are dealing with an internally defined conceptual system rather than a
reflection of an external reality. But before I look at this possibility, what
are the alternatives?

The orthodox, conservative view is perhaps still Realism: the view that our
perceptions reveal an external reality of some kind. By this I do not mean
so-called naive Realism, the view that everyday reality is just what it
appears to be, but rather the claim that the reality revealed by science is real.
But if this is the case, how can we account for the difficulty in defining
this world, and for the interdefinability of its constituents? One way to
handle the latter is through cofoundationalism: the idea that, for instance,
matter and energy are jointly real and so definable in terms of each other.
This is an attractive option; however, it raises a problem of temporal
evolution: what comes first, matter or energy? If we choose either, we lose
the ability to define it in terms of its partner. If we choose both, we enter
a difficult conceptual problem: are they identical, and if so why do they
appear different; and if they are different, what correlates their joint
appearance? One way out of this might be to have matter and energy
jointly emerging from spacetime. However, relative spacetime is
traditionally conceived in terms of matter, so we lose the possibility of
defining it before matter exists. We can of course describe spacetime
purely mathematically and account for matter, energy or their combination
emerging from it spontaneously, but these descriptions seem to require
energy as a joint foundation with spacetime, so we merely move the
problem back a step. Some modern theories attempt to reduce everything
to energy and particles of space, but this is not really helpful, as it is still
cofoundational. Although it does perhaps help explain general causal
patterns, or laws, as the arbitrary structure adopted by these 'bits of
spacetime' across the universe as a whole. All
these various theories are problematic while a single foundation remains
elusive. One possibility open to the Realist is to unify these foundational items in
some way, most likely through a field theory that closely relates a
universal field of force with the structure of spacetime, without reference
to material particles or waves as such.
This would also have the benefit of overcoming some of the problems of
Quantum Mechanics and unifying it with Relativity theory.
Note that this also implies that our everyday perception of a pluralistic
world of individual objects is largely illusory, and so goes a long way
down the conceptual-scheme route, though it of course maintains a close
correlation between this scheme and reality.
This has not yet been achieved, however, and it poses a number of problems,
not least the complexity of the system and the difficulty of
computing a whole universe in a single mathematical field description.
Complex systems of this kind can often only be represented as a
simulation rather than a normal scientific description, because the
mathematical formalism is simply too complex to compute. But in this
case how can the entire universe be simulated in a system that is itself
part of that universe? It seems only local descriptions may be possible,
and so we have to return to a non-holistic solution, at least for the sake of
conceptual simplicity.
The overall foundation of the universe would thus be indescribable, and
any talk of it mere metaphor held true through faith. Or in
other words, scientific Realism becomes religious belief.

A more common move by the Realist, however, is to argue that the
problems outlined earlier (indefinability, QM contradictions, etc.) are in
fact pseudo-problems based on our current lack of understanding of the
world. The sceptic is simply being too pessimistic, the Realist often claims. But
for many this entails an awful lot of faith on the part of the Realist, as it
assumes that all the many logical and ontological problems in the
philosophy of physics will one day be solvable, and even that quantum
mechanics will eventually
be superseded by a new, more coherent and less problematic theory. But
this seems to require an almost impossible conservative optimism, akin
to that found in the Church at the onset of the Enlightenment.
Furthermore, even if current physics were superseded, who is to say that
the next, or even the ultimate, theory will not be still more problematic
for Realists? Unless, that is, physics is benign enough to be completed in a
tidy, conceivable little theory attuned to Realist prejudice.

Another possible solution is the opposite of Realism, that of Anti-
Realism. This term has two meanings, however. In the States it simply
means a scepticism towards Realism, and refers to views such as
Instrumentalism (the idea that objects like atoms do
not really exist, but are merely useful working models for the description
of reality) and the related Neo-Kantianism (the idea that phenomena are
conceptually moulded percepts that only indirectly correlate with reality,
rather than being direct representations of it). This view, which I shall refer to
instead as Non-Realism, agrees with Realists that there is an independent
(or at least semi-independent) external reality, unresponsive to our
desires, but denies the Realist's claim that we can know it, as it really is,
through our conceptual schemes. Thus, for instance, while the Atomist theory as a
whole correlates with reality as a whole (to the extent of its predictability),
the units of the theory, atoms, do not correlate with anything in reality;
they are just conceptual tools existing in theory and in our imagination. It is thus
not a true Anti-Realist position. But I shall return to this shortly.

First I want to look at a completely different use of the term Anti-
Realism: its use in the UK by philosophers like Michael Dummett, who
coined the term. This view, and others like it, is almost the opposite of
Realism: it is basically an Idealist position which states that we can really
know the world through our conceptual schemes, simply because the
world is really nothing but a conceptual scheme. Note that Anti-Realism is a
bit of a misnomer here, as the exact opposite of Realism would be a reality
dependent on our minds that we could not know, rather than a mind-
dependent reality that we can know by definition, though that would be a
strange position indeed.
Dummett's thesis is not purely Idealist or Phenomenalist (a Neo-
Kantianism of concept and percept without the underlying reality), but
approximates to it because, like some interpretations of Quantum Theory,
it posits situations in which a description can be neither true nor false until
a positive assertion is made. Thus it is language, according to this reading
of Dummett's theory, that ultimately shapes the material world. Many
philosophers equate this with Idealism, the thesis that only minds and
ideas really exist. The Idealist tradition ultimately traces itself back to
Bishop Berkeley, who claimed that existence was identical to perception. This
primitive form of Anti-Realism is easily refuted, however, through the
appeal to a stable shared reality: if my perception shapes my world and
your perception shapes your world, how can we both share the same
reality? Such a thesis seems to lead to solipsism. More worryingly, what is
it that preserves the continuity of the world when no one is observing it?
The answer to both these questions, for Berkeley, was God, as the ultimate
observer. It should be noted that exactly the same problem may emerge
for the Princeton Interpretation of Quantum Mechanics, which states that
the universe is an observer-created one, since it is observation that is said to
collapse the QM probability function. However, this whole argument
seems to be a reductio ad absurdum, not least for its use of an
observing divinity as the ultimate source of reality (what if he nodded
off!). In contrast, Phenomenology and Dummett's Anti-Realism
sidestep the God hypothesis by using inherent mental categories and/or
arbitrary shared language as the guarantee of a shared stable order. In a
conventional reading, this just seems to move the problem to another
area, however, in that it fails to answer the obvious question of how
language or mental categorisation achieves a stable order without any
foundation itself. The only answer seems to be some kind of
structured disembodied consciousness communicating telepathically,
which not only strikes many as being as absurd as the God hypothesis, but, more
importantly, also flies in the face of our everyday experience of a material
reality (and rejects any notion of sensual existence). We know that an
external world exists simply by the fact that it sometimes resists our
actions in it. This table exists because it offers a resistance to any of our
attempts to exert a force on it. It is simply real in this sense. For this
reason most philosophers regard Anti-Realism as absurd, and not a
serious challenge to scientific Realism.

But things do not end there. It is now time to turn to what I've called Non-
Realism. As previously stated, Non-Realism basically encompasses all
those views that can be categorised as Neo-Kantian: the notion that the
world of phenomena we experience, and the theoretical models of the
world we generate, are not precise reflections of an external reality, but
are rooted in conceptual schemes that only indirectly match a reality
which we can never perceive or conceive of in its pure form. This offers a
simple solution to the philosophical problems we currently have in this
area, as it explains the interdefinable nature of all the elements of the
scientific model. This kind of interdefinability is exactly what we should
expect of an internally coherent conceptual scheme, the truth of which is
independent of an external reality and entirely internal. Logic itself thus
becomes a model of the structure of our minds rather than of the world,
while at the same time the obvious existence of an external reality is not
denied. Non-Realism
can thus combine the best ideas of Realism and Anti-Realism.
This view has a few problems associated with it, of course. One is this: if
logic is merely mental structure, what does that say about the way
mathematics relates to the world? Traditionally maths, like logic, was seen
as the ideal model of the world and the most authentic representation of it.
This also has a bearing on the mathematical models of QM. On the Non-
Realist account, though, mathematical models are just that: more
conceptual schemes by which we understand the world, rather than the
way the world is. This view is actually supported by developments within
mathematics itself. Goedel's Theorem is widely regarded as demonstrating
that no mathematical model (or at least none involving arithmetic) can
ever be both complete and entirely self-consistent. Something is always
left out of the equation. This can be best understood in terms of Set
Theory, which has its own classic paradox, the problem of self-reference,
exemplified in the example of the library with two catalogues: one listing
all the books that refer to themselves, and the other listing all the books
that don't refer to themselves. The paradox is that while the first catalogue
can be included in itself without contradiction, the second catalogue
cannot be listed either in itself or in the first catalogue without
contradiction. It cannot be included in the overall system. Goedel's
Theorem is similar in that some elements of any arithmetic system cannot
be defined within that system; it is thus incompletable. At first this was
thought to be a minor problem involving a small amount of indefinability,
but more recent studies have shown mathematics to be more like a sea of
indefinables with a few large islands of coherence. Thus if maths
is a reflection of reality it must be a very strange reality. What is more
likely is that mathematics is a conceptual scheme that imperfectly
represents reality.
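The two-catalogue paradox can in fact be made concrete in a few lines of code. The following is a toy sketch of my own, not part of the lecture: each catalogue is reduced to its rule for deciding whether it should list itself, and we simply check which answers to the question "does it list itself?" obey that rule.

```python
# Toy model of the library paradox. Each catalogue is reduced to its
# rule for whether it should list itself, given a tentative answer to
# the question "does this catalogue list itself?".
def stable_answers(rule):
    """Return the answers to 'does it list itself?' that obey the rule."""
    return [answer for answer in (True, False) if rule(answer) == answer]

# Catalogue 1 lists the self-listing books: it should list itself
# exactly when it does list itself.
cat1 = stable_answers(lambda lists_itself: lists_itself)

# Catalogue 2 lists the non-self-listing books: it should list itself
# exactly when it does NOT list itself.
cat2 = stable_answers(lambda lists_itself: not lists_itself)

print(cat1)  # [True, False] -- either answer is consistent
print(cat2)  # []            -- no consistent answer: the paradox
```

The first catalogue has two stable answers and the second none, which is exactly the contradiction described above: the self-referential rule simply has no place in the overall system.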
Some have argued that if all we have is an internally consistent
conceptual scheme that is not a reflection of reality, then there is an
unbridgeable gap between this scheme and reality; in fact, we have no
reason to surmise that this reality exists at all. But this degeneration into
Idealism is not necessary, as the very fact of a conceptual scheme being
successfully applied to a world entails the existence of that world. What's
more, the fact that our conceptual models of Causation, for example, fail
to capture what we intuit Causation to be tends to indicate that Causation
itself is something other than a set of ideas in a conceptual scheme, and
so refutes any Idealist account of Causation. What it really is, though, does
raise problems. As, incidentally, does the status of 'quantum logic', a
bizarre form of logic designed to describe the world of quantum reality.
Some philosophers argue that this is the true logic of thought (and only
approximated by classical logic), which intuitively seems wrong, given
quantum weirdness, while others regard it as the basis
of the only true description of the world, though this flies in the face of
the view of maths and logic as conceptual schemes just outlined.
Although, given the discontinuities and ambiguities of quantum reality,
an incomplete patchwork of bizarre logical description might just be a
true representation of reality after all!

This leads us back, of course, to Niels Bohr's Complementarity theory,
arguably one of the few interpretations of QM to survive a rigorous
critique. As we saw last week, Complementarity describes quantum reality
in terms of various mutually incompatible but jointly necessary
superposition descriptions, that is, wave-particle descriptions,
momentum-position descriptions, and so on. This only
seems plausible under a Neo-Kantian philosophy, which attempts to
describe the world in terms of a logically incompletable conceptual
scheme (à la Goedel) that can only be expressed in partially complete but
logically incompatible descriptions. The only complication here is
Bohr's claim that a wave and a particle description could not be applied
to the same experiment, or a contradiction in a single description would
result (as a wave is by definition extended in space while a particle can only
be in one place), whereas recent experiments seem to have demonstrated
simultaneous wave and particle nature. What this means is unclear,
though arguably it could demonstrate either an illogicality to reality as
measured in experiment (or rather the failure of our mental categories to
match reality in atypical situations), and/or perhaps the need for a true
objective logic of a totally different kind. Either way, the actual reality
of the situation could not be described coherently as either a wave or a
particle (and arguably not as anything else in practice) using
conventional reason, which further confirms the idea that we are dealing
primarily in conceptual schemes when we describe the world, while of
course also showing that a reality underlies this conception which cannot
in itself
be conceptually represented. Note that further experiments have ruled out a
coherent underlying reality of hidden variables and shown
that the superpositions are real and subject to objective probabilities, as
well as conceptual and subject to subjective probabilities (a controversy
even within Copenhagenism till then). This would seem to indicate an
odd parallel between the nature of our conceptual models and the
structure of reality, though not one compatible with conventional Realism.

Obviously Non-Realism opens up problems regarding the nature of truth.
But these are not necessarily serious ones; for instance, van Fraassen
argues for the concept of Empirical Adequacy replacing classical notions
of truth (at least in scientific contexts). Here
we do not say that something is 'true', but simply that it matches the data
adequately, bearing in mind that several interpretations of the data
can match the same set of experimental results with equal adequacy. If
some of these interpretative descriptions are equally 'true'
(in the traditional sense), or rather are sustainable and irrefutable (in the
scientific sense), but have a tendency to be incomplete,
then we return immediately to Complementarity.

Despite the attractions of this view, there are some serious problems with
a Neo-Kantian Non-Realism, the most serious being the actual
connection between the conceptual scheme and reality. We seem doomed
to an even deeper form of Cartesian dualism if we cannot account for the
overlap between reality and our models of it. This remains a very difficult
problem, for if, as Kant claims, we cannot experience the world without a
conceptual scheme, then how do we discover a correspondence between
concept and reality at all? That concepts are necessary for perception also
seems to be borne out by studies of physical perception in different
cultures. So how does our conceptual world come to match reality closely
enough to allow us to survive in the world, if it is only produced by
random Darwinistic natural selection? It would seem incredibly
improbable that such a conceptual scheme would emerge this way.

One bold solution comes through the philosophy of a thinker who
followed in Kant's footsteps and claimed to have superseded him: G. W. F.
Hegel. Hegel argued that all progress was dialectical, in that two
opposing notions could be closely related through their mutual opposition
and interdependence. It follows from this that our conceptual scheme of
the world may stand in a dialectical relation with the reality beyond that
scheme. But what could this mean in practice? How can a detached
conceptual scheme get to grips with reality in the first place, so as even to
begin a dialectical relation? I don't intend to go into the complexities of
Hegelianism here, but a few more basic observations might be useful.

For one, Hegel maintained that our concepts and reality, while not in a
relation of mirror and object, are nonetheless closely entwined: they are
mutually dependent on each other. Ideally most neo-Hegelians
would like this interdependency to be a
material one with a conventional epistemology, though it is not
impossible that a form of participatory epistemology is involved, that is,
that our knowledge of the world is not passive but active, and modifies it
in some way. What this means is difficult to grasp. It is not Idealism, as it
is not saying that only minds and ideas exist, so it is not an Anti-Realist
stance, although it is obviously not a Realist one either. It seems to
suggest that reality is only independent of our perception up to a point,
and remains in some sense incomplete until we have knowledge of it.
This is very close to some interpretations of QM, though it should be
emphasised that this is not the same as the observer effect put forward by
Wheeler (whether that is interpreted as a form of Idealism or not), as we
are dealing here not with individual observation but with shared
knowledge. This may sound odd, but there are empirical indications that
it could be true, and not all of them come from QM.

At the turn of the last century it was conventionally thought that glycerine
could not crystallise. However, when a piece of crystalline glycerine was
discovered (there are various accounts of how this happened) and the
news spread, it is claimed that glycerine began to be crystallised using
methods that had previously failed. Sceptics argued that
microscopic fragments from sample glycerine crystals sent to the labs had
contaminated the results, seeding crystal formation, and where no
samples were sent, crystals were said to have been carried in the beards of
visiting scientists. However, when glycerine allegedly crystallised
spontaneously in 'sealed jars', something odd seemed to be happening.
The records of these events are to be found in the scientific literature of the
period, though unfortunately the details are a little too vague to allow
definite conclusions. One more interesting sceptical account argued that
the crystallisation of glycerine had always been possible but had a very low
probability, and all that had happened was that an improbability occurred
at this time, akin to a long run of heads in a series of coin flips: no
mystery at all. But this only works if we treat the initial discovery of
crystalline glycerine as a separate coincidence (unless it too appeared
spontaneously). Oddly, this explanation may support a highly
controversial claim which I shall now explore.
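The sceptic's coin-flip analogy can be put into numbers. A minimal sketch (the run lengths chosen here are arbitrary illustrations, not figures from any account of the glycerine case):

```python
# Probability of n consecutive heads with a fair coin: (1/2) ** n.
def run_probability(n, p=0.5):
    """Chance of n successes in a row for an event of probability p."""
    return p ** n

for n in (5, 10, 20):
    print(f"{n} heads in a row: probability {run_probability(n):.2e}")

# The expected number of trials before such a run first occurs grows
# roughly as 2 ** n, so given enough independent attempts the
# 'improbable' run is practically guaranteed to happen somewhere.
print(1 / run_probability(20))  # 1048576.0 -- about one chance in a million
```

Since the chance of a run falls only geometrically while the number of worldwide attempts at crystallisation accumulates, a rare event occurring eventually is exactly what probability theory predicts, which is all the sceptical account requires.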

It should be remembered that there are outstanding problems with QM
even if we take a Neo-Kantian view. There seems to be a real,
experimentally demonstrable reality beyond our conceptual schemes
which indicates that superposition is real: the duality of quantum states
is not just a matter of the description of a complex reality in which our
normal conceptual categories don't apply, it is a real state of an
ambivalent quantum reality. That is, our contradictory descriptions do, to a
certain extent, match an ambiguous reality in some sense. But as we saw
last week, objective superposition is hard to explain. One solution,
however, is Decoherence theory.
To recount from last week, one interpretation of Decoherence theory
suggests that a simple quantum system (e.g. a coupled electron pair) requires
little information to represent it, while a large quantum system (such as
Schrödinger's cat) requires an enormous amount of information to
represent it (at least three times the number of its particles). The latter is
thus inaccessible to a limited representational information system such as the
human mind. Thus to human observers the world appears classical, but it is
really quantum mechanical. Measurement connects a simple system to the
larger world, making it appear classical. This is sometimes used in
Copenhagen interpretations to explain probability-wave collapse, even
though the wave function doesn't actually change; it just appears to.
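To give a rough sense of scale to the information-counting claim above, here is a back-of-the-envelope sketch. It assumes the lower bound just stated of three parameters per particle, and an order-of-magnitude guess of 10^25 atoms for a cat (my figure, not one from the lecture):

```python
# Rough arithmetic for the representational claim above: at least three
# parameters per particle. The atom count for the cat is an
# order-of-magnitude guess, used only for illustration.
def min_parameters(n_particles):
    """Lower bound on parameters needed to represent n particles."""
    return 3 * n_particles

electron_pair = min_parameters(2)   # a simple quantum system
cat = min_parameters(10 ** 25)      # a macroscopic system (assumed size)

print(electron_pair)  # 6 -- trivially representable
print(cat)            # ~3e25 -- far beyond any human observer's capacity
```

The asymmetry is the point: six numbers are trivially representable, while tens of septillions are not, which is why a macroscopic superposition is inaccessible to a limited observer and the world appears classical.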
Similarly, it is used to explain why we only perceive one history in a State-
Relative, 'Many Histories' interpretation. But if this is so and all states
remain real, they can still interact, so why isn't reality weirder? A
speculative answer may be that the information we have about the world
actually changes how it behaves, or more precisely modifies the
probabilities of how it will behave. Thus we would have an
explanation for the glycerine effect that involves no coincidences: the
information that glycerine could crystallise, once shared by a critical
mass of scientists, actually changed the probabilities of its spontaneous
crystallisation. This might sound bizarre, but experiments do seem to
indicate the incredible possibility that random systems can be influenced to a
small extent by the human mind, though these experiments are ongoing.
If this turns out to be true then we have a real mechanism for a
participatory epistemology akin to that proposed by Hegel, although the
conceptual details would have to be fleshed out, to say the least.
Furthermore, given the theories of David Chalmers in the Philosophy of
Mind, that all systems that hold information (even mechanical ones) are
by his definition 'conscious', and so are bearers of knowledge in some
sense, it might also be possible that a participatory epistemology could
operate even without a human mind being present, thus avoiding obvious
problems with an account of the physical evolution of the universe. While
this remains a highly speculative and controversial thesis, given the
absurdity of Anti-Realism and the almost ridiculous optimism of Realism,
this form of Non-Realism remains a serious option. Though it is also quite
possible that some as yet unknown, less radical or hybrid position will
prove preferable.

Finally, I'd like to apply this last thesis to the topic of free will I raised last
week. Then I stated my belief that moral choice
and personal responsibility depend on the existence of free will if they are
to be part of a coherent ethical theory. I further observed that deterministic
accounts of science and deterministic interpretations of QM exclude the
possibility of the kind of free will required
by a theory of this sort, and so declared my preference for indeterminate
interpretations of QM, such as the Copenhagen Interpretation, with QM
corresponding to the true physics of the world (as it does under
Decoherence theory). However, while this does not exclude free will, it
does not make it possible either. The orthodox account of the
Copenhagen Interpretation merely states that the results of measurement
are determined probabilistically, with the actual result being effectively
random: not the kind of background theory to make free choice a
possibility. However, as we have seen, there is a case to be made for the
modulation of probability through knowledge or information, and
arguably this process is part of the Decoherence effect that creates the
appearance of classical physics. If the human mind were one of the
information systems that could modulate probability in this way, then
there is the possibility that the results of measurement are not random
after all, but based on the conscious phase-shifting of the wave function
through the modulation of its probability content. Thus a conscious decision
process could be taking place within the brain, making free will a component
in a wider deterministic system. While this is a speculative idea, it has to be
taken seriously due to its arguably closer correspondence with
empirical reality at present. It remains, however, only a sketch of a solution and
would have to be developed much further to become a viable thesis. In
particular, a mode of interaction between knowledge or information and
material reality would have to be devised. One way might be to combine
Ernst Mach's version of neutral monism, or Chalmers's aspect dualism
(a broadly equivalent thesis of the mutually irreducible 'mental' and
'physical' properties of matter), with the latter's theory that
consciousness is basically a physical system storing information, where
the information has a conditioning effect on the conscious aspect of the
dualism, and further conditions its material aspect.
This is also understandable in terms of probability, which currently
has two definitions: subjective or epistemic probability (the degree of
certainty in our knowledge) and
objective or ontic probability (the likelihood of an event occurring).
Conventionally viewed as two separate phenomena, they could also be
viewed as two aspects of one phenomenon under aspect dualism. A
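The two definitions of probability can be illustrated side by side. In this minimal sketch (the bias value and trial count are arbitrary choices of mine), the ontic probability is a fixed propensity of a simulated coin, while the epistemic probability is an observer's credence about that bias, updated flip by flip and converging toward the ontic value as evidence accumulates:

```python
import random

random.seed(0)                      # deterministic illustration
ONTIC_P = 0.7                       # objective propensity of the coin (assumed)

# Epistemic probability: the observer's degree of certainty about the
# unknown bias, here the posterior mean of a Beta(1, 1) prior updated
# with each observed flip.
heads, tails = 1, 1
for _ in range(10_000):
    if random.random() < ONTIC_P:   # the ontic chance process itself
        heads += 1
    else:
        tails += 1

credence = heads / (heads + tails)  # epistemic probability after the evidence
print(f"ontic:     {ONTIC_P}")
print(f"epistemic: {credence:.3f}") # converges toward the ontic value
```

Conventionally these are two unrelated quantities, a fact about the coin and a fact about the observer; under the aspect dualism suggested above they would instead be two faces of one phenomenon, and the sketch at least shows how the epistemic number tracks the ontic one.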
definition of will within this context would also have to be found, perhaps
as an assertive proposition ('I will ...') within the context of a
propositional theory of knowledge. This might form a domain of local
knowledge able to modulate probabilities in the immediate environment
of the subject (the human brain, for instance) and thus turn the
indeterminate collapse of the wave function into a subject-determined
collapse, constituting a form of agent causation within a wider deterministic
framework. For such a concept of local knowledge to make sense,
however, it would not only have to be closely related to an act of
volition, but also to the neural processes associated with that act.
There would also have to be a way of describing the interaction of global
knowledge (shared beliefs about the world) with local knowledge, while
at the same time maintaining the semi-autonomy of the 'laws of nature'.
Such a task may be daunting, but it is necessary.
