					        Volume 21, Number 1

        Editorial

            3    Inference and Scientific Exploration                              Mikel Aickin
        Research Article
           13    Linking String and Membrane Theory to Quantum Mechanics           M. G. Hocking
                 and Special Relativity Equations, Avoiding Any Special
                 Relativity Assumptions
           27    Response of an REG-Driven Robot to Operator Intention             R. G. Jahn
                                                                                   B. J. Dunne
                                                                                   D. J. Acunzo
                                                                                   E. S. Hoeger
           47     Time-Series Power Spectrum Analysis of Performance in Free       Peter A. Sturrock
                  Response Anomalous Cognition Experiments                         S. James P. Spottiswoode
           67     A Methodology for Studying Various Interpretations of the        Marko A. Rodriguez
                  N,N-dimethyltryptamine-Induced Alternate Reality
           85     Comment on "A Methodology for Studying Various Inter-            Rick Strassman
                  pretations of the N,N-dimethyltryptamine-Induced Alternate
                  Reality"
           88     Comment on "A Methodology for Studying Various Inter-            Anonymous
                  pretations of the N,N-dimethyltryptamine-Induced Alternate
                  Reality"
           89     An Experimental Test of Instrumental Transcommunication          Imants Barušs
           99     An Analysis of Contextual Variables and the Incidence of         Devin Blair Terhune
                  Photographic Anomalies at an Alleged Haunt and a Control         Annalisa Ventola
                  Site                                                             James Houran
          121     Comment on "An Analysis of Contextual Variables and the          Anonymous
                  Incidence of Photographic Anomalies at an Alleged Haunt and
                  a Control Site"
          123      The Function of Book Reviews in Anomalistics                    Gerd H. Hovelmann
          135      Ockham's Razor and Its Improper Use                             Dieter Gernert
          141      Science: Past, Present, and Future                              Henry H. Bauer
        Letters to the Editor
          157      Swords's Caution to Coleman                                     Michael D. Swords
          159      Coleman's Response to Swords                                    Peter F. Coleman
        Book Reviews
          163    Psicologia Psicoanalisi Parapsicologia, by G. Caratelli; Del      Carlos S. Alvarado
                 Hipnotismo a Freud, by J. M. López Piñero; The Bifurcation of
                 the Self, by R.W. Rieber; Premiers Écrits Psychologiques, by P.
                 Janet; L'Hypnose: Charcot Face à Bernheim, by S. Nicolas;
                 Hypnotisme, by E. E. Azam
          174    A Casebook of Otherworldly Music: Vol. 1 of Paranormal            Analisa Ventola
                 Music Experiences; and A Psychic Study of the
                 Music of the Spheres: Vol. 2 of Paranormal Music Experiences,
                 by D. Scott Rogo
          Poltergeists: Examining Mysteries of the Paranormal, by      Bryan J. Williams
          Michael Clarkson
          Ghost Hunters: William James and the Search for Scientific   Michael Schmicker
          Proof of Life After Death, by Deborah Blum
          Leaps in the Dark, by John Waller                            Henry H. Bauer
          Quantum Enigma: Physics Encounters Consciousness, by Bruce   Richard Conn Henry
          Rosenblum and Fred Kuttner
          The Cult of Personality, by Annie Murphy Paul                Tana Dineen
          The Hundred Year Lie, by Randall Fitzgerald                  Joel M. Kauffman
          Sasquatch: Legend Meets Science, by Jeff Meldrum             John Bindernagel
          Further Books of Note
          Articles of Interest
          DVD/CD Reviews
          Journal Reviews
          Letter to the Book Review Editor
SSE News
  227    26th Annual SSE Meeting Announcement
  228   Society for Scientific Exploration Officers and Council
                           A Publication of the Society for Scientific Exploration


    Volume 21, Number 2                                                                             2007
      229     Editorial                                                        Henry H. Bauer
      231     Inference and Scientific Exploration                             Mikel Aickin
    Dinsdale Award Lecture
      241     The Role of Anomalies in Scientific Exploration                  Peter A. Sturrock
    Research Article
      261     The Yantra Experiment                                            Y. H. Dobyns
                                                                               J. C. Valentino
                                                                               B. J. Dunne
                                                                               R. G. Jahn
      281     An Empirical Study of Some Astrological Factors in Relation to   Suzel Fuzeau-Braesch
              Dog Behaviour Differences by Statistical Analysis and Com-       Jean-Baptiste Denis
              pared with Human Characteristics
      295     Exploratory Study: The Random Number Generator and               Lynne I. Mason
              Group Meditation                                                 Robert P. Patterson
                                                                               Dean I. Radin
      318      Commentary: Comments on Mason, Patterson and Radin              Mikel Aickin
      325      Statistical Consequences of Data Selection                      Y. H. Dobyns
      354      Commentary: Comments from Mikel Aickin                          Mikel Aickin
    Letters to the Editor
      353      The Wave Function Really Is a Wave                              Robert D. Klauber
      357      Hocking's Response to Klauber                                   M. G. Hocking
      361    In Memoriam: George Sassoon, 1936-2006                            Ronald N. Bracewell
    Review Essay
      365    Stagnant Science: Why Are There No AIDS Vaccines?                 Henry H. Bauer
      373    Earthquake Prediction, Kooks, and Syzygy: A Review of The         David Deming
             Man Who Predicts Earthquakes
      383    Review of The Man Who Predicts Earthquakes: Jim Berkland,         Patrick McClellan
             Maverick Geologist: How His Quake Warnings Can Save Lives
             by Cal Orey
    Book Reviews
      397    Trusting the Subject (Vols. 1 and 2), by Anthony Jack and         Etzel Cardeña
             Andreas Roepstorff
      402    Miracles: A Parascientific Inquiry into Wondrous Phenomena;       Analisa Ventola
             The Search for Yesterday: A Critical Examination of the
             Evidence for Reincarnation; and Our Psychic Potentials, by D.
             Scott Rogo
      405    Tectonic Consequences of the Earth's Rotation, by Robert C.       Karsten Storetvedt
      413    Unstoppable Global Warming Every 1,500 Years, by S. Fred          Joel Kauffman
             Singer and Dennis T. Avery
      416    The End of the Certain World, by Nancy Thorndike                  Karl Gustafson
      422    Science Sold Out: Does HIV Really Cause AIDS?, by Rebecca         Henry H. Bauer
  427     The Cult of Pharmacology, by Richard DeGrandpre                   Joel Kauffman
  430     The Era of Choice: The Ability to Choose and Its Transformation   Ronald Howard
          of Contemporary Life, by Edward C. Rosenthal
  431     Further Books of Note
  433     Articles of Interest
  434     Letters to the Book Review Editor                                 Halton Arp
SSE News
  435    Society for Scientific Exploration Officers and Council

Volume 21, Number 3

  437     Editorial                                                       Henry H. Bauer
  443    Inference and Scientific Exploration                             Mikel Aickin
Research Article
  449    Dependence of Anomalous REG Performance on Run Length            Robert G. Jahn
                                                                          York H. Dobyns
  473     Dependence of Anomalous REG Performance on Elemental            Robert G. Jahn
          Binary Probability                                              John C. Valentino
  501     Effect of Belief on Psi Performance in a Card Guessing Task     Kevin Walsh
                                                                          Garret Moddel
  511     An Automated Online Telepathy Test                              Rupert Sheldrake
                                                                          Michael Lambert
  523      Three Logical Proofs: The Five-Dimensional Reality of          James Edward Beichler
Review Article
  543      Children Who Claim to Remember Previous Lives:                 Jim B. Tucker
           Past, Present, and Future Research
  553      Memory and Precognition                                        Jon Taylor
Letters to the Editor
  572      What is Potential? Arp Responds to Ibison                      Halton Arp
  572      Response to Halton Arp's Comments and Tom Van Flandern's       Michael Ibison
           Published Position on the Reality of the Electromagnetic and
           Gravitational Potentials
  576      More About Crop Circles                                        Henry H. Bauer
  576      Haselhoff Responds to "Balls of Light: The Questionable        Eltjo H. Haselhoff
           Science of Crop Circles"
  580      Grassi, Cocheo, and Russo's Reply                              Francesco Grassi
                                                                          Claudio Cocheo
                                                                          Paolo Russo
  583    Comments on Dieter Gernert's Paper "Ockham's Razor and Its       Alan H. Batten
         Improper Use"
  585    Alvarado Comments on "The Function of Book Reviews in            Carlos S. Alvarado
         Anomalistics"
  587    Functions of Book Reviews: A Response to Carlos S. Alvarado      Gerd H. Hovelmann
  590    Concerning the Book Review for Not Even Wrong and The            Robert Davis
         Trouble with Physics
  590    Henry's Response to Davis                                        Richard Conn Henry
  592    A Proposed Idea                                                  Michael Levin
Review Essay
  595    AIDS, Cancer and Arthritis: A New Perspective                    Neville Hodgkinson
  607    Online Historical Materials about Psychic Phenomena              Carlos S. Alvarado
Book Reviews
  617    In the Grip of the Distant Universe: The Science of Inertia, by    Amitabha Ghosh
         Peter Graneau and Neal Graneau
  619    The Revenge of Gaia: Earth's Climate Crisis & the Fate of          Joel M. Kauffman
         Humanity, by James Lovelock
  622    The Mindful Universe: Quantum Mechanics and the Participat-        Imants Barušs
         ing Observer, by Henry P. Stapp
  624    Proceedings of the Third Psi Meeting: Real-life Implications and   Fátima Regina
         Applications of Psi (Atas do 3° Encontro Psi: Implicações e          Machado
         Aplicações da Psi), by Fábio E. Silva, Carlos F. Grubhifer,        Wellington Zangari
         Eliriam Brito, Sibele Pilato, Silviane Muniz, and Nadir M. Ganz
  628    From Shaman to Scientist: Essays on Humanity's Search for          Christine Simmonds-
         Spirits, by James Houran                                             Moore
  635    A Dark Muse: A History of the Occult, by Gary Lachman              Brenda Denzler
  637    The Culture of Fengshui in Korea: An Exploration of East           Michael Mak
         Asian Geomancy, by Hong-key Yoon
  639    Science, Society, and the Supermarket: The Opportunities and       Joel M . Kauffman
         Challenges of Nutrigenomics, by David Castle, Cheryl Cline,
         Abdallah S. Daar, Charoula Tsamis, and Peter A. Singer
  643    The Ascent of Humanity: The Age of Separation, the Age of          David Magnan
         Reunion, and the Convergence of Crises that Is Birthing the
         Transition, by Charles Eisenstein
  649    Articles of Interest
  654    Journal Review
  659     Errata
SSE News
  661    SSE Executive Committee, Officers, and Council
  662    Call for Nominations for the Tim Dinsdale Memorial Award for

Volume 21, Number 4

  663     Editorial                                                         Henry H. Bauer
  665  Inference and Scientific Exploration                                 Mikel Aickin
Research Article
  673    Synthesis of Biologically Important Precursors on Titan            Sam H. Abbas
                                                                            Dirk Schulze-Makuch
  689     Is the Psychokinetic Effect as Found with Binary Random           Wolfgang Helfrich
          Number Generators Suitable to Account for Mind-Brain
  707     Explorations in Precognitive Dreaming                             Dale E. Graff
Review Article
  723    Climate Change Reexamined                                          Joel M. Kauffman
  751     Franklin Wolff's Mathematical Resolution of Existential Issues    Imants Barušs
Letters to the Editor
  757      Letters to the Editor                                            Chris Barnes
                                                                            David Deming
Review Essay
  759    From Healing to Religiosity                                        Kevin W. Chen
Book Reviews
  769    Incompleteness: The Proof and Paradox of Kurt Gödel, by            G. J. Chaitin
         Rebecca Goldstein
  771    Irreducible Mind: Toward a Psychology for the 21st Century, by   Stephen E. Braude
         Edward F. Kelly, Emily Williams Kelly, Adam Crabtree, Alan
         Gauld, Michael Grosso, and Bruce Greyson
  778    Psychics, Sensitives and Somnambules: A Biographical Dic-          Carlos S. Alvarado
         tionary with Bibliographies, by Rodger I. Anderson
  781    La Ricerca Psichica: Fatti ed Evidenze degli Studi Parapsico-      Carlos S. Alvarado
         logici, by Massimo Biondi
  783    Portals: Opening Doorways to Other Realities through the           James McClenon
         Senses, by Lynne Hume
  787    The Survival of Human Consciousness: Essays on the                 Emily Williams Kelly
         Possibility of Life after Death, edited by Lance Storm and
         Michael A. Thalbourne
  793    Reflections on the Dawn of Consciousness: Julian Jaynes's          John Smythies
         Bicameral Mind Theory Revisited, edited by M. Kuijsten
  797    Captured! The Betty and Barney Hill UFO Experience, by             Stuart Appelle
         Stanton T. Friedman and Kathleen Marden
  801    The Science of Low Energy Nuclear Reaction: A Comprehen-           Dieter Britz
         sive Compilation of Evidence and Explanations about Cold
         Fusion, by Edmund Storms
  805    The Megalithic Monuments of Britain & Ireland, by Chris   Henry H. Bauer
  808    Unnatural Phenomena: A Guide to the Bizarre Wonders of    Barry Greenwood
         North America, by Jerome Clark
  812    Reviewer Acknowledgment
Indices for Volume 21
  813      Author Index
  819      Keyword and Review Index
SSE News
  823    27th Annual SSE Meeting Announcement
  824    SSE Executive Committee, Officers, and Council
    Journal of Scientific Exploration, Vol. 21, No. 1, pp. 1-2, 2007


    More than once over the last few years, I've expressed concern at the low rate of
    manuscript submissions as well as over the growing list of solicited pieces that fail to
    eventuate. Now, for the first time in half-a-dozen years, an extraordinary number of
    pieces have all reached publishable form at the same time, so that more than half of
    them have to be held over to a later issue. Truth to tell, quite a few of these manuscripts
    have gone through one or more revisions, involving delays that were not necessarily
    under editorial control. Nevertheless, I do tender sincere apologies to the authors of the
    held-back articles, for it is unavoidably up to the editor to choose which of the
    available articles to publish when.
       At any rate, this issue features an unusually broad range of topics.
       When it comes to delays in publication, the author to whom apologies are most due is
    Professor M. G. (Gwyn) Hocking, whose submission fell through some cracks during
    a period of re-organizing duties and files. Two reviewers had made some suggestions
    that the author promptly met, and the long delay in publication is entirely our fault. As to
    substance, several of my remarks in the last issue are pertinent here too, in particular that
    very few of the manuscripts we receive that suggest theoretical re-formulations of
    fundamental physics have been able to satisfy our reviewers; but Hocking's suggestions
    were judged technically sound as well as interesting. Still, I can't resist recalling another
    of the points from my last editorial, namely, that while finding equations to describe
    natural phenomena accurately is certainly a worthwhile achievement, and something
    that often spurs further advances in research, it does not amount to proving the
    ontological reality of the concepts on which those equations are based.
       Among the devices used at the PEAR lab to investigate machine-mind interactions,
    one that appealed particularly to many who had seen it, live or on video, was a little
    robot controlled ostensibly by the usual impersonal generator of random signals. I'm
    delighted that in this issue we can read a full description of those experiments with this
    appealing little machine.
       The paper by Sturrock and Spottiswoode is exemplary in several ways. One is the
    willingness to subject earlier findings to further and more rigorous analysis. It turns out
    that the originally suggested explanation was not fully supported, though the observed
    periodicity is plausibly understood if psychological states influence performance at
    anomalous cognition and if psychological states are themselves influenced by
    seasonal effects. But most exemplary is that here is another instance of what spurs
    major scientific discoveries-serendipity: the authors had not been looking for signs
    of lunar modulation of anomalous cognition, yet they found just such an indication.
       Rodriguez presents an intriguing approach to judging whether experiences in a state
    of altered consciousness, induced by a psychoactive substance, are purely subjective
    in origin or whether they indicate access to an actual alternate reality; commentaries
    from both reviewers underscore the high promise of Rodriguez's approach. In
a somewhat similar vein, though in a quite different context, Barušs describes an
    approach to judging the provenance of "electronic voice phenomena"; the preliminary
    results "could be due to chance, anomalous human-machine interaction between the
participants and the computer, or some other influences such as those arising from
possibly existent unseen dimensions of reality". Still another attempt to find objective
correlates of anomalous phenomena is described by Terhune, Ventola, and Houran,
through studying contextual variables at a haunt site; as noted in the addendum to the
paper, this attempt suffered from difficulties so often encountered in this type of work,
that the investigators cannot always command full and continuing cooperation from
those who report anomalous experiences.
   In addition to this rich and varied set of research contributions, there are three
disparate essays:
   Dieter Gernert, unsolicited, offered a discussion of Ockham's Razor that struck us
as right on the mark; it clearly exposes the pitfalls in applying this principle, pitfalls
into which self-styled "skeptics" (Marcello Truzzi's pseudo-skeptics) often seem
willingly to dive. I also thought that it complements nicely an essay on skepticism in
the previous issue.
   Gerd Hovelmann had shared with me a piece published in Zeitschrift für
Anomalistik, a German-language sister of the Journal of Scientific Exploration. I
wanted to share those insights with readers of this Journal, and I was very pleased that
Hovelmann has now prepared the slightly revised English version that appears in this
issue. Readers will scarcely overlook in what a favorable light this casts the sterling
efforts of our own Book Review Editor, David Moncrief.
   I had received the text of Hovelmann's essay when I was reading a decades-old book
that brought home one of the essay's points, perhaps its chief point: anomalistics
makes inordinately heavy demands on breadth as well as depth of knowledge. The
book I had been reading was The Pleasures of Deception (by Norman Moss; 1977,
Reader's Digest Press), which offers many examples of hoaxes, practical jokes,
deceptions in warfare, and impostors. Some of the tales are priceless, illustrations of
"truth stranger than fiction"; and it reminded me that all anomalists need to be aware of
the plain fact that people lie and deceive, for all sorts of reasons but also when there
seems to be absolutely no conceivable reason for it. Concerning psychic phenomena,
sightings of Loch Ness monsters, tracks of Bigfeet, and the like, proponents will
occasionally argue that a certain event just could not have been hoaxed and moreover
that no one had any reason to perpetrate such a thing. They should read this book. No
one should think themselves safe from apparently motive-less deceptions. Also worth
reading on this score are Curtis MacDougall's classic, Hoaxes (Macmillan, 1940,
reprinted by Dover in 1958), and Gordon Stein's Encyclopedia of Hoaxes (Gale
Research, 1993).
   The third essay in this issue is a written version of a talk I gave at the Paris meeting
of the Society for Scientific Exploration, in 2003; thus I find myself squarely in the
company of authors who would have liked their articles published more promptly.

   In JSE 20 #1, David Pratt had introduced readers to "Organized opposition to plate
tectonics: the New Concepts in Global Tectonics Group". That group now has
a website, www.ncgt.org/, where all past issues of the Newsletter are available as
PDFs, and there is information about forthcoming events and how to submit articles
and subscribe to the Newsletter (old issues are openly available, but new issues can
only be accessed via password).
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 3-11, 2007


                       Inference and Scientific Exploration

                                                MIKEL AICKIN
                                     University of Arizona, Tucson, AZ

It is obvious that inference pertains to measurements. But how good are the
measurements made in a particular experiment or set of observations? Scientists
try to construct their measurement processes to be as correct and accurate as
possible, by using scientific principles to design them, and the best technology
available. Often this is not enough, however, and there is still annoying
variability; two measurements of the "same" thing stubbornly refuse to agree.
As usual when there is variability, statistical methods come into play.

   The story starts with the idea that when we make a measurement of
something, we would really like to measure that thing-not to mismeasure it, or
to measure something else. In a perfect world we would be able to measure it
perfectly without any error. Sometimes, we do actually have a measurement
process that is so accurate we figure we can ignore any error. Even here,
however, the process often turns out to be so expensive that it is not really suited
for the kind of research that we can afford. In such a case, we try to devise
a simpler, cheaper measurement process, but then the question arises, how good
is it? The natural comparison is with the "perfect", expensive measurement, and
the natural statistical approach is to see how much agreement there is between
the two processes, when they are employed in the same measurement situations.
   For simplicity, in this column I will restrict the discussion to measurements
that are of the "Yes-No" variety. Often the idea is that some process is either
happening (Yes) or it isn't (No). In the medical world people either do, or do not,
have a certain disease or condition, so this example comes up a lot, but it also
comes up in other arenas. To compare the cheap measurement with the
expensive one, we first assemble measurement opportunities (people, in the
medical example) and classify them according to the expensive measurement.
We then further classify them according to the cheap measurement. This gives
us a 2 × 2 table, as seen in Table 1.
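As a concrete illustration of this cross-classification step (my own sketch, not from the column; the function name `cross_classify` and the toy Yes/No data are invented), the paired classifications can be tallied into the four cells like this:

```python
from collections import Counter

def cross_classify(expensive, cheap):
    """Tally paired Yes/No classifications into the cells of a 2 x 2 table.

    Returns a Counter keyed by (expensive_call, cheap_call), so the
    upper-left cell of Table 1 corresponds to the key ("No", "No").
    """
    if len(expensive) != len(cheap):
        raise ValueError("each measurement opportunity needs both classifications")
    return Counter(zip(expensive, cheap))

# Ten hypothetical measurement opportunities, classified by both tests.
exp_calls   = ["Yes", "Yes", "Yes", "No", "No", "No", "No", "Yes", "No", "No"]
cheap_calls = ["Yes", "Yes", "No",  "No", "No", "Yes", "No", "Yes", "No", "No"]
table = cross_classify(exp_calls, cheap_calls)
print(table[("Yes", "Yes")])  # 3 opportunities classified Yes by both tests
```

Dividing each cell count by the total number of opportunities gives the cell fractions that the parametrization below describes.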
   In an actual study, the fractions of a sample of people would be placed into the
appropriate cells. Thus, the upper-left cell would be the fraction of the sample
                                           TABLE 1
                  Conventional table for assessing validity of a Cheap test

                                         Expensive

Cheap                  No                       Yes                  Sum

No              (1 - π)q                 π(1 - p)
Yes             (1 - π)(1 - q)           πp                   (1 - π)(1 - q) + πp
Sum             1 - π                    π

that was classified as 'No' by both tests, and so on for the other cells. Here I have
put a parametrization of the cell fractions (or probabilities) into the table, in order
to explain the analysis. Statisticians try to parametrize observables in such a way
that the parameters have some interpretations, so there is some actual motivation
to estimate them. In terms of my parametrization, we see that the probability of an
Expensive-Yes is π (and so it is automatic that the probability of an Expensive-
No is 1 - π). The fraction of the Expensive-Yeses that are also Cheap-Yeses is p.
This is called the sensitivity of the cheap test. We want it to be high (as close to 1
as possible) since it represents those "true" 'Yes' situations that we will correctly
detect with the cheap test. Next, we look under the Expensive-No column and
count the fraction of that column who were Cheap-No. This is q, which is called
the specificity of the cheap test. We also want this to be as close to 1 as possible,
since it represents those "true" 'no' situations that we correctly detect with the
cheap test. An analysis like this is called a validity analysis. The idea is that the
expensive test is as close to the truth as we can expect to get, and to the extent that
the cheap test has high sensitivity and high specificity, it reproduces the results of
the expensive test, and is therefore valid. We can then be reasonably sure that it
measures whatever the expensive test does.
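A minimal sketch of the sensitivity and specificity estimates just described (the function name, the dictionary layout keyed by (expensive, cheap) calls, and the counts are my own illustrative choices, not from the column):

```python
def validity_estimates(counts):
    """Estimate sensitivity and specificity of the cheap test.

    counts[(e, c)] is the number of opportunities classified e by the
    expensive test and c by the cheap test.
    """
    expensive_yes = counts[("Yes", "Yes")] + counts[("Yes", "No")]
    expensive_no = counts[("No", "No")] + counts[("No", "Yes")]
    sens = counts[("Yes", "Yes")] / expensive_yes  # Cheap-Yes among Expensive-Yeses
    spec = counts[("No", "No")] / expensive_no     # Cheap-No among Expensive-Noes
    return sens, spec

# Hypothetical study: 100 Expensive-Yeses and 100 Expensive-Noes sampled.
counts = {("Yes", "Yes"): 95, ("Yes", "No"): 5,
          ("No", "Yes"): 10, ("No", "No"): 90}
sens, spec = validity_estimates(counts)
print(sens, spec)  # 0.95 0.9
```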
   The usual way of doing a validity analysis is to sample the Expensive-Noes and
give them the cheap test, then sample the Expensive-Yeses and give them the
cheap test. This allows us to estimate the sensitivity and specificity of the cheap
test, although it does not allow us to estimate π (because we decided how many
Expensive Yeses and Noes to use, rather than sampling them). There is, however,
a lurking problem. Suppose that someone has been found to be Cheap-Yes. Is
there much reason to believe that they are Expensive-Yes? If what we are trying
to detect is something threatening (like an incipient disease condition), then
clearly this is an important question. The presumption is usually that a highly
sensitive and highly specific test will give a good answer to this question.
   Numerically, the probability answer is to compute the fraction of all Cheap-
Yeses who are Expensive-Yeses. To do this, I showed the bottom row sum in the
above table, since this is the fraction of Cheap-Yeses. Then the fraction of
Expensive-Yeses among the Cheap-Yeses is

                         πp / [πp + (1 - π)(1 - q)]

                                          TABLE 2
                Conventional table for assessing the reliability of a Cheap test

                                          Cheap1

Cheap2                 No                       Yes

No                   agree                    disagree
Yes                  disagree                 agree

This is usually called the positive predictive value. Let's consider a cheap test
with sensitivity 0.95 and specificity 0.90. By conventional medical standards this
would be a very good test. For our first example, suppose that the condition we
are trying to detect happens in 50% of the population (this means that π = 0.50).
Then someone who is Cheap-Yes has a probability of 0.90 of being an
Expensive-Yes. For our second example, let us suppose the condition happens in
2% of the population. Then a person with a Cheap-Yes has probability 0.16 of
being an Expensive-Yes. It becomes obvious from this that whether the cheap test
is of any use for predicting the expensive test depends critically on the fraction of
the relevant population that has the condition. If the sensitivity dropped to 0.90
and the specificity to 0.80, then 0.16 would drop to 0.08. Numbers like this have
been cited, for example, to complain that while cheap tests for HIV positivity
may have satisfactorily high sensitivity and specificity, they may be of little use
for predicting actual HIV positivity when they are used for population screening
where HIV infection is rare. The important point to remember is that while the
sensitivity-specificity analysis is important, we must have a good estimate of π if
we want to use the cheap test to predict the expensive one.
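The arithmetic behind these examples is easy to check. Here is a minimal sketch (the function name and rounding are my own) of the positive predictive value πp / (πp + (1 - π)(1 - q)), reproducing the three numbers quoted above:

```python
def positive_predictive_value(sens, spec, prevalence):
    """P(Expensive-Yes | Cheap-Yes): pi*p / (pi*p + (1 - pi)*(1 - q)),
    where pi is the prevalence, p the sensitivity, and q the specificity."""
    cheap_yes = sens * prevalence + (1 - spec) * (1 - prevalence)
    return sens * prevalence / cheap_yes

# The three numerical examples from the text.
print(round(positive_predictive_value(0.95, 0.90, 0.50), 2))  # 0.9
print(round(positive_predictive_value(0.95, 0.90, 0.02), 2))  # 0.16
print(round(positive_predictive_value(0.90, 0.80, 0.02), 2))  # 0.08
```

The collapse from 0.90 to 0.16 comes entirely from changing the prevalence, which is why the estimate of π matters so much.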

Life being what it is, we often do not have the expensive test-there is simply no
way (within any reasonable cost) to make a near-perfect measurement. In these
cases we usually want to compare two cheap measurements, neither of which we
expect to be perfect. The question is still the extent to which they agree, but there
is now a symmetry, in that neither test is regarded as being better than the other.
Again we allow both tests to be applied to the same sequence of measurement
opportunities, and put counts in the following 2 × 2 table (Table 2).
   Here I have labelled the cells with the descriptors, whether they represent
agreements between the two tests or disagreements. Sensitivity and specificity
no longer have any meaning in this situation, because neither test is near-
perfect-they are both on the same footing. Agreement and disagreement are all
we can record. Obviously we want most of the cases to fall in the agreement
cells. This is called a reliability analysis. This means that to the extent that there
is agreement, we can rely on one test to give us the same answer as the other test.
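The raw agreement fraction described here can be computed directly from the four cell counts; a minimal sketch (the function name and the toy counts are my own, not from the column):

```python
def agreement(counts):
    """Fraction of opportunities on which the two cheap tests agree.

    counts[(a, b)] is the number of opportunities classified a by Cheap1
    and b by Cheap2; the (No, No) and (Yes, Yes) cells are the agreements.
    """
    total = sum(counts.values())
    return (counts[("No", "No")] + counts[("Yes", "Yes")]) / total

# Hypothetical counts for 100 opportunities measured by both cheap tests.
counts = {("No", "No"): 40, ("No", "Yes"): 6,
          ("Yes", "No"): 4, ("Yes", "Yes"): 50}
print(agreement(counts))  # 0.9
```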
   Reliability does not guarantee validity. In the validity case we had
a measurement process we trusted to measure the thing we wanted to measure.
                                       M. Aickin

                                        TABLE 3
                         A parametrization of the reliability table

                                                      Cheap1

Cheap2              No                                            Yes

No                  (1 − π)(1 − p)(1 − q) + π(1 − ρ)              (1 − π)p(1 − q)
Yes                 (1 − π)(1 − p)q                               (1 − π)pq + πρ

In the reliability case we do not, and so there is the possibility that both tests
pretty much measure the same thing (that is why they tend to agree), but it is not
(or not exactly) what we wanted to measure. Even though reliability is a weaker
concept than validity, it is still important to establish reliability, because if
a measurement process is not reliable then it is hard to see how it can be valid.
   Analysis of a reliability experiment is statistically much more difficult than
a validity experiment. Table 3 is a parametrization of the above table. I don't
claim that this is the only way to parametrize the reliability experiment, but I
will try to explain why it is sensible. The concept is that there are some
measurement situations (people, in my example) who are classified the same by
both cheap tests because they have some characteristic that the cheap tests
correctly detect. The π parameter represents this fraction. The remaining fraction
of the population (which is 1 − π) is essentially classified independently by
chance by the two tests. Under this latter circumstance, Cheap1 says Yes with
probability p, and Cheap2 says Yes with probability q; within the first group, the
common classification is Yes with probability ρ. (Note that p, q, and π in
the reliability case have nothing to do with the same symbols in the validity
case.) This conceptualization has the important consequence that those cases
classified as Yes by both tests consist of two groups: those classified the same by
chance ((1 − π)pq) and those classified the same for a good reason (πρ).
Likewise, those classified as No by both tests consist of the chance agreements
((1 − π)(1 − p)(1 − q)) and those classified the same for a good reason (π(1 − ρ)).
It should be obvious that π is the critical parameter, since it measures the
fraction of the population classified the same by both tests for good reasons,
rather than just by chance.
   This model has, however, a very considerable drawback. There are four
parameters (p, q, π, ρ). It looks like the above table gives us four pieces of
information for estimating them, but that is not true. The fractions (or probabilities)
in the cells must sum to 1, so that in fact there are only three free pieces of
information (pick three of the cell fractions, and the fourth is determined). The
model is, therefore, overparametrized. This means that for any four cell fractions,
there are many, many possible combinations of p, q, π, and ρ that will yield those
fractions. Thus, even if we knew the cell fractions exactly, we would not be able
to say anything about our parameters, and in particular, anything about the
all-important π.
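The overparametrization is easy to exhibit numerically. Here is a sketch (the parameter values are arbitrary illustrations) in which two quite different settings of (p, q, π, ρ) yield exactly the same four cell fractions:

```python
import math

def reliability_cells(pi, p, q, rho):
    # Cell fractions of the 2 x 2 reliability table: a fraction pi is
    # classified the same by both tests for a good reason (Yes with
    # probability rho); the remaining 1 - pi is classified independently
    # by chance (Cheap1 says Yes w.p. p, Cheap2 says Yes w.p. q).
    no_no   = (1 - pi) * (1 - p) * (1 - q) + pi * (1 - rho)
    no_yes  = (1 - pi) * p * (1 - q)        # Cheap1 Yes, Cheap2 No
    yes_no  = (1 - pi) * (1 - p) * q        # Cheap1 No, Cheap2 Yes
    yes_yes = (1 - pi) * p * q + pi * rho
    return (no_no, no_yes, yes_no, yes_yes)

a = reliability_cells(0.36, 0.5, 0.5, 7 / 9)
p2 = (1 + math.sqrt(0.2)) / 2
b = reliability_cells(0.20, p2, p2, 3 - 4 * p2)

assert abs(sum(a) - 1.0) < 1e-12                       # cells sum to 1: only 3 free numbers
assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))   # same table, very different parameters
```

Knowing the table exactly therefore tells us nothing about π by itself: here π = 0.36 and π = 0.20 are indistinguishable.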
   One way of looking at this is to identify the problem as the ρ parameter,
which enters the table only through
                          Inference and Scientific Exploration                       7

    the double Yes cell and the double No cell. If we somehow had external
    information about ρ, the other parameters could be estimated. This will turn out
    to be important below.

                                         Lab Tests
       One of the ways that these issues come into science is through laboratory
    tests. In biomedicine, where I work, we often want lab measurements on certain
    biomedical values that tell us something about the conditions of the people we
    are testing with various different treatments. One kind of reliability experiment
    involves having the lab analyze the same sample (such as aliquots of a single
    blood draw) multiple times, usually separated by days or weeks. We can then
    make the little 2 × 2 tables comparing the results at two different times.
    (Actually, there are extensions of this to higher-dimensional tables that consider
    all the results simultaneously, but I do not want to deal with them now.) If a lab
    cannot reliably reproduce its own analyses, then one can legitimately ask
    whether it is actually measuring anything, much less the thing it thinks it is
    measuring. The important point is that "reliably reproducing its results"
    ultimately boils down to 2 × 2 tables, as above.
       In the other situation the lab submits to a "round robin", in which samples
    that have been definitively classified by an external agency are tested in the lab.
    This is a validity analysis and the lab's sensitivity and specificity become the
    issue. Labs are compared against each other based on these two numbers.
       Lab-like situations often happen in social science settings. Here the concept is
    that each person has some trait, which means a characteristic that either does not
    change over time or that changes very slowly so that no meaningful change
    happens during our study period. The question then arises, how can we measure
    this trait? As one might expect, there will be several different ways to carry out
    the measurement, and the first issue is whether they are reliable: do they seem
    to be measuring anything? Unlike tissues in the laboratory, the process is more
    complex when humans fill out questionnaires. The tests have to be administered
    in some order, and then how can we not imagine that the person develops a style
    of response to this kind of test that evolves over time? This could happen even
    when we administer essentially the same test at widely separated times, since
    the person could be "learning how to take the test" in a way that again allows
    the responses to evolve, rather than remain constant. The lab analog is that the
    substance to be measured in blood may degrade over time, so that analyses at
    different times automatically show some disagreement. Thus, although the
    concept of reliability seems very basic, there can be considerable problems in
    implementing it practically.

                                 Microarray Analyses
       And that brings me to the topic that inspired this column. It was a note in
    Science (2006; 313, 1559) reporting the favorable results of the MicroArray

Quality Control (MAQC) project. Here is a thumbnail sketch of what genomic
microarrays do, phrased in my terms as a dilettante. The fundamental DNA
genomic material consists of a sequence of purine and pyrimidine bases attached
to a sugar backbone to hold them in sequence. There are four bases (A, C, T, G),
and certain sequences of these bases, if presented in the chemically rich milieu of
the cell interior, will cause the assembly of amino acids into peptides or proteins.
For fascinating reasons, the actual base sequences (3-base codons) on DNA that
specify amino acids are interspersed with sequences that are non-coding in
the sense that they do not specify any amino acid. When RNA copies DNA as the
first step in assembling proteins, these non-coding parts are snipped out of the
RNA. Thus, the messenger RNA contains the actual clear-coding of the genetic
intent. Researchers then compel the clear-coded RNA to produce DNA (called
cDNA, for complementary DNA) that has the correct sequences for making proteins.
   The next part is (to me, anyway) even more amazing. A specific well in
a microarray is constructed so that sequences of bases (A, C, T, G) are trapped in
the coating on the well. Because they can be put into sequence, they can be
constructed to represent known sequences of bases that occur in genes of
interest. If we then pour a tissue sample with messenger RNA (or, alternatively,
cDNA) sequences into the well, the natural affinity of the bases will cause some
of the RNA (or cDNA) sequences in the tissue sample to bond to ("hybridize
with") the sequences embedded in the well coating. The final step is, if we mark
the RNA (or cDNA) sequences with a fluorescent dye, then we can look at the
well, and depending on the amount of fluorescence, decide whether or not the
gene that the well was trying to detect was present in the tissue sample.
Microarrays simply have thousands of wells, each of which was constructed to
capture a specific base sequence, and the analysis essentially consists of pouring
the RNA (or cDNA)-rich tissue sample over all the wells and then trying to
make sense of the patterns of resulting fluorescence.
   About 20 years ago, when microarrays were on the horizon, this technology was
anticipated to revolutionize our way of understanding basic biological processes,
as well as give us powerful clinical tools for diagnosing and treating disease. This
enthusiasm has perhaps turned out to be somewhat exaggerated. While we want to
interpret each well in a microarray as being either Yes or No (presence or absence
of a particular kind of gene), the inevitable technical variability involved in the
process has made simple Yes/No decisions very difficult, due to the fact that they
ultimately depend on the quantification of a degree of fluorescence. So it is that
one might worry about whether different microarray chips, or different technological
approaches, whether manufactured by the same or by different companies, give
reliable measurements. That is what the MAQC set out to answer.
   In one of their primary papers, here is what MAQC did (Nature Biotechnology
2006; 24, 1115-1122). They looked at three different "platforms" (meaning
different chips, using somewhat different versions of what I have described
above). They presented to each "platform" a number of samples in duplicate.
Each sample had been determined to have a certain gene (by other means), and

                                           TABLE 4
                       Agreement data from the Nature Biotechnology article

    Type of chip         Detected twice           Non-detected twice          Number of genes


    so the issue was to see whether the microarray chips could detect it. Their final
    outcome was the number (or fraction) of genes that were detected by both chips
    or non-detected by both chips. Table 4 illustrates their results (the acronyms are
    from the source table).
       In effect, the MAQC project reported the "detected twice" fraction as the
    "detection sensitivity" (their term). These fractions are clearly very high, and
    that was the basis for the MAQC and Science reporting that these three
    microarray "platforms" are reliable.

                                  Microarray Reliability
       How can we assess the MAQC assertions in terms of the validity/reliability
    framework? The Science article seems to treat them as a validity analysis. The
    three "detected twice" fractions above are 0.860, 0.945, and 0.914, respectively.
    Evidently we are to regard these as sensitivities, and with the potential exception
    of TAQ they are quite impressive. Strictly speaking, they are sensitivities of the
    test that consists of using positive results from two chips to declare a Yes,
    whereas we might be more interested in the results for one chip, since one
    imagines that is what would usually be done, but of course this latter figure is
    certain to be higher than the already very impressive numbers cited.
       We know, however, that a validity analysis also requires an estimate of
    specificity. Since all of the MAQC samples contained the gene to be detected,
    only the Expensive-Yes column of my first table above was actually sampled.
    There is no information about specificity. This raises the following problem.
       Suppose that the TAQ chip gives a Yes 0.927 of the time. And let us suppose
    that if we repeat analyses on the TAQ chip, they are independent of each other.
    In other words, let us suppose that the TAQ chip is no better at detecting genes
    than the random flip of a coin that is rigged to come up Yes 92.7% of the time.
    Then what fraction of Yes agreements would we expect to see in two
    applications? The answer is 0.927² ≈ 0.860, precisely what the TAQ produced.
       Does this prove that the TAQ behaves like a biased random coin? Of course
    not. But the real problem is that it does not rule out this possibility either. It is
    a fundamental purpose of a scientific experiment to rule out alternatives,
    especially if they are embarrassing to the underlying theory. This MAQC
    experiment does not rule out an explanation that says TAQ is useless.
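The coin arithmetic can also be checked by simulation; a quick sketch (the 0.927 Yes rate and the independence of replicates are the suppositions made above):

```python
import random

random.seed(2007)
p_yes = 0.927        # the rigged coin's Yes rate, as supposed in the text
trials = 200_000

# two independent "replicates" per sample; agreement means both say Yes
both_yes = sum((random.random() < p_yes) and (random.random() < p_yes)
               for _ in range(trials))

frac = both_yes / trials
assert abs(frac - 0.927 ** 2) < 0.005   # close to 0.860, the TAQ figure
```

A chip that is literally a biased coin reproduces the reported agreement rate, which is exactly the alternative the experiment fails to exclude.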

   If there are no interpretable validity results here, then can we at least do
a reliability analysis on the two replications? For this we need to put the results
into my reliability 2 × 2 table above. As I have said, the problem with a sensible
parametrization of that table is that the parameters are indeterminate. But in the
MAQC data we know that all samples contained the genes to be detected, so that
the only agreements for good reason were duplicate Yeses, and this implies that
ρ = 1. It is now a matter of algebra to figure out an expression for π in terms of
the cell fractions. The result turns out to be

                            π = (A − PQ)/(1 − P − Q + A)

where P and Q are the fractions of Yeses in the two tests, A is the fraction of
duplicate Yeses, and 1 − P − Q + A is the fraction of duplicate Noes. Since the
MAQC report did not give the split on the number of disagreements between the
two replicates, we cannot actually compute P or Q. In order to make some
progress, I assume that the split was 50-50 (that is, the disagreements were split
equally between the two replicates). With this assumption, the computations of
π are as follows: TAQ, 0.79; GEX, 0.93; and QGN, 0.86.
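Under ρ = 1 the parametrization is identifiable, and the expression for π can be verified by a round trip: build the cell fractions from known parameters, then recover π. A sketch (the parameter values are arbitrary):

```python
def pi_hat(P, Q, A):
    # P, Q: marginal Yes fractions of the two replicates;
    # A: fraction of duplicate Yeses.  Valid when rho = 1, i.e. every
    # sample truly contains the gene, in which case 1 - P - Q + A is
    # the duplicate-No fraction.
    return (A - P * Q) / (1 - P - Q + A)

# round trip: generate the table from known (pi, p, q) with rho = 1 ...
pi, p, q = 0.8, 0.6, 0.7
P = (1 - pi) * p + pi
Q = (1 - pi) * q + pi
A = (1 - pi) * p * q + pi

# ... and recover pi exactly
assert abs(pi_hat(P, Q, A) - pi) < 1e-12
```

The formula works only because ρ = 1 removes the overparametrization discussed earlier; with ρ unknown it would not.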
   These are, of course, very good numbers. The first point is that they are not as
high as the "detection sensitivities" quoted by the MAQC. The second is that
they are "reliabilities" computed only in a population in which the genes are
always present, and so they are not true reliabilities. There is, therefore, no
information in these data about how these chips perform in a population which
(more realistically) has individuals who do and do not have the genes to be
detected. (We leave aside here the issue of whether the samples used in these
analyses are in any way representative of the kinds of genes that would be seen in
typical human studies.) Therefore, we have only part of the information needed to
assess the overall reliability of these chips in any particular population.
If we recall that ultimately microarrays will be used to determine the presence
of certain kinds of genes in tissues actually taken from humans, we have to come
back to the positive predictive value. If the chip says someone has the gene,
then what is the probability that they do have it? As we saw by example above,
the sensitivity and specificity have to be quite extraordinarily high in cases
where the gene is rare. There is another figure, the negative predictive value
(that someone does not have the gene if the chip says they do not), which suffers
from the same kind of issue. Thus, in a certain sense the very large (more than
130 investigators) and well-publicized (occupying nearly the whole of Nature
Biotechnology 2006; 24) MAQC study seems to have fallen rather short of
a justification for clinical use.

   As readers of this column know, it is not a venue for scientific or technology
assessments of substantive fields (other than inference). I do not mean to suggest

    that microarray analyses are somehow scientifically suspect or irredeemably
    unreliable. Indeed, interested readers should obtain the entire Nature Bio-
    technology issue (free on the Web) to see a far richer display and
    analysis of the data that underlie the simple Yes-No calls that I have analyzed.
    But as they do this, I would ask them to keep two points in mind. If the detailed
    fluorescent intensity measurements that are presented in these articles are so
    compelling, then why not use them directly, instead of trying to distill the results
    to Yes-No decisions? Secondly, why were no negative (gene not present)
    samples included in the MAQC study? My ultimate concern here is that this
    study will be cited for years as "proof" that microarrays are reliable (or valid),
    despite the fact that this study did not actually test either reliability or validity in
    any realistic sense.

       One of the first steps in developing a new area of science or technology is to
    establish the reliability and/or validity of its fundamental measures. Mistakes at
    this stage plague the worth of the new area throughout its subsequent growth.
    Frontier scientists can take a lesson: adherence to the fundamental principles of
    science and inference is essential. The MAQC example is an object lesson in
    what to avoid. Yes, it is conventional science, and Yes, it is very impressive
    technically, but No, in its assessment of reliability and validity it is not (yet) real
    clinical science.

       For those who like "switching paradoxes", here is a good one. You are
    presented with two identical sealed envelopes containing money. One contains
    twice as much money as the other. You choose one at random and open it,
    finding d dollars. You can either keep this amount, or open the other envelope, in
    which case you can only keep its amount. You reason that because you chose
    at random, the probability is 1/2 that the other envelope contains 2d dollars, and
    1/2 that it contains d/2 dollars. The expected number of dollars in the other
    envelope is therefore (1/2) × 2d + (1/2) × (d/2) = 5d/4. It is better to open the
    other envelope.
       There must be something wrong, because this says that no matter which
    envelope you open, you then prefer the other one. I think this puzzle illustrates
    a very important fundamental point about statistical inference. So, the task is to
    provide an explanation of what is going on, and I would only ask readers to
    contribute if they are willing to have their solutions appear in this column with
    their names.
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 13-26, 2007

        Linking String and Membrane Theory to Quantum
       Mechanics and Special Relativity Equations, Avoiding
               Any Special Relativity Assumptions

                                             M. G. Hocking
                          Materials Dept, Imperial College, London SW7 2AZ
                                   e-mail: m.hocking@imperial.ac.uk

      Abstract-M-brane quark string theory and the Supergravity theory require
      10 spatial dimensions. But if dimensions greater than 3 do exist, this must
      have important effects in other branches of physics, as quark theory cannot
      be compartmentalised-off. This paper shows how the concept of multi-
      dimensional space, essential to explain particle physics phenomena, removes
      conflicts between quantum theory and relativity. This leads, extremely simply,
      to both Schroedinger's Equation and to the Special Relativity equations in
      terms of absolute motion instead of assuming the 2 Principles of Relativity. The
      origin of the Big Bang provides an absolute spatial reference frame.
      Keywords: Schroedinger; relativity; M-brane; supergravity; physics;
                multiple dimensions

String theory and M-brane theory predict 10 or 11 dimensions but suggest that 7
are coiled up to a very small diameter so that we only perceive the remaining 3.
If it is assumed that the extreme temperature of the Big Bang prevented any
complexity of structure, then it is likely that at the beginning there was no
coiling and so matter initially was in 10-dimensional space. As temperatures
dropped, structuring became possible and on this model matter then evolved into
the lower dimensions. In this case, ordinary 3-dimensional (3-D) matter is
formed by an energy entering from the next higher 4th dimensional space. This
model leads to the equations of quantum mechanics and Special Relativity in 2
lines but without requiring either of the 2 Relativity Principles.
   About 90% of the matter in the universe is described as "missing", meaning
missing from 3-D space, but this could be because it is distributed among the
higher 7 dimensions, which gravity can access but not photons, electrons etc., so
it would be apparent only from gravity measurements and be missing from all
other observations.
   Schroedinger's Equation dψ/dt = (ih/4πm)[d²ψ/dx²] is functionally exactly
like Fick's 2nd Law of Diffusion dC/dt = (const.)[d²C/dx²], where C (concentration
of a diffusate) is replaced by ψ. This comparison suggests the following simple model.
   A hypothesis of Dirac (1962) is that an electron resembles a bubble, rather
than a point of matter, and this idea also accords with current membrane theories
of space. It is supported by other indications that space is not "empty" but is
filled with a continuous all-pervading background medium (Besant &
Leadbeater, 1994), in which bubble-like particles move. Their movement
through such a "space" (even in a vacuum) must then be by a diffusion process
and hence Fick's Laws of Diffusion would be expected to apply, and in fact
Fick's Law does appear, in the form of Schroedinger's Equation which is Fick's
2nd Law of Diffusion but with an imaginary diffusion coefficient. 3-D matter in
space would then be bubbles in the continuous medium of space, inflated by
containing an energy of creation (rest mass) welling-up from 4-dimensional
(4-D) space as mentioned above.

                           Schroedinger's Equation
   Individual pollen grains in air diffuse jerkily due to molecular kinetic motion.
Their diffusion follows Fick's 2nd Law of Diffusion, dC/dt = D[d²C/dx²]. But
there is no wavelike effect at all in the microscopically-observed diffusive
jumps. Fick's Equation (1855) is exactly similar to Schroedinger's Equation
(written 70 years later) which describes the motion of an elementary particle
through free space, except that the "diffusion constant, D" becomes imaginary.
Nature may be trying to tell us something here.
   Figure 1 shows a solution of Schroedinger's Equation for the motion of an
elementary particle in free space, and ψ is imaginary in between the points on
the trajectory where ψ = 1. This is like diffusive jumps but where the particle is
imaginary during its jump. An obvious interpretation of this imaginary feature is
that the particle may perform its diffusion jumps via a hidden dimension, the 4th
dimension, in which it is momentarily absent from 3-D space and hence is
"imaginary" during its jumps. This is an interpretation of Schroedinger's
Equation. In earlier years, before the advent of M-brane quark string theory,
which requires multi-dimensional space, many standard textbooks avoided this
problem by an (unjustified) assertion that "the particle must be somewhere" at
all times and they then square ψ to prevent it from being imaginary (nowhere in
3-D space).
   Pursuing the analogy between Fick's Law of Diffusion and Schroedinger's
Equation, assume that elementary particles move in a similar way as diffusive
jumps, but at their size, comparable to a 4th dimension's coiling-up size, there is
some accessibility to a 4th spatial dimension (thus appearing to us as
a "quantum-mechanical tunnelling"). An ad-hoc assumption of diffusive jumps
into 4-D is not required if the Zero-Point energy oscillations routinely involve
very frequent regular excursions into 4-D where motion is not restricted by the
3-D background medium-a continuous medium would trap bubbles (see

Fig. 1. Plot of Equation ii, ψ = exp[−2iπ{x/λ − tν}], or Equation iii, ψ = exp[−2iπ{(xmv/2h) − tE/
        h}]. ψ = 1 whenever the particle has made an integral number of jumps, n (of length λ),
        which is when its distance travelled = x = nλ, or when its time of travel = t = n/ν.
        (Axis ticks: x = 0, λ, 2λ, 3λ, 4λ; t = 0, 1/ν, 2/ν, 3/ν, 4/ν.)

Dirac's hypothesis mentioned above) static in 3-D, like tiny air bubbles are
trapped static in a block of ice (discussed later).
   It would be strange if the existence of higher spatial dimensions required by
string and membrane theories had no effect at all on fundamental physical
processes such as atomic-scale motion.
   So here now is a 3-line derivation of the Schroedinger Equation for motion in
"free space" of an atomic-size particle, which does not require any kind of
   A remarkable equation in pure mathematics (Euler's Equation) is:
                              exp[−2iπn] = 1 (n any integer),                        (i)
                              exp[−2iπ{x/λ − tν}] = ψ,                               (ii)
so that whenever the item in braces {} is an integer, then ψ = 1, but ψ is
otherwise imaginary. ψ is not a wave: see Figure 1. The choice of x/λ − tν for the
term in braces {} is explained as follows:
   x is the distance of a moving elementary particle along a free-space trajectory and t
       is its time along that trajectory.
   λ is the jump distance of the particle along its trajectory and ν is its jumping
       frequency (a diffusion-type model). So x/λ is an integer if the distance x is a whole
       multiple of λ. tν is the number of jumps in time t.

  Whenever x/λ and tν are both integers, the particle is at a jump halt and is
considered to be "present" (here in 3-D), but otherwise it is in transit and is
considered to be in a higher (4th) spatial dimension and thus not present
(imaginary) in our 3-D world.
  The difference of 2 integers is also an integer, so they can be conveniently
combined as in Equation ii above, to represent travel through both space
and time. Figure 1 plots this equation, showing ψ is unity where and when x/λ
and tν are both integers, but ψ is imaginary elsewhere, as required on the above
model.
  Finally, apply De Broglie's Equation to the x/λ term, and Planck's Equation to
the tν term, to get
                         exp[−2iπ{(xmv/2h) − tE/h}] = ψ,                          (iii)
which is a well-known solution of Schroedinger's Equation, where E is kinetic
energy only: Schroedinger's time equation is

                         dψ/dt = (ih/4πm)[d²ψ/dx²]

Cf. Fick's 2nd Law:

                         dC/dt = D[d²C/dx²]

derived 70 years earlier.
   As mentioned above, diffusion of pollen grains, or of ions jumping through a
lattice, has no wavelike character, so Schroedinger's Equation need not have either.
   Schroedinger's Equation gives correct results for atomic-scale phenomena and
so must form a part of any valid theory of Nature.
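The Fick analogy can also be checked symbolically. A sketch (assuming the SymPy library is available) confirming that the ψ of Equation ii satisfies a Fick-type equation dψ/dt = D[d²ψ/dx²] only with a purely imaginary "diffusion coefficient" D, which is the feature the analogy turns on:

```python
import sympy as sp

x, t, lam, nu = sp.symbols('x t lambda nu', positive=True)

# Equation ii: psi = exp[-2*i*pi*(x/lambda - nu*t)]
psi = sp.exp(-2 * sp.pi * sp.I * (x / lam - nu * t))

# Solve d psi/dt = D * d2 psi/dx2 for D, the effective diffusion coefficient
D = sp.simplify(sp.diff(psi, t) / sp.diff(psi, x, 2))

# D = -i * nu * lambda**2 / (2*pi): purely imaginary, with no real part
assert sp.simplify(D + sp.I * nu * lam**2 / (2 * sp.pi)) == 0
```

So an imaginary D is forced by the form of ψ itself, independent of the particular values of λ and ν.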

No Wave Function
   Thus the "diffusivity", D, of a moving elementary particle is imaginary,
meaning simply that it does not continuously exist in 3-D space. Prior to the
introduction of 10-D space by quark string theory, the imaginary values of ψ
embarrassed physicists, who only considered 3 dimensions and thus decided in
effect to square ψ to force it to be real and thereby artificially created "matter
waves". They called this process "normalising" ψ, and it compelled ψ to
conform with the then "world view" of what Nature was felt to be. This
understandable attitude at that time (that there are no higher dimensions) is very
well illustrated by many standard textbooks which assert that "the particle must
be somewhere", to "justify" effectively squaring ψ to prevent it from being
imaginary (nowhere in 3-D space)! This procedure discounts the possibility that
it actually could sometimes be nowhere in our 3-D space, if it oscillates or spins
in and out of 4-D space. This "normalising" approach artificially creates
a fractional probability (i.e., an uncertainty) that a particle is present at any given
location, which creates the notion that particles can somehow exist as waves and

   In diffusion through oxide layers, for example, quantum-mechanical
tunnelling allows electrons to reach the outer surface of the thin highly in-
sulating oxide film on aluminium and thus creates a billion volts/metre field,
which then drives further oxidation unless prevented (Moussa & Hocking,
2001). These electrons cannot have reached the outer surface of the alumina
layer by moving through the alumina, as there is no electronic conductivity.

     Special Relativity Equations Derived Assuming Absolute Motion:
              Rest Mass; Length and Time Dilation; E = mc2
    On the basis of the "Big Bang" theory with its residual microwave radiation,
it is concluded that there is an absolute reference point of origin (the "Big Bang"
site) in space. This negates the 1st Principle of Special Relativity, which denies
an absolute reference point in space. A Big Bang point of origin in 3-D space
would also be accessible in higher dimensional spaces.
    A 2-line derivation is given below, of the mass, time and length dilation
formulae of Special Relativity but without assuming any relativity.
    2-D space is not viable for the existence of life forms because the complex-
ity required for brain interconnections, digestive tracts, etc. requires 3-D. Simple
calculations show that electron orbitals in atoms would not be stable for
dimensions higher than 3, which makes only 3-D space uniquely suitable
for life:
    The electrostatic force falls off as the inverse square of distance in 3-D, but it
would fall off as the inverse cube in 4-D space (it would then be too weak to
bind electrons to their atoms). The inverse square arises simply because a given
flux through unit element of area on the surface of a 3-D sphere is spread out in
proportion to the square of the radius, as the area of a 3-D sphere is 4πr², but the
volume of the 4-D analogue of a sphere is proportional to r3.
    A 3-D elementary particle and derived particles like atoms and molecules
cannot make up a 4-D object, because they have no extension in the direction of
a 4th spatial dimension. So they (and any larger body they constitute) are thus
confined to 3-D space only and so cannot enter 4-D space, with the 1 very
localised exception described in the section on Rest Mass below, as part of
a very small amplitude oscillation. For a larger scale excursion into a 4th or
higher dimensions, the 7 orders of coiling-up of the 7 higher dimensions in 3-D
particles must be reduced by 1 order, each time the next higher dimension
is reached.

Rest Mass
  In 3-D space, elementary particles which constitute molecules, etc., are
proposed in the Introduction above as being like gas bubbles in a continuous
medium (Dirac, 1962; Besant & Leadbeater, 1994). However, a continuous
medium cannot be described as a "fluid" because a fluid is able to flow and thus
permits particles to move through it due to mobile atomic-size "holes" in it

(in the conventional well-known "hole theory" of fluid flow). E.g., a solid metal
does not flow; its viscosity is extremely large, but in the liquid state metals
contain a large proportion (about 10%) of "holes", which confers a very low
viscosity to them and they then flow very easily.
   An analogy is the common observation of a solid block of ice which has tiny
bubbles of air trapped in it; these bubbles are "locked up solid" and cannot
move at all.
   Thus it is proposed that 3-D elementary particles (bubbles) in the continuous
background medium can only have zero velocity in it. Actual physical
movement which is of course commonly observed in 3-D space can then be
postulated as occurring by the following mechanism, which is necessarily
similar to diffusion (being movement through a medium). This accords with the
identical functional forms of Fick's 2nd Law of Diffusion and Schroedinger's
Equation.
   If 3-D space consists of a 3-D continuous background medium (Besant &
Leadbeater, 1994) as explained above (cf. air bubbles in block of ice model) an
elementary particle (bubble) would be unable to move in any of the 3-D
directions. But if it were able to jump out as part of an oscillation into a higher
spatial dimension where there is no such continuous medium, it could then move
and then land back in the 3-D space medium in a different place.
   An elementary particle might be rotating and vibrating continuously (even if
at rest in 3-D space) in a path which takes it continuously in and out of the 4th
dimension (an effect similar to zitterbewegung). "Zero-Point Energy" means
that even at zero degrees Kelvin "rest", a particle is still oscillating incessantly
(called "zitterbewegung", Ger. "trembling"). If the energy (welling-up from
a 4th spatial dimension) creating the 3-D bubble has a characteristic velocity of
c, then an observed average velocity v through the 3-D medium would consist of
periods at zero velocity in 3-D (due to its very large viscosity) alternating with
jumps at velocity c in 4-D. A characteristic velocity of c is not extraordinary-
e.g., a photon in free space has only got this 1 velocity, c, the velocity of light.
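The alternation described above fixes the time-sharing between the two states: if a fraction v/c of the time is spent jumping at c and the remainder at zero velocity, the time-averaged velocity is v. A minimal numeric sketch (the pause and jump durations below are illustrative, not values from the paper):

```python
C = 3.0e8  # speed of light, m/s

def average_velocity(t_pause, t_jump):
    """Observed velocity of a particle alternating pauses at v = 0 with
    jumps at v = c: distance covered per cycle divided by cycle time."""
    distance_per_cycle = C * t_jump      # motion occurs only during jumps
    time_per_cycle = t_pause + t_jump
    return distance_per_cycle / time_per_cycle

# A particle jumping 1% of the time is observed moving at about 0.01 c:
v = average_velocity(t_pause=99e-15, t_jump=1e-15)
print(v / C)  # ~0.01
```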
   Jumps into the next higher dimension would only be possible for elementary
particles as the amplitude of an excursion into 4-D space would be limited to the
very small diameter of the coiled-up 4th dimension for 3-D particles, and not
available to large bodies, and it is called "quantum-mechanical tunnelling" in
physics but not yet interpreted as involving jumps into 4-D space. If there are
higher dimensions, it would be very odd if they were not involved at all in
atomic-size processes. They cannot just apply to quark physics and nothing else.
   Such a model leads immediately to Schroedinger's time and distance equations
(for a case with zero potential energy), as shown above. It also provides a theory
of rest mass, and leads to the same experimentally verified equations of Special
Relativity but for absolute motion. The derivation is far simpler than that from
Special Relativity. This absolute motion derivation uses the assumption of quark
string physics that there are more than 3 dimensions in space.
   The mass, length and time dilation equations are easily obtained immediately
                                      M. G. Hocking

                            Fig. 2.   (m₀c)² + (mv)² = (mc)².

by solving a Pythagorean triangle with sides m₀c, mv and the resultant mc
(Figure 2).
   m₀c can be regarded as the momentum of creation of a particle at rest in 3-D
space, due to an energy welling-up from the direction of a 4th spatial dimension
(which is at right-angles to any 3-D direction); m₀ is the rest mass of the
resulting stationary particle in 3-D space, which this force creates. If the particle
is then made to move in 3-D space, by giving it momentum in a direction in 3-D
space, it will then have an extra momentum mv (see Figure 2), at right-angles to
its rest-mass (4-D) momentum-of-creation vector, where m is its mass and v is its
observed velocity in 3-D space. The resultant total momentum content of the
particle due to these 2 momenta is mc (see Figure 2), m being the dilated
(increased) mass of the particle due to incorporation of its extra energy of
motion in 1 of the 3-D directions (this is additional to its rest-mass energy
welling-up from the 4th dimension). The momentum of creation must be at 90°
to any momentum of motion in 3-D, because the 4th dimension direction by
definition is at 90° to all 3-D directions; hence the Pythagoras triangle in
Figure 2.
   So, from Figure 2:

                              (m₀c)² + (mv)² = (mc)²,

which rearranges to:

                              m = m₀/√(1 − v²/c²).

This is the well-known and experimentally verified "relativistic" mass dilation
formula but has been derived above for absolute motion in only 2 lines and
without assuming the 2 principles of Special Relativity.
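The mass dilation formula can be checked numerically against the momentum triangle of Figure 2; a minimal sketch (units with c = 1 for convenience):

```python
import math

def dilated_mass(m0, v, c=1.0):
    """Mass dilation m = m0 / sqrt(1 - v^2/c^2), as derived from Figure 2."""
    return m0 / math.sqrt(1.0 - (v / c) ** 2)

# The momentum triangle (m0*c)^2 + (m*v)^2 = (m*c)^2 closes exactly:
m0, v, c = 1.0, 0.6, 1.0
m = dilated_mass(m0, v, c)
lhs = (m0 * c) ** 2 + (m * v) ** 2
rhs = (m * c) ** 2
print(m, lhs, rhs)  # m ~ 1.25; lhs and rhs both ~ 1.5625
```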

Time Dilation
  Time dilation will also occur, because when a particle (e.g., a meson) is
jumping in the 4th dimension, its internal decay processes will be frozen for the
duration of that jump and so its lifetime will be extended. The well-known time

                               Fig. 3.   t² = t₀² + t_OUT².

dilation formula can then also be obtained immediately, as above, from
a Pythagorean triangle (Figure 3) with sides t₀, t_OUT and t, as explained below.
   To explain this, pursuing the analogy with diffusion, assume that the motion
of an elementary particle occurs by very short jumps alternating with longer
stationary periods, thus allowing any observed overall velocity to be made up,
modelled on the conventional mechanism of diffusive jumps of atoms or ions
through a lattice, from site to site.
   There are only 2 velocities possible, zero for the periods at rest in the 3-D
world, and c for the periods when the energy constituting the particle is moving
in 4-D space. Any actually observed overall velocity, v (0 < v < c), is then made
up of rapidly alternating combinations of these 2 values. The moving particle
travels in a series of very small jumps each of which is at velocity c, separated
by a series of short pauses at velocity zero (analogous to the movement of the
frames of a film strip), so that the overall actually observed velocity is appar-
ently v. This 4-D jumping model is consistent with the explanation given of the
imaginary values of ψ given above.
   A moving atomic-size particle is thus a "particle" when stationary and may
appear to be an apparent "wave" (a non-particle) when jumping. Light photons
alternately jump a distance λ in λ/c seconds followed by a stationary
instantaneous wait or appearance. It is thought that photons (unlike gravitons)
cannot move appreciably away into 4-D and so are bound to continually
intersect our 3-D world.
   Let the total stationary time (spent residing at successive positions) be t_IN and
the total transition time (spent in jumping between these positions) be t_OUT. The
inactive stationary time elapsing between jumps (t_IN) can have any value (0 <
t_IN < ∞). t_OUT is the time taken for a transition or jump between residences, and
represents a non-material (non-particle, apparently wavelike) condition in
between the physical sites at which the moving particle successively resides. It
means that there is no physical movement at all and that all actual movement
occurs during the time when the particle is in 4-D, by a series of non-material
(non-3-D) jumps. It is somewhat analogous to the conventional diffusion
mechanism for an atom or ion diffusing between fixed lattice sites. If Zero-Point
energy involves continuous vibration, or rotation, into and out of 4-D, then this
process is facilitated by that and does not need a separate ad-hoc mechanism
for it.
   Consider now the motion of a mechanical clock which contains a balance
wheel. On the proposed theory, the balance wheel (=B) jumps have their specific
discrete B activations, but when the clock (clock = C) as whole is also set in
motion, specific discrete C activations will occur additionally. Any activation
effect which becomes due, to cause an imminent balance wheel (B) jump, during
the course of a clock (C) jump would be inoperative, as the clock is "frozen",
already engaged in a jump, and so its balance wheel cannot also simultaneously
move then. Consider the clock to be moving much faster than the balance wheel
rotations. Then the balance wheel (B) jump frequency is comparatively very low
and those B jumps which arise during a regular C jump will be lost. The
consequent loss of some B jumps will (in effect) slow down the balance wheel.
Consider now the motion of a clock whose tick-tick period is t₀ at rest, which
corresponds to t_IN as defined above. Let this clock travel with a constant overall
velocity v and record the passage of 1 tick-tick time period t₀ during its travel
through a certain distance s. The total C jumping time (at velocity c), which is
non-material (being in 4-D) and is not sensed or recorded by the clock (by B
jumps, as explained above), is t_OUT, where

                              t_OUT = s/c = vt/c.                          (iv)

A stationary observer would have a total time t in Equation iv above, elapsed on
his watch, as being the time taken for the moving clock to travel the distance s.
Now, t > t₀ (or t_IN) due to the additional time t_OUT taken for the journey, noticed
only by the stationary observer, which must be added to t₀. This addition must be
vectorial, because as the moving clock does not sense or record t_OUT there is no
break (in its sensation of time) at which t_OUT can be added in a scalar manner.
t_OUT and t_IN have no component in common and must thus be added as vectors at
right-angles (Figure 3). This gives

                              t² = t₀² + t_OUT²                            (v)

by Pythagoras' Theorem, or

                              t_OUT² = t² − t₀².

Substituting t_OUT from Equation iv above,

                              t = t₀/√(1 − v²/c²),

which is the well-known Time Dilation formula of Special Relativity, but all the
assumptions of Special Relativity are avoided. This equation has been well-
verified experimentally, e.g., by the increased lifetimes of decaying mesons
which are moving very fast, compared with slow-moving mesons.
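The jump-model arithmetic can be verified directly: with t_OUT = vt/c, the vector sum t² = t₀² + t_OUT² closes exactly when t = t₀/√(1 − v²/c²). A minimal check (the 2.2 μs figure is the familiar muon rest lifetime, used here only as an illustrative t₀):

```python
import math

def dilated_time(t0, v, c=1.0):
    """Time dilation t = t0 / sqrt(1 - v^2/c^2), from t^2 = t0^2 + t_out^2
    with the jump time t_out = v*t/c (Equation iv)."""
    return t0 / math.sqrt(1.0 - (v / c) ** 2)

t0, v, c = 2.2e-6, 0.98, 1.0    # muon-like rest lifetime, at v = 0.98 c
t = dilated_time(t0, v, c)
t_out = v * t / c               # total (4-D) jump time during the journey
print(t, math.isclose(t**2, t0**2 + t_out**2))  # dilated lifetime; True
```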
  The time dilation formula can also lead to an alternative derivation of the
mass dilation formula, already derived otherwise, above.
Fitzgerald-Lorentz Length Contraction
   Similarly, the length of a moving body will contract (only in the direction of
travel) due to the interatomic cohesive forces pulling in its length across planes
of jumps when it is in 4-D space (where it is not affected by 3-D electrostatic
cohesive physical forces; only gravity can enter 4-D space and gravity is not
involved in cohesive forces).
   A similar Pythagorean triangle gives the well-known length contraction
equation. The length of a moving object is proportional to the number of moving
elements materially present ("IN") in it along any given line in the direction of
motion. The term "moving element" merely refers to an elementary particle of
the moving object. Along any such line through the object, some of its moving
elements will be jumping ("OUT") and thus materially absent from the object.
At a steady velocity there will be a steady proportion of moving elements thus
missing, and a consequent shrinkage of the length of the object in the direction
of its motion (due to the attractive forces of cohesion acting across the OUT
gaps). Planes of OUT gaps (analogous to vacancies) would be expected to sweep
through the object (which is not imagined to jump all at once, but as individual
particles or moving elements) in the direction opposite to that of the motion; the
planes of moving elements would be set perpendicular to the direction of
motion; thus there is no reason for shrinkage of the object in other directions
than that of the motion. Consider now a moving object, of rest length L₀
measured in the direction of its motion. At rest,
                               L = L₀ and t_IN = t.
The number of planes (perpendicular to the direction of motion) of moving
elements which are materially present (IN), at velocity v, is

                              n = n₀(t_IN/t),

where n₀ is the number of such planes present at rest (for which state t_IN = t).

                              n₀ ∝ L₀ and n ∝ L,

where n and L are number and length respectively, at a steady velocity v.
   Thus, from n = n₀(t_IN/t) above, we have:

                              L = L₀(t_IN/t),

using Equation v above, so

                              L = L₀√(1 − t_OUT²/t²),

and then using Equation iv above we obtain:

                              L = L₀√(1 − v²/c²),

which is the Fitzgerald-Lorentz length contraction.
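As a numerical cross-check, the chain L = L₀(t_IN/t), with t_IN = √(t² − t_OUT²) and t_OUT = vt/c, does collapse to the standard contraction factor:

```python
import math

def contracted_length(L0, v, c=1.0):
    """Length contraction via the jump model: L = L0 * (t_in / t)."""
    t = 1.0                              # any total observed time will do
    t_out = v * t / c                    # jump time (Equation iv)
    t_in = math.sqrt(t**2 - t_out**2)    # materially-present time (Equation v)
    return L0 * t_in / t

# Matches L0 * sqrt(1 - v^2/c^2): a 10-unit rod at 0.8 c contracts to ~6 units.
print(contracted_length(10.0, 0.8))
```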
   An alternative approach also follows from the assumption that when an object is
travelling, some of the elementary particles constituting it are engaged in a jump in
4-D and are thus "missing" from the 3-D object, as suggested by the interpretation
of Schroedinger's Equation given earlier. Consider the number of elementary
particles in a line in its direction of travel to be n₀ at rest and n at velocity v, where
n < n₀ as some of them are jumping. n and n₀ are their numbers in 3-D space.
   As mass is conserved, n₀m₀ = nm (where m is the enhanced mass at velocity v,
given by m = m₀/√(1 − v²/c²)).
   Then, as n₀ ∝ L₀ and n ∝ L, for a line in the direction of motion of the object,
L₀m₀ = Lm and so L = L₀√(1 − v²/c²).
   This is the Fitzgerald-Lorentz length contraction.

E = mc2 Derivation
  The well-known Relativity equation E = mc² can also easily be obtained (for
absolute motion), by elementary algebra: From the Pythagoras triangle of the
Rest-Mass section above,

                               (m₀c)² = (mc)² − (mv)².

(See Figure 2). Take differentials:

                          0 = 2c²m dm − 2mv² dm − 2m²v dv.

Divide both sides by 2m:

                               c² dm = v² dm + mv dv.                      (vi)
By definition, force is rate of change of momentum, so

                          F = d(mv)/dt = m(dv/dt) + v(dm/dt).

By definition, a force is also an energy field or gradient, dE/ds, and velocity v =
ds/dt, where s is distance. So

              dE = F ds = m(dv/dt)ds + v(dm/dt)ds = mv dv + v² dm.
Compare this with Equation vi above:

                                      dE = c² dm,

so, integrating,

                                        E = mc²

(Einstein's Equation). The integration constant is zero, as E = 0 when m = 0.
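The differential bookkeeping can also be tested numerically: integrating dE = F ds = v d(mv), with m = m₀/√(1 − v²/c²), yields exactly (m − m₀)c² of kinetic energy, consistent with E = mc². A sketch (crude right-endpoint integration, with c = m₀ = 1):

```python
import math

def kinetic_energy(v_final, m0=1.0, c=1.0, steps=200_000):
    """Numerically integrate dE = F ds = v d(mv) from 0 up to v_final,
    using the dilated mass m = m0 / sqrt(1 - v^2/c^2)."""
    E = 0.0
    p_prev = 0.0
    for i in range(1, steps + 1):
        v = v_final * i / steps
        p = m0 * v / math.sqrt(1.0 - (v / c) ** 2)  # relativistic momentum mv
        E += v * (p - p_prev)                       # dE = v dp
        p_prev = p
    return E

v = 0.6
work_done = kinetic_energy(v)
mass_gain_energy = 1.0 / math.sqrt(1.0 - v**2) - 1.0  # (m - m0) c^2
print(work_done, mass_gain_energy)  # both ~0.25: energy gained = (delta m) c^2
```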

Heisenberg's Uncertainty Principle
  Heisenberg's Uncertainty Principle takes on a new meaning: a moving particle
will actually spend most of its time at rest (punctuated by very short times at c),
but its experimentally observed velocity is measured as v, and so a measure of
the uncertainty in its velocity at any instant will be v - 0 = v. (This uncertainty
depends on exactly when an observation is made and so is in the mind or control
of the observer and is not a property of the particle.) From de Broglie's Equation,
mv is proportional to h/λ, and so the Uncertainty Principle becomes an
expression of de Broglie's Equation if λ is interpreted as the moving particle's
smallest jump length on the above diffusion model for motion.
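For scale, the "smallest jump length" this reading assigns to a moving particle is just its de Broglie wavelength, λ = h/(mv); a quick evaluation (the electron speed is an illustrative choice):

```python
H = 6.626e-34    # Planck's constant, J s
M_E = 9.109e-31  # electron rest mass, kg

def jump_length(m, v):
    """de Broglie wavelength lambda = h / (m v), read here as the
    smallest jump length of the moving particle."""
    return H / (m * v)

# An electron at 1.0e6 m/s: lambda ~ 0.73 nm, i.e. atomic-scale jumps.
print(jump_length(M_E, 1.0e6))
```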

   An object in 3-D requires a rotation of 360° to return it to its original position,
but a bizarre 720° of rotation (not just 360°!) is required to bring fermions
("spin-½" particles, such as protons) back to their original state. This is easily
explained as follows, on the above model:
   For clarity, a 2-D/3-D analogue will be used, instead of 3-D/4-D. If
a lowercase letter "d" is lifted out of its 2-D paper sheet and turned over in 3-D
space and then put back as a "b", then this would appear to a 2-D inhabitant to
be a d ↔ b vibration with only its antinodes (d & b) being visible. If this
d ↔ b vibration is analogous to Zero-Point Energy vibration, then if the "d" is
also spinning in 2-D (rotating within the plane of the paper), then after 360° of
spin in 2-D it could have simultaneously rotated to a "b" by the 3-D rotation,
which means that the 360° spin in 2-D did not return the "d" back to its initial
state and that a further 360° of 2-D spin is needed, by which time the "b" would
have rotated back to a "d" in its simultaneous 3-D rotation. Thus a bizarre (to
a 2-D observer) 720° of spin is required for a "d" spinning in 2-D space to
return to its original "d" state.
   With this preamble, for our case in 3-D space, an observed (in 3-D) rotation of
720° is needed to return a proton to its original state, which can easily be
explained analogously to the example above. In 3-D to 4-D terms, this means
that (to give an analogy) a tennis ball spins in 3-D and 4-D simultaneously, but
after 360° of observed (in 3-D) rotation the ball would be everted (i.e., having its
fluffy side inside and smooth side outside, without loss of the gas pressure which
it contains) by the simultaneous 4-D rotation, and so clearly a further 360° of
observed (in 3-D) rotation would be needed for it to return (by further 4-D
rotation) to its original state with the fluffy side outside, making a total of 720°!
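In standard quantum mechanics this 720° periodicity appears as the double-valuedness of the spin-½ rotation operator: a 2π rotation multiplies the state by −1, and only 4π restores it. A minimal numeric check (phase acquired by a spin-up state under rotation about z):

```python
import cmath

def spin_half_phase(theta):
    """Rotation of a spin-1/2 (fermion) state about z: the spin-up
    component acquires the phase exp(-i*theta/2)."""
    return cmath.exp(-1j * theta / 2.0)

full_turn = 2.0 * cmath.pi
print(spin_half_phase(full_turn))      # ~ -1: a 360-degree turn negates the state
print(spin_half_phase(2 * full_turn))  # ~ +1: only 720 degrees restores it
```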
   This can only be understood in terms of the existence of 4-D space, and it
happens routinely for elementary particles, which have access to 4-D space.
   Note. A rotation in 3-D could only be perceived by a (hypothetical) 2-D
observer as a vibration (like Zero-Point Energy). And a rotation in 4-D could
only be perceived by us (in our 3-D world) as a vibration (Zero-Point Energy).
   Access of large objects to 4-D space is problematical. Eversion of tennis balls
has been reported anecdotally, which is, of course, not scientifically acceptable,
but there is a report in Nature by Hasted et al. (1975) of a refractory crystal of
vanadium carbide being removed from a sealed tube in laboratory conditions,
without any contact being made with the tube, which could only be feasible by
transfer out via 4-D space.

References

Besant, A., & Leadbeater, C. W. (1994). Space and Particles. http://www.4-D.0rg.uk. Accessed
    November 2006.
Cottrell, A. H. (1960). Theoretical Structural Metallurgy. Arnold.
Dirac, P. A. M. (1962). The Conditions for a Quantum Field Theory to Be Relativistic. Proceedings of
    the Royal Society (London), Series A, 268, 57; see also Stedile, E. (2004). Quantum Aspects of
    the Fundamental Dirac Membrane Model. International Journal of Theoretical Physics, 43, 385.
Feynman, R. P., Leighton, R. B., & Sands, M. (1966). The Feynman Lectures on Physics (Vol. 3).
    Addison-Wesley.
Hasted, J. B., Bohm, D. J., Bastin, E. W., O'Regan, P., & Taylor, J. G. (1975). Recent Research at
    Birkbeck College, University of London. Nature, 254, 470.
Margenau, H., & Murphy, G. M. (1961). The Mathematics of Physics & Chemistry. Van Nostrand.
Moelwyn-Hughes, E. A. (1961). Physical Chemistry (chapter "Mathematical Formulation of the
    Quantum Theory"). Pergamon Press.
Moore, W. J. (1962). Physical Chemistry. Longmans.
Moussa, S., & Hocking, M. G. (2001). Photo-inhibition of Localized Corrosion of Stainless Steel in
    NaCl Solution. Corrosion Science, 43, 2037.
Schroedinger, E. (1926). Quantisation as a Problem of Eigenvalues. Annalen der Physik, 79, 372; and
    Schroedinger, E. (1926). Quantisation as a Problem of Eigenvalues. Annalen der Physik, 81, 135.
    [Quoted by M. Jammer in The Conceptual Development of Quantum Mechanics, pp. 372, 267,
    McGraw-Hill (1966).]
    Journal of Scientific Exploration, Vol. 21, No. 1, pp. 27-46, 2007

        Response of an REG-Driven Robot to Operator Intention

                    R. G. JAHN,* B. J. DUNNE, D. J. ACUNZO, AND E. S. HOEGER
                            Princeton Engineering Anomalies Research Laboratory
                                  School of Engineering and Applied Science
                                Princeton University, Princeton NJ 08544-5263

          Abstract-A small articulating robot driven by an on-board miniaturized
          random event generator (REG) executes two-dimensional stochastic motion on
          a circular platform. Human operators attempt, under pre-recorded intentions, to
          influence the device to reach particular exit positions around the table edge, or
          to remain in motion on the table for longer, or shorter, time periods, or to cover
          longer, or shorter, overall distances than characterize a large body of unattended
          calibration data. An overhead camera system tracks the robot trajectories and
          transmits them to database storage for subsequent analysis. Each of several
          protocols yields overall results that clearly separate in the directions of operator
          intention, with effect sizes comparable to those found in many other REG-
          based experiments. Although the databases are not sufficiently large to drive all
           of these to statistical significance by the usual p < .05 criterion, certain
          operator subsets, most notably the females, the groups, and a few individuals,
          display more noteworthy performances. The consistency of this structural
          pattern of results with those obtained previously using substantially different
          equipment and protocols reinforces a generic character of such phenomena that
          eventually may lead to a useful comprehensive model for their representation
          and possible pragmatic applications.
           Keywords: consciousness; human/machine anomalies; robot; random event
                     generator (REG)

                                    1. Background and Introduction
    As described in greater detail in a major review article,¹ a large portion of the
    PEAR experimental agenda has entailed a variety of human/machine
    interactions featuring feedback modalities designed to enhance emotional
    resonance between the operators and the target devices. These have included
1   such diverse displays as a cascade of balls through a matrix of scattering pins,
    a large free-swinging pendulum, an upward bubbling water jet, competing visual
    images on a computer screen, and acoustical beats from a Native American
    drum, among others. For the particular experiment reviewed herein, a small
    mechanical robot driven by a miniaturized on-board REG was designed,

       * Correspondence address: Mechanical and Aerospace Engineering, Engineering Quadrangle,
    Princeton University, Princeton NJ 08544
constructed, and deployed to execute two-dimensional stochastic motion on
a circular table-top.
   The concept for this experiment was stimulated by a succession of collegial
interactions with a French scholar, René Péoc'h, who himself had appropriated
robotic inventions of two colleagues, P. Janin and R. Tanguy, termed
"tychoscopes," for experiments involving young chicks and rabbits. From these
studies, Péoc'h established the capacity of these animals to affect the trajectory
of the robot to their biological advantage, by some anomalous means.
   Our extension of these techniques to experiments with human operators has
addressed the hypothesis that an anthropomorphic resonance with the behavior
of such a robot would enhance anomalous alterations of its random trajectory,
with corresponding departures of the digital output of the REG unit directing it.
The on-board mechanism driving the device has evolved empirically over the
course of these experiments to correct various operational difficulties, such as
wheel slippage, battery drain, initial alignment, etc., eventually reaching the
form detailed in the Appendix. Briefly, it comprises two independent, battery-
powered clock motors, each controlling one wheel of the robot. These in turn are
instructed by the REG unit to drive the wheels by a sequence of various
incremental amounts, thereby accomplishing a random array of forward
translations and clockwise or counter-clockwise rotations of the vehicle on
a 48-inch-diameter circular platform. From a set initial position and direction at
the center of the table, the device executes a two-dimensional stochastic
trajectory, eventually reaching the table edge. The equipment is deployed in one
of our principal experimental rooms, with the operators seated adjacent to the
table, but having no contact with it (Figure 1). To enhance its whimsical
attractiveness for the experimental operators, the electrical and mechanical
components of the robot are encased in a 15-cm dome-shaped housing somewhat
resembling a miniature Zamboni machine, with a toy frog perched in a driving
position (Figure 2).
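The robot's motion as described (random sequences of forward translations and clockwise or counter-clockwise rotations from the center of a 48-inch platform, until the edge is reached) can be sketched as a simulation; the step and turn sizes below are illustrative placeholders, not the actual REG increment values:

```python
import math
import random

def simulate_run(step=1.0, turn=math.pi / 6.0, radius=24.0, seed=None):
    """Random walk of a two-wheel robot from the platform center: each
    cycle a random bit selects a CW or CCW rotation, then the robot
    advances one step, until the 24-inch table radius is crossed.
    Returns (exit_angle_degrees, path_length)."""
    rng = random.Random(seed)
    x = y = heading = path = 0.0
    while math.hypot(x, y) < radius:
        heading += rng.choice((-turn, turn))  # stand-in for the REG output
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path += step
    return math.degrees(math.atan2(y, x)) % 360.0, path

angle, dist = simulate_run(seed=42)
print(angle, dist)  # one stochastic trajectory's exit angle and path length
```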

                             Fig. 2.   Close-up of PEAR Robot.

   Before committing many operators to accumulation of the large datasets that
are a sine qua non of any such human/machine anomalies experiments, a few
rudimentary pilot studies were performed by a limited number of operators
attempting to bias the distribution of the exit angles of the robot at the table
edge. These yielded some substantially larger anomalous effect sizes than those
typically seen in our benchmark REG experiments, encouraging further
refinement of the equipment and the subsequent collection of the requisite
large databases, as detailed in the following sections. Most notably, a small LED
was added to the robot dome, which could be tracked by an overhead digital
camera, which in turn transmitted real-time specification of the entire course of
the robot trajectory to a dedicated computer and database manager. This allowed
each experimental run to be stored numerically and graphically for subsequent
analyses (Figure 3).

                            2. Exit-Angle Experiments
   In the first version of the formal experiments, the alternating intentions of the
operators are simply to induce the robot to wander from its initial placement at
the center of the table to exit locations as near as possible to the operator's initial
position (denoted 0°), or directly opposite (180°). Each such effort is termed
a "run," and two successive alternative efforts, a "set." Three phases of data
collection, distinguished by minor adjustments to the robot, table, camera



     Fig. 3. Print-out of digitized robot trajectory extracted from overhead camera datafile.

alignment, and operator positioning,† have been conducted, each accompanied by
its own calibration set. For this article, all of these formal data have been
combined into one composite database, which is summarized in Table 1 and
fully detailed in an associated Technical Report.⁷ The participating operators
have been divided into females, males, dual co-operators, and larger groups. Of
the ten co-operator pairs, three were female/female, none male/male, and seven
female/male. The six groups comprised from three to fifteen participants, and in
three cases were children, in three cases young adults.
   From the table it appears that even though the all-operator database is
statistically unimpressive by a Z-score, effect size, or χ² criterion, the female
sub-group achieves a modestly significant separation of the 0°-intention efforts
from the 180°-intention efforts, with an equivalent effect size nearly three times
larger than the all-operator data. The group performance displays an even more
substantial effect size, which attains marginal significance even for this
relatively small number of experimental sets. Both of these subsets also exceed

   † In most cases, operators located themselves at the 0° table position for the 0° efforts, and moved to
the 180° position for the 180° efforts. Thus, both entailed efforts to attract the robot toward
themselves. In a few cases, operators remained at the 0° position for both attempts, hence the 180°
data followed from efforts to repel the robot. Insufficient data have been acquired in this latter
protocol variant to allow meaningful statistical comparisons. Hence, the primary differential data
discriminator is the exit-angle target intention, rather than the attract/repel nuance.
                                                TABLE 1
                     Summary of All Exit-angle Experiments 0° vs. 180°

Oper.          # Ops    # Sets    Z(p_Z)            ε         χ²(p_χ)        # Oper +
All              87      1120       .957(.169)     .0286     85.561(.524)       48
Female           41       567      1.878(.030)*    .0789     50.743(.142)       27*
Male             30       416     −1.164(.880)    −.0571     23.011(.815)       12
Co-operator      10        72      −.298(.617)    −.0351      5.444(.860)        4
Groups            6        67      1.738(.041)*    .2123      6.363(.384)        5*
Calibration       −       348       .810(.209)     .0434          −              −

# Ops:      Number of individual operators, operator pairs, or groups contributing to the databases.
# Sets:     Number of 0°, 180° paired sets performed.
Z:          Statistical Z-scores computed from T-scores via Rosenthal approximation (cf. Appendix B
            of reference 7).
p_Z:        One-tailed probabilities of Z-scores against chance expectations (* denotes significance
            at p_Z ≤ .050).
ε:          Equivalent effect size, computed as Z/√(# Sets).
χ²:         Statistical chi-squared calculations over individual operator Z-scores, i.e., ΣZ², to be
            compared with the number of degrees of freedom (number of operators) to estimate the
            structural probabilities of the Z distributions compared to chance expectations.
p_χ:        Chance probabilities of χ² values.
# Oper +:   Number of operators exceeding chance mean expectations in the intended directions.

chance by the criterion of the fractions of operators showing separations in the
intended directions, i.e., having collective positive Z-scores.
   Quantitative comparison of these effect sizes with other REG-based
experiments is inescapably somewhat arbitrary, given the disparities in the
manners in which the basic binary samples are operationally deployed.
However, if we refer to our extensive "benchmark" database, with its more
than 1.6 million 200-sample differential (high − low) trials, and argue that
performance of 200-trial "runs" thereof involves comparable operator time and
effort to the robot sets, their comparison effect size figure is approximately
r = .04, i.e., very much in the same range as the robot data.
   The effect sizes can provide a more commensurate indication of the statistical
distinguishability of the various robot data subsets, via the basic relation for
difference Z-scores:

                    Z_i,j = (ε_i − ε_j)/√(1/N_i + 1/N_j),

or equivalently:

                    Z_i,j = (Z_i/√N_i − Z_j/√N_j)/√(1/N_i + 1/N_j),

which separates the females from all operators at the level Z_F,A = .976 (p = .165);
the females from the males at Z_F,M = 2.108 (p = .018); and the groups from all
operators at Z_G,A = 1.470 (p = .071).
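The quoted difference Z-scores can be approximately reproduced from the Table 1 entries (ε and # Sets); the small discrepancies reflect the rounding of ε in the table:

```python
import math

def difference_z(eps_i, n_i, eps_j, n_j):
    """Difference Z-score for two effect sizes eps = Z / sqrt(# Sets)."""
    return (eps_i - eps_j) / math.sqrt(1.0 / n_i + 1.0 / n_j)

# Effect sizes and set counts from Table 1:
eps_all, n_all = 0.0286, 1120
eps_female, n_female = 0.0789, 567
eps_male, n_male = -0.0571, 416
eps_groups, n_groups = 0.2123, 67

print(round(difference_z(eps_female, n_female, eps_all, n_all), 3))   # ~0.976
print(round(difference_z(eps_female, n_female, eps_male, n_male), 3)) # ~2.107 (text: 2.108)
print(round(difference_z(eps_groups, n_groups, eps_all, n_all), 3))   # ~1.461 (text: 1.470)
```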
   In other experimental contexts⁹ we have found it useful to define a "prolific
operator" subset of participants, i.e., those whose extended commitment to the
generation of data allows more incisive identification of individual performance
characteristics, and essentially obviates any "optional stopping" confounds.¹⁰ For
this experiment, generation of five or more sets was established as the threshold
criterion for the category, and the summary results for those prolific operators are
shown in reference 7. The statistical yield of this dataset has been weakened
somewhat by the exclusion of two significant female operators who performed
only four sets each, and by its somewhat smaller overall size, but otherwise it
closely resembles the full data array, and henceforth will not be pursued separately.
   The same body of all-operator data may be subjected to a more detailed
examination of the exit-angle distributions, with the results displayed in the
polar plots of Figures 4-8. For this analysis we have divided the 360° azimuth of
the robot platform into 12 sectors and plotted therein as radial excursions the
lumped experimental populations of those exit angles for the 0° and 180°
intentional efforts, superimposed on the mean value and the two-tailed 95%
inner- and outer-bound confidence limit circles. Figure 4, which presents a direct
comparison of the 0° and 180° efforts, shows a clear bias of both datasets toward
the 0° hemicircle, which is confirmed by the attached statistical calculations
(detailed in reference 7), and is probably indicative of some mechanical
asymmetry in the initial alignment of the device or its platform. This suspicion is
supported by similar patterns in the relevant calibration data (Figure 5). We
should concentrate, therefore, on the accumulated population differences
between the intentional data and the calibrations (Figure 6), or between the
0° and 180° populations around the twelve sectors (Figure 7), where such
artifactual biases should cancel out. In neither of these formats do we see
a statistically significant 0°-180° difference in the all-operator data, although
clear distinctions between the female and male data are evident in Figure 8.
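The sector-binning comparison described above can be sketched as follows. This is our simplified illustration, using a normal approximation to the binomial bin counts against a uniform-chance expectation; the paper's actual machinery (Appendix B of reference 7) may differ in detail, and the function name is an assumption.

```python
import numpy as np

def sector_analysis(exit_angles_deg, n_sectors=12):
    """Bin exit angles into equal azimuthal sectors and compare each bin
    count with the uniform-chance expectation N/n_sectors.
    Returns per-bin Z-scores and a chi-squared over the bins, to be
    compared with n_sectors - 1 degrees of freedom."""
    angles = np.asarray(exit_angles_deg, dtype=float) % 360.0
    counts, _ = np.histogram(angles, bins=n_sectors, range=(0.0, 360.0))
    n = counts.sum()
    p = 1.0 / n_sectors
    expected = n * p
    sigma = np.sqrt(n * p * (1.0 - p))  # binomial std. dev. per bin
    z = (counts - expected) / sigma
    chi2 = float(np.sum(z ** 2))
    return z, chi2
```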

                                 Key to Figures 4-8:
Z:      Z-scores of empirical exit-angle bin population projections on the 0°-180° axis
        (cf. Appendix B-2 of reference 7).
p_Z:    chance probability of Z.
χ²:     chi-squared calculations over empirical bin population Z-scores, i.e. Σ_bins Z², to be
        compared with the number of degrees of freedom (number of bins minus one)
        to estimate the structural probabilities of the Z distributions, compared to
        chance expectations.
p_χ:    chance probabilities of χ² values.
χ²_GF:  goodness-of-fit chi-squared calculations comparing 0° and 180° distributions
        (cf. Appendix B-3 of reference 7).
χ²_FM:  same comparison of female and male datasets.
                                    An REG-Driven Robot




Fig. 4. a, b. Azimuthal distributions for 0° and 180° intentions: All operators, 1103 paired runs each.



Fig. 5. a, b. Azimuthal distributions for 0° and 180° intentions: All relevant calibrations, split into
artificial run pairs.



Fig. 6. a, b. Azimuthal distributions for 0° and 180° intentions subtracted from calibration data: All operators.


    Fig. 7. Differences in azimuthal populations of 180° and 0° intention data: All operators.

   Perhaps most indicative are direct differential comparisons of the 0° and 180°
data obtained in the individual paired experimental sets, as shown in Figures 9-16.
(The "cardioid" shapes of these polar graphs devolve from the use of the angle
differences as the azimuthal coordinate, wherein for chance data, full 180°
separations are only rarely achievable, while 0° separations are obtainable from
many more data combinations; cf. Appendix B-4 of reference 7.) To be noted are
the close agreement of the polar segment difference distributions between all-
data experiment and theory (Figure 9), experiment and calibration (Figure 10),
and calibration and theory (Figure 11), compared to the more palpable
separations of the female and male data patterns from calibrations (Figures 12
and 13), from theory (Figures 14 and 15), and from each other (Figure 16).
Although the "goodness-of-fit" statistical calculations superimposed on the
figures suggest no significant differences between the 0° and 180° data
distributions, some distinctions between the female and male distributions can
be identified.

                                3. Duration Experiments
 An alternative protocol to assess possible operator influence on the robot
motion has also been invoked, e.g. efforts to induce the robot to execute longer



 Fig. 8. Differences in azimuthal populations of 180° and 0° intention data: a) Female; b) Male.

or shorter trajectories before reaching the edge of the platform (time-of-flight) or
equivalently, to cover longer or shorter total distances. (These could differ
slightly because of the respective ratios of translation increments to rotation
increments in the various datasets, or variations in the robot's translation and

Fig. 9. Distributions of angular population differences between 180° and 0° intentions compared with
theoretical chance expectations: 83 operators, 1103 paired sets.

Fig. 10. Distributions of angular population differences between 180° and 0° intentions compared
with calibration data: 83 operators, 1103 paired sets, 353 calibration sets.



Fig. 11. Distributions of angular population differences between 180° and 0°: 353 calibrations
compared with theoretical chance expectations.


Fig. 12. Distributions of angular population differences between 180° and 0° intentions: 549 female
datasets compared with 353 calibrations.



Fig. 13. Distributions of angular population differences between 180° and 0° intentions: 416 male
datasets compared with 353 calibrations.


Fig. 14. Distributions of angular population differences between 180° and 0° intentions: 549 female
datasets compared with theoretical chance expectations.


Fig. 15. Distributions of angular population differences between 180° and 0° intentions: 416 male
datasets compared with chance expectations.


Fig. 16. Distributions of angular population differences between 180° and 0° intentions: 562 female
datasets compared with 403 male datasets.

                                             TABLE 2
                                  Summary of All Time-of-Flight Data

      Oper          # Ops       # Sets       Z(p_Z)           ε          χ²(p_χ)        # Oper +

All                   33         678        1.085(.139)     .0417     23.687(.883)         18
Female                10         219         .978(.164)     .0661     12.245(.269)          4
Male                  23         459         .669(.252)     .0312     11.442(.978)         15
Calibration           -          295        -.986(.838)    -.0574          -                -

# Ops:         Number of individual operators, operator pairs, or groups contributing to databases.
# Sets:        Number of 0°, 180° paired sets performed.
Z:             Equivalent statistical Z-scores computed from T-scores via Rosenthal approximation
               (cf. Appendix B of reference 7).
p_Z:           One-tailed probabilities of Z-scores against chance expectations.
ε:             Equivalent effect size, computed as Z/√(#sets).
χ²:            Statistical chi-squared calculations over individual operator Z-scores, i.e. Σ Z², to be
               compared with the number of degrees of freedom (number of operators) to estimate the
               structural probabilities of the Z distributions against chance expectations.
p_χ:           Chance probabilities of χ² values.
# Oper +:      Number of operators exceeding chance mean expectations.

rotation speeds due to battery run-down or wheel slippage.) The experimental
results of the time-of-flight version are summarized in Table 2 and detailed in
Appendix C of reference 7, again broken into all-operator, female, and male
subgroups. (No Co-operator or Group Data were obtained under this protocol,
and since only one operator failed to meet the prolific criterion, this distinction
has been ignored.)
   A few minor structural differences may be noted between the exit-angle data
shown in Table 1 and these time-of-flight results. In the former, the overall
opposite-to-intention male performance counteracted somewhat the female
results of positive significance, reducing the "all" results below chance. Here,
the male data show comparable performance to the female, but neither are
sufficiently strong to drive the all-data results beyond chance. Of the 18
operators who achieved in the intended direction, only one (female) produced an
independently noteworthy result (p = .0043), and the χ² values remain
unremarkable. The effect sizes, again computed as Z/√(#sets), are comparable
with those of the exit-angle protocol, and with those of our "benchmark"
experiments. [It perhaps should be noted that the smaller fraction of female
contribution to this protocol version necessarily biases the gender comparison,
e.g. if the female effect size were to be extrapolated over the male dataset size,
the corresponding Z-score would be 1.483 (p = .069), and the combined Z-score
would be 1.522 (p = .064).]
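The ε = Z/√(#sets) conversion, and the extrapolation used in the bracketed remark, can be sketched as below; the function names are ours.

```python
import math

def effect_size(z, n_sets):
    """Equivalent effect size eps = Z / sqrt(#sets), as used in the tables."""
    return z / math.sqrt(n_sets)

def z_from_effect(eps, n_sets):
    """Extrapolate an effect size over a dataset of n_sets to an equivalent Z."""
    return eps * math.sqrt(n_sets)
```

For example, `effect_size(1.085, 678)` reproduces the all-operator ε ≈ .0417 of Table 2.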
   The data for the total distance variant of the duration experiments were
extracted ex post facto from the same body of results just described. That is, the
corresponding path lengths were computed from the camera traces of the same
trajectories as a secondary empirical product of the data acquired under operator
intentions to induce longer (or shorter) resident times of the robot on the

                                      TABLE 3
                       Summary of All Trajectory Length Analyses

  Oper        # Ops    # Sets       Z(p_Z)           ε        χ²(p_χ)       # Oper +

All            33       678      1.140(.127)      .0438    24.471(.858)        18
Female         10       219      1.001(.158)      .0676    12.736(.239)         4
Male           23       459       .721(.235)      .0337    11.734(.974)        14
Calibration    -        295      -.991(.839)     -.0577         -               -

                                  Key: Same as Table 2

platform. As can be seen from Table 3, these results correspond closely to the
time-of-flight measures both in structure and overall effect size, thereby
reassuring us regarding the integrity of the robot motion and allowing us to
combine them for subsequent interpretations.

                                    4. Discussion
  In several respects, the empirical results of these robot experiments are
consistent with those of a number of other human/machine interaction studies
performed in this laboratory over its many years of operation, using a wide range of
physical systems as targets for the intentions of large numbers of uncompensated,
volunteer operators. More specifically, here we again have found:
   1) Marginally significant overall anomalous correlations of machine
      performance with pre-stated operator intentions;
   2) Excessive fractions of individual operator achievements beyond chance;
   3) Disparities in performance between female and male operators;
   4) Few, if any, "superstar" performances;
   5) Idiosyncratic operator sensitivities to protocol and feedback modalities;
   6) Other departures of structural aspects of the data from chance
      expectations, most notably the outlying performance of the small number
      of operator groups.

  The last feature may merit some passing comments with respect to the
desirability of attempting to replicate this group effectiveness in this, and other,
experimental contexts. Given the logistical problems of convening groups of
dedicated operators for more than one experimental session, the number of sets
acquired for the exit-angle studies was statistically small, and none were
obtained for either of the duration protocols. (A similar dearth of group data
prevails for our many other human/machine experiments, leaving little basis for
comparison or generalization.) Confounding the robot situation, yet worthy of
note in its own right, was the fact that several of the groups comprised children
around ten years of age, whose evident spontaneous enthusiasm for participation
may have been a salient factor in their performance. While there has been

considerable parapsychological attention to group effects in other contexts, e.g.
healing, séances, mediumship, and apparitions, more extensive studies of
the importance of this factor in controlled physical experimentation seem justified.
   With this exception, the reappearance in the robot data patterns of the
characteristics just listed does not so much provide major new insights as it
underscores the fundamental character of the phenomena: elusive, irreplicable,
and subjectively correlated as they may be. As such, these studies take a useful
supporting position in our ongoing efforts to formulate generic specifications of
all forms of consciousness-related anomalous physical phenomena.

                  5. Implications, Applications, Speculations
   From its inception, the PEAR program has pursued its studies of anomalous
phenomena from the perspectives, and to the purposes, of the applied physical
sciences. This has entailed the empirical and theoretical acquisition of
fundamental understanding at both the epistemological and ontological levels,
as well as consideration of the implications and applications thereof for
pragmatic purposes within contemporary and future technologies. The latter
necessarily entails both negative and positive aspects. On the one hand,
legitimate concerns arise regarding the integrity of delicately poised information
processing devices and systems, particularly those that embody a random
component, functioning in emotional proximity to human operators. On the
other hand, speculations can be made regarding possibilities for beneficial
practical applications of the insights and technologies that might ultimately be
derived from the basic research efforts.
   In the particular context of the anomalous human/robot interactions reported
here, it might seem that such an erratic device as this whimsical roving vehicle
would have little potential for practical deployment beyond a children's toy or an
adult coffee-table curiosity, wherein its consciousness-correlated aberrations
would be of no major consequence. But in fact, we are well into a cultural age
where robotic technology is becoming widely utilized to perform many services
to relieve human operators of various tedious, difficult, or dangerous functions.
Robotic vacuum cleaners and lawn sprinklers already can be ordered on-line;
robotic equipment is routinely deployed for surveillance and service in hostile
radioactive and heavy manufacturing environments; and commitment of certain
medical diagnostics and treatment to miniaturized robotic devices is now being
seriously considered and in some cases utilized. In this latter context
particularly, and many others as well, the escalating advances in the micro-
and nano-sciences and technologies presage an era of miniaturized mobile
devices that will navigate microscopic terrains, including our physiological
systems, providing information and interventions that could be achieved in no
other way. Even at this primitive stage, protection from inadvertent or malicious
mis-applications of any such futuristic equipment should be borne in mind,
along with their potential consciousness-coupled enhancements.

   While our basic research to date has enabled us to outline certain
characteristics of situations wherein anomalous mind/machine interactions may
arise,11 we are clearly a long way from reliable invocation of consciousness-
mediated control of even such rudimentary vehicles as that employed in the
experiments reported here, let alone of their much more sophisticated siblings
and descendants. Nevertheless, the history of biofeedback successes, the
proliferation of robotic technologies, and the recent reports of physical control
systems responsive to operator attitudes12 suggest that further fundamental study
of this form of mind/machine interaction may well be worthwhile. Certainly the distant
vision of the ingestion or insertion of dedicated micro- or nano-robotic devices
that could be willed preferentially to perform particular diagnostic or therapeutic
functions within our biological frameworks, or any other accessible complex
systems, should not be categorically dismissed.

   This robot project was, by its nature, a very labor-intensive effort, requiring
skilled services by many members of our laboratory staff other than those listed
as authors. Our technical specialist, John Bradish, expended many months of
effort in designing, constructing, modifying, and servicing several generations of
the robot vehicles, and their operating platforms. York Dobyns assisted in
configuring the numerical/mechanical logic to drive the device. Roger Nelson
helped design the original experimental and calibration protocols, and analysis
of the pilot data. Greg Nelson arranged the electronic camera system that tracked
the robot motion and Michael Ibison wrote the software to render its output into
useful data. Several undergraduate students enhanced our usual pool of inside
and outside operators, to all of whom we are grateful for their interest, time, and
diligence in generating the requisite databases.
   Also acknowledged with enduring gratitude are the several personal and
institutional philanthropies that have provided financial resources for the PEAR
program for many years, including Mr. James S. McDonnell; Mr. John F.
McDonnell; Mr. George Ohrstrom; Mr. Laurance Rockefeller; Mr. and Mrs.
Richard Adams; Mr. Donald Webster; Mr. John Fetzer; the BIAL Foundation;
and numerous other private contributors who prefer to remain anonymous.

                                        References
1. Jahn, R. G., & Dunne, B. J. (2005). The PEAR Proposition. Journal of Scientific Exploration, 19.
2. Janin, P. (1986). The tychoscope. Journal of the Society for Psychical Research, 53, 341.
3. Tanguy, R. (1987). Un Réseau de Mobiles Autonomes pour l'Apprentissage de la Communication.
    Doctoral thesis, Université Paris 6, 2 décembre 1987.
4. Péoc'h, R. (1988). Chicken imprinting and the tychoscope: An Anpsi experiment. Journal of the
    Society for Psychical Research, 55, 1.
5. Péoc'h, R. (1995). Psychokinetic action of young chicks on the path of an illuminated source.

6. Nelson, R. G. (1996). Report on Gate Pilot Protocol for Robot. PEAR Internal White Paper.
7. Jahn, R. G., Dunne, B. J., Acunzo, D. J., & Hoeger, E. S. (2006). Response of an REG-driven robot
    to operator intention. Technical Note PEAR 2006.03, November. Princeton NJ: Princeton
8. Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of
    random binary sequences with pre-stated operator intention: A review of a 12-year program.
    Journal of Scientific Exploration, 11, 345-367.
9. Dunne, B. J., Dobyns, Y. H., Jahn, R. G., & Nelson, R. D. (1994). Series position effects in random
    event generator experiments, with an Appendix, serial position effects in the psychological
    literature, by Thompson, A. Journal of Scientific Exploration, 8, 197-215.
10. Dobyns, Y. (2006). Personal communication.
11. Jahn, R. G., & Dunne, B. J. (2004). Sensors, filters, and the source of reality. Journal of Scientific
    Exploration, 18, 547-570.
12. Klouzal, T. J., & Plotke, R. J. (2006). Personal communication.

                           Appendix: Experimental Equipment
   The robot assembly includes a two-wheeled mobile vehicle with nose- and
tail-drags, a circular table on which it may move freely, and an optical detection
and data recording system. The outer shell of the robot is a half-sphere of radius
15 cm that encloses a chassis supporting a dedicated power supply and
a microelectronic REG. The device is propelled by two independent battery-
powered electric clock motors, each connected to one of the robot's wheels. The
motion is a succession of alternating rotations and translations for which the
angles and lengths are determined randomly by the internal REG, whose
processor generates random numbers by summing its output bits. The
theoretical expected sum is subtracted from these numbers to obtain random
digits that have null mean values. These are presented as 5-Hz successions of
"tics," whose period defines a time unit for the behavior of the robot.
   Once switched on, the robot motion begins with a rotation, after which it
alternates forward translations with subsequent rotations. When the robot is due
to start a translation, the system compares the last generated random number
with two values that separate the theoretical distribution into three equal
segments, on the basis of which it goes forward for 4, 5, or 6 tics, with equal
probability. After completing the translation, the robot determines whether to
rotate clockwise or counter-clockwise, based upon whether the next value
generated is positive or negative. If the value is null, the robot makes no rotation,
but proceeds through another translation, for which the distance is computed at
the following tic. In its rotation mode, the robot examines every number
generated during the consecutive tics in order to determine whether to continue
its rotation or to stop. The threshold is designed to have a probability of
approximately 0.1 to stop at each successive tic.
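The stepping logic described above can be expressed as a toy simulation. This sketch is ours, not the authors' implementation: a software PRNG stands in for the zero-mean hardware REG output, the null-value no-rotation case is omitted for brevity, and all names are illustrative. The constants follow the text (translations of 4, 5, or 6 tics with equal probability; roughly 0.1 probability of stopping a rotation at each tic).

```python
import random

def simulate_robot(n_moves=20, stop_p=0.1, seed=1):
    """Toy simulation of the robot's alternating rotate/translate logic.
    Returns a list of (signed rotation tics, forward tics) pairs."""
    rng = random.Random(seed)
    path = []
    for _ in range(n_moves):
        # Rotation: direction from the sign of a zero-mean draw, then keep
        # turning until a ~stop_p-probability stop occurs at some tic.
        direction = 1 if rng.gauss(0.0, 1.0) > 0 else -1
        turn_tics = 1
        while rng.random() > stop_p:
            turn_tics += 1
        # Translation: 4, 5, or 6 tics forward with equal probability,
        # mimicking the split of the draw distribution into three segments.
        forward_tics = rng.choice([4, 5, 6])
        path.append((direction * turn_tics, forward_tics))
    return path
```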
   On top of the "dome", on the axis of rotation, an LED is installed that allows
an overhead digital camera to detect and record the motion of the robot. The x
and y coordinates of this LED position are recorded three times per second,
along with the time of measurement. These files are the basis for all subsequent
analyses of the robot trajectories.
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 47-66, 2007             0892-3310/07

                  Time-Series Power Spectrum Analysis of
                 Performance in Free Response Anomalous
                          Cognition Experiments

                                     Peter A. Sturrock
                               Center for Space Science and Astrophysics
                                      Stanford University, MC 4060
                                            Stanford, CA 94305
                                      e-mail: sturrock@stanford.edu

                                   S. James P. Spottiswoode
                                           Nielsen Entertainment
                                              6255 Sunset Blvd.
                                            Los Angeles, CA 90028
                                          e-mail: james@jsasoc.com

      Abstract-We analyze a database of 3,325 free response anomalous cognition
      experiments, using procedures recently used in an analysis of a catalog of UFO
      events. A histogram analysis shows evidence of a significant annual modulation
      in the success rate, but no significant evidence for modulations associated
      with time of day or local sidereal time (LST). The salient feature of a power
      spectrum formed by the Lomb-Scargle procedure is a peak at a frequency very
      close to twice the lunar synodic frequency. The probability that this feature
      occurs by chance is estimated to be 0.03%. A running-wave analysis indicates
      that this is an astronomical effect rather than a spurious property of the
      database. We also find a peak in the power spectrum with a period of one year. This
      could be associated with an LST effect, but the running-wave test, which is
      designed to distinguish between a real LST effect and a spurious LST effect,
      does not support this interpretation.
      Keywords: consciousness-parapsychology-anomalous cognition-physical

                                             1. Introduction
Spottiswoode (1997) analyzed a database of 1,468 free response trials of anom-
alous cognition experiments, examining the effect size as a function of local
sidereal time. He found that the effect size increased by 340% for trials within
1 h of 13.5 h LST ( p = 0.001) compared with the mean effect size. He also
analyzed an independent database of 1,015 similar trials that showed an increase
in effect size of 450% ( p = 0.05) within 1 h of 13.5 h LST. Spottiswoode
considered possible artifacts due to the non-uniform distribution of trials in
local time and variations of effect size with experiment but found that these
appeared not to account for the effect.
   We here analyze a database of 3,325 trials, which includes the earlier data and
adds some experiments which were not previously available, using a procedure
48                   P. A. Sturrock & S. J. P. Spottiswoode

recently used in an analysis of a catalog of UFO events (Sturrock, 2004). The
methods used to calculate effect sizes can be found in Spottiswoode (1997).
The key purpose of this procedure is to distinguish between a genuine LST
effect and an apparent (but spurious) LST effect caused by an interplay of
a non-uniform effect size as a function of hour of day (HOD) and a non-uniform
effect size as a function of "hour of year" (HOY), where a calendar year is
divided into 24 equal periods referred to, for convenience, as "hours." We
examine this distinction by carrying out a "running-wave" power spectrum
analysis that can potentially distinguish a genuine LST effect from a spurious
LST effect.
   We present some basic properties of the database in Section 2, and we carry
out a periodogram analysis, using the Lomb-Scargle procedure, in Section 3. We
find that the salient peak occurs at a frequency very close to twice the synodic
frequency of the Moon. In Section 4, we analyze the database by means of the
"running-wave" power spectrum procedure used in our UFO analysis. This
analysis tends to confirm that the peak in the Lomb-Scargle power spectrum is
indeed due to a lunar influence. We carry out a Monte Carlo significance test in
Section 5, which indicates that the feature is significant at the level p = 0.0003.
We discuss our results in Section 6.
   For readers not familiar with power spectrum analysis, we may explain that
it is a procedure for searching systematically for periodicities in a dataset. For
instance, if the air pressure were to be recorded one million times a second near
a piano when one strikes the middle-C key, the measurements would be found to
vary with a frequency of 262 Hz (cycles per second). However, the variation is
not a pure sine wave: for instance, the amplitude of the fluctuation varies with
time, and one would find fluctuations also at the harmonics [2 X 262, 3 X 262,
etc., Hz]. A "power spectrum" provides a visual display of all of the periodic
modulations that are part of the "time series," in this case the sequence of
pressure measurements. There would be a peak, of finite width, at 262 Hz
(or higher or lower if the piano is off-key), and smaller peaks at 524 Hz, 786 Hz,
etc. Power spectra (in cycles per year rather than cycles per second) will be
found in Figure 9, etc.

                                2. Basic Patterns
   In time-series analysis, it is prudent to examine the basic patterns in the data.
We show the distribution of trials in hour of day (computed as "solar time"
rather than "local time") in Figure 1. It is no surprise that the main part of the
histogram extends from 8 am until 5 pm, with a lunch-time dip in the dis-
tribution from noon until 2 pm. The distribution of trials in hour of year is given
in Figure 2. We see that the rate is greatest in the winter, and then declines
through spring, summer and autumn.
   It is convenient to make use of the following expression for local sidereal
time in terms of hour of year and hour of day:
               Analysis of Performance in Cognition Experiments                    49


                       Fig. 1. Histogram formed from hour of day.

                             LST = HOD + HOY + 6.67.                            (2.1)
[The factor "6.67" arises from the fact that local sidereal time is related to "right
ascension." At midnight on January 1, the right ascension on the meridian is
6 h 44 m (Allen, 1973).]
  The distribution of trials in LST is given in Figure 3. We see that it is V-
shaped, with a minimum between 12 and 13 h.
  Following our earlier article (Sturrock, 2004), we introduce the concept of
"alias local sidereal time," (ALST) defined by
                             ALST = HOD − HOY + 6.67.                           (2.2)
The purpose of this step is that a modulation in local sidereal time that is due to an
interplay of an HOD pattern and an HOY pattern will lead also to a similar modu-
lation in alias local sidereal time. The distribution of trials in ALST is given in
Figure 4. We see that it has the form of an inverted V, with a maximum at about 15 h.
   For the LST histogram and the ALST histogram, the range of the count per
bin is about the same (100 to 200), so this analysis does not in itself point to an
LST effect.
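Equations (2.1) and (2.2) can be expressed directly in code; the wrap into [0, 24) hours is our assumption, following the usual sidereal-time convention, and the function names are illustrative.

```python
def lst_hours(hod, hoy):
    """Local sidereal time per Eq. (2.1); hod and hoy in hours, result
    wrapped to [0, 24)."""
    return (hod + hoy + 6.67) % 24.0

def alst_hours(hod, hoy):
    """Alias local sidereal time per Eq. (2.2), wrapped to [0, 24)."""
    return (hod - hoy + 6.67) % 24.0
```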
   In Figure 5, we show the mean of the effect size Z and the standard error of the
mean for the 24 hour-of-day bins. There is an appearance of a low effect size


                      Fig. 2. Histogram formed from hour of year.

near 3-4 am and a high effect size near 7 am, but a chi-square test of all bins
gives no evidence for a departure from uniformity ( p = 0.07). In Figure 6, we
show the mean of the effect size and the standard error of the mean for the 24
hour-of-year bins. There is a suggestion of a higher effect size for late May,
early June, and late August, and a lower effect size for early July. A chi-square
test indicates that this departure from uniformity is significant (p = 5 × 10⁻⁴), so
this result deserves further investigation, examining different datasets separately,
to determine whether this pattern appears in all experiments, or only in certain
experiments. Figures 7 and 8 are similar displays for LST and ALST,
respectively. Neither shows a statistically significant departure from uniformity
(p = 0.14 and p = 0.6, respectively).

                            3. Periodogram Analysis
   In any investigation of a time series, it can be helpful to examine the power
spectrum. Perhaps the simplest procedure for a large but regular time series is
to form the "Schuster periodogram" or "Rayleigh power" (Bretthorst, 1988;
Mardia, 1972). An improved version of this operation is the Lomb-Scargle

                  Fig. 3.   Histogram formed from local sidereal time.

procedure (Lomb, 1972; Scargle, 1982), which is designed for application to
time series with irregular sampling.
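As a concrete sketch, the Lomb-Scargle estimate for irregularly sampled data can be implemented directly in a few lines. This is our illustration, not the authors' code; the function name and frequency grid are assumptions, and library implementations (e.g. in SciPy) would normally be preferred.

```python
import numpy as np

def lomb_scargle(t, x, freqs):
    """Classical Lomb-Scargle periodogram for irregularly sampled data.
    t: sample times; x: values (mean is removed); freqs: positive
    frequencies in cycles per unit time."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float) - np.mean(x)
    powers = []
    for f in freqs:
        w = 2.0 * np.pi * f  # angular frequency
        # Scargle's phase offset tau makes the estimate time-shift invariant
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        powers.append(0.5 * ((x @ c) ** 2 / (c @ c) + (x @ s) ** 2 / (s @ s)))
    return np.array(powers)
```

Fed a pure sinusoid sampled at irregular times, the periodogram peaks at the sinusoid's frequency.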
   If we normalize the effect size values, Z_n, to have mean value zero,

                          x_n = Z_n − mean(Z_n),                          (3.1)

the Lomb-Scargle power spectrum S(ν) is given by

       S(ν) = (1/2) { [Σ_n x_n cos 2πν(t_n − τ)]² / [Σ_n cos² 2πν(t_n − τ)]
                    + [Σ_n x_n sin 2πν(t_n − τ)]² / [Σ_n sin² 2πν(t_n − τ)] }   (3.2)

and τ is defined by the relation

              tan(4πντ) = [Σ_n sin 4πνt_n] / [Σ_n cos 4πνt_n].            (3.3)
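As a concrete illustration, the Lomb-Scargle power defined above can be computed directly from the trial times and the mean-subtracted effect sizes. The sketch below is a minimal Python implementation, not the authors' code; the synthetic dataset, array names, and frequency grid are our own illustrative assumptions.

```python
import numpy as np

def lomb_scargle_power(t, z, freqs):
    """Lomb-Scargle power of effect sizes z at trial times t (Scargle, 1982)."""
    x = z - z.mean()                       # remove the mean, as in Eq. 3.1
    S = np.empty(freqs.size)
    for i, nu in enumerate(freqs):
        # phase offset tau that makes the cosine and sine terms orthogonal
        tau = np.arctan2(np.sin(4 * np.pi * nu * t).sum(),
                         np.cos(4 * np.pi * nu * t).sum()) / (4 * np.pi * nu)
        arg = 2 * np.pi * nu * (t - tau)
        c, s = np.cos(arg), np.sin(arg)
        # normalized projections of x onto the cosine and sine at frequency nu
        S[i] = 0.5 * ((x @ c) ** 2 / (c @ c) + (x @ s) ** 2 / (s @ s))
    return S

# Synthetic check on irregularly sampled data: a 1 yr^-1 modulation buried in
# unit-variance noise should produce the strongest peak near nu = 1.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 15, 500))           # ~15 yr of irregular sampling
z = 0.5 * np.sin(2 * np.pi * t) + rng.normal(0, 1, t.size)
freqs = np.linspace(0.1, 5, 500)               # frequency grid (yr^-1)
S = lomb_scargle_power(t, z, freqs)
print(freqs[np.argmax(S)])
```

Unlike the plain Schuster periodogram, the τ offset makes the estimate insensitive to the choice of time origin, which is why the procedure suits irregularly sampled series such as this one.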

                 Fig. 4. Histogram formed from alias local sidereal time.

   The power spectrum formed in this way from the effect size time series is
shown, for frequency in the range ν = 0–5 yr⁻¹, in Figure 9. The top ten peaks
are listed in Table 1. We see that there is a peak at 1.00 yr⁻¹ with power 5.21,
but this is blended with a stronger peak at 0.95 yr⁻¹ with power 6.02. We show
in the Appendix that, as a result of the severe non-uniformity of the time series
formed from trial dates, any peak in the power spectrum is likely to be con-
taminated by either a displacement or the occurrence of auxiliary peaks, with
separation of order 0.03 yr⁻¹ or a small multiple of this frequency. We also find
a peak at 1.98 yr⁻¹ with power 7.29, which we may interpret as a contaminated
form of a peak at 2.00 yr⁻¹. It appears, therefore, that there is evidence for
both annual and semi-annual modulations of the performance.
   We have also examined the power spectrum over a much wider range, ν = 0–
100 yr⁻¹. The strongest peak in this range occurs at ν = 24.65 yr⁻¹, quite close to
twice the synodic lunar frequency (24.74 yr⁻¹), with power 10.75. For clarity, we
show in Figure 10 the power spectrum over the range ν = 0–30 yr⁻¹. It is
interesting to note that there is also a peak at 24.81 yr⁻¹: the pair of peaks
at 24.65 yr⁻¹ and at 24.81 yr⁻¹ may be interpreted as sidebands of modulation
at 24.74 yr⁻¹, since the interval is, in each case, 0.09 yr⁻¹, a small multiple of

Fig. 5. Mean value of effect size and standard error of the mean for each of 24 hour-of-day bins.
         The dotted line denotes the mean value for the entire database.

0.03 yr⁻¹. In this connection, we may also note that there is a peak at 12.36 yr⁻¹,
with power 3.34, virtually identical to the lunar frequency.
   Hence Lomb-Scargle analysis of the effect size points towards modulation of
the performance by a lunar-related process.

                               4. Running-Wave Analysis
   In the previous section, we looked into the possibility that the effect size
exhibits one or more oscillations in time. In this section, we ask a different but
related question. We look into the possibility that the effect size exhibits a
significant pattern in terms of a rotating reference frame. It is convenient to take
as our basic reference frame one that is centered on the Earth, with respect to
which the Sun has a fixed position. An observer on Earth would see this frame
rotate with a period of one day. An observer on a nearby star would see the
frame rotate with a period of one year.
   With respect to this frame, we denote by φ_n the angular position of the zenith
(the position looking vertically upward) at the time of trial n, but normalize the
angle to run from 0 to 1:

                          φ_n = θ_n / 2π (mod 1),                         (4.1)

where θ_n is the angular position measured in radians.

Fig. 6. Mean value of effect size and standard error of the mean for each of 24 hour-of-year bins.
        The dotted line denotes the mean value for the entire database.

This quantity is related to the hour of day (HOD) of the trial by

                            φ_n = HOD_n / 24.                             (4.2)

   We now denote by ν the angular velocity of the test frame with respect to
the basic (Sun-locked) frame, where ν is measured in cycles per year. For
this analysis, it is convenient to evaluate the following form of the Rayleigh
power,

                 S(ν) = (1/N) | Σ_n x_n exp[2πi(φ_n + νt_n)] |²,          (4.3)

where t is measured in years.
   We note that ν = 0 corresponds to the Sun-locked frame and ν = 1 yr⁻¹
corresponds to the star-locked frame. If t is measured from 0 h on January 1 of
some year, then t is related to the "hour of year" (HOY) by

                          t (mod 1) = HOY_n / 24.                         (4.4)



Fig. 7.    Mean value of effect size and standard error of the mean for each of 24 LST bins. The
           dotted line denotes the mean value for the entire database.

We see from Equations 4.2 and 4.4 that, for ν = 1 yr⁻¹, the combination φ + νt is
effectively the sum of the hour of day and hour of year, so that it is related to the
local sidereal time:

                      φ + t (mod 1) = (LST − 6.67) / 24.                  (4.5)

Hence if there is an LST modulation in effect size values, it should show up
as a peak at ν = 1 yr⁻¹ in the running-wave power spectrum. Similarly, if there
is an ALST modulation in effect size values, it should show up as a peak at
ν = −1 yr⁻¹ in the running-wave power spectrum. It follows that the running-
wave analysis can distinguish a real LST effect from a spurious LST effect
due to the interplay of an HOD modulation and an HOY modulation: the
former will yield a peak at ν = 1 yr⁻¹ but no peak at ν = −1 yr⁻¹, whereas the
latter will lead to peaks at both ν = 1 yr⁻¹ and ν = −1 yr⁻¹ (Sturrock, 2004).
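The forward/reverse distinction can be illustrated numerically. The sketch below assumes a weighted Rayleigh-power form, S(ν) = (1/N)|Σ_n x_n exp[2πi(φ_n + νt_n)]|², which is one plausible reading of the running-wave power used here; the synthetic dataset and all variable names are our own illustrative stand-ins, not the authors' data or code.

```python
import numpy as np

def running_wave_power(t, x, phi, nu):
    """Rayleigh-type power in a frame rotating at nu cycles/yr relative to the
    Sun-locked frame; nu > 0 gives forward waves, nu < 0 reverse waves."""
    phase = np.exp(2j * np.pi * (phi + nu * t))
    return np.abs(np.sum(x * phase)) ** 2 / len(x)

# Synthetic check: a modulation locked to (phi + t), i.e. an LST-like pattern,
# should appear in the forward (nu = +1) spectrum but not the reverse (nu = -1).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 15, 2000))          # trial times (years)
phi = rng.uniform(0, 1, t.size)                # hour of day / 24, in cycles
x = 0.3 * np.cos(2 * np.pi * (phi + t)) + rng.normal(0, 1, t.size)
x -= x.mean()
forward = running_wave_power(t, x, phi, +1.0)
reverse = running_wave_power(t, x, phi, -1.0)
print(forward, reverse)
```

Swapping the sign of ν reverses the sense of rotation of the test frame, which is what lets a genuine sidereal pattern be separated from an HOD-HOY interplay.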
   We show in Figure 11 the two power spectra for forward waves (ν > 0)
and reverse waves (ν < 0), for the frequency range 0 to 5 yr⁻¹. The top ten
peaks for the forward-wave power spectrum and for the reverse-wave power
spectrum are shown in Tables 2 and 3, respectively. There appears at first sight
to be a peak in the forward-wave power spectrum at ν = 1 yr⁻¹, with no


Fig. 8. Mean value of effect size and standard error of the mean for each of 24 ALST bins.
        The dotted line denotes the mean value for the entire database.

accompanying peak in the reverse-wave power spectrum. However, inspection
of the power spectrum shows that this peak is actually at ν = 0.95 yr⁻¹. There is
no peak at ν = 1 yr⁻¹ in either the forward or reverse power spectrum. Hence (in
contrast to our analysis of UFO events) the running-wave analysis does not
provide evidence for or against an LST effect.
   In Figure 12, we show the two power spectra for forward and reverse waves
for the frequency range 20 yr⁻¹ to 30 yr⁻¹. We find that the expected peak at
24.65 yr⁻¹ shows up in the reverse-wave power spectrum (with power S = 9.77),
but not in the forward-wave power spectrum (for which the power is only 2.44).
This is exactly what one would expect of a real lunar effect since, with respect to
a Sun-locked frame, the Moon moves in a direction opposite to that of the stars.

                        5. Monte Carlo Significance Test
   In order to obtain a robust significance estimate for the feature at twice the
lunar synodic orbital frequency, we generate Monte Carlo simulations by means
of the shuffle process. In this procedure, we shuffle either the time values or the
effect size values so as to randomly re-assign effect size values among times. We
will search to see how often we can find a power as large as the actual power

Fig. 9. Lomb-Scargle power spectrum formed from effect size values for the frequency range
        0–5 yr⁻¹.

(10.75) as close to the target frequency (twice the lunar synodic frequency, 24.74
yr⁻¹) as the actual peak at 24.65 yr⁻¹. We therefore search the band 24.65–24.83
yr⁻¹. For each simulation, we compute the power spectrum over the search band,
and note the power of the highest peak in that frequency range, which we denote by
SM for "spectral maximum." We then examine the distribution of the maximum-
power values. Figure 13 shows the distribution of values of SM from the simula-
tions, and indicates the value of the peak in the actual data, with power S = 10.75.
We find that only 14 simulations out of 100,000 have values of SM equal to or
larger than 10.75. Hence the probability of finding a peak by chance as large as
the actual peak, and as close to the target frequency as the actual peak, is 0.014%.
   However, one should probably estimate the likelihood of finding a peak as
large as the actual peak as close as the actual peak to either the fundamental or
the harmonic of the synodic lunar orbital frequency. This increases the above
estimate by a factor of two to 0.03%.
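The shuffle procedure described above can be sketched in a few lines. The power-spectrum routine, stand-in dataset, coarse frequency grid, and reduced simulation count (the article used 100,000 shuffles) below are our own illustrative assumptions, not the authors' code or data.

```python
import numpy as np

def band_max_power(t, x, band, n_grid=40):
    """Maximum Lomb-Scargle power over a frequency band (coarse grid)."""
    x = x - x.mean()
    best = 0.0
    for nu in np.linspace(band[0], band[1], n_grid):
        tau = np.arctan2(np.sin(4 * np.pi * nu * t).sum(),
                         np.cos(4 * np.pi * nu * t).sum()) / (4 * np.pi * nu)
        arg = 2 * np.pi * nu * (t - tau)
        c, s = np.cos(arg), np.sin(arg)
        best = max(best, 0.5 * ((x @ c) ** 2 / (c @ c) + (x @ s) ** 2 / (s @ s)))
    return best

# Shuffle test: randomly re-assign effect sizes among trial times, record the
# spectral maximum SM in the search band for each shuffle, and compare the
# resulting distribution with the power of the actual peak.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 15, 400))     # stand-in trial times (years)
x = rng.normal(0, 1, t.size)             # stand-in effect-size series
band = (24.65, 24.83)                    # search band from Section 5
actual = band_max_power(t, x, band)
sm = np.array([band_max_power(t, rng.permutation(x), band)
               for _ in range(200)])     # the article used 100,000 shuffles
p_value = np.mean(sm >= actual)
print(p_value)
```

Because shuffling preserves both the time distribution and the effect-size distribution while destroying any time ordering, the resulting SM distribution is a null distribution tailored to this particular dataset.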

                                     6. Discussion
  The original purpose of this investigation was to review the earlier finding by
Spottiswoode (1997) of an LST effect in the performance of free response

                                         TABLE 1
           Top Ten Peaks in the Power Spectrum over the Frequency Range 0 to 5 yr⁻¹

                 Frequency (yr⁻¹)                                 Power

anomalous cognition experiments. Spottiswoode showed that when effect sizes
were averaged in 2-h LST windows, there appeared to be a significant peak at
13.5 h LST. Spottiswoode evaluated the significance of this peak by carrying out
Monte Carlo simulations, in which the effect size values were randomly
permuted with respect to the LST values. In only 14 out of 10,000 simulations

Fig. 10. Lomb-Scargle power spectrum formed from effect size values for the frequency range
         0–30 yr⁻¹.

Fig. 11. Running-wave power spectrum formed from effect size values for the frequency range
         0–5 yr⁻¹.

did Spottiswoode find a mean effect size as large as or larger than that of the
actual dataset, implying that the feature is significant at the level p = 0.001.
Spottiswoode's analysis has been repeated for the present expanded dataset, and
the result is shown in Figure 14. The feature at 13.5 h LST appears still to be
evident, but the same Monte Carlo test now gives a reduced significance level of

                                        TABLE 2
Top Ten Peaks in the Forward Running-Wave Power Spectrum over the Frequency Range 0 to 5 yr⁻¹

                 Frequency (yr⁻¹)                                  Power

                                        TABLE 3
Top Ten Peaks in the Reverse Running-Wave Power Spectrum over the Frequency Range 0 to 5 yr⁻¹

                 Frequency (yr⁻¹)                                 Power

p = 0.01. On the other hand, our Figure 7, derived from fixed bins rather than
sliding bins, shows little evidence of an LST modulation.
   Our Lomb-Scargle power spectrum analysis, shown in Figure 9, shows
evidence of an annual modulation, but the results of our running-wave analysis,
shown in Figure 11, are ambivalent. The forward-wave power spectrum has
a peak at 0.95 yr⁻¹ with power 7.63, and the reverse-wave power spectrum has


Fig. 12. Running-wave power spectrum formed from effect size values for the frequency range
         20–30 yr⁻¹.

Fig. 13. Histogram formed from 10,000 Monte Carlo simulations, showing the count as a func-
         tion of maximum power in the frequency band 24.65–24.83 yr⁻¹. The vertical line
         shows the power of the actual peak at 24.65 yr⁻¹, with S = 10.75. On running 100,000
         simulations, we find that only 14 have S ≥ 10.75.

a peak at 1.05 yr⁻¹ with power 5.30. In order to try to evaluate the significance
of this departure from the expected frequency of 1 yr⁻¹, we have carried out
the analyses shown in Appendices A and B.
   In Appendix A, we study the time-distribution of trials, and find that it is
highly non-uniform. Hence we should not expect power spectra of such non-
uniform time series to behave in the same way as power spectra formed from
uniform time series.
   In Appendix B, we study power spectra formed from Monte Carlo simulations
of the time series of trial results. We find that the distribution of peak frequencies
has a half-height half-width comparable with the Nyquist frequency 1/(2T),
where T is the duration of the time series. This would not explain the discrep-
ancy between the peak frequency and the expected frequency if all trials have
the same modulation. However, we find that early trials make a bigger contribu-
tion to the peak power than the later trials, so that the appropriate value of T may
prove to be shorter than the value of 15–20 yr that we would infer from Figure
15. This point deserves further study, which we hope to present in a later article.
   However, if we consider that the peaks at 0.95 yr⁻¹ and at 1.05 yr⁻¹ are
relevant to our search for an LST effect, we would need to conclude that these

Fig. 14. Mean effect size versus LST, computed in running 2-h bins. The dotted lines indicate
         the departure determined by the standard error of the mean. The dashed line indicates
         the mean value.

results argue against a real LST effect since we find comparable powers in both
the forward-wave power spectrum and the reverse-wave power spectrum.
   These results leave us with the puzzle of understanding the origin of the
annual modulation shown in Figures 6 and 9. It will probably be accepted that
the psychological state of a person varies with the seasons. The main effect
seems to be a fall-off in performance from late May to early July. It would be
interesting to try to determine whether performance in anomalous cognition
experiments depends on psychological state.
   The surprise in this study was clear evidence for a lunar modulation of
performance in anomalous cognition experiments, although we note that
evidence for a lunar modulation of psi phenomena has been reported previously
by Radin and Rebman (1994). The Monte Carlo test of Section 5 indicates that
this feature of the power spectrum could occur by chance with a probability
0.03%. Furthermore, the running-wave analysis of Section 4 shows a clear
asymmetry in the strengths of the forward and reverse waves, in the same sense
that we expect of a real lunar modulation.
   We are faced with the puzzle of trying to understand the small discrepancy
between the frequency of the principal peak in the power spectrum (at 24.65 yr⁻¹)


                Fig. 15. Count of number of trials in bins of width 0.1 yr.

and twice the lunar synodic frequency (at 24.74 yr⁻¹). (The lunar synodic
frequency is the orbital frequency of the moon as seen from Earth, which is the
same as the frequency of the phase of the moon.) We find that the amplitude of
this modulation is not constant, being stronger for earlier times and weaker for
later times. Such a time variation can lead to sidebands, and it is possible that
what we have detected is such a sideband.
   Apart from this discrepancy, the mechanism of a lunar modulation is a puzzle
and, to the best of our knowledge, there is no known mechanism that could explain
this effect. We have looked briefly into the possibility that the effect might be at-
tributed to fluctuations in the geomagnetic field. However, a search for lunar
modulation of the geomagnetic field at the time of a lunar eclipse by Fraser-Smith
(1982) yielded evidence of only very slight fluctuations. We have carried out a
Lomb-Scargle power spectrum analysis of the ap and Ap indices for the time in-
terval of the anomalous cognition database, and found no evidence of modula-
tion at the lunar synodic frequency or its harmonic. It therefore seems unlikely that
the lunar modulation can be attributed to fluctuations in the geomagnetic field.
   There have, of course, been many suggestions that the phase of the moon can
have psychological consequences. If this is true, and if the variation in the
success rate of trials can be attributed in part to psychological processes, this
combination could possibly explain the modulation that we have found.

  Fig. 16. Power spectrum formed from the count of number of trials in bins of width 0.1 yr.

                                 Acknowledgments
   We thank the experimental teams who have generously shared their data
with us.
                                       Appendix A
   In order to help interpret the power spectra, it is helpful to examine the time
series of trials. If the time series is uniform, the interpretation of the power
spectra is straightforward. However, if the time series is non-uniform, the power
spectra may be contaminated by aliases or sidebands.
   Figure 15 shows the number of trials in bins of width 0.1 yr. We see that
the time series is far from uniform. There are four prominent peaks, with separa-
tion of approximately 4 yr, which leads us to suspect that a power spectrum
analysis of this time series may show a structure with a frequency scale of the
order 0.25 yr⁻¹.
   Figure 16 shows the power spectrum of this time series, computed by the
Rayleigh power procedure (Bretthorst, 1988; Mardia, 1972). We see that there
are many strong peaks within the band 0–1 yr⁻¹, with power 30 or more,
including one peak at 0.25 yr⁻¹ as expected, and there is one very strong peak
(with power 170) at 0.03 yr⁻¹.
Fig. 17. Error-bar display of power spectra formed from 1,000 Monte Carlo simulations of time
         series with 10% depth of modulation at frequency 1 yr⁻¹.

                                       Appendix B
   In order to help interpret the discrepancy between peaks in the power spectra
and recognizable frequencies, we carry out power spectrum analyses of many
simulations of time series comprised of random noise plus a stable sinusoidal
modulation. We first examine modulations with frequency 1 yr⁻¹. We adopt the form

                        x_n = randn + D sin(2πν_0 t_n),                   (B.1)

where randn denotes a normal random distribution with mean zero and standard
deviation unity, ν_0 = 1, and we consider a 10% depth of modulation, i.e., D = 0.1.
We create 1,000 simulations and form the Lomb-Scargle power spectra. The
mean and standard deviation of the resulting power spectra are shown in Figure
17. The peak is at 1 yr⁻¹, as expected, and we find that the half-height half-width
is 0.03 yr⁻¹. This is close to the salient peak in the power spectrum of the
time-distribution of trials, as shown in Figure 16. This corresponds to the
Nyquist frequency appropriate to a uniform time series of duration 1/(2 × 0.03)
yr, i.e., about 17 yr, which is comparable with the duration shown in Figure 15.
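The Appendix B simulation can be reproduced in outline. Assuming the model of Equation B.1 with ν_0 = 1 yr⁻¹ and D = 0.1, the sketch below averages Lomb-Scargle spectra over many simulated series; the series length, sampling scheme, and reduced simulation count are our own illustrative choices, not the authors' exact setup.

```python
import numpy as np

def ls_power(t, x, freqs):
    """Lomb-Scargle power on a frequency grid (as in Section 3)."""
    x = x - x.mean()
    S = np.empty(freqs.size)
    for i, nu in enumerate(freqs):
        tau = np.arctan2(np.sin(4 * np.pi * nu * t).sum(),
                         np.cos(4 * np.pi * nu * t).sum()) / (4 * np.pi * nu)
        arg = 2 * np.pi * nu * (t - tau)
        c, s = np.cos(arg), np.sin(arg)
        S[i] = 0.5 * ((x @ c) ** 2 / (c @ c) + (x @ s) ** 2 / (s @ s))
    return S

# Eq. B.1: x_n = randn + D sin(2 pi nu0 t_n), with nu0 = 1 yr^-1 and D = 0.1.
# Averaging spectra over many simulations reveals the mean peak and its
# half-height half-width, of order 1/(2T) for a series of duration T.
rng = np.random.default_rng(3)
T, N, D, nu0 = 17.0, 1500, 0.1, 1.0
freqs = np.linspace(0.8, 1.2, 81)            # grid around the injected peak
mean_S = np.zeros(freqs.size)
n_sim = 100                                  # the article used 1,000
for _ in range(n_sim):
    t = np.sort(rng.uniform(0, T, N))
    x = rng.normal(0, 1, N) + D * np.sin(2 * np.pi * nu0 * t)
    mean_S += ls_power(t, x, freqs)
mean_S /= n_sim
print(freqs[np.argmax(mean_S)])
```

Measuring the width of the averaged peak on this grid gives a direct check of the 1/(2T) scaling invoked in the text.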
   We have repeated this calculation for ν_0 = 24.74 yr⁻¹, and the result is shown

Fig. 18. Error-bar display of power spectra formed from 1,000 Monte Carlo simulations of time
         series with 10% depth of modulation at frequency 24.74 yr⁻¹.

in Figure 18. The shape of the power spectrum appears to be identical to that
shown in Figure 17.
                                     References
Allen, C. W. (1973). Astrophysical Quantities (p. 297). London: Athlone Press.
Bretthorst, G. L. (1988). Bayesian Spectrum Analysis and Parameter Estimation. In Berger, J., Fienberg,
    S., Gani, J., Krickeberg, K., & Singer, B. (Eds.), Lecture Notes in Statistics (Vol. 48). New York:
    Springer-Verlag.
Fraser-Smith, A. C. (1982). Is there an increase of geomagnetic activity preceding total lunar eclipses?
    J. Geophys. Res., 87, 895.
Lomb, N. (1976). Least-squares frequency analysis of unequally spaced data. Astrophys. and Space
    Science, 39, 447.
Mardia, K. V. (1972). Statistics of Directional Data. New York: Academic Press.
Radin, D. I., & Rebman, J. M. (1994). Lunar correlates of normal, abnormal, and anomalous human
    behavior. Subtle Energies and Energy Medicine, 5, 209.
Scargle, J. D. (1982). Studies in astronomical time series analysis. II. Statistical aspects of spectral
    analysis of unevenly spaced data. Astrophys. J., 263, 835.
Spottiswoode, S. J. P. (1997). Apparent association between effect size in free response anomalous
    cognition experiments and local sidereal time. Journal of Scientific Exploration, 11, 109.
Sturrock, P. A. (2004). Time-series analysis of a catalog of UFO events: Evidence of a local-sidereal-
    time modulation. Journal of Scientific Exploration, 18, 399.
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 67-84, 2007

  A Methodology for Studying Various Interpretations of the
     N,N-dimethyltryptamine-Induced Alternate Reality

                                Marko A. Rodriguez

             Center for Evolution, Complexity, and Cognition, Vrije Universiteit Brussel
               Computer Science Department, University of California at Santa Cruz
                                    e-mail: mrodrig2@vub.ac.be

      Abstract-N,N-dimethyltryptamine, or DMT, is an endogenous psychoactive
      chemical that has been shown through repeated human subject experimentation
      to provide the subject with a perception of an 'alternate reality'. When
      administered a sufficient DMT dose, subjects have reported the presence of
      intelligent beings that do not appear to be the projections of their subconscious
      in the Freudian sense. Furthermore, and of particular interest to this article,
      many subjects believe that the perceived alternate reality is persistent in that it
      exists irrespective of their subjective momentary perception. Past research into
      the DMT-induced alternate reality comes solely from subject testimonies and, to
      date, no analysis has been conducted to understand the objective aspects of
      these extraordinary subjective claims. This article provides a methodology for
      studying the nature of the DMT-induced alternate reality by means of various
      simple information theory experiments. These experiments can be used to test
      which of the presented interpretations of the DMT-induced alternate reality
      appear most plausible.
      Keywords: DMT; alternate reality; hallucinogen; alien encounter;
                parallel universe

                                             1. Introduction
N,N-dimethyltryptamine, or DMT, is a fast-acting, short-lived psychoactive
chemical that is part of the tryptamine family of psychedelics (Kaplan et al.,
1974; Shulgin, 1976). DMT was first synthesized in 1931 by Manske (Manske,
1931). Exogenous DMT can be consumed by smoking, snorting, ingesting, or
injecting (intramuscular or intravenous) it, but unlike related psychedelic
chemicals such as lysergic acid diethylamide (LSD) (Hofmann, 1983) and
psilocybin (Heim & Wasson, 1958), DMT is an endogenous substance
(Barker et al., 1981) suspected to be synthesized by the pineal gland
(Strassman, 1991).
   To date, the most extensive large-scale U.S. Food and Drug Administration
(FDA) and U.S. Drug Enforcement Administration (DEA) approved human
subjects research to be conducted using DMT took place at the University of New
Mexico in the early 1990s under the primary investigation of Rick Strassman
68                               M. A. Rodriguez

M.D. (Strassman, 2001). Strassman made use of 60 normal (i.e. non-psychiatric)
volunteer subjects and over 400 intravenously administered DMT doses during
the 5-year study. Of particular interest to this article are the personal testimonies
provided by the human subjects after each administered dose. Twenty percent of
Strassman's volunteers, usually at a 0.4 mg/kg intravenous dose, reported an
'alternate reality' in which seemingly intelligent beings existed. Furthermore,
similar testimonies regarding the perception of alternate alien realities have been
published by other DMT-oriented research projects (Sai-Halasz et al., 1958;
Szara, 1989).
Szara, 1989). It has been speculated that the popular accounts of alien abduction
may be caused by the unregulated release of endogenous DMT by the 'abductee'
(Strassman, 2001). Whether this alternate reality is a complete hallucination (i.e.
a purely subjective synthesis of the human mind) or an actual objective reality is
unknown. To date, the various interpretations of the DMT experience are left
solely to personal belief. For those that have not had direct experience with
the DMT-induced alternate reality, all that exists for formulating one's opinion is
the extensive pool of human subject testimonies published in research manu-
scripts, popular science books, and on the world wide web.
When intoxicated by DMT, the mind finds itself in a convincingly real, appar-
ently coexisting alien world. Not a world about our thoughts, our hopes, our fears;
rather a world about the tykes-their joys, their dreams, their poetry. (McKenna, 1992,
p. 262)

   The purpose of this article is to provide an experimental methodology for
objectively studying three interpretations of the DMT-induced alternate reality.
The three interpretations are 1) an inconsistent subjective hallucination, 2)
a consistent subjective reality, and finally, 3) an objective co-existing alternate
reality (Heelan, 1983; Meyer, 1992). Two information theory experiments are
presented that can be used in future human-based DMT research to test which
interpretation appears most plausible. The successful implementation of the
various experiments is not without some difficulty. Given the level of intoxication
of the human subjects, the unknown physics of the alternate reality, the seemingly
different culture of the alien beings, and the short period of time a subject has to
interact with the beings, the variables against a clear signal transmission,
interpretation, and return of results all reduce the probability of a successful
experiment. Therefore, this article will also address the requirements of the
information passed and the necessary training/experience of the chosen subjects.

                               2. Related Research
   Unfortunately, even with the numerous astoundingly consistent reports of
DMT-induced alien experiences across many DMT research agendas, no
research to date has attempted to study the nature of this 'reality'. While the
objective physiological effects and the subjective psychological effects of DMT
are well known, the universe 'rules' governing the pure hallucinatory state of
                        DMT-Induced Alternate Reality                            69

DMT inebriation remain unknown. The similarity of reports among DMT subjects
poses larger questions as to the source of these seemingly far-fetched tales. Is the
DMT-induced alternate reality purely a synthesis of the human mind, or does the
DMT reality have the characteristics of an objective place? Human subject
testimony is not sufficient for validating which of the many interpretations of this
hallucinatory state is most plausible. Anecdotal tales and subjective belief
should be circumvented when attempting to understand which interpretation of
the DMT-induced alternate reality is most plausible.
   In the mid-1960s, Timothy Leary made use of an engineered 'experiential
typewriter' (Leary, 1965) to allow inebriated subjects to report back particular
events during their DMT experience (Leary, 1966). While insightful as to the
timing of the various phases of the experience (e.g. onset, encounter,
comedown), the results of these experiments only provide a real-time subjective
assessment of the experience. This experimental methodology only provides
a more objective understanding of the course of events during DMT inebriation.
   Strassman, in his early-1990s study, made limited use of electroencephalog-
raphy (EEG) and functional magnetic resonance imaging (fMRI) technology,
though he was unable to correlate the response from the devices and the
testimonies of the subjects (Strassman, 2001). On the other hand, Jordi Serrano's
EEG analysis reports that human subjects, when given an ingested
monoamine oxidase inhibitor (MAOI) DMT concoction, are in an alert and
aroused state during inebriation (i.e. low delta and theta activity) (Serrano,
2003). These recordings are correlated with user experience in that most subjects
report excitement and anxiety; large DMT doses tend not to induce a calm
meditative state. However, lower doses of DMT do produce a calm and relaxing
effect (Jacob & Presti, 2005).
scanning during DMT inebriation has been reported in the literature. Even with
such recording and brain imaging devices, these technologies are unable to
expose the type of information this article proposes to capture, such as whether
the DMT hallucination is a perception of an objective reality.

                    3. The Phases of the DMT Experience
   The DMT experience (without the use of MAO inhibitors) is short-lived in
comparison to other psychedelics of the tryptamine family: from onset to
comedown, approximately 20 minutes. The time frame of the different
experiential phases of the DMT experience varies between individuals and
according to the method of DMT administration. What is presented is what can
be approximately expected from a relatively high dose of either smoked (i.e.
approximately 30-50 mg/three full inhalations) or intravenously (i.e. 0.4 mg/kg)
administered DMT (see Figure 1).
  The initial onset of the DMT experience is rapid and usually occurs before the
extremely intoxicated individual can take the third and final necessary inhaled
dose to move beyond the 'veil' (McKenna, 1992) or before the intravenous line

 [dose administered → enters veil (~1 minute) → exits veil (~5 minutes) → residual effects (to ~15 minutes)]
                   Fig. 1. Phases of the typical high-dose DMT experience.

can be flushed with a saline solution (Strassman et al., 1994). The typical DMT
experience begins with the subject reporting intense psychedelic, kaleidoscopic
visualizations with the presence of a loud humming, or buzzing, noise. With
a strong enough dose or a focused concentration on the part of the subject, the
subject may be able to bypass this colorful two-dimensional veil and enter into
the alien-inhabited alternate reality.

There is nothing that can prepare you for [the DMT experience]. There is a sound,
a bzzzz. It started off and got louder and louder and faster and faster. I was coming on and
coming on and then POW! There was a space station below me and to my right. There
were at least two presences, one on either side of me, guiding me to the platform.
(Strassman, 2001, p. 189)

  Within the alternate reality, beyond the classic psychedelic veil experience, is
where encounters with alien beings are reported. What is interesting about this
aspect of the DMT experience is the complexity of interaction between the
subject and the DMT beings,
They were pouring communication into me, but it was just so intense. [...] There was
something outlined in green, right in front of me and above me here. It was rotating and
doing things. She was showing me, it seemed like, how to use this thing. It resembled
a computer terminal. I believe she wanted me to try to communicate with her through that
device. (Strassman, 2001, p. 209)

the seeming scientific complexity of the beings,
There were insect-like creatures everywhere. They were in a hyper technological space.
(Strassman, 2001, p. 209)
   I felt like I was in an alien laboratory, in a hospital bed like this, but it was over there.
A sort of landing bay, or recovery area. There were beings. I was trying to get a handle on
what was going on. (Strassman, 2001, p. 196)
and the autonomy of the entities.
They had a space ready for me. They weren't as surprised as I was. [...] There was one
main creature, and he seemed to be behind it all, overseeing everything. (Strassman,
2001, p. 197)
   I was aware of many entities inside the space station: automatons, android-
like creatures [...] they were living beings, not robots. [...] They were doing
some kind of routine technical work and paid no attention to me. (Strassman, 2001,
p. 189)

   Though the DMT experience lasts approximately 20 minutes, only about 5 of
those minutes occur behind the veil and within the alternate reality. It is in this
alternate reality that most subjects report contact with the DMT entities, and it
is therefore during this 5-minute window that communication with the DMT
beings can occur. This is the point of the subjective experience where objective
analysis must take place in order to verify or falsify the three interpretations
presented next.
I told [the being] "I can't go with you now. See, [the doctors] want me back." It didn't seem
offended and, in fact, it 'followed' me back until I sensed it had reached its boundary. I felt
like it was saying good-bye. (Strassman, 2001, p. 213)

       4. Three Interpretations of the DMT-Induced Alternate Reality
   This section discusses three interpretations of the DMT-induced alternate
reality. These interpretations run the gamut from DMT being a psychoactive
molecule that provides a complete hallucination (i.e. a full sensory hallucination)
to DMT acting as a gateway to a co-existing alien reality. First, the DMT expe-
rience can be interpreted as a hallucination where the human subject synthesizes
the elaborate experience each time DMT is administered (i.e. an inconsistent
subjective reality). In this sense, the DMT-induced reality is not persistent in that
the alternate reality cannot exist without the momentary perception of the DMT
inebriant. Furthermore, it is inconsistent in that repeated doses of DMT do not
have correlated responses within the individual human subject (e.g. recurrent
experiential themes). In this case, the DMT-induced perception would be akin to
the states perceived while on other psychoactives such as LSD and psilocybin.
However, unlike LSD and psilocybin, the subject is experiencing a complete
hallucination (i.e. a purely synthetic world) and not simply a distortion of their
perceptual mechanisms (e.g. 'melting walls' and visual trails).
It was pretty weird, but I figured it was just the drug. (Strassman, 2001, p. 193)

   Second, the alternate reality could be a consistent subjective hallucination in
that the DMT experience is persistent only to the individual human subject. This
means that the subject may return to the DMT-induced alternate reality and
perceive recurrent themes (e.g. same beings, similar conversations), but that
perception is privy only to the isolated individual and does not occur across all
subjects. This interpretation could imply that there exists some subconscious
mechanism facilitating the persistent personal experience. That mechanism may
be solely biological (e.g. stimulating particular fundamental, or low-level,
areas of the brain associated with the representation of humanoid beings) or
psychological (e.g. stimulating high-level cortical regions associated with
one's life experiences); these are two potential sub-hypotheses to pursue upon
falsification of this interpretation.

This time, I quickly blasted through to the 'other side'. Suddenly beings
appeared. They were cloaked, like silhouettes. They were glad to see me. They
indicated that they had had contact with me as an individual before. (Strassman, 2001)
  I went directly into deep space. They knew I was coming back and they were ready for
me. (Strassman, 2001, p. 215)

   Finally, the DMT-induced alternate reality may be an objective reality that is
persistent irrespective of the perceiving individual and therefore is the same
reality being experienced by all DMT inebriants (i.e. a co-existing world, a true
alternate reality). Though it is the most extraordinary interpretation, it may,
as will be shown, be the easiest to validate.
You can choose to attend to this or not. It will continue to progress without you paying
attention. You return to where you left off, but to where things have gone since you left.
It's not a hallucination, but an observation. (Strassman, 2001, p. 195)

           5. Testing the Various Alternate Reality Interpretations
   This section describes the experiments necessary to test the various
interpretations of the DMT-induced alternate reality. The first interpretation, an
inconsistent subjective interpretation (i.e. a hallucination), requires individuals
who are familiar neither with DMT nor with what to expect from an administered
dose, and who thus can be coached to interpret their experience contrary to what
other, more experienced DMT users report. The second interpretation, the consistent
subjective interpretation, assumes that the DMT-induced alternate reality is
a non-random, non-mentally contained reality (i.e. the experience is generated
from features outside the individual's mind) and thus can be used for information
storage. Validation of this hypothesis comes by means of a computation within
the alternate reality. If the alternate reality can compute information, then the
DMT-induced alternate reality is, in fact, a co-existing world, though that
existing world may be a co-existing subjective world (i.e. a personal reality), not
an objective co-existing world (i.e. a collective reality). If it cannot, then two
sub-hypotheses emerge: does DMT act at the biological or the psychological level,
where biological effects occur due to specific neural sites being activated and
psychological effects occur because of the subject's life history? In other words,
what mechanisms of the brain is DMT acting on in order to create the fanciful
endogenous experience? It is noted that the distinction between the biological and
psychological levels is blurry. Finally, the third interpretation will
assume that the DMT-induced alternate reality is a purely objective reality that all
human subjects access during DMT inebriation. In this sense, it should be
possible to transmit information between human subjects within the alternate
reality. The distinction between the second and third test is to determine whether
information can be transmitted between humans in the alternate reality.
[Fig. 2. The various hypotheses of the DMT-induced alternate reality: inconsistent subjective, consistent subjective, and co-existing objective. Negative experimental results provide incentive to explore sub-hypotheses; positive experimental results will verify the existence of a co-existing reality.]

Transmission between humans guarantees an existing objective channel of
communication. Figure 2 outlines the various hypotheses for ease of reference.

          5.1. Inconsistent Subjective Interpretation Experiment

Hypothesis: The DMT-induced alternate reality is an inconsistent subjective
hallucination and therefore does not persist without the momentary perception
of the inebriated human subject.
   To validate or falsify this hypothesis, the experimenter should perform
a single-blind study using human subjects who have never heard of
DMT and its extraordinary effects on the human psyche. These subjects are told
that DMT inebriation will provide them solely a visual and auditory
hallucination. By simply defining the experience as that experienced in front
of the veil, the ill-informed subject will have no preconception as to what is
possible given the right conditions for entering into the DMT-induced alternate
reality. If subjects continually return only to describe the world in front of the
veil, then it can be concluded that the DMT experience can be influenced by
biasing the subject. In this sense, the hallucination is driven by preconceptions
and therefore may be understood solely as an inconsistent subjective hallu-
cination. On the other hand, if the inexperienced human subjects return with
testimonies of encounters with alien beings, then, as it has been repeatedly
shown, DMT is responsible for alien entity experiences (Strassman, 2001). It is
noted that this experiment has already been implemented with positive results:
Dr. Strassman's work used naive human subjects who did, in fact, return
from DMT inebriation with entity experiences (Strassman, 2001).
   The less provocative, and potentially more plausible, explanation for entity
experiences may be that DMT acts on regions of the brain responsible for the
representation of a humanoid being devoid of a personality 'inscription' from
the human's normal
waking associations (i.e. a nameless face). In this sense, DMT is working at the
'biological' level with limited reference to higher psychological associations.
   On the other hand, the entity experience may be generated exogenously to the
individual. For this hypothesis, it is important to see whether, while in this
alternate reality, the humanoid entities can function as autonomous beings. If the entities are
autonomous, and thus require no computation on the part of the inebriated
subject's cognitive faculties to function, then it holds that exogenous information
is being impinged on the human subject (i.e. the experience is rendered external to
the individual and is being perceived, not generated by the individual). A proof of
this nature would lead one to suspect that the DMT-induced alternate reality is in
fact a co-existing reality. Such proof is difficult to achieve due to the fact that
during REM sleep states, humanoid entities appear to be functioning auton-
omously within the individual's dreams. However, DMT provides humans with
the unique experience of a lucid dream-like world in which the human subject is
less removed from their normal waking realization. It is exactly this reaction that
will allow one to interact with the entities in a controlled, experimental manner.

            5.2. Consistent Subjective Interpretation Experiment

Hypothesis: The DMT-induced alternate reality is a subjective persistent
co-existing reality.
   In order to test whether the DMT-induced alternate reality is consistent to the
human subject, it is necessary to determine if the alternate reality can hold and
compute information independent of momentary observation. To hold in-
formation that can be objectively verified requires a computation on the part of
the DMT beings since placing information into the alternate reality and
retrieving the same piece of information can be explained simply by a functioning
human memory. Therefore, the inputted information must be transformed in
a non-random manner and the results of the transformation must be retrieved for
analysis. This requires that the human subject provide the DMT beings with
a piece of information and a simple function to compute on that information. To
ensure that this computation occurs external to the individual and within the
alternate reality, the computation must be complex enough such that no normal
human subject (except potentially a savant) could yield a solution alone. One
such computation is the prime factorization of a large non-prime number.
   For review, a prime number is any natural number greater than 1 that can be
divided only by itself and 1 without incurring a remainder in the process. The
fundamental theorem of
arithmetic states that every natural number greater than one is either a prime
number or can be represented as the unique product of a set of prime numbers
(Baker, 1984). For example, the number 26, which is not a prime itself, can be
expressed as the product of the two prime numbers, 2 and 13. Obviously, the prime
factorization of 26 can be determined easily by the human subject using only their
internal cognitive faculties. Therefore, to test for a co-existing reality, prior to
DMT inebriation, the individual must memorize a large non-prime number (e.g.
23,467). The non-prime chosen must require the use of a technologically
advanced space to determine the prime factors (23,467 = 31 × 757).

[Fig. 3. Studying computation in the alternate reality with prime factorization. Input data: (1) a large non-prime number, (2) the desire for its prime factorization. Output data: a single prime factor.]
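The kind of factorization the experiment relies on can be sketched in a few lines. The function below is an illustrative trial-division routine, not something from the source; its name and structure are assumptions for the sake of the example.

```python
# A minimal sketch of prime factorization by trial division. This is the
# sort of computation the DMT entities would be asked to perform; here it
# simply illustrates the mathematics the protocol rests on.

def prime_factors(n):
    """Return the prime factorization of n as a list of primes (with repetition)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains after trial division is prime
    return factors

# The example from the text: 23,467 = 31 x 757.
print(prime_factors(23467))  # [31, 757]
```

For numbers of the sizes discussed here (five to seven digits), this naive routine runs in well under a second, consistent with the article's claim about current algorithms.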
   In his classic psychology paper, George Miller demonstrates that most
individuals can only memorize items that are composed of approximately 7 ± 2
'chunks' of information (Miller, 1956). With the potential for dissociation during
inebriation, it might be necessary to provide a large number that not only
has a small number of prime factors (<3), but also can be recalled by the human
subject while in the DMT-induced alternate reality. For example, the number
11,111 is composed of the primes 41 and 271. Or 1,111,111 is the product of 239
and 4649. Similarly, 122,333 = 71 × 1723.
   Not all non-prime numbers are simply the product of two prime numbers (e.g.
12,345 = 3 × 5 × 823). The reason to choose a non-prime that is the product of
two primes is that the human subject need only remember one of the prime
numbers to have solved the function. For instance, if the chosen non-prime
number is 11,111, then the inebriated human subject need only retrieve one of
the prime factors from the alternate reality (e.g. 11,111/41 = 271).
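The single-factor shortcut described above can be sketched as follows; the function name is illustrative, not from the source. The experimenter, knowing the memorized semiprime, needs only one integer division to recover the cofactor.

```python
# Sketch: verifying a single factor retrieved from the alternate reality.
# If the memorized semiprime is n and the subject returns with p, one
# division suffices to recover the other prime.

def verify_retrieved_factor(n, p):
    """Check that p is a non-trivial factor of n; return the cofactor, else None."""
    if p <= 1 or p >= n or n % p != 0:
        return None  # not a valid non-trivial factor
    return n // p

# The example from the text: retrieving 41 from 11,111 yields 11,111 / 41 = 271.
print(verify_retrieved_factor(11111, 41))  # 271
```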
   The goal of the human subject is to communicate not only the large non-
prime number, but also the desire for the DMT entities to compute the prime
factorization of that number. Given numbers with 7 ± 2 digits, our current
algorithms can determine the prime factors in fractions of a second. Therefore, if
the DMT entities are able to compute this function and provide the solution to the
human subject within the five-minute window, then the DMT-induced alternate
reality can be said to contain intelligent entities or to be able to expand the
inebriated individual's mathematical faculties to levels that are not possible
during normal states of consciousness. This experimental computation is
represented in Figure 3.
   To validate a persistent reality (i.e. a stable co-existing reality), it is important
that the inebriated human subject not return with the prime factors. Instead, the
solution should be retrieved only when the human subject returns to the alternate
reality.

[Fig. 4. Studying persistence in the alternate reality with prime factorization: the large non-prime number is provided during a first inebriation, and a single prime factor is retrieved during a second. Input data: (1) a large non-prime number, (2) the desire for its prime factorization. Output data: a single prime factor.]

Therefore, this test for
persistence requires at least two sequential DMT administrations to the same
human subject. The first inebriation provides the DMT beings a large non-prime
number to factorize. The second inebriation requires the human subject to
retrieve the factor solutions. This experiment is represented in Figure 4.
   If the non-prime number is large enough, and therefore takes longer to
compute (according to our knowledge of prime factorization algorithms), then it
may be possible to say that the DMT-induced alternate reality was computing the
answer during the time the individual was not present behind the veil.
Therefore, not only would persistence be shown, but progression as well. There
are many problems with verifying progression. For one, unlike matter, which
evolves according to constant rules (known physical laws), information
evolution is dependent upon the acting algorithm, of which we are constantly
discovering more. Therefore, if we do not know the algorithm used by the DMT
beings to compute the prime factors, then it is difficult to know the nature of
time in the DMT-induced alternate reality.

            5.3. Co-Existent Reality Interpretation Experiment

Hypothesis: The DMT-induced alternate reality is an objective persistent
co-existing reality.
   To test for complete objectivity and, therefore, to determine whether the DMT-
induced alternate reality is a persistent co-existing alternate reality, a simple
information in/information out experiment using two human subjects can be
performed. One version of this experiment requires no manipulative computa-
tion on the part of the DMT beings except simply maintaining an exact replica of
the information provided (i.e. information storage). On the other hand, to
remove the potential interpretation that DMT may provide the human subject
with a form of extra-sensory perception (ESP) (Roll, 1989), it is best to have the
alien entities compute the prime factors of a large non-prime number.
[Fig. 5. Studying objective reality through information storage and retrieval. Human X provides a large non-prime number and the desire for its prime factorization; human Y retrieves a single prime factor.]

   The proposed experiment requires two human subjects, who will be called
human X and human Y. Human X will enter the alternate reality and provide the
DMT beings with a large non-prime number known only to him or herself. Once
human X has provided the DMT beings with the large non-prime number and the
desire for a prime factorization, human X returns from the alternate reality.
Next, human Y is inebriated and attempts to retrieve one of the prime factors of
the large non-prime number from the DMT entities. If human Y returns with
a valid prime, then the hypothesis is validated and it can be said that the DMT-
induced alternate reality is an objective persistent co-existing reality that every
human subject 'goes to' while inebriated on DMT (Figure 5).
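The validation step of this two-subject protocol, which belongs to the sober experimenter rather than to the alternate reality, can be sketched as follows. All names and the example session are illustrative assumptions, not details from the source.

```python
# Sketch of the experimenter's validation step for the two-subject test:
# human X's secret semiprime is logged, human Y reports a factor, and the
# experimenter checks the report. Nothing here models the alternate reality.

def is_prime(n):
    """Trial-division primality check, adequate for the small numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def validate_session(secret_semiprime, reported_factor):
    """True iff human Y's report is a non-trivial prime factor of the
    semiprime human X memorized, with a prime cofactor."""
    n, p = secret_semiprime, reported_factor
    return 1 < p < n and n % p == 0 and is_prime(p) and is_prime(n // p)

# Human X memorizes 122,333 (= 71 x 1723); either prime is a valid report.
print(validate_session(122333, 71))    # True
print(validate_session(122333, 1723))  # True
print(validate_session(122333, 123))   # False
```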
    Assuming that the DMT-induced alternate reality is home to a population of
alien beings, and given that there is no known information as to the size of that
population, the 'physical' size of the DMT world, or the beings' internal com-
munication infrastructure, it may be difficult to get a positive result from this
experiment. For instance, imagine the reverse scenario. Imagine if these alien
beings were using a psychoactive chemical in their world to appear in ours. These
beings may be appearing in very different coordinates of our physical space. This
may be as small as different areas of our planet, or as problematic as different
areas of our known universe. Therefore, if human X is to route information to
human Y by means of the alternate reality, then it is necessary that the DMT
beings have the appropriate communication infrastructure to ensure that the
message is passed from wherever human X appears to wherever human Y appears
in the coordinate space of the alternate reality. Given the seemingly techno-
logically advanced nature of these beings, this might not be a problem. However,
given that the rules governing the DMT-induced alternate reality are little known,
it is difficult to assume that our interpretation of our reality is a fit model within
their reality. Though this experiment provides a sound validation of an objective
reality, there exist many potential experimental noise sources as will be described
in the next section. Fortunately, a single positive result is sufficient to
validate the hypothesis.

[Fig. 6. Potential noise sources in the DMT-induced alternate reality: a modified general communication system linking the human subject and a DMT being through noise source A (intoxication), noise source B (the medium), and noise source C (culture).]

                           6. Sources of Experimental Noise
   This section will discuss some potential noise sources of the various
experiments. When the subject is injected with approximately a 0.4 mg/kg dose
of DMT, the subject, if able, will break through the DMT visual veil and enter
the alternate reality of the DMT beings. It is here that the subject will interact
with the DMT beings. There are many obstacles that will affect a clear positive
result of the experiments described above (see Figure 6).
   First, the individual will be extremely intoxicated by the DMT dose and
therefore may not be able to effectively transmit the proper experimental
information to the DMT beings (Figure 6, noise source A). Second, the
environment separating the inebriated subject and the beings may not be
conducive to signal propagation (Figure 6, noise source B). Finally, due to the
difference in culture between the human subject and the DMT beings, the beings
may not be able to interpret the signal appropriately (Figure 6, noise source C).
Figure 6 is a modification of Claude Shannon's diagram of a general
communication system (Shannon, 1948), where noise sources A and C are
introduced to account for human intoxication and alien cultural interpretation,
respectively. In addition, the experiments require bi-directional communication;
therefore, the reverse process will incur further potential for error. The
following subsections will elaborate on each potential noise source.
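The modified Shannon model can be sketched as a chain of independent noise stages: if each stage passes a message intact with some probability, the end-to-end success rate is their product, and a bi-directional exchange traverses each stage twice. The probabilities below are arbitrary placeholders for illustration, not estimates from the source.

```python
# Sketch: the modified communication model as a chain of noise stages,
# assuming independent stages and placeholder success probabilities.

def round_trip_success(p_a, p_b, p_c):
    """End-to-end probability that a message survives noise sources A
    (intoxication), B (medium), and C (cultural interpretation) on both
    the outbound and return legs of a bi-directional exchange."""
    one_way = p_a * p_b * p_c
    return one_way * one_way  # each stage is traversed twice

# Even fairly reliable stages compound quickly over a round trip.
print(round(round_trip_success(0.9, 0.9, 0.9), 6))  # 0.531441
```

The point of the sketch is only that three modest noise sources, applied in both directions, already halve the chance of a clean exchange.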
   It should be noted that the alien experience is not the only reaction that
humans have while under DMT inebriation. Within the various classes of
experiences, a distinction can be made between intra-personal and inter-personal
experiences. An intra-personal DMT experience is one in which the human
subject experiences events related solely to their own life. For example, many
subjects report various subconscious associations with life-long traumas
(Strassman, 2001). While such intra-personal experiences are purely subjective,
there does exist a class of inter-personal experience which points to the
hypothesis that DMT actually provides the inebriated individual with an
experience of an objective co-existing alternate reality. Therefore, it is important
to seek those human individuals who consistently have alien entity experiences
as subjects for the experiments outlined in this article.

                 6.1. Noise Source A and Human Intoxication
   The role of the human subject is to input information into, and retrieve
information from, the DMT-induced alternate reality. For the purposes of these experiments, it is
assumed that this information must be stored and computed by the cognitive
faculties and technology of the DMT beings. Therefore, this requires
communication with the beings. It is important that when in the DMT-induced
reality, the subject is able to remember their purpose for entering and recall the
information that must be transmitted for the experiment. Though this may seem
simple enough, the dissociative feeling and anxiety experienced on DMT may
make such a simple cognitive task difficult. Fortunately, even though DMT is an
extremely potent psychoactive, many subjects report that once through the veil
and in the alternate reality, there is a complete sense of sobriety.

When I'm in there, I'm not intoxicated. I'm lucid and sober. (Strassman, 2001,
p. 195)
   It was incredibly un-psychedelic. I was able to pay attention to detail. (Strassman,
2001, p. 197)
   This question about "being high": I don't know. I had my capacities. I was able to
observe quite clearly. I didn't feel stoned or intoxicated; it was just happening.
(Strassman, 2001, p. 207)

   Even with subjects reporting a clear sense of rational thinking, there is still
confusion and anxiety towards the undoubtedly bizarre experience. Therefore, it
is important that the subjects chosen for the experiments are comfortable with
the effects of DMT.

I communicated with them but there wasn't enough time. I was so strung out, excited,
agitated when I arrived there. They wanted to try and reduce my anxiety so we could
relate. (Strassman, 2001, p. 190)
   When they were on me, there was a little bit more confusion than fear. Kind of like,
"Hey! What's this?!" And then there they were. There was no time for me to say "Who
the hell are you guys? Let's see some ID!" (Strassman, 2001, p. 199)

   For consistent subjective and objective interpretation testing, it is
important, as with Strassman's chosen subjects, to include subjects who are
experienced with DMT or other psychoactives of the tryptamine family. A
completely inexperienced subject experiencing the dissolving of their known
world would incur a fear, confusion, and anxiety so great that the potential for
a successful signal transmission would be reduced. However, unlike Strassman,
the goal of these experiments is not to test the effects of DMT on a wide
demography. Instead, it is important to communicate quantifiable information to
the beings of the alternate reality. These experiments are thus independent of
the subjective experience, and only those individuals who can perform at
the required level should be sought. For the proposed methodology, a large subject
pool is unnecessary since a single successful result is sufficient for validation.
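Since a single clean success would validate the hypothesis, the number of trials worth running can be sketched with elementary probability. The per-trial success rate below is an arbitrary placeholder, not an estimate from the source.

```python
# Sketch: probability of at least one success in a series of independent
# trials, each with per-trial success probability p (placeholder value).

def prob_at_least_one_success(p, trials):
    """Probability of one or more successes across the given number of trials."""
    return 1 - (1 - p) ** trials

# With a (hypothetical) 5% per-trial success rate, 60 trials give roughly
# a 95% chance of at least one success.
print(round(prob_at_least_one_success(0.05, 60), 3))
```

This is why a modest subject pool suffices under the article's logic: the experiment needs one unambiguous success, not a statistically significant effect size.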
   Unlike other psychedelics, DMT has been shown to not induce a physical
tolerance in the human subject (Strassman, 1996; Strassman et al., 1996) and so
can be used repeatedly by the subject without a waning effect. In fact, repeated
dosing was necessary in some instances for the subjects to feel less anxious
towards the experience (Strassman, 2001). This comfort, through experiential
habituation, may allow the subject to transmit the experimental information
effectively and thus would reduce the negative effects of noise source A.

               6.2. Noise Source B and the Information Medium
   The second noise source is noise source B in Figure 6. In standard information
science, this is noise impinged upon a signal due to various external
disturbances. In the DMT alternate reality, the 'physics' are unknown. From
various testimonies, it appears that communication between human and alien
occurs through some form of telepathy or manipulation of complex objects.
There is definitely a visual communication medium, but many reports
demonstrate a lack of coherent auditory transmission.
There was an initial sense of panic. Then the most beautiful colors coalesced into beings.
There were lots of beings. They were talking to me, but they weren't making a sound.
(Strassman, 2001, p. 190)
   It also appears that the beings are able to read thoughts. When a subject
thought that she was dying, she repeated over and over to herself: "at least there
is God." The beings' reaction was:
"Even here? Even here?" was not spoken in words. It was an empathetic communication,
a telepathic communication. (Strassman, 2001, p. 207)
   Given that many subjects state that the DMT beings are able to interpret their
thoughts, non-auditory information transmission may come by means of the
human subject repeating the necessary information as a 'looped' thought process
(i.e. a constant internal repeating of the information to be transmitted).
There was the usual sound: pleasant, a roar, a sort of an internal hum. Then there were three
beings, three physical forms. There were rays coming out of their bodies and then back to
their bodies. They were reptilian and humanoid, trying to make me understand, not with
words, but with gestures. [...] Once I established what they were communicating, they didn't
just fade away. They stayed there for quite a while. Their presence was very solid.
(Strassman, 2001, p. 191)

                 6.3. Noise Source C and Entity Interpretation
I'm aware of them and they're aware of me. It's like they have an agenda. It's like walking
into a different neighborhood. You're really not quite sure what the culture is. It's got
such a distinct flavor, the reptilian being or beings that are present. (Strassman, 2001,
p. 189)

   The final noise source is alien interpretation. The DMT beings being
interacted with are decidedly odd in nature. Many describe them as humanoid
entities that communicate with gestures, telepathy, and complex mutating
objects. Do these entities understand the concepts that we use to describe our
world? Are they familiar with mathematics?
It started out typically as DMT but then I went past it, beyond where I've been on DMT.
There is that ringing sound as you're getting up there, and then I went to the language or
number thing. [...] The first number I saw was 2 and I looked around and there were
numbers all around. They were separate in their little boxes, and then the boxes would
melt and the numbers would all merge together to make long numbers. (Strassman, 2001,
p. 179)

  It has been repeatedly reported that the DMT beings appear to be tech-
nologically advanced and therefore may have a strong sense of the model we use
for describing our reality (i.e. our scientific paradigms). Some reports have stated
that the beings were excited that we had discovered the DMT technology and
were hopeful that we, as a species, would learn how to stay behind the veil for
longer periods of time. These reports allude to the intelligence of these beings and
therefore, the DMT entities may be able to manipulate information represented
within our constructs (e.g. base 10 numbers and prime factorization algorithms).
There was a human, as far as I could tell, standing at some type of console, taking
readings and manipulating things. He was busy at work, on the job. I observed some of
the results on that machine, maybe from my brain. (Strassman, 2001, p. 194)

                        7. Extending Time Behind the Veil
They told me there were many things they could share with us when we learn how to
make more extended contact. (Strassman, 2001, p. 215)

   This section offers only speculation as to the mechanisms necessary to
extend the subject's time within the DMT-induced alternate reality and,
therefore, their time with the DMT beings. 3,4-dihydro-7-methoxy-1-methyl-
β-carboline, or harmaline, is an MAO inhibitor that blocks the enzymes that
metabolize DMT in the human system. This substance is known to inhibit MAO
in the human gut, thus allowing the DMT to be ingested as the popularly known
ayahuasca brew (Grob et al., 1996). The constant administration of DMT
through an intramuscular or intravenous infusion pump after the subject has
received the initial DMT dose seems a plausible mechanism for extending
contact, given the lack of tolerance seen in repeated DMT administrations
(Strassman et al., 1996). However, due to the intensity of a prolonged
high-dose DMT experience, it may be desirable to administer an anxiolytic (i.e.
a sedative) to ease the subject into the extended DMT session. The anxiolytic
should be administered in an amount sufficient to calm, but not distort, the
cognitive faculties of the subject. Again, these ideas are only speculative.

   By extending the human subject's time behind the veil, subjects will have
more time to get acquainted with the DMT-induced alternate reality and may be
able to learn how to manipulate the environment so that more complex
experiments can take place. What these future experiments will look
like can only be determined once particular interpretations of the DMT-induced
alternate reality are validated or falsified.

                                      8. Conclusion
   DMT is an extraordinary psychoactive in that it is one of the few known
chemicals that can produce a full sensory hallucination in humans. Furthermore,
of those psychoactives that produce a complete hallucination, DMT seems to
provide the most vivid and extraordinary experience. It is this aspect of DMT
that has many individuals believing that the DMT experience is actually a
perception of an objective co-existing reality inhabited by alien beings. The many
subjective testimonies of a science-fiction-like world press the psychedelic
research community to study further what is actually happening during DMT
inebriation. DMT, unlike other psychoactives, either provides a fascinating
glimpse into the power of human imagination or a glimpse into an alternate alien
world.
They seemed pleased that we had discovered this technology [DMT]. I felt like a spiri-
tual seeker who had gotten too far off course and, instead of encountering a spirit
world, overshot my destination and ended up on another planet. (Strassman, 2001,
p. 214)

   If DMT provides only a complete subjective hallucination, then can the
experienced alternate reality drive novelty in ours? Like LSD and other
hallucinogens, can DMT (as simply a hallucinatory psychoactive) yield func-
tionality in the individual that is difficult to attain in normal waking con-
sciousness (Stafford & Golightly, 1967)? Or, at the other end of the spectrum, if
DMT provides humans with the ability to tunnel to an alternate reality, can novel
information be harnessed and brought back to our world?
DMT has shown me the reality that there is infinite variations on reality. There is the real
possibility of adjacent dimensions. It may not be so simple that there's alien planets with
their own societies. This is too proximal. It's not like some kind of drug. It's more like an
experience of a new technology than a drug. (Strassman, 2001, p. 195)

   To date, there is no generally accepted interpretation of these subjective
testimonies. Given the DEA Schedule I status of the DMT molecule, in-depth
research into the DMT-induced alternate reality is difficult and usually can only be
conducted from the standpoint of the medical research agenda. What has been
proposed in this article is devoid of a medical agenda and is not interested in the
human subjects (their spiritual enlightenment, personal interpretations, or
physical reactions). Instead, this proposed methodology is interested solely in
understanding the objective nature of the DMT-induced alternate reality. Given
a clearer picture of what the DMT-induced alternate reality is, it will be possible
to design more advanced experiments. Can this alternate reality be used to
compute complex functions and/or provide our scientific program with a new
tool for studying consciousness? Only through experimentation can such ques-
tions be answered.
DMT is not one of our irrational illusions. I believe that what we experience in the
presence of DMT is real news. It is a nearby dimension-frightening, transformative, and
beyond our powers to imagine, and yet to be explored in the usual way. We must send
fearless experts, whatever that may come to be, to explore and report back on what they
find. (McKenna, 1992, p. 259)

                                  Acknowledgment
   I would like to thank Matthias Detremmerie of the Economische
Hogeschool te Brussel (EHSAL) for his thorough review, which prompted much
clarification.

                                     References
Baker, A. (1984). A Concise Introduction to the Theory of Numbers. Cambridge University Press.
Barker, S. A., Monti, J. A., & Christian, S. T. (1981). N,N-dimethyltryptamine: An endogenous
    hallucinogen. International Review of Neurobiology, 22, 83-110.
Grob, C., McKenna, D., Callaway, J., Brito, G., Neves, E., Oberlaender, G., Saide, O., Labigalini, E.,
   Tacla, C., Miranda, C., Strassman, R., & Boone, K. (1996). Human psychopharmacology of
    Hoasca, a plant hallucinogen used in ritual context in Brazil. Journal of Nervous and Mental
    Disease, 184, 86-94.
Heelan, P. A. (1983). Space-Perception and the Philosophy of Science. University of California Press.
Heim, R., & Wasson, R. (1958). Les Champignons hallucinogènes du Mexique. Paris: Museum of
    Natural History. pp. 101-122.
Hofmann, A. (1983). LSD: My Problem Child. Los Angeles, CA: J.P. Tarcher Books.
Jacob, M. S., & Presti, D. E. (2005). Endogenous psychoactive tryptamines reconsidered: an
    anxiolytic role for dimethyltryptamine. Medical Hypotheses, 64, 930-937.
Kaplan, J., Mandel, L. R., Stillman, R., Walker, R. W., VandenHeuvel, W. J. A., Gillin, J. C., &
    Wyatt, R. J. (1974). Blood and urine levels of N,N-dimethyltryptamine following ad-
    ministration of psychoactive dosages to human subjects. Psychopharmacology, 38,
Leary, T. (1965). The experiential typewriter. Psychedelic Review, 7, 70-85.
Leary, T. (1966). Programmed communication during experiences with DMT. Psychedelic Review, 8,
Manske, R. (1931). A synthesis of the methyltryptamines and some derivatives. Canadian Journal of
    Research, 5, 592-600.
McKenna, T. (1992). Food of the Gods: The Search for the Original Tree of Knowledge. Bantam.
Meyer, P. (1992). Apparent communication with discarnate entities induced by dimethyltryptamine
    (DMT). Psychedelic Monographs and Essays, 6. http://www.serendipity.li/dmt/dmtart00.html.
    Accessed 13 December, 2006.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for
    processing information. The Psychological Review, 63, 81-97.
Roll, W. G. (1989). This World or That: An Examination of Parapsychological Findings Suggestive of
    the Survival of Human Personality after Death. Ph.D. thesis, Lund University, Lund, Sweden.
Sai-Halasz, A., Brunecker, G., & Szara, S. (1958). Dimethyltryptamin: Ein neues psychoticum.
    Psychiatria et Neurologia (Basel), 135, 285-301.
Serrano, J. R. (2003). Human Pharmacology of ayahuasca. Ph.D. thesis, Universitat Autonoma de
    Barcelona, Barcelona, Spain.
Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal,
    27, 379-423.
Shulgin, A. T. (1976). Profiles of psychedelic drugs: DMT. Journal of Psychedelic Drugs, 8, 167-
Stafford, P., & Golightly, B. (1967). LSD: The Problem-Solving Psychedelic. New York, NY: Award Books.
Strassman, R. J. (1991). The pineal gland: Current evidence for its role in consciousness. Psychedelic
    Monographs and Essays, 5, 167-205.
Strassman, R. J. (1996). Human psychopharmacology of N,N-dimethyltryptamine. Behavioural Brain
    Research, 73, 121-124.
Strassman, R. J. (2001). DMT the Spirit Molecule: A Doctor's Revolutionary Research into the
    Biology of Near-Death and Mystical Experiences. Rochester, VT: Park Street Press.
Strassman, R. J., Qualls, C. R., & Berga, L. M. (1996). Differential tolerance to biological and
    subjective effects of four closely spaced doses of N,N-dimethyltryptamine in humans. Biological
    Psychiatry, 39, 784-795.
Strassman, R. J., Qualls, C., Uhlenhuth, E., & Kellner, R. (1994). Dose-response study of N,N-
    dimethyltryptamine in humans II: Subjective effects and preliminary results of a new rating scale.
    Archives of General Psychiatry, 51, 98-108.
Szara, S. (1989). The social chemistry of discovery: The DMT story. Social Pharmacology, 3, 237-
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 85-87, 2007      0892-3310/07

                        Comment on "A Methodology for
                       Studying Various Interpretations of
                      the N,N-dimethyltryptamine-Induced
                               Alternate Reality"

Hallucinogenic, or psychedelic, drugs reliably induce in humans a characteristic
syndrome of altered consciousness in which nearly all mental faculties are
affected: cognition, perception, emotion, sense of self, and volition. While the
often florid nature of their sensory effects has caused us to focus, perhaps
inordinately, on their "hallucinogenic" properties, other aspects of the mind also
are profoundly modified. Classical hallucinogens include LSD, mescaline, and
psilocybin (1).
   DMT is a powerful short-acting classical hallucinogen that occurs in a multitude
of plants, and in all mammals, including man. The isolation and preliminary
characterization of the gene that transcribes the DMT-synthesizing enzyme have
shed new light on the regulation of this unique endogenous compound (2).
   DMT became a relatively minor drug of abuse soon after the discovery of its
psychoactivity by Hungarian psychiatrist Stephen Szara in the mid-1950s (3).
DMT also is the visionary ingredient in the Amazon brew ayahuasca, and as
contemporary use of this brew increases, so will Western exposure to DMT (4).
   Psychiatric research with DMT began in the 1950s and lasted until the early
1970s, at which time all clinical research with these drugs ceased because of
regulatory factors. DMT was the focus of the first new U. S. clinical research
with psychedelics in 20 years (5), coinciding with a resumption of European
studies using mescaline (6). Our studies with DMT characterized multiple
biological variables and developed a new rating scale to quantify psychological
effects. In addition, we paid very careful attention to volunteers' descriptions of
their DMT experiences (7,8).
   Nearly half of our volunteers described the experience of coming into contact
with autonomous sentient "beings" while under the effect of a high dose of
DMT. These beings seemed to inhabit a "parallel" ongoing reality (9). The con-
sistency and frequency of these reports by our volunteers are perplexing, and one
is hard-pressed to offer definitive explanations as to their bases. Three hypo-
theses for these experiences are suggested by Rodriguez in the preceding paper:
   1) They are solely products of the volunteer's mind; that is, they are
hallucinations.
   2) They occur in a non-consensually validated, but seemingly objective,
reality.
   3) They occur in a consensually validated, seemingly objective reality.
   Giving credence, or even consideration, to theories 2) and 3) may appear to
some as fundamentally suspect. That is, are we going even deeper into some
psychedelic delusional thinking process? Or, are we applying the scientific
method to areas into which no one has cared or dared to take it previously?
   We have taken the first step in resuming human studies with hallucinogens.
Given this opportunity, how are we to deal with the novelty of the full
psychedelic experience?
   There are several ways to profess to investigate thoroughly the "psychedelic"
experience without really doing so. It is important to be alert to these issues
when appraising results generated from contemporary human studies. One is to
keep the doses of drugs rather low. For example, German (10) and Swiss (11), as
well as U. S. (12), studies with psilocybin use doses of this drug that are one-half
to one-quarter of those we found necessary in our preliminary studies to elicit
a "psychedelic" level of intoxication. Another is to focus on non-subjective
effects, such as brain imaging, and other biological variables. Lastly, one may
limit descriptions of subjective effects to quantifiable rating scales and ignore
the substance of the reports obtained in the clinical interview.
   In order to merit considering our resumption of human studies as an
advancement, rather than merely a repetition, of previous research, we must
contribute something novel to the field of human consciousness through these
studies. We can do this by confronting directly some of the truly paradigm-
challenging findings that previous researchers could not adequately integrate
into their extant scientific world views.
   This process, however, may teeter dangerously on the razor's edge of
"respectable" vs "pseudo-" science. Nevertheless, we cannot avoid taking some
conceptual risks when attempting to explicate the seemingly inexplicable.
Perhaps this may mean we will need contributions from our non-clinical col-
leagues, such as computational cognitive scientist Rodriguez, who have greater
freedom to consider experiments and processes which stretch the framework and
language of clinical investigators.

                                                               RICK STRASSMAN, MD
                                          Clinical Associate Professor of Psychiatry
                                       University of New Mexico School of Medicine
                                                        rickstrassman@earthlink.net

 1. Strassman RJ. Hallucinogenic drugs in psychiatric research and treatment: Perspectives and
    prospects. J Nerv Ment Dis 1995;183:
 2. Thompson MA, Moon E, Kim U-J, Xu J, Siciliano MJ, Weinshilboum RM. Human
    indolethylamine N-methyltransferase: cDNA cloning and expression, gene cloning, and
    chromosomal localization. Genomics 1999;61:285-297.
 3. Sai-Halasz A, Brunecker G, Szara SI. Dimethyltryptamin: ein neues Psychoticum. Psychiat
    Neurol, Basel 1958; 135:285-301.
 4. Grob CS, McKenna DJ, Callaway JC, et al. Human psychopharmacology of hoasca, a plant
    hallucinogen used in ritual context in Brazil. J Nerv Ment Dis 1996;184:86-94.
 5. Strassman RJ. Human hallucinogenic drug research in the United States: a present-day case history
    and review of the process. J Psychoactive Drugs 1991;23:29-38.
 6. Oepen G, Fuengeld M, Harrington A, Hermle L, Botsch H. Right hemisphere involvement in
    mescaline-induced psychosis. Psychiatry Res 1989;29:335-336.
 7. Strassman RJ, Qualls CR. Dose-response study of N,N-dimethyltryptamine in humans. I:
    Neuroendocrine, autonomic, and cardiovascular effects. Arch Gen Psychiatry 1994;51:85-97.
 8. Strassman RJ, Qualls CR, Uhlenhuth EH, Kellner R. Dose-response study of N,N-
    dimethyltryptamine in humans. II: Subjective effects and preliminary results of a new rating
    scale. Arch Gen Psychiatry 1994;51:98-108.
 9. Strassman R. DMT: The Spirit Molecule. Rochester,VT: Park Street Press, 2001.
10. Gouzoulis-Mayfrank E, Thelen B, Habermeyer E, et al. Psychopathological, neuroendo-
    crine and autonomic effects of 3,4-methylenedioxyethylamphetamine (MDE), psilocybin and
    d-methamphetamine in healthy volunteers. Psychopharmacology 1999; 142:41-50.
11. Vollenweider FX, Leenders KL, Scharfetter C, Maguire P, Stadelmann O, Angst J. Positron
    emission tomography and fluorodeoxyglucose studies of metabolic hyperfrontality and
    psychopathology in the psilocybin model of psychosis. Neuropsychopharmacology 1997;6:357-
12. Griffiths RR, Richards WA, McCann U, Jesse R. Psilocybin can occasion mystical-type
    experiences having substantial and sustained personal meaning and spiritual significance.
    Psychopharmacology 2006; 187:268-283.
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 88-88, 2007

                         Comment on "A Methodology for
                        Studying Various Interpretations of
                       the N,N-dimethyltryptamine-Induced
                                Alternate Reality"
This is a very clear, logically sound, lucid, clever treatment of an interesting
subject, as would be expected from a computer scientist. This paper is
informative and useful in that it may spur specific experimentation and certainly
more thinking about rigorous ways to test hypotheses about unusual mental
states. I recommend briefly considering two additional issues:
   1) has anyone tried putting two people (perhaps, people with a significant
      emotional, mental, or genetic connection: twins, a husband/wife pair, etc.)
      under DMT treatment at the same time? Some of the models of this
      phenomenon may predict that the people could report being in the same
      space, or perhaps see each other there, be able to communicate, etc.
   2) in the introduction, the motivation for someone actually doing these
      experiments may be increased if the author could attempt (or at least
      suggest as a first step) an objective analysis of the descriptions of the DMT
      experience: how much concordance is there between different subjects'
      descriptions of what they find? If there is considerable similarity, it would
      be some evidence for the more "realistic" models. If there is none,
      however, it is still possible that people visit different places, etc. The fact
      that 20% of the subjects in a described study reported an alternate reality
      with beings in it is not the same thing; an analysis of the more specific
      descriptions of what people experienced would bear on whether the
      "realistic" models are plausible enough to warrant testing.
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 89-98, 2007

                             An Experimental Test of
                        Instrumental Transcommunication

                                     Imants Barušs
                         King's University College, Department of Psychology
                       266 Epworth Avenue, London, Ontario N6A 2M3, Canada
                                   e-mail: baruss@uwo.ca

      Abstract-As a result of a previous study in which no electronic voice
      phenomena were found, the author introduced two new elements in an
      experiment seeking to produce instrumental transcommunication: the creation
      of text using random text generators and the presence of a medium. There were
      26 experimental sessions carried out from April 28, 2003 to August 30, 2003 in
      the Psychology Laboratory at King's University College. The random text
      generators were engaged a total of 715 times, producing 23,281 discrete units of
      textual data. Only a yes/no generator produced anomalous results. Of the 49
      times the yes/no generator was used, 11 were in response to questions
      for which the answers could be verified. Of those 11 responses, 9 were correct
      with a probability of occurrence by chance of .042. Such a result could be due
      to chance, anomalous human-machine interaction between the participants and
      the computer, or some other influences such as those arising from possibly
      existent unseen dimensions of reality. The use of text generators and the pres-
      ence of a medium in instrumental transcommunication (ITC) research are
      discussed, including the potential provision of information by the medium
      regarding strategies that could facilitate ITC.
      Keywords: instrumental transcommunication-electronic voice phenomenon-
                survival hypothesis-mediumship-random event generator-
                anomalous human-machine interaction-psychokinesis-PK-
                altered states of consciousness

I carried out an experiment in electronic voice phenomenon (EVP) from
September 15, 1997 to January 22, 1998. The study consisted of having research
assistants try to interact with unseen beings in the presence of two radios tuned
between stations while all of the audio that was present was recorded. Sub-
sequently the research assistants would listen to the recordings to identify any
anomalous sounds. There were 81 sessions with a total recording time of ap-
proximately 60 hours. No anomalous sounds were heard (Barušs, 2001).
  Upon completion of this study, I realized that seeking communication with
beings in other dimensions through the generation of EVP was unlikely to yield
convincing results simply because acoustic sensory data are too perceptually
malleable (Skinner, 1936; Warren, 1968). Thus, for a second study, I decided to
introduce computerized pseudorandom text and to see whether anomalous
meaningful phrases would occur in the printed text. Such use of a computer
would make this a study of instrumental transcommunication (ITC).
   A second decision that I made was to have a medium present during the ITC
sessions. There is no consensus as to whether or not someone with such abilities
would enhance the production of ITC (cf. Fontana, 2005). Actually, if anyone
could be helpful, it seems to me that it would be a poltergeist, as in the case of
William O'Neil (Fuller, 1985; Macy, 1996). However, if the medium were able
to contact beings in the same realm as those trying to communicate through ITC,
then we might be able to obtain advice about what to do in order to make
the process more successful, as was done, for example, in the Scole experiment
(Keen et al., 1999).

                                        Method
   I carried out a study of ITC consisting of 26 sessions held from April 28, 2003
to August 30, 2003 in the Psychology Laboratory at King's University College.
For the purposes of the study I wrote three computer programs in the word
processor WordPerfect using its random number generator to create random text.
One of those programs randomly chose a letter of the English alphabet, a digit
between 0 and 9, or a space, and displayed it on the screen, and then repeated that
process a specified number of times. In this way, character strings of 10 to 500
characters could be created. A second program chose a set number of words,
ranging from 1 to 500, from a pool, and printed them as a block. An initial pool
containing 144 words chosen in an unsystematic way was supplemented in the
course of the experiment by words that were thought to be useful so that the pool
contained 176 words at the termination of the experiment. Examples of these
words can be seen in the results section of this paper. A third program randomly
chose just the word "yes" or "no" and printed it on the screen. Each of these pro-
grams could be triggered by hitting a single key on a computer keyboard.
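The three programs can be sketched in Python. This is an illustrative reconstruction only: the originals were written in WordPerfect using its built-in random number generator, and the function names and example word pool below are the author's of this sketch, not the study's.

```python
import random
import string

# Characters available to the first program: letters, digits, and a space.
ALPHABET = string.ascii_lowercase + string.digits + " "

def random_characters(n):
    """First program: a string of n random letters, digits, or spaces."""
    return "".join(random.choice(ALPHABET) for _ in range(n))

def random_words(pool, n):
    """Second program: n words drawn independently from a word pool."""
    return " ".join(random.choice(pool) for _ in range(n))

def yes_no():
    """Third program: a single randomly chosen 'yes' or 'no'."""
    return random.choice(["yes", "no"])

# Illustrative pool; the study's pool grew from 144 to 176 words.
pool = ["dimension", "light", "continue", "equipment", "people", "feel"]
print(random_characters(20))
print(random_words(pool, 7))
print(yes_no())
```

Each call corresponds to one "engagement" of a generator; binding each function to a single keystroke reproduces the triggering arrangement described above.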
   The random number generator in WordPerfect is almost certainly a pseudo-
random number generator that creates numbers in a deterministic manner. There
is no documentation concerning this random number generator and technical
support personnel at Corel, the company that owns WordPerfect, told me that
information about their random number generator is proprietary. It is likely
a linear congruential random number generator that starts with a seed number,
such as a number compiled from the date and time, and then puts it through an
algorithm that uses remainders from division as new seeds. For practical pur-
poses, these numbers can be regarded as random. And, on the face of it, the
output from these programs certainly appeared to be random. These would be
the primary means through which ITC would be expected to occur. After the
termination of the experiment, a research assistant was hired to scrutinize the
output in order to look for anomalies.
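A linear congruential generator of the kind described can be sketched as follows. The multiplier, increment, and modulus here are the classic glibc `rand()` values, used purely for illustration; WordPerfect's actual parameters are, as noted, proprietary and unknown.

```python
class LCG:
    """Minimal linear congruential generator: each output is the remainder
    of a division, and that remainder becomes the next seed."""

    def __init__(self, seed):
        self.state = seed  # e.g., a number compiled from the date and time

    def next_int(self):
        # state = (a * state + c) mod m: "remainders from division as new seeds"
        self.state = (1103515245 * self.state + 12345) % 2**31
        return self.state

    def next_float(self):
        """A pseudorandom number in [0, 1), as an application might expose it."""
        return self.next_int() / 2**31

rng = LCG(seed=20030428)
samples = [rng.next_float() for _ in range(5)]
```

Because the sequence is fully determined by the seed, two generators started from the same seed produce identical output, which is precisely why such numbers are only "random" for practical purposes.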
   A medium with a reputation for obtaining correct anomalous information
about the deceased was hired to participate in the study. She was given a battery
of psychological tests to take before the start of the first session and again after
the termination of the final session. In the end, I decided not to score these in
order to protect the privacy of the medium. She also filled out Ronald Pekala's
Phenomenology of Consciousness Inventory (PCI; Pekala, 1991) at the begin-
ning and again at the end of each session, except for the first session, which was
considered a practice session, when she filled it out partway through. The PCI
was scored and the results are reported below.
   Both the medium and I were present for all sessions. These were held
in Room PL2 of the Psychology Laboratory at King's University College. I sat
at a Dell PC computer on which I had loaded the random text generators and
on which I kept a "worksheet" for each session. The random text was generated
directly onto the worksheets. In addition, I kept notes on the worksheets of
events that transpired during the sessions, including the substance of the con-
versations between the medium and me.
   In addition to generating randomized text, a Sony Cassette-Corder TCM-
500DV was used for recording EVP onto audio cassettes. For the practice
session and first 14 formal sessions, the recorder was placed either in Room PL5
of the Psychology Laboratory or in a fire escape stairwell leading out of the
Laboratory. In either case, no noise sources were used. For sessions 15 to 25, the
cassette recorder was set to record in PL2, the room in which the experimenter
and medium were located. This time Stefan Bion's EVPmaker running on an
IBM NetVista A40i computer was used as an explicit noise source for brief
intervals during those sessions. EVPmaker works by taking a sound file, chop-
ping it into bits, and then "randomly" reassembling them and playing them back
(Bion, 2006). During session 17, I recorded a politically correct version of Alice
Bailey's "Great Invocation" (Bailey, 1934/1951) as a WAV file which was
subsequently, as needed, chopped into (normally) 70-millisecond bits and
"randomly" reassembled, usually as a 7-second sound file. A question would
often be typed onto the worksheet and then EVPmaker would be turned on
approximately at the same time as a random text generator. Beyond some gen-
eral constraints discussed below, there was no planning involved in the questions
that were asked; they just seemed like reasonable questions at the time. The
medium listened to all of the recordings made with EVPmaker through Sony
MDR-V700 headphones. Subsequently, a research assistant independently lis-
tened to all of the cassette tapes, using the same headphones, to identify any
anomalous sounds.
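EVPmaker's chop-and-reassemble step, as described above, can be sketched as follows. This is a simplified stand-in that operates on a bare list of audio samples rather than a sound file; the 70 ms segment length matches the setting reported here, and the function name is the sketch's, not EVPmaker's.

```python
import random

def scramble(samples, sample_rate, segment_ms=70):
    """Chop a mono sample sequence into segment_ms pieces and reassemble
    them in random order, in the manner described for EVPmaker."""
    seg_len = max(1, sample_rate * segment_ms // 1000)
    segments = [samples[i:i + seg_len] for i in range(0, len(samples), seg_len)]
    random.shuffle(segments)  # the "random" reassembly
    return [s for seg in segments for s in seg]

# A 7-second playback would come from scrambling ~100 segments of 70 ms each.
source = list(range(44100 * 7))  # stand-in for 7 s of audio at 44.1 kHz
output = scramble(source, 44100)
```

Note that the output is a permutation of the input: every sample survives, only the ordering of the 70 ms chunks changes, which is why intelligible-sounding fragments in the result must come from perceptual reassembly by the listener.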

                                        Results
   There was one practice session and 25 formal sessions, each session lasting
an average of 1 hour and 44 minutes (σ = 28 minutes) for a total time of about
45 hours. The random string generator was engaged 341 times for a total of
19,500 characters. The word generator was engaged 325 times for a total of
3,732 words. The yes/no word generator was used 49 times with "yes" occurring
28 times and "no" coming up 21 times. There were about 36 hours and 19
minutes of cassette recording including 34 minutes and 46 seconds of recordings
from 235 instances of using EVPmaker.

Psychological Measures
   The medium experienced changes to her consciousness from the beginning to
the end of any given session as measured by the PCI with overall F(12,37) =
3.19, p = 0.003. These changes occurred along four dimensions: amount and
vividness of imagery (μ = 6.08, σ = 4.87 to μ = 12.08, σ = 5.54), inwardly
directed attention (μ = 14.96, σ = 5.51 to μ = 19.04, σ = 4.51), awareness of self
(μ = 14.56, σ = 2.69 to μ = 12.88, σ = 2.39), and volitional control (μ = 11.60,
σ = 4.45 to μ = 5.60, σ = 3.64). All of these changes are consistent with increased
absorption in inner experience. These scores are above the sitting-quietly-with-
eyes-open norms as determined by analysis of variance against the constant
values of the norms. For example, even though volitional control on the part of
the medium decreased from the beginning to the end of sessions, her average
score (μ = 8.60, σ = 5.04) was still statistically significantly higher than the
norm (μ = 3.95, σ = 1.36; t = 6.53, df = 49; p < 0.0005).
   The medium also experienced changes to her consciousness within the exper-
imental context from the beginning to the end of the sequence of sessions as
judged by scores on the PCI completed at the ends of the sessions. In particular,
there was a decrease in alteration of consciousness as evidenced by a correlation
of r = -.67 (df = 23; p < 0.0005) for altered states, and an increase in some
aspects of ordinary cognitive functioning as evidenced by a correlation of r = .41
(df = 23; p = 0.044) for memory.

Random Text Generators
   The random character generator did not produce any identifiable anomalous
text. For example, when, according to the medium, a deceased relative tried to
influence the random character generator, the result was: "ydqazns kgue oi
fqbkiiqxv dyeltoccgfvgattvzultmt . . ." In other words, there was no reason to
believe that there was any departure from random behaviour.
   The strings of words produced by the random word generator were open to
interpretation. For example, in session 21, in response to our question "What
would you have us do to make this work better?" we received the answer "we ITC
dimension fortunate when irreparable continue." We wondered if the word
"continue" at the end meant that the answer was supposed to continue and so, upon
engaging the random word generator again, got the phrase "feel acquire light
figure logical people continue." Again, "continue" suggested continuation of the
use of the word generator, which resulted in "equipment are underlie add for
coming giant." Putting all three strings together as a "sentence" gave us: "We ITC
dimension fortunate when irreparable feel acquire light figure logical people
equipment are underlie add for coming giant." The problem is that such
a "sentence" does not have a single obvious meaning, nor is there any evidence that
such a "sentence" is any less random than the apparently random character strings.

                                        TABLE 1
                            Questions with Verifiable Responses

          Question                                                          Correct    ITC
Session   number                          Question                          answer    answer

                     Are we outside the building playing in the sun?         no
                     Is the new building nearly finished?                    yes
                     Is [the medium] in the room here?                       yes
                     Is Imants in the room here?                             yes
                     Is two and two equal to four?                           yes
                     Do we live in London?                                   yes
                     Do we live in London, Ontario?                          yes
                     Does [the medium] have five kids?                       no
                     Are they all boys?                                      yes
                     Will a provincial election be called next Wednesday?    no
                     Will the provincial Tories win the next election?       no
   Of the 49 times that the yes/no generator was used, 30 were in the last session.
Using exact binomial probability calculations, the probability of getting a yes, as
we did, at least 28 times by chance is about .196. Thirty-eight of the yes/no
questions did not lend themselves to verifiable answers, such as the questions
"Are any of the sounds on the cassette tape the result of ITC?" and "Did you
guys have fun doing this ITC stuff with us?" The answers to those questions,
incidentally, were "yes" and "no" respectively. Eleven of the times the yes/no
generator was used were in response to questions that were reasonably
unambiguous and whose answers could be verified. All but one of those
questions were asked during the twenty-fifth and final session for reasons given
in the discussion section. All of the questions with verifiable responses are listed
in Table 1.
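The tail probability quoted above can be checked with an exact calculation; a minimal sketch in Python, using only the counts reported in the text (49 trials, at least 28 yeses):

```python
from math import comb

# Exact P(X >= 28) for X ~ Binomial(n = 49, p = 1/2): the chance of
# at least 28 "yes" responses in 49 uses of the yes/no generator.
n, threshold = 49, 28
p_tail = sum(comb(n, k) for k in range(threshold, n + 1)) / 2 ** n
print(round(p_tail, 3))  # about .196, as reported
```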
   With the answer to Question 19 in session 25, the medium and I realized that
the "London" to which we were referring could be ambiguous given that there
are more famous Londons around. Hence the reason for asking Question 20. In
retrospect, in the same vein, the expression "kids" could be considered to be
ambiguous, particularly given that the medium has three children and two large
dogs. However, we considered correct answers to be answers to our intended
questions. It is also noteworthy that, at the time of this session, August 30,
2003, it was widely expected that a provincial election would be called four days
hence on Wednesday, September 3, 2003. An election was actually called on
Tuesday, September 2, 2003. The Tories were the ruling party at the time of the
session but lost the election that was held on October 2, 2003. The probability of
getting at least 9 of 11 correct answers is about .033 based on exact calculations
of the relevant binomial probability distributions.
  It is possible, even though it was not a statistically rare event, that the greater
than expected occurrence of "yes" responses of the yes/no generator was due to
biassed behaviour on the part of the pseudorandom number generator on the
computer. Assuming that the random number generator was, in fact, biassed,
giving an expected probability for a "yes" response of 28/49, the probability of
getting at least 9 of the 11 answers correct is still only about .042. This value is
obtained by considering all six possible combinations of yes and no responses to
give at least 9 of 11 correct overall, partitioning the yes and no responses, and
applying the appropriate bias when calculating the probabilities of each partition
using the binomial probability distribution. The net result is that even when the
possibility of a biassed random number generator is taken into account, the
occurrence of 9 of 11 correct answers is still a statistically rare event.
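The partitioned calculation described here can be reproduced as follows; a sketch in Python, taking from Table 1 that 7 of the 11 verifiable questions had "yes" as the correct answer and 4 had "no":

```python
from math import comb

p_yes = 28 / 49          # assumed bias of the generator toward "yes"
n_yes_q, n_no_q = 7, 4   # questions whose correct answer is yes / no

def binom_pmf(n, k, p):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Sum over every partition (a correct yes-questions, b correct
# no-questions) with a + b >= 9; a "no" question is answered correctly
# with probability 1 - p_yes. Six (a, b) pairs satisfy the condition.
total = sum(
    binom_pmf(n_yes_q, a, p_yes) * binom_pmf(n_no_q, b, 1 - p_yes)
    for a in range(n_yes_q + 1)
    for b in range(n_no_q + 1)
    if a + b >= 9
)
print(round(total, 3))  # about .042, as reported
```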

Audio Recordings
   The results of analysing the audio recordings were inconclusive at best, as
they had been in the previous study. There was almost no agreement between the
medium and the research assistant as to what was heard on the tapes, and the
instances in which there was agreement could be explained by the probable
action of common psychophysiological mechanisms used when structuring
ambiguous acoustic sensory impressions. Without EVPmaker there were scratch-
ing sounds, clicks, distant noises, and so on. With EVPmaker words and short
phrases could be heard, such as "six," "optical," "subject matter," and "it's
possible," but usually only by one listener.
   Perhaps the most notable EVPs occurred in session 17. I will give two
examples. The first example occurred 102 minutes into the session. In response
to the question "What do we need to know that we don't understand about this?"
the medium immediately said that she heard the word "opportunity" produced
by EVPmaker. The simultaneously activated word generator had produced the
phrase "on sharp opportunity are was yes name" so that the word "opportunity"
was both heard by the medium and produced by the text generator as one of
seven words. The research assistant spontaneously heard the words "business"
and "Austin" for the same passage but not "opportunity" until she went back to
listen for it. I could not hear "opportunity" but could imagine it at one point in
the output after having listened for it numerous times. The second example
occurred 12 minutes later at the end of the session. I typed into the worksheet "If
you have any last words, this is your chance," and the medium turned on
EVPmaker while I activated the word generator. Upon completion of the
acoustic sound generation, I immediately said that I had heard the phrase "This
isn't leaving." Subsequently, the medium was able to hear the same phrase as
was the research assistant upon listening for it. This time there was no cor-
respondence with the pseudorandom text output which read "near on hot angel
alive woman to."


Randomly Generated Text
   Two changes were made in this study from the previous one: the use of
randomly generated text and the introduction of a medium. With regard to the
first of these, the use of randomly generated text removed some of the ambiguity
associated with acoustic data. The utilization of the character generator removed
much of the ambiguity about what was actually present, but there was no
indication of anomalous effects in the character strings. The word generator
provided murkier data given that words are meaningful and strings of words are
open to numerous interpretations. Hence, again, there was nothing clearly
anomalous. The yes/no generator is certainly discrete, although questions need
to be phrased in an unambiguous manner if their validity is to be assessed. In the
case of the yes/no generator, it is possible that there is evidence for the presence
of anomalous effects.
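For concreteness, the three kinds of pseudorandom output discussed here (character strings, word strings, and yes/no responses) might be sketched as follows; the vocabulary is an illustrative stand-in, not the actual generator used in the study:

```python
import random
import string

rng = random.Random()  # a fixed computer algorithm, i.e. pseudorandom

# Illustrative word list only; the study's actual vocabulary is not given here.
VOCABULARY = ["dimension", "fortunate", "continue", "light", "figure",
              "logical", "people", "equipment", "giant", "angel"]

def character_string(length=20):
    """Unambiguous but meaningless: a run of random letters."""
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

def word_string(n_words=7):
    """Meaningful words, hence open to many interpretations."""
    return " ".join(rng.choice(VOCABULARY) for _ in range(n_words))

def yes_no():
    """Discrete output whose correctness can be scored against a question."""
    return rng.choice(["yes", "no"])
```

Only the last of these yields output that can be scored unambiguously against questions with known answers, which is why the statistical analysis was confined to the yes/no generator.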
   However, now we encounter another problem, namely, that those of us who
were involved in the study could have produced whatever unusual effects were
present through anomalous human-machine interactions such as those that have
been demonstrated in other contexts (Jahn et al., 1997; Radin, 2006). For ex-
ample, the medium and I both knew that we were indoors, rather than outdoors,
so we could have influenced the pseudorandom number generator to give a "no"
response to a question about whether we were outdoors. This complication
applies also to questions to which we did not have answers at the time, such as
the questions about the Ontario elections. The medium or I could have
precognized the timing of the election announcement and the outcome of
the election. Such possibilities have been discussed in detail, for example, by
Stephen Braude (2003), in the context of the survival hypothesis.
   The use of pseudorandom text rather than acoustic recordings in ITC
experiments makes it easier to determine whether or not anomalous effects have
occurred and I would encourage such use in future research. Whether such pro-
cesses are more difficult than others to manipulate by beings from other dimen-
sions is unknown, if, indeed, there are any beings in other dimensions to do such
manipulation. In the event that anomalous output does occur, the problem then
becomes one of determining the source of the influence.
   The significance of the degree to which the devices used in ITC behave in
a random manner is not known. There may be none. However, if it were to turn
out that the living are less able to influence pseudorandom number generators
based on fixed computer algorithms than "truly random" number generators
based on quantum processes, then it could be argued that that difference lends
weight to the survival hypothesis in the event of statistically significant results
with pseudorandom number generators such as occurred in this study. However,
given that it is not even known if there is any difference in the ability of the
living to influence pseudorandom versus random devices (Jahn & Dunne, 2005),
I see no point at this time in trying to advance any arguments on the basis of the
randomness of the devices used.

The Presence of a Medium
   It is not clear to what extent the presence of a medium made a difference to
the results of the experiment. However, one of the reasons for the participation
of a medium was to have her try to communicate with beings in dimensions of
reality from which ITC originates (if there are such dimensions or beings) in
order to facilitate ITC. This led to a number of different ideas about the pro-
duction of ITC. The problem was, of course, that there was no way of evaluating
the validity of these ideas except for trying out those with practical implications.
Let me mention some of the ideas that we had and the strategies that we tried.
   In the first formal session, the medium had the impression that it is an arduous
task for beings in other dimensions to try to affect electronic equipment. Their
experience of trying to do so is similar to our experience of going into a bad
neighbourhood where there is the threat of personal injury. They get shaken and
rattled. Therefore few are willing to take the risks. Those who could safely navi-
gate these neighbourhoods are too far removed from them. The medium had the
impression that the beings with whom she could communicate could do so
because they could stay where they were without the need for approaching the
material aspects of existence as they would need to do in order to affect material equipment.
   As a result of such insights, the medium and I considered different possible
mechanisms through which physical effects could be produced. In the second
session, the medium ostensibly contacted one of my deceased colleagues who
discoursed about quantum theoretic mind-matter interactions in what appeared
to be much the same style as when he had been alive. In the third and fourth
sessions we had a discussion of the manner whereby manifestation is said
to occur according to Alice Bailey (1934/1951). Subsequently, in session 7, the
medium had the impression of having contacted Bailey, who recommended a
daily meditation whose purpose was to make the electronic equipment that we
were using more susceptible to anomalous influence. Although we were unable
to determine its value, we decided to try the meditation, and engaged in it fairly
regularly until the end of the study.
   As we experimented with such speculative strategies, the medium and I felt
that we entered a zone of uncertainty that was contrary to the clarity required of
scientific research. However, we thought that such uncertainty may be neces-
sary, at least for a while, if these phenomena were to be given a chance to
develop so as to be able to manifest in a measurable form (cf. Dunne & Jahn,
2003). For instance, during the eleventh session, we decided not to test the yes/
no generator since that would take us outside the realm of ambiguity. It was not
until the twenty-fifth and last session that we deliberately evaluated the output
from the yes/no generator with a series of questions to which we could know
the answers. Surprisingly, as discussed previously, 9 of the 11 answers to those
types of questions turned out to be correct. Whether such a statistically rare
event would have occurred already in the eleventh session or whether any of
what we did actually affected the result is unknown.
   The presence of a medium in this study added a potential source of infor-
mation about the dynamics of attempting to produce ITC even if her potential
influence on the production of anomalous output itself remained unknown. I
would recommend the use of such a person in other studies along with an effort
to determine the psychological parameters that lead to anomalous results. How-
ever, both the medium and I were concerned that due caution be observed
with ITC research, in that it could not only be dangerous for beings in other
dimensions to try to influence electronic equipment, but also for any living
participants engaged in such studies. I have addressed this issue in more detail
elsewhere (Barušs, 1996). Furthermore, any obvious establishment of electronic
access to the deceased, should such access become possible, could upset various
individuals who have a vested interest in the retention of particular materialist or
religious ideologies. It is certainly the case, for example, that there is little
tolerance for ITC research among mainstream scientists. This could also be
a reason why stronger results are not possible at this time.

Acknowledgments

   I want to thank the medium for her participation in this study. I also want to
thank my research assistant Shannon Foskett for many hours of work; the
mathematician, Sauro Camiletti, for checking the statistical calculations of the
yes/no generator; and the engineer, Mike Turner, for his detailed explanation of
the manner in which pseudorandom number generators work. I appreciate the
thoughtful comments of the three referees who reviewed this paper. And finally,
I am grateful for research funding that was provided for this project by the New
Science Fund of the Community Foundation of Silicon Valley, San Jose,
California, USA, and also by King's University College.

References

Bailey, A. (1951). A Treatise on White Magic. New York: Lucis. (Original work published 1934)
Barušs, I. (1996). Authentic Knowing: The Convergence of Science and Spiritual Aspiration. Purdue
    University Press.
Barušs, I. (2001). Failure to replicate electronic voice phenomenon. Journal of Scientific Exploration,
   15, 355-367.
Bion, S. (2006). Manual for EVPmaker. http://www.stefanbion.de/evpmaker/evpmkr~e.htm#abschn21.
    Accessed October 26, 2006.
Braude, S. E. (2003). Immortal Remains: The Evidence for Life after Death. Lanham, MD: Rowman
    & Littlefield.
Dunne, B. J. & Jahn, R. G. (2003). Information and uncertainty in remote perception research. Journal
    of Scientific Exploration, 17, 207-241.
Fontana, D. (2005). ITC: What kind of technology? ITC Journal, 23, 22-28.
Fuller, J. G. (1985). The Ghost of 29 Megacycles: A New Breakthrough in Life after Death? London:
Jahn, R. G. & Dunne, B. J. (2005). The PEAR proposition. Journal of Scientific Exploration, 19, 195-
Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of
   random binary sequences with pre-stated operator intention: A review of a 12-year program.
   Journal of Scientific Exploration, 11, 345-367.
Keen, M., Ellison, A., & Fontana, D. (1999). The Scole report: An account of an investigation into the
   genuineness of a range of physical phenomena associated with a mediumistic group in Norfolk,
   England. Proceedings of the Society for Psychical Research, 58 (Pt. 220), 149-392.
Macy, M. (1996). The Miracle of ITC: Electronic Communication across Dimensions [Audio
   cassette]. Boulder, CO: Continuing Life Research.
Pekala, R. J. (1991). Quantifying Consciousness: An Empirical Approach. Plenum.
Radin, D. (2006). Entangled Minds: Extrasensory Experiences in a Quantum Reality. New York:
Skinner, B. F. (1936). The verbal summator and a method for the study of latent speech. The Journal
   of Psychology, 2, 71-107.
Warren, R. M. (1968). Verbal transformation effect and auditory perceptual mechanisms.
   Psychological Bulletin, 70, 261-270.
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 99-120, 2007                       0892-3310/07

               An Analysis of Contextual Variables and the
                Incidence of Photographic Anomalies at an
                     Alleged Haunt and a Control Site

                                     DEVIN BLAIR TERHUNE
                               Department of Psychology, Lund University,
                                      Box 213, Lund 221 00, Sweden
                                  e-mail: devin.terhune@psychology.lu.se

                                       ANNALISA VENTOLA
                                              3709 Maize Rd.
                                           Columbus, OH 43224
                                         e-mail: ventola.4@osu.edu

                                         JAMES HOURAN
                                    Integrated Knowledge Systems, Inc.
                                       2561 Hall Johnson Rd., #1217
                                           Grapevine, TX 76051
                                      e-mail: jim-houran@yahoo.com

      Abstract-This field study assessed whether areas in an alleged haunt and
      a control site, and active and inactive areas within the haunt site, differed with
      respect to the presence of contextual variables that might contribute to haunt
      experiences and exhibited differential incidences of photographic anomalies.
      Contextual (aesthetic, physical, and structural) variables were measured, and
      randomized photographic (black-and-white, color, digital, infrared, and Polar-
      oid) data were recorded under blind conditions in fourteen representative areas
      of the two sites. The haunt site displayed lower ambient temperature and higher
      humidity levels than the control site, but only suggestive differences were
      found between the active and inactive areas of the haunt site. Ratings from
      experimentally-blind photographic consultants indicated that the haunt site
      exhibited a higher incidence of photographic anomalies than the control site, as
      did the active areas of the haunt site, relative to the inactive areas. Color prints
      exhibited a higher incidence of photographic anomalies than all other media
      types. The results are discussed within the context of contemporary theoretical
      accounts of hauntings and methodological protocols employed in haunt
      research.

      Keywords: hauntings; apparitions; anomalous experiences; contextual
                variables; photographic anomalies

The term 'haunting' refers to recurrent culturally-sanctioned anomalous expe-
riences that are confined to a particular site or locale. Such experiences may
include, but are not limited to, visual apparitions, apparent movements of ob-
jects, and the sensing of an unseen presence. The traditional conception of
100                           D. B. Terhune et al.

haunting phenomena, espoused by some contemporary researchers (e.g., Maher,
1999; Stevenson, 1972), is that they result from the parapsychological activity
of deceased individuals. Much of the recent research in this area, however, has
attempted to understand haunt experiences by recourse to recognized cogni-
tive and social (Lange & Houran, 2001) and neuropsychological processes
(Persinger & Koren, 2001). The present research sought to address multiple
outstanding issues in the contemporary literature that fall within two different,
though related, domains: the role of contextual variables in the mediation of
haunt effects and the incidence of photographic anomalies in prints captured at
haunt sites.
   Contextual variables include a variety of stimuli available to an individual
at the time of an anomalous experience or shortly thereafter that purportedly
structure, or help structure, that experience. Such stimuli may include envi-
ronmental cues such as suggestion or the aesthetic features of a locale, or
endogenous stimuli such as one's beliefs and expectations about the respective
site in particular and/or paranormal phenomena more generally. These and other
contextual variables may come to influence the interpretation, incidence, and
phenomenology of a variety of experiences. For instance, contextual information
may lead one to anticipate certain events or outcomes and thereby inflate the
probability of their occurrence, or to color ambiguities in a manner which is
consistent with the respective cues. That is, contextual variables may prime an
individual towards particular behavioral responses or perceptual experiences.
Concurrently, by informing the content of ambiguous endogenous stimuli (e.g.,
shifts in arousal), contextual variables may function to reduce anxiety con-
cerning the inexplicable nature of an anomalous experience (Zimbardo et al.,
1993; see also Bentall, 2000; Houran, 2000).
   In the present context of haunt experiences, two highly salient priming
variables include prior suggestion that a site is haunted and prior paranormal
belief (Lange & Houran, 2001). To the extent that a particular item of contextual
information suggests an interpretation of an event or series of events, or
exacerbates pre-existing tendencies toward certain experiences or responses, the
occurrence of a congruent action or experience may be said to be mediated by
the respective contextual cue. Previous research suggests that haunting phe-
nomena are contextually mediated (Lange et al., 1996). Although it is well
recognized that hallucinations can be induced in laboratory settings via sug-
gestion (e.g., Barber, 1969), this effect has not been examined in the field until
quite recently. Lange and Houran (1997a) found that of two cohorts of indi-
viduals touring an abandoned theater, those who were told that the site was
haunted reported more anomalous experiences than those who were informed
that the site was undergoing renovation. During participants' tours of an alleged
haunt, Wiseman and colleagues extended this finding by documenting that the
reporting of unusual experiences was related to participants' self-reported para-
normal belief and whether or not they had been informed that the site had
recently witnessed an increase in parapsychological phenomena (Wiseman et al.,
                      A Field Study of Haunt Phenomena                        101

2002). Other aesthetic and structural variables, such as the presence of reflective
surfaces (Kelly & Locke, 1981) and the spatial dimensions of a room (Wiseman
et al., 2003), have been found to be associated with the reporting of haunting-
type experiences. Furthermore, although few studies have examined the in-
fluence of contextual variables upon the content of anomalous experience,
available evidence is consistent with a contextual mediation hypothesis (Houran,
2000; Skirrow et al., 2002). Given such findings, the relationship between the
occurrence of anomalous experiences and the presence of contextual variables
requires continued study.
   Other physical factors, which may or may not be cognitively registered, yet
may play a role in the mediation of haunt perceptions through their tendency to
induce various ambiguous experiences, are also worthy of consideration. Nickell
(2001) has suggested that haunted locations may be inherently, yet naturally,
cold or draughty. Similarly, Wiseman et al. (2003) found that the reporting of
anomalous experiences in areas of an alleged haunt positively correlated with
the lighting levels of the particular areas in which experiences occurred. Lange
and Houran (2001) have argued that ambiguous stimuli such as these may be
misinterpreted as haunt effects given cognitive and motivational biases favoring
paranormal explanations for inexplicable events (see also Zimbardo et al., 1993).
   A considerable amount of attention has been afforded to the causal or medi-
atory role of contextual physical factors in the induction of haunting-type
experiences. While numerous physical variables have been found to be associ-
ated with anomalous experiences (e.g., Braithwaite & Townsend, 2006; Radin &
Roll, 1994; Tandy, 2000), the influence of magnetic fields upon the incidence of
haunt phenomena has received the greatest empirical scrutiny. Persinger and
colleagues have hypothesized that haunt experiences result from the interaction
of electromagnetic and/or geomagnetic fields with the neuro-electromagnetic
patterns within an individual's brain (for a review, see Persinger & Koren,
2001). Support for this account has been provided in field settings, where high
peak strength geomagnetic or highly variable electromagnetic (EM) fields have
been consistently documented in haunted locales (see Roll and Persinger 2001
and Persinger and Koren 2001 for reviews). In many of these studies, however,
the measurement of magnetic fields has not been conducted under blind con-
ditions, formal control areas within a site have been inconsistently utilized, and
no study of which the authors are aware has included a second, independent
control site (cf. Houran & Brugger, 2000). Although two studies (Maher, 2000;
Maher & Hansen, 1997) failed to replicate these effects, both only measured
local ambient electromagnetic field magnitudes, which are less likely to dem-
onstrate the hypothesized variability required for the induction of haunt
experiences than transient magnetic fields, whose fluctuations can be more
sensitively measured over extended periods of time (e.g., Persinger & Cameron,
1986). Multiple experiments have purported to induce the experience of a sensed
presence and other anomalous experiences in the laboratory by applying weak
(i.e., 500 to 1000 nanoTesla [nT]), complex magnetic fields in a burst-firing
pattern to the temporal and temporal-parietal regions of the cerebral cortex of
human participants (Cook & Persinger, 1997, 2001; Persinger & Healey, 2002;
Persinger et al., 2000). Conversely, the only study which has attempted to
independently replicate this research failed to discern an effect (Granqvist et al.,
2005). This experiment was double-blind and its experimenters allege that it
adhered to Persinger's protocol specifications. However, the reason(s) for this
failure to replicate remain the source of debate (Larsson et al., 2005; Persinger &
Koren, 2005).
   Despite the utility of conventional psychological explanations, the contention
that haunts result from paranormal agency remains ubiquitous. Many findings
have been interpreted as supportive of this hypothesis, but the most pervasive by
far is the observation that photographic anomalies repeatedly appear in photo-
graphs taken at haunted locales (see Lange & Houran, 1997b; Maher, 1999;
Nickell, 2001). Reports of the capturing of low-grade anomalies, such as density
spots (orbs) or fogging, at sites that have previously played host to haunt effects
are rampant in the popular press and on the World Wide Web (Potts, 2004) and
are commonly attended by attributions of paranormal agency. High-grade anom-
alies, such as apparent apparitions or unequivocally paranormal objects, are
often compelling enough to invoke either belief in the phenomenon or accu-
sations of fraud. Despite the attention that reports of photographic anomalies
draw, these reports are ultimately anecdotal, as there have only been a few
empirical studies that have examined and presented evidence for these purported anomalies.
   The most compelling evidence for photographic anomalies thus far was
reported by Maher and Hansen (1997). They instructed a 'sensitive' (i.e., some-
one who professed to have the ability to detect ghosts) to take photographs
inside of an alleged haunt site. The sensitive was blind to the identity of the areas
in which the photographs were taken. Moreover, his earlier markings on a floor-
plan and reports on a checklist did not significantly correspond to the locations
of haunt effects, as reported by the inhabitants of the site, or the features of the
apparitional experiences, respectively. Approximately one third of the prints
exhibited photographic anomalies, described as "translucent, cloud-like aber-
ration(s) with a pinkish cast" (Maher & Hansen, 1997, p. 197). Those with anom-
alies were found to have been significantly more likely to have been captured in
areas in which previous haunt effects had been reported.
   Although this study is to be applauded for its ingenuity and its incorporation
of statistical analyses, it suffers from numerous methodological limitations,
which temper the conclusions that can be drawn from it. First, the photographs
were not taken in a randomized fashion, nor were the same number of photo-
graphs taken across areas. The incorporation of such methodological features
would have allowed the exclusion of potential confounding variables. Second,
the individual who assumed the role of the photographer lacked professional
training and may have been unaware of various means of preventing inadvertent
artifacts.' Third, the photographer was not blind to the context of the
investigation and the hypothesis under test and may have thereby unwittingly
caused the occurrence of more anomalies by his conduct. Finally, two other sets
of photographs were taken at the site by professional photographers, but ex-
cluded from the analysis because of a lack of apparent anomalies. The inclusion
of such photographs might have diminished the significance of the results. Given
the shortcomings of this study, there are no compelling reasons to believe that
Maher and Hansen's (1997) reported anomalies did not result from the inclusion
of a novice photographer in the experimental protocol or other methodological
limitations of the study. Furthermore, in another study of an alleged haunt,
Maher (2000) reported that a photographic consultant was able to provide a
conventional explanation for apparent photographic anomalies captured during
the course of the investigation.
   Although few experiments have tackled the issue of photographic anomalies
in a suitable fashion, two other studies are worthy of brief mention. In an anal-
ysis of previously published prints documenting photographic anomalies in
Fortean contexts, Lange and Houran (1997b) found that the type of photographic
effect (light streak, fogging, density spot [orb], amorphous form, shadow, de-
fined image, or other) is artifactual of the type of photographic medium used,
that is, certain types of anomalies appear to be unique to, or more commonly
found with, certain media types. In a study investigating the possible mech-
anisms of 'anomalous orbic images' captured with digital cameras, Schwartz and
Creath (2005) presented evidence to suggest that anomalous images are com-
monly caused by stray reflections or diffraction of the flash reflecting off of dust
or dirt particles.

The Present Study
   This study concerns a residential site, which was home to a male and female
occupant in their mid-forties who gave consent to allow an investigation into the
phenomena occurring at their home. The couple, and at least one guest who was
unaware of the previous reports, reported various anomalous experiences on the
grounds of the site over the course of multiple years and attributed these events
to a discarnate agent. Experiences included auditory and visual apparitions,
a sensed presence, ostensibly precognitive nightmares, object movements, and
the display of aberrant behaviors by the couple's pet dog. This set of experiences
conforms to the classic symptomatology of haunt or poltergeist-like episodes
(e.g., Roll, 1977).
   The site (and case) possessed a number of attributes that made it optimal for
a field experiment. The two inhabitants were both mental health professionals
and thus it was deemed likely that they had considerable psychological knowl-
edge germane to haunt phenomena and that their reports did not result from
naivete or the failure to consider obvious, mundane explanations. The site had
received no media attention, thereby allowing the experimenters to ensure that
experimental blinds (for experimenters and participants) could be maintained,
unlike in investigations of more famous sites in which this is virtually
impossible (e.g., Wiseman et al., 2003), nor did it appear that such attention was
sought by the inhabitants. Finally, the house next-door to the haunt site was
available for use as a control site with the full consent of both families. Given
their close proximity to one another, the two sites did not differ in terms of
location or other possible mediating variables which may be responsible
for haunt phenomena, such as the presence of underground faults or water
(Persinger & Cameron, 1986), geomagnetic flux, or proximity to power stations,
airports, and other sources of transient electromagnetic fields (Persinger &
Koren, 2001), or overt contextual variables in the surrounding neighborhood
(e.g., the presence of a graveyard) (see, e.g., Houran, 2000).
   The investigation of this haunt site was intended to be exhaustive. It was
planned that the investigation would incorporate and improve upon methodo-
logical features previously used in both psychological and parapsychological
field and experimental studies. In addition, the investigation was to have the
rigor of a controlled field experiment while maintaining a case study approach
through the collection of interview data from percipients. For reasons discussed
below, these intentions could not be fully realized.
   The present study of this alleged haunting attempted to circumvent confounds
which have plagued previous parapsychological field studies (Houran &
Brugger, 2000) by incorporating a second (control) site in which no phenomena
had been reported. In addition, all experimenters were blind to the identity of the
sites, and all other personnel were blind to both the sites' identities and the nature of the
study. Data from a variety of aesthetic, physical, and structural variables were
collected at the two sites, and a professional photographer was hired to capture
photographs with multiple media types in a randomized fashion. Multiple in-
dependent professional photographers volunteered their services and evaluated
photographic prints captured at the sites for the presence of potential anomalies
and proffered plausible explanations for any that were identified. Groups of
participants were to complete a battery of psychological measures, which have
been previously used in haunt research (Houran et al., 2002), and an online task
of extrasensory perception. Subsequently, participants were scheduled to tour
both of the sites while recording any anomalous experiences on an experiential
checklist (Houran, 2002). Finally, following the completion of the experimental
stage, we intended to conduct interviews with the couple residing at the haunt
site about the location, type, and phenomenology of their experiences and have
them, and the family residing at the control site, complete the aforementioned
battery of psychological instruments.
   Contextual variable and photographic data were collected, but the experiment
abruptly ended on the first day of the experiment involving participant tours. On
this day, one of the residents of the site entered the house and displayed alarm-
ingly aggressive behavior toward the participants and experimenters, which led
to the immediate termination of this stage of the experiment. Through brief
discussions with the other resident of the site, it was found that haunt effects at
                       A Field Study of Haunt Phenomena                         105

the site had gone into remission in the months leading up to the investigation.
However, in the days preceding and during data collection, the two inhabitants
of the site had begun to experience a plethora of distressing haunting-type
perceptions following the period of quiescence. The inhabitants apparently came
to believe that the investigation had directly caused this recrudescence of haunt
effects. It is evident that the return of such effects triggered fear and anxiety in
the resident culminating in his aggressive outburst. He subsequently claimed
amnesia for the event and stated that he believed that he had been possessed by
the agencies which he believed were haunting the site. Other than a few brief
communications, the inhabitants discontinued all contact with the investigators,
seemingly out of fear of further 'reprisals' from these discarnate entities. Given
this turn of events, the investigation of this site, as it is presented here, is
fragmentary and is not as thorough as initially intended. Despite this caveat, it is
worth noting that the occupants of the site were reporting haunting phenomena
during the course of the data collection. Therefore, unlike many previous
investigations which have concerned historically haunted sites, this site was
undoubtedly 'active' at the time of this investigation.
   Based on the foregoing and the data available to us, the following hypotheses
were generated: (1) the control and haunt sites will differ in terms of the
presence of aesthetic, physical, and structural variables; (2) within the haunt site,
active and inactive areas will differ in the presence of contextual variables; (3)
photographic prints from the haunt site will exhibit a greater number of
anomalies relative to the control site; and (4) of the haunt site prints, those taken
in active areas will exhibit more anomalies than those taken in inactive areas.
Given the germane findings of Lange and Houran (1997b), it was further
conjectured that (5) the incidence of photographic anomalies would vary by
media type. The direction of this relationship (i.e., which media type(s) would
exhibit the greatest incidence(s) of photographic anomalies) was not specified in
advance.

  The experiment had two stages. The first consisted of the measurement of
multiple contextual variables and the completion of randomized photography
sessions at an alleged haunting and a control site under blind conditions. The
second stage involved the assessment of the catalogued prints for the presence of
photographic anomalies by experimentally-blind professional photographers.

   An alleged haunting (henceforth 'target site') and a control site were identified
by a colleague of the third author. The sites are located in a middle-class
neighborhood of a small city in Illinois, and both were constructed in the mid-
twentieth century. Seven matched rooms at each of the two sites were estab-
lished as experimental areas. Each cohort of representative areas included
a dining room, three common areas (living rooms, hallways, and kitchens),
a basement room, and two bedrooms. Haunting-type experiences had previously
been reported in three areas of the target site, while no similar reports were made
in the remaining four experimental areas. These areas were designated 'active'
and 'inactive', respectively. The three authors remained blind to the identity of
the sites until the conclusion of the experiment.

   A professional photographer, Chad Mitchell, was hired to take photographs of
the experimental areas. At the time of the experiment (August 2003), he had
over six years of photography experience and owned his own professional studio
in Illinois.
   Eight photographers (M age = 30.25 years, SD = 6.43; one female) consented to
volunteer their expertise to assess the presence of anomalies in the print catalog.
All photographic consultants were required to have a minimum of 3 years of
professional or educational experience (M = 12.63 years, SD = 6.19; range: 4-21
years).

Physical variable instrumentation
   Carbon monoxide. A Nighthawk 60 Hz commercial carbon monoxide (CO)
detector (Model No. KN-COP-DP) was used. This instrument provides output in
units of parts per million (ppm).
   Lighting levels. The F-stop of an Olympus Infinity camera (see below) was
used by Chad Mitchell to measure lighting levels.
   Magnetic fields. An F. W. Bell (Bell Technologies, Inc.) Model 4080 Triaxial
ELF [extremely low frequency] Magnetic Field Meter was used to measure local
ambient EM fields. This meter has three internal orthogonal magnetic field
sensors (corresponding to the three dimensions). A microprocessor computes
and displays the vector magnitude of the magnetic field measured. The mea-
surement range of this meter extends from 0.01 to 51.1 microtesla (µT), and it
has a frequency response of 25 Hz to 1 kHz. It has a sampling rate of 0.4 seconds
and a typical accuracy error of ±2%.
   Temperature and humidity. A RadioShack™ model 63-1036 digital indoor
thermometer/hygrometer was used to measure ambient temperature and
humidity. The instrument has a sensing cycle of 10 seconds, a temperature
range of 32° to 122°F (±1.8°F), and a relative humidity reading range of 20% to
99% (±5%).
   Photographic instruments. Black-and-white photographs were taken with
a Canon Elan 7E 35 mm with a Tamron 28-80 mm lens and Kodak 400 speed
film. Digital pictures were taken with a Vivitar Vivicam 10, a point and shoot
digital camera. Color photographs were taken with a fixed-lens camera and
Kodak Max 800 speed film. Infrared
photographs were taken with a Canon Rebel 2000 using a Canon 28-80 lens and
a Red 25 lens filter (for better infrared reproduction) and Kodak HIE Infrared
film. Polaroid photographs were taken with a Polaroid camera (# CDI 368LE
CA RA). This camera was purchased for the study, and it was tested using two
corresponding Polaroid 600 film cartridges.
   Psychometric instruments. The Revised Paranormal Belief Scale (RPBS) is
a twenty-six-item scale with items anchored on a seven-point Likert scale. The
original PBS developed by Tobacyk (1988; Tobacyk & Milford, 1983) was
subjected to a 'top-down purification procedure' intent on removing items
contaminated by age and gender biases (Lange et al., 2000). The RPBS
possesses two subscales measuring new age philosophy (NAP) and traditional
paranormal belief (TPB). This measure was administered to the photographic
consultants.

   Contextual variables. Multiple covert and overt variables were measured and
recorded in all experimental areas at both sites. Variables measured in each
room include: lighting levels, spatial dimensions, the number of mirrors, the
number of pictures with and without human forms, the number of windows
and air vents, humidity, ambient temperature, carbon monoxide (CO), and local
ambient EM fields (peak magnitude and field variability).
   Photographic print collection. The photographer was hired under the initial
information that he would be taking photographs for a psychology experiment.
He was responsible for capturing black-and-white, color, and infrared photo-
graphs and measuring lighting levels. He was debriefed as to the purpose of the
study following the completion of data collection and informed of the identity of
the two sites following the return of the film and processed prints. Digital and
Polaroid photographs were taken and processed by the experimenters.
   Thirty-five photography trials were conducted at each of the two sites. Each
room was assigned a number, and the order of the trials was determined by
a web-based random event generator. Each trial consisted of the serial acqui-
sition of one photograph of each of the five media types (order: infrared, color,
black-and-white, Polaroid, digital) from the same vantage point. The photo-
graphy trials at the target site preceded those at the control site. A total of 175
photographic prints were obtained at each site. Prints of the exterior regions of
the sites were not included in the data analysis, and two infrared prints (one from
each site) were destroyed or lost during processing. Consequently, the data set
consisted of 169 prints from each site.
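The randomization scheme described above can be sketched as follows. The room labels, the seeded generator, and the equal allocation of trials across rooms are illustrative assumptions; the study itself used a web-based random event generator to order the trials:

```python
import random

# Hypothetical labels for the seven experimental areas of one site.
ROOMS = ["dining", "living", "hallway", "kitchen", "basement",
         "bedroom1", "bedroom2"]
# Fixed serial order of media types within each trial, as in the study.
MEDIA_ORDER = ["infrared", "color", "black-and-white", "Polaroid", "digital"]

def trial_order(rooms, trials_per_room, seed=None):
    """Shuffle an equal allocation of trials across rooms.

    The study reports 35 trials over 7 rooms per site; equal
    allocation (5 per room) is an assumption made for illustration.
    """
    rng = random.Random(seed)
    order = list(rooms) * trials_per_room
    rng.shuffle(order)
    return order

def run_trials(rooms, trials_per_room, seed=None):
    # Each trial acquires one photograph of each media type, in the
    # fixed serial order, from the same vantage point.
    return [(room, medium)
            for room in trial_order(rooms, trials_per_room, seed)
            for medium in MEDIA_ORDER]

shots = run_trials(ROOMS, 5, seed=2003)
print(len(shots))  # 35 trials x 5 media types = 175 prints per site
```

Seeding the generator is only for reproducibility of the sketch; a true random event generator, as used in the study, would not be seeded.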
   The photographic prints were cataloged in serial order by site, trial number,
and media type. Photographic consultants were blind to both the purpose of the
experiment and the identity of the sites and were told only that the prints were
taken as part of a psychology experiment. Prior to their
assessment of the print catalog, consultants completed a short questionnaire
concerning their educational and professional photography experience. They
subsequently conducted the print catalog assessment and completed the RPBS in
counterbalanced order.
   The prints were presented in one of four different counterbalanced order
conditions. Consultants began with either trial 1 or 18 of the target site or trial
1 or 18 of the control site and continued through the remaining prints. They were
instructed to look over all of the photographs carefully and select those images
which they considered anomalous. For the purposes of the study, an anomalous
image was operationally defined to consultants in writing as one which
contained any obscurities, defects, bizarre images, or the like, which could not
be conventionally explained by the presence of natural artifacts present during
the actual photography session (e.g., light reflections), or the subsequent pro-
cessing of the film. Consultants were instructed to rate the degree to which all
selected prints matched the aforementioned definition on a four-point Likert-
type scale (1: definitely not anomalous, 2: somewhat not anomalous, 3: some-
what anomalous, and 4: definitely anomalous). It was explicitly stated to
participants that all prints that were not selected would receive a rating of '1'.
Two missing data points (prints selected but not rated) received ratings of '1'.
Following the evaluation session, participants were debriefed regarding the
purpose of the experiment and the conditions under which the photographs had
been obtained.
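Under the rating rules just described, a consultant's average anomaly rating over the catalog can be computed as follows; this is a minimal sketch, and the rating data and print identifiers are hypothetical:

```python
def mean_anomaly_rating(selected, n_prints):
    """Average rating over the full catalog.

    `selected` maps print id -> rating for prints the consultant
    flagged as anomalous; per the protocol, every unselected (or
    selected-but-unrated) print defaults to a rating of 1.
    """
    total = sum(selected.values()) + (n_prints - len(selected)) * 1
    return total / n_prints

# E.g., one consultant flags two prints out of a 338-print catalog
# (hypothetical identifiers and ratings):
print(round(mean_anomaly_rating({"t07-color": 3, "c12-color": 2}, 338), 3))
```

The default-to-1 rule explains why the per-consultant averages reported later fall so close to 1.00 even when a few prints receive higher ratings.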

   Statistical analyses consisted of standard univariate techniques including
Analyses of Variance and Pearson product-moment correlations. The use of non-
parametric statistics indicates that a data set violated the assumption(s) of
distribution normality and/or homogeneity of variance.
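The parametric pieces of this approach reduce to product-moment correlations and Bonferroni-corrected significance thresholds. As a hedged sketch (not the authors' actual analysis code): with ten contextual variables tested per comparison, 0.05/10 yields the α = 0.005 threshold noted beneath the tables:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def bonferroni_alpha(alpha, n_tests):
    """Per-test threshold that controls the familywise error rate."""
    return alpha / n_tests

# Ten contextual variables tested per site comparison (an assumption
# matching the count of variables listed in the procedure):
print(bonferroni_alpha(0.05, 10))  # 0.005, as in Tables 1 and 2
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 3))  # 1.0
```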

Results
Contextual Variables
  Inter-site analyses revealed that the target site had a significantly lower mean
ambient temperature and a greater level of humidity than the control site. In
addition, the control site had a suggestively, albeit non-significantly, greater
number of mirrors. A floor effect was observed with the CO data. Thirteen of the
fourteen areas yielded CO measurements of 0 parts per million (ppm); the
remaining area emitted a CO count of 210 ppm. The high CO count was mea-
sured in an active area of the target site. Given the extreme positive skew, the small
sample size, and the near-uniform absence of CO at the two sites, no analyses
were performed. All other analyses were non-significant, indicating that the sites
were relatively matched along the respective variables measured (see Table 1).
  Intra-site contrasts between active and inactive areas at the target site
indicated that other than suggestively greater peak EM field strengths and
                                              TABLE 1
            Contextual Variables as a Function of Site [Target (n = 7) and Control (n = 7)]

                                          Target site         Control site

Variables                                   M (SD)               M (SD)            F        p       η²

Area (sq. ft.)
Ambient temperature (°F)
Humidity (%)
Artwork with human forms
Artwork without human forms
Peak ambient EM field (µT)
Variability in ambient EM fields

Lighting level (F-stop)
Air vents
Note: All tests are subjected to a Bonferroni correction (α = 0.005).

variability in EM field magnitudes in active areas, the two sets of areas did not
differ according to any of the contextual variables measured (see Table 2).

Photographic Prints
  Of the 338 photographic prints, 309 (91.4%) were considered to be non-
anomalous by the consultants. Of the remaining twenty-nine prints, eighteen

                                           TABLE 2
 Contextual Variables as a Function of Areas of Target Site [Active (n = 3) and Inactive (n = 4)]

                                         Active areas         Inactive areas

Variables                                   M (SD)               M (SD)            F       p       η²
Lighting level (F-stop)                  4.00   (0.00)         4.13 (0.25)       0.71     0.44    0.13
Humidity (%)                            48.67   (1.15)        49.75 (3.59)       0.24     0.64    0.05
Windows                                  3.67   (1.15)         3.75 (1.71)       0.01     0.95    0.00
Air vents                                0.67   (0.58)         0.75 (0.50)       0.04     0.85    0.01
Artwork with human forms                 1.67   (1.53)         0.75 (0.50)       0.99     0.37    0.17
Artwork without human forms              2.67   (1.53)         3.75 (0.96)       1.36     0.30    0.21

Area (sq. ft.)                   2539.47 (2772.85)          1446.51 (771.01)     5        0.35    0.72
Ambient temperature (°F)           78.10 (0.35)               78.45 (1.64)       5.50     0.18    0.86
Mirrors                             2.00 (1.73)                0.25 (0.50)       2.50     1.38    0.17
Peak ambient EM field (µT)          0.48 (0.46)                0.16 (0.04)       0.50     1.96    0.050
Variability in ambient EM fields    0.45 (0.45)                0.13 (0.04)       0.50     1.96    0.050
Note: All tests are subjected to a Bonferroni correction (α = 0.005). For the final five variables,
the last three columns report non-parametric statistics (U, Z, p).
Color prints were assigned greater anomaly print ratings than Polaroid (U = 1963,
Z = 2.47, p = 0.014), black-and-white (U = 1864, Z = 3.40, p = 6.8 × 10⁻⁴), digital
(U = 1915.50, Z = 2.93, p = 0.003), and infrared prints (U = 1879, Z = 2.68,
p = 0.007). None of the other media types exhibited differential ratings (all p
values > 0.20). Print anomaly ratings were also found to differ significantly
between the five different media types within the target site (χ² = 9.65, df = 4,
p = 0.047) and the control site (χ² = 14.46, df = 4, p = 0.006), with color prints
receiving the greatest anomaly ratings at both sites.
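The media-type contrasts above rely on Mann-Whitney U tests. The following is a minimal pure-Python sketch of the U statistic in rank-sum form, with average ranks for ties; it is an illustration, not the authors' software, and the example ratings are hypothetical:

```python
def _average_ranks(values):
    """Map each distinct value to its average 1-based rank."""
    ordered = sorted(values)
    rank_of = {}
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and ordered[j] == ordered[i]:
            j += 1
        rank_of[ordered[i]] = (i + j + 1) / 2  # mean of ranks i+1 .. j
        i = j
    return rank_of

def mann_whitney_u(x, y):
    """U statistic for sample x versus sample y (tie-corrected ranks)."""
    rank_of = _average_ranks(list(x) + list(y))
    r1 = sum(rank_of[v] for v in x)          # rank sum of sample x
    return r1 - len(x) * (len(x) + 1) / 2    # U for sample x

# Two hypothetical sets of 1-4 anomaly ratings (not the study's data):
print(mann_whitney_u([1, 1, 2, 4], [1, 1, 1, 2]))
```

With samples this large (169 prints per group), U is converted to the normal approximation Z reported in the text.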
   A series of analyses was conducted to determine whether any of the measured
demographic or belief variables correlated with consultants' anomaly print
ratings (n = 8). Of the demographic variables, anomaly ratings did not correlate
with years of professional photography experience (r = -0.60, p = 0.12), age (r =
-0.47, p = 0.25), or sex (r_pb = -0.45, p > 0.10 [1 = female; 2 = male]). Similarly,
consultants' ratings did not correlate with their endorsement of new age (NAP,
r = 0.20, p = 0.64) or traditional paranormal beliefs (TPB, r = 0.36, p = 0.38).
   The relationship between contextual variables and anomaly print ratings was
next examined. Print ratings did not correlate with any of the contextual variables
measured (n = 14): temperature (r = -0.35, p = 0.23), humidity (r = -0.01, p =
0.97), area (sq. ft.) (r = 0.39, p = 0.17), lighting levels (r = -0.43, p = 0.12),
number of windows (r = 0.00, p = 1.00), number of air vents (r = 0.18, p = 0.54),
peak EM field (r = -0.27, p = 0.35), variability in EM field (r = -0.22, p = 0.45),
number of pieces of artwork without human forms (r = -0.30, p = 0.30), and
number of pieces of artwork with human forms (r = -0.21, p = 0.48).

Discussion

   The present study was intended to be a comprehensive, controlled inves-
tigation of an alleged haunting which would have included the use of blind
personnel, the measurement of a variety of contextual variables, and the assess-
ment of psychological and parapsychological hypotheses through the incor-
poration of participant tours (see Lange & Houran, 1997a; Maher, 1999;
Wiseman et al., 2003). The testing of multiple, and sometimes competing, hy-
potheses in tandem coupled with the use of heterogeneous methods including
advanced instrumentation and psychological inventories represents an advanta-
geous research strategy and one that we hope future researchers will employ.
Before turning to a discussion of the results of this study, perhaps it is best to
consider first the events that led to its premature termination.
   The episode involving the inhabitant at the haunt site, which subsequently led
to the cancellation of the remainder of the experiment, highlights the sensitive
nature of haunt experiences. It offers a lesson as to the manner in which inves-
tigators should approach, and conduct themselves within, field experiments.
Haunting experiences can be quite distressing. Unfortunately, this feature is
commonly left unaddressed in the parapsychological literature surrounding these
phenomena. For this reason, we think that it is critical that investigators of haunt
cases directly involve, or at the very least have immediate access to, a mental
health professional. The third author of this study has such qualifications and
was available to immediately consult with the resident following his outburst.
However, professional clinicians are infrequently involved in parapsychological
field studies and probably less so in the case investigations of amateurs, which is
a cause for concern. An investigation, in many cases, involves interactions with
people who have witnessed some frightening events, which are often interpreted
by lay individuals as symptomatic of pathology or as suggestive of the presence
of a paranormal agent. That is, experients are often presented with only two
negatively connoted explanatory options, neither of which is explicitly more
attractive than the other. It should further be recognized that entering one's home
for the purposes of investigation can for many individuals be an impingement on
one's privacy, and thus might serve to further magnify the unease associated
with haunt effects, as appears to have occurred in the present case. Accordingly,
this case compels us to remind researchers to be cognizant of ethical issues
involved in haunt investigations (Baker & O'Keeffe, 2005), especially with
respect to the treatment of experients and experimental participants (American
Psychological Association, 2002).
   With respect to the mechanisms of haunting-type experiences, this episode is
somewhat insightful. First, it exemplifies the fundamental role played by atten-
tion in the occurrence of anomalous experiences. It is especially interesting
because there is evidence to suggest that poltergeist manifestations often remit
when an investigation ensues (Roll, 1977). In this case, the converse was ob-
served (though no objective phenomena were witnessed by, nor experiences
reported to, the experimenters), and it thereby demonstrates that there is great
variability with respect to the effects of external attention brought to a case. At
the individual level, this episode additionally accords with Lange and Houran's
(2001) contention that haunt experiences result from a focusing of attention
upon certain ambiguous experiences and Brown's (2004) hypothesis that medi-
cally unexplained symptoms result from the attention directed toward am-
biguous bodily sensations. In this sense, it is plausible that the impending
investigation led to the increased attention to haunt-specific representations of
ambiguous phenomena by the residents of the site, resulting in the recrudescence
of the phenomena.
   The outburst appears to have been dissociative in nature and thus also re-
affirms the relationship between dissociation and anomalous experiences
(Kumar & Pekala, 2001; Ross & Joshi, 1992). Indeed, this episode closely cor-
responds to Ross and Joshi's (1992) discussion of the dissociative nature of
poltergeist cases. In regard to a hypothetical case, they write:

If the disturbed adolescent in such a household has a dissociative disorder and is acting
out angrily, it is to be expected that the child will also report being possessed. The child's
anger is dissociated and experienced as an inner demon, while responsibility for angry
behavior is disavowed. The disavowal may include amnesia for angry behavior in a case
of somnambulistic possession (p. 360, italics in original).
The resident in the present case, as mentioned previously, also reported
amnesia for the episode. One point of divergence between these two cases,
however, which should not go unaddressed, is that the resident had been consuming
alcohol. In addition, the investigators later found out that this inhabitant had
a history of alcohol abuse and anger management problems. Although the
amount of alcohol consumed is unknown to the experimenters, this episode
cannot be ruled out as an instance of alcoholic stupor. Nevertheless, this event and
the descriptions of the experiences as presented to us strongly indicate that the
present case can be understood as a series of psychological disturbances that
were interpreted as paranormal phenomena because of the occupants' interest,
and possible belief, in paranormal phenomena, which may have been further
exacerbated by the presence of strangers (both experimenters and research
participants) in their home. Unfortunately, the truncated nature of the case does
not allow us to delve more deeply into its nuances, especially with respect to the
triggering mechanisms of particular experiences.
   The contextual variable hypotheses were largely unsupported. Although the
lower mean ambient temperature and higher humidity at the haunt site were
highly significant and may have mediated the experience of anomalous per-
ceptions at the site, the failure to replicate such differences between the active
and inactive areas of the haunt site suggests that this was not the case. In fact,
while the relationships did not achieve statistical significance, the converse was
found at the target site. Alternatively, it may be that the lower temperature and
higher humidity created a general ambiance that was conducive to haunt expe-
riences and yet did not vary considerably between areas, but the active areas
possessed further features which exceeded the inhabitants' tolerance thresholds
and concomitantly triggered anomalous experiences. As it stands, the extent to
which temperature and humidity influenced the occurrence of anomalous expe-
riences at the target site is not clear, but the role of such variables in the inci-
dence of haunt effects warrants further investigation.
   The findings of greater peak ambient EM field magnitudes and greater vari-
ability in EM field magnitudes in active areas than inactive areas at the target
site were suggestively significant and conceptually replicate previous findings
(e.g., Roll et al., 1992). The non-significance (after a Bonferroni correction) of
these findings is likely to have stemmed from the lower power of the analyses.
At the very least, they can be interpreted as being broadly congruent with the
extant literature on this hypothesis. The master bedroom at the target site
exhibited the highest variability in EM field strengths (0.97 µT), apparently
because this room is where the power lines enter the site, as found in a previous
case (Terhune, 2004). Notably, this room also played host to a disproportion-
ately greater amount of the phenomena reported at the site, relative to the
two other target areas. However, this may be because the couple spent
a sizeable proportion of their time in this room. It is worth pointing out that
this area, along with two others, also received the highest average anomaly
print ratings.
   It similarly remains to be seen whether the high CO count documented in the
active basement of the target site played a functional role in the reporting of
haunt experiences in that particular area. Notably, the neurological symptoms
of CO poisoning parallel some of the experiences reported in hauntings.
For instance, Christinat (1998) reported that disorientation and hallucinations
can result from exposure to high CO counts. CO poisoning may have caused
neurological damage, which in turn led to the experience of anomalous
perceptions in the site, the content of which was then informed by contextual
variables such as paranormal belief. Although this finding is interesting and
corresponds with Christinat (1998), it may just as well be a spurious observation.
Nevertheless, future research should measure CO at haunt sites.
   The analyses of contextual variables, especially the intra-site contrasts, had
small sample sizes and thus low statistical power, which may
have inflated the probability of Type II errors. One variable that did not sig-
nificantly differ between the active and inactive areas of the target site, yet is
worth considering here, is the number of mirrors. The active areas averaged two
mirrors while the inactive areas averaged 0.25 mirrors, a difference which
resulted in an F statistic of 3.89 and a p-value of 0.11 (non-parametric: Z = 1.38,
p = 0.17). This analysis is particularly interesting because multiple visual
apparitions were reported in reflective surfaces at the site. Moreover, reflective
media, or specula, have previously been found to be conducive to the induction
of visual apparitions in spontaneous (Green & McCreery, 1975; Moody, 1992;
Tyrrell, 1953/1963) and laboratory settings (Foltin & Alluisi, 1969; Moody,
1994; Roll, 2004).
   The contextual variable analyses present evidence for the potential influence
of multiple contextual cues. Although the analyses and observations implicating
these variables are mostly suggestive, a few points are worthy of brief mention
here. Putative haunt-conducive contextual variables did not tend to be present
across all of the active areas. For instance, the high CO count was only recorded
in the basement, high EM field strength variability was most pronounced in the
master bedroom, and the number of mirrors, though high in the master bedroom
and hallway, was zero in the basement. Given this variability in the distribution
of potentially haunt-conducive context variables, we might expect different
types of experiences or different phenomenological features of kindred phe-
nomena to have occurred in the different areas. The different types of expe-
riences which have previously been attributed to haunts are large in number and
range from unusual sounds to visual apparitions to erratic behavior displayed by
pets (e.g., Tandy & Lawrence, 1998). It is likely that variability in the type and
phenomenology of experiences may come to form a unified representation of
a haunting through the attribution of paranormal agency to the phenomena.
Similarly, certain experiences may initially be evoked in one area by a particular
cue or variable, but concurrently predispose one to further experiences in other
areas which lack the original contextual stimulus. Through this process of
'cognitive kindling' (Persinger, 1993), the maintenance of haunt experiences
over time may come to be mediated by different mechanisms than those initially
implicated in the generation of the experience. That is, while certain envi-
ronmental variables may have been responsible for the onset of the experiences,
subsequent experiences may have been triggered by other variables, such as
paranormal belief. These speculations were unable to be investigated in the
present experiment due to its abrupt termination, but we consider these to be
worthwhile considerations for future research.
   Turning to the photographic print analyses, the finding of Maher and Hansen
(1997), namely that active areas within an alleged haunt are more likely to
display photographic anomalies than inactive areas, was replicated. Further-
more, the present experiment circumvented many confounds in their study
through its employment of a randomized protocol, blind recording methods,
blind assessment of the prints by professionals, and a larger sample size. A
second, parallel hypothesis, conjecturing a greater incidence of photographic
anomalies at the haunt site, relative to the control site, was also supported. While
these analyses suggest the presence of anomalous agencies in the target areas of
the haunt site, a number of findings indicate that this conclusion may be
premature. Only seven of 338 prints received a rating of '3' ('somewhat anom-
alous'; six prints) or '4' ('definitely anomalous'; one print), and in none of
these cases did more than one consultant assign such a rating to the respective
print. That is, no items in the catalog of photographic prints were considered to
be anomalous by a consensus of professional photographers, as previously found
in a catalog of photographic prints captured during a randomized protocol and
evaluated by amateur photographers (Houran, 1997). Similarly, anomaly ratings
appear to be related in part to the demographics of the sample of professional
photographic consultants. For instance, a moderate, albeit non-significant, nega-
tive correlation (r = -0.60, p = 0.12) between consultants' years of experience
and average anomaly ratings was found. The non-significance of this statistic
may derive from the low power of the analysis, whose sample comprised only
eight respondents. The two consultants who had over twenty years of
photographic experience yielded average anomaly ratings of 1.00. One
consultant assigned no ratings higher than '1', and the other assigned only one print
a rating of '2'. Furthermore, the consultant with the fewest years of photography
experience (four), yielded the highest anomaly print rating average, 1.06, and
was the lone photographer to assign a rating of '4' to a print. This suggests that
the items identified by novice photographers were defects which seasoned
photographers did not consider to be unusual.
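The power limitation noted above can be made concrete with a short calculation. The following sketch (not the authors' code; an illustrative stdlib-only computation using the standard t-test for a Pearson correlation) shows that the reported r = -0.60 with n = 8 yields a two-tailed p of roughly 0.12, and that with only eight respondents a correlation must reach about |r| = 0.71 before it becomes significant at the .05 level:

```python
import math

def t_tail(t, df, hi=60.0, steps=200000):
    """One-tailed P(T > t) for Student's t with df degrees of freedom,
    via Simpson integration of the t density (truncated at x = hi)."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    h = (hi - t) / steps
    total = 0.0
    for i in range(steps + 1):
        x = t + i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)  # Simpson weights
        total += w * c * (1 + x * x / df) ** (-(df + 1) / 2)
    return total * h / 3

n, r = 8, -0.60                            # sample size and correlation as reported
df = n - 2
t = abs(r) * math.sqrt(df / (1 - r * r))   # t statistic for a Pearson r
p = 2 * t_tail(t, df)                      # two-tailed p, approximately 0.12

# Smallest |r| reaching p < .05 with n = 8; 2.447 is the standard
# two-tailed critical t for df = 6.
r_crit = 2.447 / math.sqrt(2.447 ** 2 + df)   # approximately 0.71
```

With so small a panel of consultants, even a correlation of this magnitude cannot be distinguished from chance, which is the sense in which the analysis is underpowered.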
   The findings supporting both tested hypotheses remain ambiguous. As it
stands, photographs taken in a random fashion across fourteen sites by a
professional photographer, who was blind to the purpose of the study and the
identity of the sites, indicated that areas where haunt perceptions had previously
been reported were more likely to receive greater anomaly print ratings, both
across sites and within the target site. Accordingly, the statistically significant incidence of
116                            D. B. Terhune et al.

such defects was unexplained by the respective measured contextual variables.
However, a moderate negative correlation (n = 8, r = -0.43, p = 0.12) was found
between lighting levels and anomaly ratings. This seems to suggest that
photographic defects were more common in rooms with lower lighting levels.
Moreover, lighting levels were lower in the target site than the control site and
lower in the active areas than the inactive areas of the target site. The non-
significance of these relationships precludes us from asserting that lighting
levels are responsible for the presence of photographic defects, but given the size
of this correlation, this remains a worthwhile hypothesis to test in future
research. More generally, the small sample size (n = 14) of contextual variable
data prevented us from adequately testing whether any of the measured con-
textual variables mediated the occurrence of photographic anomalies in the
catalog of prints. Given this inability to rule out potential confounding variables,
we remain reluctant to posit that we have documented a genuine effect with
respect to the concordance in the location of previously reported haunt effects
and ratings of anomalous images across the two sites and within the target site.
Rather, we think that the support for these hypotheses should be interpreted with
caution until further controlled research with sufficient statistical power to
eliminate all known potential confounding variables has replicated or negated
our findings.
   One unexpected yet interesting finding from the analysis of the photographic
prints is that color prints received greater anomaly ratings than all other media
types. This may suggest that artifacts are more common with this media type.
Alternatively, the higher anomaly ratings may stem from the types of artifacts
that are unique to color prints. Lange and Houran (1997b), for instance, pre-
viously found higher incidence rates of amorphous forms in color prints. It is
plausible that amorphous forms were considered to be more anomalous by our
sample of photographic consultants than other types of anomalies, and that this
attentional bias resulted in a greater number of assigned anomaly ratings to color
prints. Given the relatively small number of photographic defects, consultants
were not asked to distinguish between different types of anomalies across media
types and sites. Therefore, this question remains unanswered. Nevertheless, we
urge researchers in this area to consider how the type and incidence of artifacts
vary across media types.
   Two potential confounds in the analysis of the photographic prints, dif-
ferences in the sizes of camera lenses and the use of the flash, are worthy of brief
consideration. It is apparent from the catalog of prints that there is variability
due to these features, which undoubtedly weakens the standardization of the
protocol. This in turn may have increased the incidence of photographic defects
in particular areas. Future research should strive to maintain greater consistency
in the utilization of camera features across media types and trials so as to lessen
the impact that variability in such utilization may have on the incidence of
photographic artifacts. There are a variety of cameras available on the market in
35-mm and medium format with interchangeable backs, which allow the use of
several different film types on the same camera in a single session, thereby
minimizing the variability brought about by using different cameras. It is also
possible to adequately photograph lighted interiors without the use of the flash.
Investigators should request that their professional photographers work without
the flash, given that it has been demonstrated (Schwartz & Creath, 2005) that
flash usage might be responsible for many of the types of photographic images
that have been culturally interpreted as anomalous.
   An additional potential confound was discovered following the consultants'
assessment of the print catalog. A library of occult and new age books was
visible in two separate trials (ten prints) taken in an inactive area of the target
site. The presence of such contextual information may have influenced con-
sultants' assessments of these prints in particular and those at the site more
generally. Upon inspection, it was found that none of these ten prints received
ratings greater than '1'. However, along with an active area of the target site and
an area of the control site, this area in general received the highest average
anomaly rating, 1.05. It remains to be seen whether consultants' anomaly ratings
were influenced by the presence of the occult library, but this does remain
a potential confound, especially given the anomaly ratings that were assigned to
prints taken in that area.
   One final limitation of the photographic analyses is the low consistency and
consensus between the print anomaly ratings of the photographic consultants.
The inter-rater reliability of the consultants' ratings was low, and consensus
between consultants, as indexed by the kappa statistic, varied, but was generally
low. Specifically, the highest kappa statistic between two consultants fell short
of what is considered to indicate moderate consensus (Stemler, 2004). These
results are likely to have been hindered by the severity of the negative skew
found in this dataset. However, it is undoubtedly true that photographic con-
sultants did not agree on what constituted a photographic anomaly. The impli-
cations of this finding, given the controlled nature of these analyses, extend
beyond the boundaries of this study and pose problems for the assessment of
photographic anomalies generally and in research contexts in particular. Despite
these problems, we hope that these results encourage others to conduct research
on this understudied topic.
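The low kappa values reported above reflect a well-known property of the statistic: when nearly all ratings fall in one category, chance agreement is high and kappa stays low even though the raters almost always agree. A minimal sketch with hypothetical counts (not the study's data) illustrates the point:

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum((c1[c] / n) * (c2[c] / n) for c in c1.keys() | c2.keys())
    return (po - pe) / (1 - pe)                            # chance-corrected

# Hypothetical, heavily skewed ratings: 100 prints, almost all rated '1'
rater_a = [1] * 92 + [1] * 4 + [2] * 3 + [2] * 1
rater_b = [1] * 92 + [2] * 4 + [1] * 3 + [2] * 1

# Raw agreement is 93%, yet kappa is only about 0.19 -- well below the
# ~0.41 threshold conventionally read as 'moderate' (cf. Stemler, 2004).
kappa = cohen_kappa(rater_a, rater_b)
```

In a dataset where the vast majority of prints receive the lowest rating, kappa is therefore a harsh index, and low values do not by themselves show that the consultants were judging at random.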

                            Summary & Conclusion
   The present study was abruptly terminated because of aberrant behavior
displayed by one of the occupants of the haunt site. This termination resulted in
our failure to conduct a comprehensive investigation of the site. The data
collected, however, are consistent with previous findings in the literature sur-
rounding haunt phenomena. Analyses weakly suggest the involvement of
various physical contextual variables, such as temperature, humidity, high
magnitude and variability in EM fields, and high CO counts, in the experience
of haunt phenomena,
and the presence of specula may have mediated the incidence of anomalous
experiences. Photographic analyses demonstrated no compelling photographic
anomalies but did show that the haunt and control sites, as well as the active and
inactive areas of the target site, exhibited differential anomaly print ratings,
a relationship which we were unable to satisfactorily explain. Color prints,
relative to other media types, were also found to yield greater anomaly ratings.
The identification of photographic defects as anomalies, generally, may have
been related to photographic consultants' years of experience. The presence of
such defects may be due to the lighting levels in the respective areas in which
prints were captured, or variability in lens and flash features of cameras across
media types or trials. Finally, and perhaps most importantly, the present study
demonstrates the highly sensitive and distressing nature of haunt phenomena and
reaffirms the ethical obligations of investigators to have a mental health
professional on call and to abide by recognized ethical codes of conduct when
interacting with experients of haunt phenomena and conducting an investigation
of an alleged haunting.

  While this would not affect contrasts between active and inactive areas, it may
  lead to an increase in photographic defects at the fault of the photographer.

  This study was supported in full by a grant from the Society for Scientific
Exploration under the auspices of its Young Investigators Program. Correspon-
dence can be addressed to any of the authors.

                                  References
American Psychological Association. (2002). Ethical principles of psychologists and code of conduct.
   Available at: http://www.apa.org/ethics/code2002.html. Accessed 28 January 2006.
Baker, I. S., & O'Keeffe, C. (2005). Ethical guidelines for the investigation of haunting experiences.
   Unpublished manuscript, University of Edinburgh.
Barber, T. X. (1969). Hypnosis: A Scientific Approach. Van Nostrand Reinhold.
Bentall, R. P. (2000). Hallucinatory experiences. In Cardeña, E., Lynn, S. J., & Krippner, S. (Eds.),
   Varieties of Anomalous Experience (pp. 85-120). American Psychological Association.
Braithwaite, J. J., & Townsend, M. (2006). Good vibrations: The case for a specific effect of
   infrasound in instances of anomalous experience has yet to be empirically demonstrated. Journal
   of the Society for Psychical Research, 70, 211-224.
Brown, R. J. (2004). Psychological mechanisms of medically unexplained symptoms: An integrative
   conceptual model. Psychological Bulletin, 130, 793-812.
Christinat, C. (1998). Neuropsychological Sequelae of Chronic Carbon Monoxide Poisoning within
   a Family: A Case Study. Ph.D. Dissertation, Miami Institute of Psychology of the Caribbean
   Center for Advanced Studies, USA. [Abstract].
Cook, C. M., & Persinger, M. A. (1997). Experimental induction of the 'sensed presence' in normal
   subjects and an exceptional subject. Perceptual and Motor Skills, 85, 683-693.
Cook, C. M., & Persinger, M. A. (2001). Geophysical variables and behavior: XCII. Experimental
   elicitation of the experience of a sentient being by right hemispheric, weak magnetic fields:
    Interaction with temporal lobe sensitivity. Perceptual and Motor Skills, 92, 447-448.
Foltin, E. M., & Alluisi, E. A. (1969). Crystal gazing: Subjective responses to one of four unstructured
    projective media. Journal of Psychology, 73, 53-62.
Granqvist, P., Fredrikson, M., Unge, P., Hagenfeldt, A., Valind, S., Larhammar, D., & Larsson, M.
    (2005). Sensed presence and mystical experiences are predicted by suggestibility, not by the
    application of transcranial weak complex magnetic fields. Neuroscience Letters, 379, 1-6.
Green, C., & McCreery, C. (1975). Apparitions. Bristol, UK: Western Printing Services.
Houran, J. (1997). Predicting anomalous effects on film: An empirical test. Perceptual and Motor
    Skills, 84, 691-694.
Houran, J. (2000). Toward a psychology of 'entity encounter experiences'. Journal of the Society for
    Psychical Research, 64, 141-158.
Houran, J. (2002). Analysis of haunt experiences at a historical Illinois landmark. Australian Journal
    of Parapsychology, 2, 97-124.
Houran, J., & Brugger, P. (2000). The need for independent control sites: A methodological
    suggestion with special reference to haunting and poltergeist field research. European Journal of
    Parapsychology, 15, 30-45.
Houran, J., Wiseman, R., & Thalbourne, M. A. (2002). Perceptual personality characteristics
    associated with naturalistic haunt experiences. European Journal of Parapsychology, 17,
Kelly, E. F., & Locke, R. G. (1981). A note on scrying. Journal of the American Society for Psychical
    Research, 75, 221-227.
Kumar, V. K., & Pekala, R. J. (2001). Relation of hypnosis-specific attitudes and behaviors to
    paranormal beliefs and experiences. In Houran, J. & Lange, R. (Eds.), Hauntings and Poltergeists:
    Multidisciplinary Perspectives (pp. 260-279). Jefferson, NC: McFarland.
Lange, R., & Houran, J. (1997a). Context-induced paranormal experiences: Support for Houran and
    Lange's model of haunting phenomena. Perceptual and Motor Skills, 84, 1455-1458.
Lange, R., & Houran, J. (1997b). Fortean phenomena caught on film: Evidence or artifact? Journal of
    Scientific Exploration, 11, 41-46.
Lange, R., & Houran, J. (2001). Ambiguous stimuli brought to life: The psychological dynamics of
    hauntings and poltergeists. In Houran, J. & Lange, R. (Eds.), Hauntings and Poltergeists:
    Multidisciplinary Perspectives (pp. 280-306). Jefferson, NC: McFarland.
Lange, R., Houran, J., Harte, T. M., & Havens, R. A. (1996). Contextual mediation of perceptions in
    hauntings and poltergeist-like experiences. Perceptual and Motor Skills, 82, 755-762.
Lange, R., Irwin, H. J., & Houran, J. (2000). Top-down purification of Tobacyk's Revised Paranormal
    Belief Scale. Personality and Individual Differences, 29, 131-156.
Larsson, M., Larhammar, D., Fredrikson, M., & Granqvist, P. (2005). Reply to M.A. Persinger and
    S.A. Koren's response to Granqvist et al. "Sensed presence and mystical experiences are predicted
    by suggestibility, not by the application of transcranial magnetic fields". Neuroscience Letters,
    380, 348-350.
Maher, M. C. (1999). Riding the waves: A modern study of ghosts and apparitions. Journal of
    Parapsychology, 63, 47-80.
Maher, M. C. (2000). Investigation of the General Wayne Inn. Journal of Parapsychology, 64, 365-
Maher, M. C., & Hansen, G. P. (1997). Quantitative investigation of a legally disputed "haunted
    house." Paper Presented at the Proceedings of the Parapsychological Association 40th Annual
    Convention held in conjunction with the Society for Psychical Research, Brighton, England (pp.
Moody, R. A. (1992). Family reunions: Visionary encounters with the departed in a modern-day
    psychomanteum. Journal of Near-Death Studies, 11, 83-121.
Moody, R. A. (1994). A latter-day psychomanteum. Paper presented at the Proceedings of the
    37th Annual Convention of the Parapsychological Association, Amsterdam, the Netherlands (pp.
Nickell, J. (2001). Phantoms, frauds, or fantasies? In Houran, J. & Lange, R. (Eds.), Hauntings and
    Poltergeists: Multidisciplinary Perspectives (pp. 214-223). Jefferson, NC: McFarland.
Persinger, M. A. (1993). Transcendental meditation (TM) and general meditation are associated with
    enhanced complex partial epileptic signs: Evidence for 'cognitive' kindling? Perceptual and
    Motor Skills, 76, 444-446.
Persinger, M. A., & Cameron, R. A. (1986). Are earth faults at fault in some poltergeist-like episodes?
    Journal of the American Society for Psychical Research, 80, 49-73.
Persinger, M. A., & Healey, F. (2002). Experimental facilitation of the sensed presence: Possible
    intercalation between the hemispheres induced by complex magnetic fields. Journal of Nervous
    and Mental Disease, 190, 533-541.
Persinger, M. A., & Koren, S. A. (2001). Predicting the characteristics of haunt phenomena from
    geomagnetic factors and brain sensitivity: Evidence from field and experimental studies. In
    Houran, J. & Lange, R. (Eds.), Hauntings and Poltergeists: Multidisciplinary Perspectives (pp.
    179-194). Jefferson, NC: McFarland.
Persinger, M. A., & Koren, S. A. (2005). A response to Granqvist et al. "Sensed presence and
    mystical experiences are predicted by suggestibility, not by the application of transcranial weak
    magnetic fields". Neuroscience Letters, 380, 346-347.
Persinger, M. A., Tiller, S. G., & Koren, S. A. (2000). Experimental simulation of a haunt experience
    and elicitation of paroxysmal electroencephalographic activity by transcerebral complex magnetic
    fields: Induction of a synthetic "ghost"? Perceptual and Motor Skills, 90, 659-674.
Potts, J. (2004). Ghost hunting in the twenty-first century. In Houran, J. (Ed.), From Shaman to Scientist:
    Essays on Humanity's Search for Spirits (pp. 211-232). Lanham, MD: Scarecrow Press, Inc.
Radin, D., & Roll, W. G. (1994). A radioactive ghost in a music hall. Paper Presented at the
   Proceedings of the 37th Annual Conference of the Parapsychological Association, Amsterdam, the
    Netherlands (pp. 337-346).
Roll, W. G. (1977). Poltergeists. In Wolman, B. B. (Ed.), Handbook of parapsychology (pp. 382-
    413). Van Nostrand Reinhold.
Roll, W. G. (2004). Psychomanteum research: A pilot study. Journal of Near-Death Studies, 22, 251-260.
Roll, W. G., Maher, M., & Brown, B. (1992). An investigation of reported haunting occurrences in
   a Japanese restaurant in Georgia. Paper Presented at the Proceedings of the Parapsychological
    Association 35th Annual Convention, Las Vegas, NV (pp. 151-168).
Roll, W. G., & Persinger, M. A. (2001). Investigations of poltergeists and haunts: A review and
   interpretation. In Houran, J. & Lange, R. (Eds.), Hauntings and Poltergeists: Multidisciplinary
   Perspectives (pp. 123-163). Jefferson, NC: McFarland.
Ross, C. A., & Joshi, S. (1992). Paranormal experiences in the general population. Journal of Nervous
   and Mental Disease, 180, 357-361.
Schwartz, G. E., & Creath, K. (2005). Anomalous orbic "spirit" photographs? A conventional optical
   explanation. Journal of Scientific Exploration, 19, 343-358.
Skirrow, P., Jones, C., Griffiths, R. D., & Kaney, S. (2002). The impact of current media events on
   hallucinatory content: The experience of the intensive care unit patient. British Journal of Clinical
    Psychology, 41, 87-91.
Stemler, S. E. (2004). A comparison of consensus, consistency, and measurement approaches to
   estimating interrater reliability. Practical Assessment, Research & Evaluation, 9. Available at:
    http://PAREonline.net/getvn.asp?v=9&n=4. Accessed 11 May 2006.
Stevenson, I. (1972). Are poltergeists living or are they dead? Journal of the American Society for
    Psychical Research, 66, 233-252.
Tandy, V. (2000). Something in the cellar. Journal of the Society for Psychical Research, 63, 129-140.
Tandy, V., & Lawrence, T. R. (1998). The ghost in the machine. Journal of the Society for Psychical
   Research, 62, 360-364.
Terhune, D. B. (2004). Investigation of reports of a recurrent sensed presence: Assessing recent
   conventional hypotheses. Journal of the Society for Psychical Research, 68, 153-167.
Tobacyk, J. J. (1988). A Revised Paranormal Belief Scale. Unpublished manuscript. Louisiana Tech.
   University, Ruston, LA.
Tobacyk, J., & Milford, G. (1983). Belief in paranormal phenomena: Assessment instrument
   development and implications for personality functioning. Journal of Personality and Social
   Psychology, 44, 1029-1037.
Tyrrell, G. N. M. (1963). Apparitions. Collier Books. (Originally published in 1953).
Wiseman, R., Watt, C., Greening, E., Stevens, P., & O'Keeffe, C. (2002). An investigation into the
    alleged haunting of Hampton Court Palace: Psychological variables and magnetic fields. Journal
   of Parapsychology, 66, 387-408.
Wiseman, R., Watt, C., Stevens, P., Greening, E., & O'Keefe, C. (2003). An investigation into alleged
    'hauntings'. British Journal of Psychology, 94, 195-211.
Zimbardo, P. G., LaBerge, S., & Butler, L. D. (1993). Psychophysiological consequences of
    unexplained arousal: A posthypnotic suggestion paradigm. Journal of Abnormal Psychology,
   102, 466-473.
Journal of Scientific Exploration, Vol. 21, No. 1, p. 121, 2007

                   Comment on "An Analysis of Contextual
                  Variables and the Incidence of Photographic
                      Anomalies at an Alleged Haunt and
                                a Control Site"

Since some readers may wish for further information about this case, as one of
our reviewers did, we publish the relevant exchange:
   I felt a definite lack of a fuller account of what the experiencers had expe-
rienced. A brief summary of their nature would, I believe, help the reader (1) to
understand why this case was considered to justify investigation; (2) to appre-
ciate why the experiencers felt as they did, particularly why one of them
reacted as strongly as he did. Moreover, it would help the reader understand the
significance of the findings if s/he was given information, such as "the kitchen,
where a visitor allegedly saw the figure of an old woman" or whatever is the
case. As it is, the investigation has a strangely abstract quality, as though
divorced from the incidents that triggered it. This may have been intentional,
as regards the participants in the investigation; but the reader may well feel he
is not being given sufficient insight into the circumstances of the case.
   Author's Response:
   Although we appreciate this comment and wish we could provide more
information regarding the experiences reported at the site, we were unable to do
so given the methodological components of the study (the experimenters were
double-blind) and the fact that the experiment was abruptly discontinued along
with all contact with the inhabitants. We believe that this is outlined clearly in
the paper. However, we have added an additional paragraph to the article to
    Journal of Scientific Exploration, Vol. 21, No. 1, pp. 123-133, 2007

                 The Function of Book Reviews in Anomalistics

                                          G. H. Hovelmann
                                     Hovelmann Communication
                         Carl-Strehl-Str. 16, D-35039 Marburg, Germany
                          E-Mail: hoevelmann.communication@kmpx.de

          Abstract-Cooperative intradisciplinary communication, including the recog-
          nition and critical discussion of the current literature, is essential for the success
          of any scientific endeavor. For at least two interrelated reasons, this is
          a particularly demanding task in the context of anomalistics. While the
          notorious "explosion" and diversification of accessible knowledge forms
          a serious problem for all scientific disciplines, a transdisciplinary endeavor
          such as anomalistics that is organized across established disciplinary
          boundaries must cope with a particularly heavy load of new, potentially
          relevant information and publications that must be seriously considered.
          Therefore, within the framework of the periodical literature, reviews of book
          publications on topics relevant to anomalistics fulfill an important task of
          individual and reciprocal information and education. Book reviews have the
          function of simultaneously widening and focusing the perspective of interested
          scientists. Analyses of the book review frequencies in two leading periodicals in
          the field of anomalistics (the Journal of Scientific Exploration and the German
          Zeitschrift für Anomalistik) reveal marked increases of the book review sections
          for both journals over recent years. This indicates that these journals and their
          respective editorial teams have developed a clear recognition of the important
          guiding function of book reviews. However, publishing reviews for the sake of
          reviews is insufficient and not in itself scientific. Therefore, the two final
          sections of this essay explore the differences between and the respective
          scientific merits (or lack thereof) of two types of book reviews - analytical vs.
          descriptive - and discuss various editorial criteria and structural requirements
          pertaining to book reviews as scientific publications in their own right.

          Keywords: communication in science-anomalistics-transdisciplinarity-
                    book reviews (descriptive, analytical, objective)-book review

                                  Introduction
   It is an undisputed fact that science is ultimately dependent on cooperation and
organised according to the social division of labor. What is equally indubitable is
that a tremendous variety of communication types are vital for the success of
any scientific exercise organised on this cooperative basis. Furthermore, it is
easy to understand that attempts at communication in the field of scientific
124                             G. H. Hovelmann

endeavor, just as in every other area where people interact, tend to meet with
different amounts of success. That is reason enough to submit the various forms
of communication prevalent in science to self-reflective scientific examination,
together with the circumstances, causes and effects of successful and unsuc-
cessful attempts at achieving mutual comprehension. In recent decades, the
significance of the relevant science studies has appreciably increased, as can be
easily ascertained from the corresponding host of publications.
   When considering scientific communication, it is advisable to keep two
important aspects carefully separate from one another. These are communi-
cation in science and communication of science. The first concept refers to
communication within the scientific community, between researchers person-
ally involved in the topic or those interested in it, whilst the second refers to
the act of conveying science to a general, scientifically more or less uneducated
public, which, however, is always affected to some extent by the results and
developments of scientific research. In the current context we shall only be
concerned with the first aspect, that of internal communication between
scientists.
   Within the framework of professional "science of science", as already
mentioned, such an unwieldy mass of literature has been created during the
last forty to fifty years that it is no longer of manageable proportions (for an
extensive interim inventory, see Hovelmann, 1987). Sections of this literature
highlight many aspects of scientific periodical literature from the multiple
perspectives of communication theory, psychology, sociology, linguistics,
scientometrics, politics and economics, to name but a few. When dealing with
specialist scientific periodicals, these studies have concentrated on three
intricate topics in particular. Firstly, they have focused on the logic, and
especially on the doubts and imponderables concerning the processes involved
in the external appraisal of manuscripts (peer review), which serves to bring
about or to simplify publication decisions. Secondly, they have looked at the
"publish or perish" rule and the conditions and consequences bound up with
following it. Thirdly, much consideration has been given to the practices
involved in scientific citation and to the motivations and complexities or even
entanglements of citation cartels. It is apparent that all of these aspects of
scientific communication are of great possible relevance for publishers, editors
and readers of specialist scientific journals (Armstrong, 1982). However,
comparable studies specifically looking at the book review sections of
scientific journals, which one could also expect in the environment described,
are found only sporadically. Focusing on particular themes, they generally go
down to the tiniest individual detail, and prove to be rather unproductive from
a more general, systematic point of view. Added to this comes the fact that
such studies only ever concern themselves with the review sections of
scientific journals that have a solid intradisciplinary basis, i.e., those that are
associated with one set science, if not indeed with a narrowly defined branch
of a discipline.
                Anomalistics Is Essentially Transdisciplinary
   Recently, within the context of a lengthy obituary of the sociologist Marcello
Truzzi published, in German, in the Zeitschrift für Anomalistik, I attempted
a description of the subject matter and fields of inquiry involved in anomalistics,
as well as of its interdisciplinary or, to put it more precisely, transdisciplinary
(i.e., discipline-encompassing) nature (Hovelmann, 2005a, pp. 15-20). Although
short, that description is also sufficient for the present purpose. One of the facets
of anomalistics that is explained in detail there and that also has many
methodological consequences is its very inter-, multi- or transdisciplinary
nature, which results from the subject matter itself. This subject matter is
comprised of unusual claims or presumptions regarding existence, effects or
correlations, whose examination and appraisal often requires a solid basic un-
derstanding of scientific work and argument beyond the limits of an actual
specialist branch of science, in addition to sound knowledge and ability in the
discipline itself.
   Academically established science studies that have been carried out since the
late 1950s frequently claim that there has been a so-called "exponential"
increase in specialist scientific literature and scientific knowledge, which some
like to describe as a doubling in the scope of knowledge (however this may be
calculated) within ever decreasing periods of time (Brookes, 1970; Campbell &
Halliday, 1985; Drubba, 1976; Edge, 1979; Moravcsik, 1973; Price, 1969;
Stuhlhofer, 1983). When examined more closely, it is safe to say that this now
notorious "explosion of knowledge" is more an increase in what can potentially
be known than in what is actually known. Meanwhile, contrasting with this (and
sometimes also standing in its way) is a self-inflicted disciplinary modesty,
almost unavoidable in scientific education, which, to put it less generously, also
characterises large sections of research and teaching as a self-confident one-
track specialism. Against this background, any work carried out by aspiring
young scientists which crosses the boundaries to other disciplines, let alone
strays into the areas with which anomalistics professionally concerns itself-
areas that are not (as yet) firmly defined and are also not currently sufficiently
empirically secured-is at first generally not provided for in the curriculum, later
expressly discouraged as dangerous to the scientist's career, and at some
point mostly no longer even possible.
   This diagnosed increase and diversification in the scientific knowledge that is
principally accessible can now be clearly seen to pose unavoidable problems
even, and indeed particularly, for an undertaking such as anomalistics that is
necessarily constructed in a transdisciplinary fashion. It must be expected that
the scientists and other experts from an extensive number of disciplines (ranging
from the "hard" sciences such as physics, astronomy and geology to "softer"
ones such as anthropology, literature and the history of religion) who are
involved in studies, discussions and consideration within the context of
126                             G. H. Hovelmann

anomalistics will not be able to confine themselves to the knowledge
of their own fields and possibly also that of a neighboring discipline. Rather,
they must also have acquired competence in a wide spectrum of other branches
of science-and ideally also in the philosophy and history of science and
cultural history-or must at least be sufficiently competent to carry out studies
or assess supposed anomalies in interaction with their scientific colleagues, and
ultimately to explain these convincingly, as far as is possible. In addition, they
should take note in their specific area of interest of all the anomalistic claims
and findings, sometimes subject to revision at short notice, and should be able to
retain an overview. All this places very high demands on the scientists involved
in this way, particularly since they generally have to deal with this work load in
addition to their usual research and/or teaching obligations. However, as
Gertrude Schmeidler once remarked, referring to scientists with ambitions in
parapsychology, which doubtlessly is one of the most down-to-earth sectors of
anomalistics: "Unless you're very good, you're not good enough" (Schmeidler,
1987, p. 86).
   Consequently, reviews of pertinent book publications, particularly in spe-
cialized journals on anomalistics but also in the various periodicals for certain
anomalistic specialties, fulfil an important task of individual and reciprocal
information and education, indeed one that is almost vital against the
aforementioned background. Although anomalistics also involves the
implementation of empirical research work, this is not its principal focus. Rather,
its main task is to systematically consider and appraise the research done by
other scientists and the argumentation bound up with this as far as they relate
to anomalistic issues (Hovelmann, 2005a). As a result, this work has more of
the character of a critical, disinterested review of, comment on and appraisal
of descriptions of the empirical studies, field research reports or theoretical
reflections of other scientists (or even laypersons submitting reports [see
Hovelmann, 2005b]) than of personal empirical research. Faced with the
extremely broad spectrum of anomalistic or anomaly-relevant topics, ranging from
well-known but currently unexplained anomalies within the context of estab-
lished science itself (of which there are many today, even if they are mentioned
only reluctantly or on the quiet) to sometimes obscure reports of singular
extraordinary experiences, a single observer can certainly not be expected to
maintain a personal overview of all the principal literature available, especially
all new publications, and to succeed in separating the chaff from the wheat.
Whilst it is permissible to demand that a scientist have a comprehensive overview
of the pertinent specialist literature in his or her own discipline, whether that be
a matter of, say, the Doppler effect in acoustics and astronomy or of Neolithic
Bandkeramik culture in archaeology, the idea that an individual should be fully
up to date with the vast literature relevant to anomalistics and its transdisciplin-
ary branches is, although it is legitimate to aim high, rather too much to ask.
   In this context, the book review sections of anomalistics journals have the
extremely important dual function of simultaneously widening and focusing the
perspective of interested scientists. Indeed, it appears undeniable that the rel-
                        Function of Book Reviews in Anomalistics                                 127

                                             TABLE 1
                  Proportion of Book Reviews in the Journal of Scientific Exploration

                                                                 Average length         Percentage of
                      Total       Pages of       Number of        of reviews            reviews in the
  Volume*             pages       reviews         reviews          in pages             entire volume

Vol.   1, 1987
Vol.   2, 1988
Vol.   3, 1989
Vol.   4, 1990
Vol.   5, 1991
Vol.   6, 1992
Vol.   7, 1993
Vol.   8, 1994
Vol.   9, 1995
Vol.   10, 1996
Vol.   11, 1997
Vol.   12, 1998
Vol.   13, 1999
Vol.   14, 2000
Vol.   15, 2001
Vol.   16, 2002
Vol.   17, 2003
Vol.   18, 2004
Vol.   19, 2005
Vol.   20, 2006

* From 1987 to 1991 (Vols. 1-5), two issues of the JSE were published each year; since then there
have been four issues per year. At the time of this survey only three issues for 2006 (Vol. 20) had appeared.

evant periodicals and their respective editorial teams have, by now, a very clear
recognition of this important guiding function of book reviews and show this
understanding by giving reviews a proportionately large amount of space.

             Book Review Frequency in Periodicals on Anomalistics
   This premise is confirmed by a more in-depth look at what are, as far as I am
aware, currently the only two specialist journals worldwide that deal explicitly
with scientific discussion throughout the entire spectrum of anomalistics-the
relatively new Zeitschrift für Anomalistik (ZfA), published in German with
abstracts in English and occasional English contributions and, more particularly,
its "big sister", the excellent Journal of Scientific Exploration (JSE), published
in the United States by the Society for Scientific Exploration and boasting 20
increasingly comprehensive periodical volumes since 1987. For both periodicals
in this study, both the total length of the publication and the number and length
of the book reviews included in each volume were noted. In this process, those
book reviews discussing multiple thematically-linked books within a single
review were counted as only one review. However, two reviews contrasting
linked discussions of the same book by two different reviewers with disparate

                                          TABLE 2
                  Proportion of Book Reviews in the Zeitschrift für Anomalistik

                                                               Average length        Percentage of
                   Total       Pages of       Number of         of reviews           reviews in the
  Volume*          pages       reviews         reviews           in pages            entire volume

Vol. 1,    2001     118           7                3                  2.3                  6.0
Vol. 2,    2002     324           11               3                  3.7                  3.4
Vol. 3,    2003     291           18               6                  3.0                  6.2
Vol. 4,    2004     292           19               6                  3.2                  6.5
Vol. 5,    2005     376           74              18                  4.1                 20

* In general, the ZfA publishes one single and one double issue each year. In 2004 it produced
a collected volume for the year instead. In 2001, only the initial issue came out, around the end of
the year.

perspectives or approaches (only the case once in the ZfA up to now, but already
seen at least a dozen times in the JSE) were regarded as two reviews. The section
"Further Books of Note", which has appeared, with short book reviews, in
almost every issue of the JSE for many years, was not taken into account in
these calculations. Consequently, the total number of books discussed in the JSE
and the amount of space taken up with these within its pages are actually slightly
higher than is apparent under the aforementioned aspects of the calculation,
displayed in tabular form.
   The overview for the JSE in Table 1 shows, besides the fact that the total
length of this periodical has increased continually since the beginning of the
1990s, that both the length of the book review section overall and the number of
books discussed have grown by a markedly disproportionate amount. Whilst the
average length of individual book reviews has risen only slightly over this
period of nearly 20 years, the relative proportion of the review
section within the entire periodical has increased very significantly. For the first
seven years this proportion was always a percentage in single figures (between
1.4% and 8.0%), yet even an average taken across the entire period shows it
reaching 14%. If we take into account only the last almost seven years (2000 to fall
2006), since Henry H. Bauer took over as editor-in-chief and David Moncrief
as editor of the book review section, then the relative proportion of reviews
measured against the entire JSE publication reaches 21%, more than a fifth. This
is sufficient evidence that the aforementioned weighting of book reviews for
anomalistics is at least implicitly understood by the JSE's editorial board.
   In contrast, the tabular overview of the book reviews in the ZfA (Table 2) is
currently still relatively uninformative, as it can, inevitably, only draw on five
years' work, or more precisely on a mere four and a third periodical volumes.
However, even here a marked growth in the relative proportion of reviews
within the entire body of the periodical can be noted.4 Taken as an average, this
proportion has already reached nearly 10% (9.2% to be precise), and for the last
volume considered (2005), it stood at around a fifth (20%). Consequently, it is
obvious that the ZfA is taking a very similar course to the JSE as regards the
relative proportion of reviews within the body of the periodical.5
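For readers who wish to verify the aggregate figures quoted above, a minimal sketch using the Table 2 values as transcribed: note that the "nearly 10% (9.2%)" average is computed over the combined page counts of all five volumes, not as a mean of the per-volume percentages.

```python
# Table 2 values per ZfA volume, 2001-2005: (total pages, pages of reviews)
volumes = [(118, 7), (324, 11), (291, 18), (292, 19), (376, 74)]

total_pages = sum(t for t, _ in volumes)    # 1401 pages across all volumes
review_pages = sum(r for _, r in volumes)   # 129 pages of reviews

# Aggregate proportion of reviews across the five volumes
aggregate = 100 * review_pages / total_pages

# Per-volume proportion for 2005 (the last entry), "around a fifth"
last = 100 * volumes[-1][1] / volumes[-1][0]

print(round(aggregate, 1), round(last, 1))  # prints: 9.2 19.7
```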

                  A Book Review Is a Scientific Publication
   In the everyday business of academia, and consequently in one's own and
others' lists of publications, book reviews do not enjoy a particularly high status.
This is largely due to the fact that the distinction between two possible sorts of
book reviews (descriptive vs. analytical ones) is not made with sufficient care.
Even scientific periodicals themselves occasionally contribute significantly to
this undesirable state of affairs.

Descriptive Book Reviews
   The following instructions to reviewers were issued by a scientific journal, the
periodical of a renowned society whose name shall remain mercifully un-
mentioned. They are quoted almost in full, and clearly illustrate what is meant
by this type of review. "For our 'Book Reviews' column, we want exciting and
informative appraisals of books in the subject area covered by our [...]. They
can also reflect personal impressions gained from reading, but should not be
longer than two pages of print. Do not be afraid to be critical!!! We cannot
consider reviews longer than two pages of print." Anyone who encourages his or
her authors to write book reviews in this way clearly does not take either his
work as editor of a scientific journal, his authors or indeed his readers seriously
(although it may be a different matter for the publishers advertising in his
journal). He would do just as well to abstain from printing a review section at all.
It is quite understandable if a scientific reviewer is not keen to show off a book
review created in accordance with these requirements. The approval from the
readership and academic colleagues (ideally eager for knowledge) will also be
correspondingly muted.
   Even the so-called "academic review journals", which, in many disciplines,
fulfil the function of informing professional scientists about new publications,
contain, besides sometimes comprehensive and highly instructive essay reviews,
a great number of short (and extremely short) descriptive reviews. These are
usually restricted to sparse summaries of the text, sometimes not even up to the
level of the books' flap texts. In all respects, it is rather
questionable what useful purpose they fulfil. It is possible that purely descriptive
brief synopses may be in the interest of the publisher, and perhaps even that of
the author, as they promote sales. However, even in the best cases, they
represent only a competent piece of journalism, no actual scientific achievement.
In an era when it is easy to access readers' reviews on the sites of wholesale
dispatchers such as Amazon, and when publishers' websites contain an
increasing amount of information, the aforementioned descriptive reviews are
generally a waste of time for those who write or read them.

Analytical Book Reviews
   In the literature, a second type of book review is generally referred to by the
term "critical review". However, I prefer to call them "analytical reviews", as,
in view of the often negligent misuse of its ambiguous meaning, the adjective
"critical" has been appreciably devalued and has become unusable, particularly
in the context of discussions of anomalistics. An analytical discussion of
a recently published book also requires (1) a concise description of its content
(without retelling the entire story) and (2) a short characterisation of the author
and his or her professional background. However, there are some further
requirements for a review that aims additionally to satisfy scientific demands.
The following provides some brief examples of these, although they must
always be adapted to suit specific cases: (3) an elucidation of the argumentative
perspectives that are expressly formulated in the book, or that can safely be
drawn from it; (4) an explanation of the level of difficulty or complexity; (5) an
analysis and classification of these perspectives against the background of
previous findings or discussion; (6) an appraisal of the methodological
sophistication of the study plans or analytic procedures used and of the logical
coherence of the arguments; (7) a balanced appraisal of the success of the
author's work, measured against (a) what objectives the author aimed to achieve
and, possibly, (b) what the reviewer judges would have actually been necessary
for the author to achieve; (8) the provision by the reviewer of sufficient
documentation to support his or her own statements, if necessary, and (9)
a conclusive evaluation. It is usually just as desirable, and sometimes vital, to
include representative key quotations from the work discussed (perhaps in the
form of paraphrases), as it is to consult (and compare) previous similar or indeed
alternative pieces of work, backed up by quotations and references.
   It is mostly left to the discretion of the reviewer dealing with the specific book
to be discussed to decide whether all this can be suitably achieved over two,
three or only over 10 pages of print. The Zeitschrift für Anomalistik (and, to the
best of my knowledge, also the Journal of Scientific Exploration) does not, in
principle, place any blanket space limits on reviews, as long as the scientific
discussion and possible contribution to knowledge that are the objectives of the
book review justify the expenditure of time, effort and journal space. If one
understands an analytical review to be principally a scientific piece of work,
then it follows consequently that this type of review, just as every other pertinent
specialist publication, should, if necessary, be allowed to include supplementary
scientific apparatus (bibliography, tables, diagrams, etc.).

                   Unattainable Expectations of Objectivity
  In principle, there can be no such thing as an "objective" book review, and
tedious arguments can result from the question of whether they would actually
be desirable should they be possible. Henry H. Bauer, the editor-in-chief of the
Journal of Scientific Exploration, recently cut these unattainable expectations of
objectivity down to size in an editorial: "[T]here is surely no such thing as an
'objective' review of a book, unless it were merely a compendium of data like
a table of logarithms. Surely books are interesting to readers for their particular
take on a given set of facts. If reviewers are to say what is interesting about
a book, then they must inject their own take on that in some manner. It is
interpretations of facts, the meaning of facts, that is significant; and inter-
pretations, being the product of human minds, are never strictly objective.
Interpretations are bound to differ." (Bauer, 2005, p. 397; italics in original).
   However, what can actually be emphatically demanded of a reviewer, as
opposed to strict objectivity, is the attempt (incidentally in the best tradition of
anomalistics) to present the content of a book impartially and to reflect its key
statements as representatively as possible. The reviewer should first give the
author as much credit as is possible and justifiable, and only then express any
criticism, in a balanced and comprehensible fashion, if necessary with appropriate
emphasis, whilst providing sufficient justification and documentation for any necessary
counterclaims. Using the example of book reviews in parapsychological
scientific periodicals, Scott Rogo showed that this does not always succeed as
 well as would be desirable (Rogo, 1977). His study clearly highlights some of
 the more questionable strategies employed by reviewers, including excessively
shortened quotations or selective summaries, long critiques of rather in-
significant details, assumptions that are not covered by the content of the book
discussed, criticism of the fact that questions have not been clarified in cases
 where the author never set out to answer them in the first place, criticism that the
 work is not in line with standards (of whatever type), although the author never
 announced or even intended that they would be met, and much more.
    Admittedly, parapsychology consists of a very small international community
 made up of fewer than two hundred natural and social scientists, almost all of
 whom are personally acquainted. There are some very close personal relation-
 ships, both friendships and animosities. However, after decades of reading the
 literature in this field extensively, including perhaps several thousand book
reviews, I have the impression that, in spite of what is sometimes very frank
 mutual criticism, even among colleagues who are friends, an amazingly small
 number of matters end up getting out of hand. Furthermore, in scientific groupings
 and fields of interest where there are larger numbers, including anomalistics
(which principally contains the subject of parapsychology), potential influences
 resulting from personal obligations are not a prominent peril.
    What is more, the onus is on the book review editors of journals in the field of
 anomalistics and other scientific disciplines to ensure that no personal or indeed
institutional interests threaten to conflict with the obligations of any reviewer.
They can achieve this by making an appropriate selection of competent reviewers
for the books to be discussed. Faced with a pool of potential reviewers that is
relatively small, and given that reviewers should bring, on the one hand, a suitable
scientific qualification in the relevant discipline and, on the other, at least a basic
familiarity with a transdisciplinary
field such as anomalistics, this task is certainly not always easy. However, in the
end, it is successfully achieved more often than one might fear.
   The book review editor of any scientific journal, including one in the field of
anomalistics, cannot exactly compel reviewers to meet all the structural
requirements for an analytical review, or to take all the
criteria mentioned into account (unlike, in many cases, the editor of a high-
circulation popular publication, who works with paid authors). It is hardly possible
for him to undertake, of his own accord, changes to the text of a book review that
go beyond minor editorial measures to smooth out the text, adapt it to linguistic
standards or remove formulations that are too rustic, at least not without
consulting the reviewer. At most, he can, if necessary, completely refuse to print
a review that was requested or submitted uninvited (something that has already
happened with the Zeitschrift für Anomalistik). In any case, the usual rule
applies: Everyone is responsible for what he or she writes.

 This is the invited, updated and very slightly revised English version of an
 essay that was originally published, in German, under the title "Die Rolle von
 Rezensionen in der Anomalistik", in the Zeitschrift für Anomalistik, 5, 2005,
 I do not want to dissimulate the fact that I certainly see considerable problems
 with established science studies, the overwhelming majority of which have
 a purely descriptive focus. Although they sometimes provide significant
 empirical findings, these studies frequently, and without need, dispense with
 legitimate science-critical and, even more so, with all normative approaches
 and interests. This happens because, in a widespread naturalistic self-
 limitation, they treat the objects of their studies as if they were naturally
 occurring things or events, rather than cultural undertakings with a focus on
 the determination of human aims (Hovelmann, 1988).
 Starting with the upcoming 2006 volume, the Zeitschrift für Anomalistik will
 also introduce an additional section with short reviews of books that,
 although they may not focus on issues and topics of anomalistics, provide
 material or arguments that are important for discussions within the framework
 of anomalistics, or could become so, and do this, incidentally, in such a way
 that interested persons could easily miss it.
 Moreover, the type area in the ZfA is a little more generous than that in the
 JSE. As a consequence, the latter can fit slightly more text on a printed page.
 As I am writing this, double issue 1/2 of the 2006 volume of the ZfA is about to
 go to the printers. It will contain approximately 70 pages of book reviews,
 covering 19 recent books, plus a 10-page "Further Books of Note" section.

                                        References

Armstrong, J. S. (1982). Research on scientific journals: Implications for editors and authors. Journal
    of Forecasting, 1, 83-104.
Bauer, H. H. (2005). Editorial about book reviews and letters. Journal of Scientific Exploration, 19,
Brookes, B. C. (1970). The growth, utility and obsolescence of scientific periodical literature. Journal
    of Documentation, 26, 283-294.
Campbell, R., & Halliday, T. (1985). Why so many papers? Scholarly Publishing, 16, 313-316.
Drubba, H. (1976). 90.000 wissenschaftliche Zeitschriften? [90,000 scientific journals?] Nachrichten
    für Dokumentation, 27, 115-117.
Edge, D. O. (1979). Quantitative measures of communication in science: A critical review. History of
    Science, 17, 102-134.
Hovelmann, G. H. (1987). Bibliographie zur Selbstthematisierung der Wissenschaft [Bibliography on
    the Self-Reflection of Science]. Erlangen: Institut für Gesellschaft und Wissenschaft, Universität
    Erlangen-Nürnberg (567 pp.).
Hovelmann, G. H. (1988). The Science of Science: Some Neglected Problems. Invited lecture at the
    Hoger Instituut voor Wijsbegeerte, Centrum voor Logica, Filosofie van de Wetenschappen en
    Taalfilosofie, Katholieke Universiteit Leuven, Belgium.
Hovelmann, G. H. (2005a). Devianz und Anomalistik: Bewährungsproben der Wissenschaft. Prof.
    Dr. Marcello Truzzi (1935-2003) [Deviance and anomalistics: Probing the limits of science.
    Prof. Dr. Marcello Truzzi (1935-2003)]. Zeitschrift für Anomalistik, 5, 5-30.
Hovelmann, G. H. (2005b). Laienforschung und Wissenschaftsanspruch [Amateur research and claims
    to scientific respectability]. Zeitschrift für Anomalistik, 5, 126-135.
Moravcsik, M. J. (1973). Measures of scientific growth. Research Policy, 2, 266-275.
Price, D. J. de S. (1969). Measuring the size of science. Proceedings of the Israel Academy of Sciences
    and Humanities, 4, 98-111.
Rogo, D. S. (1977). Understanding book reviews in parapsychology. Parapsychology Review, 8(1),
Schmeidler, G. R. (1987). Questions and attempts at answers. In Pilkington, R. (Ed.), Men and
    Women of Parapsychology: Personal Reflections (pp. 76-88). Jefferson, NC & London:
    McFarland.
Stuhlhofer, F. (1983). Unser Wissen verdoppelt sich alle 100 Jahre. Grundlegung einer
    "Wissensmessung" [Our knowledge doubles every 100 years: Foundations of "knowledge
    measurement"]. Berichte zur Wissenschaftsgeschichte, 6, 169-193.
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 135-140, 2007

                    Ockham's Razor and Its Improper Use

                                            D. Gernert
                                   Technische Universitaet Muenchen
                             Arcisstrasse 21, D-80992 Muenchen, Germany
                                e-mail: t414lax@mail.lrz-muenchen.de

      Abstract-"Ockham's Razor" is a methodological principle, due to the medi-
      eval philosopher William of Ockham, who mainly opposed an unjustified cre-
      ation of new terms in philosophy. Since this principle and its later versions are
      frequently quoted in discussions about anomalies, it will be discussed here in
      some detail. After a short look at the historic roots, the principal modern
      formulations are summarized. It will be shown that a demand for "simplicity"
      cannot be generally sustained. Rather, striving for simplicity can conflict with
      other essentials of scientific method. Ockham's principle, no matter whether in
      its original or in a modified version, cannot help toward a rational decision
      between competing explanations of the same empirical facts. An incorrect use
      of Ockham's Razor only leads to a perpetuation and corroboration of existing
      prejudice, and this principle should not be used to easily get rid of unwelcome
      data or concepts.
      Keywords: Ockham's Razor; anomalies; misinterpretation of empirical
                facts; principle of simplicity; economy of thinking; perpetuation
                of prejudice

         1. Misinterpretation of Empirical Facts: A Recurrent Pattern
In discussions about the existence or non-existence of classes of controversial
phenomena, about a correct interpretation of empirical data, but also about the
adequacy of terms newly created for purposes of explanation, a principle is
persistently quoted which is generally known as "Ockham's Razor". In the
following, after a short glimpse at the historic roots, the scope and the limitations
of this principle, particularly in its modern guise, will be explored.
   Among the many dysfunctions of science, one specific pattern will be
scrutinized in more detail. A significant proportion of the errors and
misunderstandings in the history of science, until very recent times, can be
understood as misinterpretations of empirical facts, in two possible ways:
    The erroneous acceptance of phenomena (e.g., N-rays, polywater, Piltdown
    man).
    The unjustified rejection of phenomena (e.g., meteorites, ball lightning,
    continental drift, reverse transcriptase).
                                   D. Gernert

           2. Ockham's Principle: Original and Revised Versions
   William of Ockham (about 1280-1349) is deemed one of the most important
philosophers of the 14th century. Ockham's Razor is a "methodological
principle, particularly in the context of ontological issues, according to which
philosophy and science should assume as few theoretical entities as possible for
purposes of explanation, explication, definition, etc." (Gethmann, 2004). It ap-
pears in two versions: "Pluralitas non est ponenda sine necessitate" and "Frustra
fit per plura, quod potest fieri per pauciora"; the frequently cited form "Entia
non sunt multiplicanda praeter necessitatem (sine necessitate)" (entities must not
be multiplied beyond necessity) does not occur in Ockham (Schwemmer, 2004;
Thorburn, 1918).
   The original meaning of this principle can be understood only in the context
of the philosophical and theological debates of that time, especially on the
"problem of universals". Above all, Ockham opposes pseudo-explanatory or
otherwise meaningless and superfluous terms. But a clear view of the authentic
intention is blocked by later modifications and re-interpretations not consonant
with the primary source. Essentially three basic patterns of the later versions can
be identified, which of course partially overlap:
  The principle of parsimony comes closest to the original version by
  demanding cautious discretion before creating new terms and concepts.
  The principle of simplicity (economy of thinking, according to Ernst Mach)
  aims at explanations, reasons, theories, etc., which should be as simple as
  possible.
  Closely related with the latter is the demand for an exclusion of unnecessary
  additional hypotheses.

                  3. Simple or Sufficient Systems of Terms?
   Already in Ockham's lifetime, his fellow Franciscan Walter of
Chatton voiced opposition: "If three things are not enough to verify an
affirmative proposition about things, a fourth must be added, and so on". Later,
other authors in a similar manner advocated a "principle of plenitude" (Maurer,
1984). The mathematician Karl Menger (1960) formulates a "law against
miserliness" and demonstrates that occasionally too many different concepts are
united under one single term (e.g., "variable").
   The demand for a functional, sufficiently differentiated system of terms is
now generally accepted, as well as the warning against neologisms "beyond
a necessary scale". Stupidities in terminology sometimes occur as a "show
vocabulary" within new fields of science struggling for recognition, and as in-
group slang motivated by group dynamics and not scientific logic. Still un-
resolved, however, are concepts like "simpler theory" and "unnecessary
additional hypothesis".

                            4. The Myth of Simplicity

4.1. The Scientist and the "Unknown Unknown"
   Normally, an explanation becomes necessary when a surprising and
unexpected phenomenon is observed, and an explanation has to do away with
this element of surprise (Kim, 1967: 162). Bauer (1992: 74-76) disputes the
common view that scientists are open-minded and strive for new cognition and
insight. By way of contrast, he states that open-mindedness for the new exists
only so long as the new things are not too new. Bauer makes a distinction
between the "known unknown" which can be derived from secured knowledge
(and hence is suitable for research proposals), and the "unknown unknown" that
cannot be expected on the basis of the state of knowledge. Based upon
psychological experience, Krelle (1968: 344-347) characterizes a limitation of
the human capacity for information processing under the term "conservative
distortion". Particular and deviant features are perceived insufficiently, and
"valuations accepted before" are maintained.
   So it becomes understandable why the existence of meteorites and ball
lightnings originally was rejected. The scepticism against reports supplied by
laymen (Westrum, 1978) induced a persistent deterioration of the faculty of
judgement, such that even substantiated evidence and experts' reports-actual
specimens of meteorites and chemical analyses-were dismissed under the same
prejudice. Accustomed to categorizing phenomena within the usual conceptual
and explanatory schemes, scientists easily run the risk of a reductionist trap,
finally being content with such a sloppy categorization, however wrong it may be.
   As victims of this characteristic mechanism, scientists have acted dramatically
against their own interests. We find the recurrent pattern of the "discovery
before the discovery". At least three renowned chemists produced oxygen before
Lavoisier, but erroneously classified it as some well-known gas. In at least 17
cases a new celestial object was reported before it was finally recognized as
a new planet (Uranus), and similar errors happened before the "definitive dis-
covery" of the planet Neptune and of X-rays (Kuhn, 1962). In 1995, two
American astronomers made observations suggestive of a planet outside our
solar system, but did not further pursue their discovery. So other astronomers
could be the first to publish their independent discovery and claim to have
identified the first extrasolar planet (Schneider, 1997).

4.2. Nearness Distortion-a Characteristic Pattern of Misunderstanding
  When trying to interpret a phenomenon, humans are always at risk of "falling
short", of adopting explanations close to their individual range of prior
experience. This can be documented by a series of episodes from history.
138                                  D. Gernert

   Galilei categorically opposed the idea that the tides have something to do with
the moon, and advocated a terrestrial theory instead (Harris, 1967: 228). An
explanation was highly desired
also in the meteorite controversy. The true debate began in 1794 when the
German physicist Chladni published a small book advocating the reality of
meteorites, and in the same year a widely publicized observation took place in
Siena, Italy. But Chladni and all other advocates of the reality of meteorites were
under permanent attack. Even scholars who were up to the standards of their
time tried to contrive explanations to circumvent the idea that material can fall
from the sky, e.g., meteorites "were caused by the ignition of long trains of gas
in the atmosphere" or by "hurricanes and volcanic explosions" (Westrum, 1978).
   The "Nördlinger Ries" is a singular geological formation in Bavaria
(Southern Germany). In our modern understanding it is an impact crater, nearly
circular, with a diameter of about 24 kilometers. For a long time the problem of
its origin had puzzled the experts. For this puzzle, too, a lot of possible terrestrial
interpretations were thought up, e.g., a volcano that had meanwhile disappeared,
an "explosion hypothesis", a "glacier-grinding theory", etc. Only after 1960
was the impact of a cosmic object ("meteorite theory")-the now generally
accepted theory-seriously discussed (Dehm, 1969).
   This recurrent type of misinterpretation can be dubbed nearness distortion. As
a matter of symmetry, there is also a trend towards "far-fetched reasons",
particularly in some groups that are inclined to quickly assume extraterrestrial or
subterranean origins.

4.3. In Search of a Simplicity Criterion
   The symmetrical terms "simplicity" and "complexity" are perspective notions:
their meaning in a single case depends-beyond the well-known context-
dependence of any word meaning-on the context of application and the user's
prior understanding (Gernert, 2000). For the present purpose, a comparative
measure would suffice that marks one of two possible explanations of an empirical
fact as the "simpler" one. But even such a comparative measure is feasible only in
limited contexts within a formal science (e.g., comparing two formulas of a logic
calculus); a measure of complexity will immediately provoke reservations as soon
as relationships with empirical data come into play.
   The degree of simplicity of a curve equation can be defined by the number of
free parameters: a circle in the plane gets the measure 3, and an ellipse gets the
number 5. On the basis of simplicity we would have to prefer the circular
planetary orbits of Copernicus to Kepler's ellipses. Simplicity and precision are
conflicting demands. Furthermore, a measure of simplicity depends upon a
predefined scheme. In a task of curve fitting, given a set of measurement points,
a reasonable curve is to be determined. If a fixed task requires, in a first step, to
express such a curve by a polynomial, whereas in a second step also sin (x), log
(x), etc., will be permitted, then the latter representation will be "simpler", but at
the price of more complex means of expression. On the other hand, the simplest
    answer-maybe a straight or slightly curved line-is not always useful: for the
    quantum Hall effect, just the extrema of the curve are relevant. The theory of
    complexity is not helpful here. In the literature we find various definitions of
    "complexity", each of which is tailored to a specific application; each of them is
    related to its specific class of formally defined constructs, like algorithms or
    strings of symbols.
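The parameter count used above (circle: 3, ellipse: 5) can serve as a crude comparative simplicity measure in curve fitting. The following is a minimal sketch, not part of the original article: the sinusoidal sample data, the noise level, and the use of numpy are illustrative assumptions. It shows how polynomials of increasing degree buy precision at the cost of free parameters, while admitting sin(x) into the vocabulary yields a far "simpler" two-parameter description.

```python
# Sketch (illustrative, not from the text): number of free parameters as
# a comparative "simplicity" measure, versus goodness of fit.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 40)
y = np.sin(x) + rng.normal(scale=0.05, size=x.size)  # noisy sinusoidal data

# Polynomial fits: each added degree adds one free parameter.
for degree in (1, 3, 5):
    coeffs = np.polyfit(x, y, degree)
    sse = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    n_params = degree + 1                # the "simplicity" measure
    print(f"degree {degree}: {n_params} parameters, SSE = {sse:.3f}")

# Permitting sin(x) as a basis function: only 2 parameters, but a richer
# (hence arguably "more complex") vocabulary of means of expression.
A = np.column_stack([np.sin(x), np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
sse_sin = float(np.sum((y - (a * np.sin(x) + b)) ** 2))
print(f"a*sin(x)+b: 2 parameters, SSE = {sse_sin:.3f}")
```

Because a higher-degree polynomial can never fit worse, a parameter count alone always trades against precision; some external criterion must arbitrate, which is exactly the difficulty described in the text.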
       The problem of deciding between competing explanations for empirical facts
    cannot be solved by formal tools. Can a neutral procedure be imagined to assess
    the issue of ball lightning, still controversial some decades ago: Is ball lightning
    real, or are the reports by laymen altogether based upon deception?
       In an extensive monograph, Mario Bunge (1963) reveals the diverse
    shortcomings and limitations of a principle of simplicity. In detail he demonstrates
    that a demand for simplicity (in any of its facets) will conflict with other
    essentials of science (as exemplified above by a conflict between simplicity and
    precision). Finally he speaks of a "cult" or "myth of simplicity". With respect to
    Ockham's Razor, Bunge recommends caution: "In science, as in the barber shop,
    better alive and bearded than cleanly shaven but dead" (p. 115).

    4.4. What Ockham's Principle Cannot Accomplish
       The principle of simplicity, no matter in which version, does not make
    a contribution to the selection of theories. Beyond trivial cases, the term
    "simplicity" remains a subjective term. What is compatible with somebody's
    own pre-existing world-view will be considered simple, clear, logical, and
    evident, whereas what contradicts that world-view will quickly be rejected as
    an unnecessarily complex explanation and a senseless additional hypothesis. In
    this way, the principle of simplicity becomes a mirror of prejudice, and, still
    worse, a distorting mirror, since this origin is camouflaged.
       As an example, an advocate of the geocentric system could argue: some ease
    in the calculation of planetary orbits is irrelevant, because we are not
    obliged to adapt our world system to the mathematicians' wishes for comfort,
    and the hypothesis of a moving Earth is an unnecessary-and adventurous-
    additional hypothesis, not at all supported by any sensual perception.
       Walach and Schmidt (2005) propose to complement Ockham's Razor by
    "Plato's lifeboat". This principle, with its origin in the Platonic Academy,
    claims that a theory must be comprehensive enough "to save the phenomena";
    this was triggered by observed anomalies in planetary motion.
       Our world is more multi-faceted than some people may imagine. The critical
    point is not only the frequently cited "more things in heaven and earth", but
    simply the adequate explanation of material at hand. Further misinterpretations
    are certain to come. But the principle of that honourable mediaeval philosopher
    should not be misused as a secret weapon destined to smuggle prejudice into the
    scientific debate.

                                   Acknowledgments
  The author is grateful to two anonymous referees. This text is a revised and
extended translation of a German text in press for the journal Erwagung-
Wissen-Ethik, with kind permission of Lucius & Lucius Publ. Co., Stuttgart.
                                      References
Bauer, H. H. (1992). Scientific Literacy and the Myth of the Scientific Method. University of Illinois
   Press.
Bunge, M. (1963). The Myth of Simplicity. Prentice-Hall.
Dehm, R. (1969). Geschichte der Riesforschung. Geologica Bavarica, 61, 25-35.
Gernert, D. (2000). Towards a closed description of observation processes. BioSystems, 54, 165-180.
Gethmann, C. F. (2004). Ockham's razor. In Mittelstraß (Ed.), Enzyklopädie Philosophie und
   Wissenschaftstheorie (Vol. 2) (pp. 1063-1064). Metzler.
Harris, H. S. (1967). Italian philosophy. In Edwards (Ed.), The Encyclopedia of Philosophy (Vol. 4)
   (pp. 225-234). Macmillan.
Kim, J. (1967). Explanation in science. In Edwards (Ed.), The Encyclopedia of Philosophy (Vol. 3)
   (pp. 159-163). Macmillan.
Krelle, W. (1968). Präferenz- und Entscheidungstheorie. Tübingen: Mohr.
Kuhn, T. S. (1962). Historical structure of scientific discovery. Science, 136, 760-764.
Maurer, A. (1984). Ockham's razor and Chatton's anti-razor. Mediaeval Studies, 46, 463-475.
Menger, K. (1960). A counterpart of Ockham's razor in pure and applied mathematics. Synthese, 12,
Schneider, R. U. (1997). Planetenjäger. Die aufregende Entdeckung fremder Welten. Basel:
Schwemmer, O. (2004). Ockham. In Mittelstraß (Ed.), Enzyklopädie Philosophie und
   Wissenschaftstheorie (Vol. 2) (pp. 1057-1063). Metzler.
Thorburn, W. M. (1918). The myth of Ockham's razor. Mind, 27, 345-353.
Walach, H., & Schmidt, S. (2005). Repairing Plato's life boat with Ockham's razor. Journal of
   Consciousness Studies, 12(2), 52-70.
Westrum, R. (1978). Science and social intelligence about anomalies: the case of meteorites. Social
   Studies of Science, 8, 461-493.
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 141-155, 2007


                       Science: Past, Present, and Future1

                                    HENRY H. BAUER
                           Professor Emeritus of Chemistry & Science Studies
                                  Dean Emeritus of Arts & Sciences
                           Virginia Polytechnic Institute & State University
                                       e-mail: hhbauer@vt.edu

      Abstract-When someone says "science", we think "physics". The reasons
      for that are rooted in the history of science and in the historical development of
      philosophy of science. Science-as-physics has countless implications for the
      public image of science, the conventional wisdom about scientific method, the
      notion of "hard" versus "soft" sciences, and the belief that science means
      repeatability, predictability, falsifiability. But the age of physics is at an end,
      and the age of biology has begun. As biology becomes the most prominent
      among the sciences, the conception of what it means to be "scientific" will also
      change. Parapsychology will morph into a mainstream science.
      Keywords: science in the future-future of science-biology as epitome
                of science

Scientific knowledge and methods are very different now than a century or two
or three ago; nevertheless, the science of our day remains shaped by the sciences
of the past. Our understanding of the nature of science and its role in society has
not kept up with the rapid changes within science itself. The conventional
wisdom about science is still based on the first centuries of so-called "modern"
science-approximately, the 17th century to the middle of the 20th century.
   The science of the future will differ from that of the past and present in at least
two major respects: Science will be more a corporate enterprise than the sum
of independent individual efforts; and the epitome of science will be biology
instead of physics. These changes will affect in important ways how science is
carried on. But the effect will be even more significant on how society thinks
about science and makes use of science.
   These are not predictions, or even extrapolations, but certainties, for the
changes have already begun, even if not much note has yet been taken of them.
I shall discuss chiefly the second, the move from physics-inspired science to
biology-inspired science. The change from an individualist science to a corporate
one has been treated authoritatively and comprehensively by Ziman (1994) and
I have suggested some rather gloomy corollaries (Bauer, 2004).

                     Intellectual Development of Science
   At the outset I should note that some implications of the term "science" are
peculiar to the English language and its cultural environment (Bauer, 2001:
chap. 2). "Naturwissenschaft" in German and "sciences" in French do not carry
the same baggage of connotations as "science" does in English. However, it is
universally agreed that at least physics, chemistry, geology, and biology are
sciences; so by "science" I shall explicitly mean those, the generalized body of
subjects like these four main ones, in other words what are often called the
natural sciences2.
   For well over a century, (this) science has been widely regarded as hu-
mankind's best-or even only-source of reliable, even certain, knowledge
about the material world (Knight, 1986). Often this has been further extrapolated
to reliable or even certain knowledge about everything, on the presumption that
the material world encompasses all existence (Bauer, 2001: chap. 1, 3, 6). In
popular usage-again, in English!-the adjective "scientific" is a virtual
synonym for "proven true" (Bauer, 2001: chap. 2, 6).
   This high status of science-initially in the Western world, and still chiefly in
Western culture--came about through a process that historians have traced back
many centuries, millennia even. The earliest generally recognized development
was in ancient Mesopotamia, to which we owe our division of hours into 60
minutes and circles into 360 degrees. Later contributions came from Greek
philosophy and geometry and astronomy, from Indian mathematics, and from
Islamic scholarship over a wide range of fields3. Modern science was shaped
by the Renaissance and the Reformation, by interactions of intellectuals and
artisans, and by social and political circumstances that made room for a
necessary freedom of thought and action (Marks, 1983).

                          The Conception of Science
   Generalizing from individual sciences to "science" as a whole has been
anything but egalitarian. The scholarly image of science has not been shaped at
all equally by insights into what is done by physicists, by chemists, by
geologists, by biologists, and by other scientists. Physics was the first of the
sciences to become modern, and-no doubt for that reason-history and
philosophy and sociology of science (increasingly grouped together as "science
studies") have, until very recently, made their studies of, and conclusions about,
"science" synonymous with their studies of and conclusions about physics.
This has yielded a biased, misleading view of what science as a whole really
is and has left us with mistaken ideas about how science should be done.
Most unfortunately of all, science-as-physics is responsible for a vastly and
mistakenly inflated opinion about how certain are the conclusions that
science can reach.
   The popular, public image of science, continually reinforced by the media,
reflects this scholarly distortion. Physics is seen as the most scientific science.
Non-quantitative sciences-sciences that are not like physics-are called "soft",
weak, imperfect. Almost all the presidential science advisers in the USA have
been physicists. Physics is the epitome and the very model of science (Bauer,
1992: 37-38).
   What are the most impressive things about physics? Simple, quantitative laws
that afford accurate predictions. But highly accurate predictions can only be
made about highly repeatable phenomena. And highly repeatable events are
found only with simple systems-only with simple non-living systems. So the
precision and reliability of laws and predictions are pre-eminent in physics not
because physicists have developed a so-called scientific method most fully, nor
because physics is the basis of all other sciences, but just because physics deals
with simple, inanimate systems, for which it is relatively straightforward to
construct mathematical models and to test hypotheses. Doubtless it is also
because of its relative simplicity that physics was the first science to become
modern: The Mesopotamians and the Greeks and the Chinese and the Maya,
among others, knew much more about physics and astronomy than they did
about geology or biology or even chemistry.
   The methods of physics, however, are not applicable in most of the rest of
science. Every field of scholarship and every field of science develops ap-
proaches and methods best suited to studying the particular phenomena that are
its concern (Bauer, 1992: chap. 2). Furthermore, any substantial discipline
investigates phenomena that are not reducible to those of other fields. As
systems become more complex, emergent properties are encountered, phenom-
ena not predictable by the laws that govern the separate, individual parts of the
system. The study of such unprecedented properties inevitably requires new
approaches. Michael Polanyi (1967) has been famously cited (and also famously
misappropriated) for pointing out that even the actions of simple machines
cannot be predicted from the Newtonian laws of mechanics, since questions of
function and design arise that have no basis in physics: "Machines are not
formed by physical-chemical equilibration. . . . The functional terms needed for
characterizing a machine cannot be defined in terms of physics and chemistry".
Many others, too, have pointed out that such reductionism is untenable4.
   Reductionism is an illegitimate child of science-as-physics, but it is far
from its only bastard child. Another is the myth of the scientific method (Bauer,
1992) that, supposedly most highly refined in physics, is applicable to all
investigations. Thus some social scientists have sought to make their own fields
"scientific" by attempting to model their methods on those of "science", by
which they mean physical science; introductory college textbooks of psychology
and sociology, at least in the USA, typically insist that science must be done by
"the scientific method".
   Yet another unfortunate consequence of science-as-physics is the notion that
scientific theories can be proved and that they somehow represent scientific
knowledge. The simple phenomena of physics can so often seem to be so fully
described by its theories as to tempt us to call those theories "true", even though
philosophy of science has long been crystal clear that no theory can ever be
proved finally valid for all time. Theories are always underdetermined by
whatever evidence is available-no theory is absolutely required by any given
set of facts. And one can never exclude the possibility that some not-yet-
conceived theory could be better5 than any current one.
   No matter how repeatable and predictable a phenomenon may be, its
explanation can only be a matter of opinion-highly informed opinion, perhaps,
and constrained by facts and context, but nevertheless opinion. No more support
for this statement should be needed than the historical fact that scientific theories
have a limited life-time before they are abandoned, modified, or subsumed into
other theories. Scientific theories are useful tools, short-hand for organizing
knowledge, and heuristic guides to further investigation; but scientific theories
can never be proved, and they should never be accorded the status of "truth".
But that will become more widely appreciated only when science is no longer
equated with physics.

                            Social Context of Science
   After its birth in the 16th or 17th century, modern science was nurtured in
particular social circumstances in Europe. Following Galileo's unhappy
experiences, advances came in the Protestant North-West rather than in the
Catholic culture of southern Europe. In England, the Royal Society and the
Lunar Society exemplified the freedom of thought and association in which
artisans and craftsmen and thinkers could interact fruitfully to spur intellectual
and material progress. Such freedom allowed people from every social class to
become entrepreneurs and capitalists and midwives to the Industrial Revolution.
   That period of history bequeathed us the view that science is done by self-
motivated individuals freely associating with one another, convinced that it is
right to expand human understanding and reaping material benefits for them-
selves as a by-product (only!). Such voluntary and disinterested interactions
were fore-runners to the system of peer review that has been primarily respon-
sible for the continuing and increasing soundness of scientific knowledge.
The scientific method is not some abstract protocol for posing and answering
questions; it is the concrete interactions among interested people who keep each
other honest through mutual criticism based on substantive criteria. Those inter-
actions create a "knowledge filter" that winnows the valid from the unreliable
among the mass of competing claims (Bauer, 1992: chap. 3). This knowledge filter
has worked so well because of the prevailing scientific ethos: Scientific
knowledge is the same in all cultures, it is universal; it is publicly available,
communally shared; scientists practice skepticism and disinterestedness6.
   Those are not descriptions of actual practice, of course; they are ideals that
scientists have sought to live by. In the early days of modern science, when
science was done by dedicated amateurs, there were fewer hindrances to ideal
behavior than after science became a profession: It is hardly possible to be
entirely disinterested as to the validity and significance of the results of one's
investigations if one's career and livelihood are affected by them. Still, for many
decades and into the latter part of the 20th century, peer review and other
practices of science were carried on with a very high degree of integrity and
concern for substance by contrast to personal preferment7. Perhaps the most
notable barrier to progress during that time was just intellectual conservatism
(Barber, 1961), which accompanies naturally a remarkably reliable body of
knowledge consensually accepted by almost all competent practitioners.
   Because the fruits of science were so prized by the wider society, and because
the practitioners of science had conducted themselves so admirably, science was
well supported by society while also being allowed a huge measure of self-
governance. Society provided funds for research while permitting science itself
largely to choose how to spend those funds.
   The second part of the 20th century saw a progressive change in these
circumstances. A decisive event was the success of the Manhattan Project
that created the atomic bomb: Science and scientists had brought a World
War to an earlier conclusion than would have been possible without their
efforts8, and it was widely presumed that they could bring equivalent peace-
time marvels. Society began to support even basic scientific research with
unprecedented largesse. Spectacular scientific-technologic achievements be-
came part of the competition between nations: Who can first place an
artificial satellite around the Earth? Who can first set a human foot on the
Moon? Even the social sciences and the humanities were given unprece-
dented, tangible public support in the belief that they could deliver social
fruits as beneficial as the material fruits that science and technology were delivering.
   All this patronage, given in good faith but with enormously high expectations,
carried a price that is beginning to be recognized only in retrospect. The
expectations were not realistic in several ways: in believing that the speed of
scientific progress could be increased just by having more people do more
science-whereas more quantity inevitably meant lower average quality. As
a career in science became increasingly attractive for its material benefits, so the
reasons for becoming a scientist became less that of having a vocation for
knowledge-seeking and more that of just doing well for oneself. Universities
began to measure and reward their faculty not according to their intellectual
quality and dedication to disinterested scholarship but according to how many
research dollars they could inveigle out of the patrons of science. Research
grants were increasingly awarded not for the most original ideas but for the most
faddish, those so obvious that everyone could agree-no matter how unoriginal.
Graduate students were treated as necessary pairs of hands rather than as
budding intellects to be
disinterestedly and conscientiously helped to develop independence.
   In a word, science has become increasingly corrupted by conflicts of interest,
a possibly inevitable consequence of formal organization and external influence.
Decisions have come to be made increasingly for political reasons as well as-or
even instead of-intellectual ones.
   It is clearer in retrospect that the useful social spin-offs of science had resulted
as by-products of a largely self-governing community of people driven largely
by curiosity about the workings of the world. That is very little appreciated even
nowadays. The most wonderful advances have come under circumstances where
the right degree of intellectual freedom was allowed, as the best long-term
guarantee that golden eggs would be laid. The contemporary belief that
economic markets are the best social decision-makers has brought a focus on the
short rather than the long term, with such inestimable losses for science and
society as the dissolution of the Bell Laboratories, which had brought
humankind transistors and lasers, among other things. That is all spilled milk,
to be regretted but not recovered. But it is important to recognize the extent to
which science has changed from an activity of self-governing, curiosity-driven,
disinterested and skeptical amateurs to a highly organized, bureaucratically
directed enterprise held accountable for its short-term performance by those who
pay for it. That change and its implications have been underscored by Ziman
(1994) in Prometheus Bound; anyone wishing to understand both classic and
contemporary science could do no better than to read that book, as well as
Ziman's most recent overview of science (Ziman, 2000).
   The important thing for the present purpose is to note that science is not what
it was, and that assessing scientific activities calls for the sort of approach
practiced by students of politics, as well as the attention of philosophers and
historians of science.

                  Dissatisfaction with Contemporary Science
   Criticisms similar to those just made have come from a variety of directions
over the last several decades. New-Age idealists have pointed out that science
has not fulfilled and cannot fulfill its 19th-century promise of answering all the
questions that matter to human beings9. It should not be idolized as the be-all of
human understanding but rather seen as a Glorious Entertainment for human
beings (Barzun, 1964). Some critics have gone quite overboard, pushing such
ideologies as post-modernism, relativism, constructivism, and the like, an in-
tellectual Luddism that has itself been exposed and deconstructed, succinctly
and tellingly by Alan Sokal's (1996a,b) wonderful spoof as well as in discursive
scholarly argumentation by, for example, Gross and Levitt (1994) in their book
Higher Superstition or by Susan Haack (1998, 2003: especially chap. 7, 8, 11).
But dissenters from New-Age notions also include naive defenders of science-
as-it-is, holding forth on a variety of topics with the dogmatic, scientistic
certainty that belongs to the 19th-century Age of Science (Knight, 1986) and
early-20th-century positivism; some of these naïfs are prominent members of
the scientific mainstream10, others belong to such more populist groups as the
Committee for the Scientific Investigation of Claims of the Paranormal.
   New-Age critiques have been concerned chiefly with the social role and
influence of science. Others have been concerned about perceived intellectual
deficiencies of contemporary science. The Society for Scientific Exploration
(SSE) was formed to attend to certain phenomena ignored by contemporary
science-UFOs, parapsychology, cryptozoology11, and the like12. As it turns
out, the SSE has also served as a forum for consideration of unorthodox views
well within mainstream science, for instance alternatives to plate tectonics in
geological hypothesizing, unorthodox views about the origin of hydrocarbons on
Earth, cold fusion, and others as well. Observers of science have begun to
recognize, implicitly at least, that science-as-physics has reached a dead end:
Historians and philosophers of science, together with sociologists and scientists
and others, have established such distinct specialties as history of geology13,
philosophy of chemistry14, and history, philosophy, and social studies of
biology15.
   The contemporary scene, in other words, is one of unrest and change. The
inadequacy of traditional science-as-physics is becoming evident, for a variety
of reasons, to more people and more various people, including members of the
scientific community. What then will the science of the future be like?

            The Future of Science and of Scientific Exploration
   Physics has had its day, even though few may have suspected it before the
Superconducting Super-Collider was abandoned. Biology has become the most
publicly visible science and the one from which the most is expected. Gene
therapy, cloning, genetically modified foods, stem cell research, are familiar
terms; British newspapers use the acronym "GM foods" without further
explanation. Increasingly as time goes by, biology will attain the pre-eminence
among sciences that presently still belongs to physics16.
   Though this is becoming well recognized, its implications are not; yet they
can hardly be overstated. Philosophers of science will be hard pressed not to
adopt a realist view17. Philosophy of science, to be followed by other punditry
and eventually by public opinion, will take a quite different view of the roles of
repeatability, predictability, falsifiability, and so on, within "the scientific
method". The illusion will dissipate, that "science" can deliver definite answers
on demand-or, for that matter, definite answers at all on such matters of central
human interest as health and longevity. "In this sprawling swamp of a science
called biology, the short list of physical variables, such as force, mass, and
energy, gives way to an endless catalogue of Latin taxonomy; prediction gives
way to retrospective analysis; universal laws give way to idiosyncratic natural
histories"; such generalizations or "laws" as natural selection "do not
encapsulate the transformations of life in quite the same way that Newton's laws
capture the motions of objects. They render evolution intelligible, but not
predictable or reducible" (Hirsh, 2003). Explanations become probabilistic
instead of precise.
   Though biology and biologists will become pre-eminent, they will not enjoy
the freedom of thought and research that accompanied the birth of modern
science, and that physicists and other scientists enjoyed well into the latter part
of the 20th century. Society will not return to that brief period when huge sums
were provided for scientists to do with as they wished. Not only will pressure
continue for quick results; biologists will experience even in democratic
societies strong ideological, political, and social constraints on their activities. That
can already be glimpsed in furors over GM foods, stem cell research, and
cloning, not to speak of the continuing attempts to sabotage exposition of
evolutionary concepts in classrooms in the USA. For biology to progress op-
timally, its administrators will have to be the most adept, astute politicians that
science administration has ever brought forth.
   Returning to intellectual considerations: I am not suggesting that the science
of the future will take contemporary biology as its model. Just as present-day
science as a whole takes physics as its model, so present-day biology too
remains rather imitative of physics. Elucidating the structure of DNA was
greeted as though the very secret of life had been uncovered. Molecular biology,
the part of biology that is most akin to physics and chemistry, is widely viewed
as the most advanced, the most scientific biology; according to James
D. Watson, "There is only one science, physics" (Brown, 1999: 47). The study
of animal behavior remains almost as excluded from mainstream attention as
the search for the Loch Ness Monster. The biology of the future, by contrast,
will encompass the behavior of organisms as well as their biochemical and
physiological characteristics; Marjorie Grene (Depew & Grene, 2004) suggests
ethology or ecological psychology as perhaps the best guide for philosophy of
science.
   Even at the molecular level, though, future biology will be less physics-like
than it now is. Molecular biologists and medicine men are coming to recognize
that the Double Helix was not the Philosopher's Stone or the Elixir of Life, just
the beginning of a very long and exceedingly intricate exploration. The newly
established field of bioinformatics reflects the realization that novel methods
are needed to extract humanly usable information from amounts of data so vast
that current procedures cannot uncover regularities among the simultaneous
interactions of the many variables. Though the goal of bioinformatics can be
conceived, its realization will take centuries rather than decades. Let me
illustrate it by contrasting contemporary medicine with that of the future.
Nowadays, my level of blood sugar and of cholesterol, and my pulse rate and my
blood pressure, my PSA,18 and much else, are compared with average values for
the population, and my doctors seek to bring all my levels into that average
range by administering one or more drugs for the blood sugar, one or more for
                        Science: Past, Present, and Future                      149

the cholesterol, and so on. Far in the future, by contrast, the physician will
estimate each individual's healthy level of blood sugar, cholesterol, etc., given
that person's specific genome and specific body development and using an
understanding of the systems that interconnect blood sugar and cholesterol and
PSA and every other physiological property. Treatment will be holistic, not
some collection of individual "magic bullets".
   Bioinformatics is not just another tool. It portends a change in scientific style.
Instead of cleanly crucial experiments that can dispose of inadequate theories
in short order, enormous amounts of information will be examined by com-
puterized and statistical means to yield answers that are suggestive and
probabilistic rather than definite and precise. Inevitably the accumulation of
reliable knowledge will proceed slowly, no matter how highly automated the
information-gathering and information-analysis techniques may become.
   And still this is not the biggest transformation that future biology will
undergo; the most portentous will be direct, no longer avoidable, engagement
with the mind-body problem. Developmental biology is already coming up
against it: The human brain develops not only under instructions from the
genome and the influence of the environment but also according to the
intellectual tasks the brain is set and that it performs. Apparently, the continually
changing software of the mind is able to modify the instructions continuously
delivered by the hard-wired genome as it further hard-wires the brain. Already,
too, more attention is being paid to the placebo phenomenon, with its indis-
putable evidence that, at least sometimes, quite powerful physiological agents
can be overpowered by will or hope or suggestion-at any rate, by the
consciousness that activates the placebo response. So the study of consciousness
has to become part of biology, part of mainstream science.
   There are three chief ways of envisaging consciousness (or mind, or
perhaps soul):

   1. It is different and separate from matter-energy. This is the philosophical
      stance known as dualism.
   2. It is a fundamental property of matter-energy. This is the most natural
      view for a physics-like science: Consciousness of an observer collapses
      wave functions; wave functions may incorporate consciousness in some
      manner even at the level of atoms.
   3. It is an emergent property of a sufficiently complicated system with
      appropriate feedback capabilities. This seems the most natural view for
      a biology-like science to take, and may well become the mainstream
      scientific view of the nearer future.

   Dualism has enjoyed a long vogue. Perhaps it is time to discard it once and for
all, if only for the reason offered by Jacques Barzun in praising Robert Burton's
Anatomy of Melancholy: "Burton at least did not separate mind and body . . . as if
any physician had ever seen a soma enter his office without a psyche, or the
psychiatrist a psyche without a soma" (Barzun, 2000: 224).

                            Concluding Comments
   The science of the future will take as its role model biology, not physics. The
wider society will come to acknowledge that science cannot deliver definite
answers in short order; like all other human activities, it can only do its
imperfect and fallible best at any given time. The role of consciousness will be
acknowledged and investigated. The knotty issue of subjectivity will be directly
addressed (Jahn & Dunne, 1997), and thereby the present gap between natural
science and behavioral science will be narrowed,19 though some important
differences will remain, for example that the physical sciences (chemistry and
physics) are governed by a single, consensual, over-arching paradigm whereas
the social sciences are multi-paradigmatic.
   The concept of "scientific method" will change out of sight. Instead of the
hypothetico-deductive method-hypothesize, test, accept or reject the hypoth-
esis-science will rely on every scrap of useful evidence, including case studies
and anecdotes. That is already the case in medical science, of course, where
ethical considerations bar experiments on humans designed as they would be
for inanimate objects.20 In discussion at the Paris meeting where I presented
these ideas, Jacques Benveniste pointed out that the complexities of laboratory
work in much of biology entail so many levels of inference that it makes
no sense to talk glibly of testing a hypothesis. Even the experimental material
itself may not be easily defined or controlled. I was reminded of one of my
friends, who had made a genuinely major contribution relating to mitochondria
in certain strains of yeast. He was aghast when other workers questioned his
results and his students could not repeat their observations using "the same
material" as before. After months of nerve-wracking mental and laboratory
efforts, it was realized that the yeast had mutated as it was moved from one
university to another.
   So the criterion of "reproducibility" will be drastically re-defined, or applied
with more subtle sophistication. In Paris, Peter Wadhams suggested that there
can be reproducibility even in observational biology, for example 500 reports of
sea serpents would constitute reproducibility. I should have responded that this
actually illustrates my point. "Concordant" descriptions of a type of animal do
not refer to precisely the same sort of object in the way that concordant
descriptions of the spectrum of a molecule do. Instead, they describe what
philosophers call a "family resemblance": The various observed objects are not
identical, they are "the same" only in essential respects. That qualification,
"essential", allows room for argument. Thus there is a long-standing con-
troversy in biology (or perhaps in the philosophy of biology) over the definition
of a species, in large part because the individual members of a species are not
identical but bear only a family resemblance to one another.
   Again in Paris, Roderick Boes suggested that falsifiability could still be a
useful criterion in biology. True enough, in concrete everyday practice, in that
there are undoubtedly suggestions made about biological phenomena that
can be decisively disproved. But the Popperian suggestion that theories be
regarded as scientific only if they are falsifiable would find no basis in the
experience of biologists (and, in any case, few if any philosophers of science still
regard it as a good criterion; though some popularizers of science, and even
some scientists-as-physicists, have not yet discarded the idea). Even in everyday
practice, the difficulties of testing and disproving significant claims in biology
should not be underestimated, for the reason given earlier: Biological materials
and biological individuals are not "the same" in the manner that atoms of
deuterium are, which makes generalizing from specific instances appreciably
more hazardous.
   When all is said and done, "scientific" may come to be understood simply as
"rigorous and self-critical, whether quantitative or not", as Max Payne put it in Paris.
   A corollary of the present train of thought, a situation that may seem
unimaginable at the moment, is that some of the raison d'être for the SSE will
fade away. When mainstream science addresses consciousness and subjectivity,
it will find itself grappling with phenomena that are presently left to such
outsiders as parapsychologists. The placebo phenomenon, after all, offers an
entirely tangible protocol for investigating mind-body interactions, and its
magnitude is quite comparable to claims of macro-psi (poltergeist phenomena,
physical mediumship), whose spontaneous, irreproducible nature has tended to
make macro-psi persona non grata among many serious investigators.
   Physics-like science sought to explain the cosmos in objective, impersonal
terms, formulas, and equations. Its goal was and remains an abstract, God's-eye
view of universe and man. Its unwarranted hubris has alienated a wide swath of
the public. But what we have called "modern science", and have regarded as
almost a final culmination of millennia of development, is really just adolescent
science: brash, contemptuous of older traditions, all too sure of itself, with glib,
dogmatic opinions and definite answers. The biology-like science of the future,
by contrast, with the mind-body question as a central focus, will have to take
a humbler, more realistic, human-scale view of the cosmos-the only view, after
all, that humans should aspire to. At the same time, values and meaning will be
seen to inhere in the world21, a marked and welcome contrast to the science-as-
physics view of, for example, Steven Weinberg (1993), that "The more the
universe seems comprehensible, the more it also seems pointless". It was said
long ago that the proper study of Man is Man; if so, then the proper tool of study
must be a biology-like science.
   Not, of course, that science-as-biology offers only improvements and no
dangers. Social Darwinism was, after all, an extrapolation from biology, as
dangerously wrong as the extrapolation of reductionist materialism from physics.22 What is
quite plain and certain is that biology will supersede physics as the exemplar of
what science is, and that science thereby has an opportunity to become more

                                      Notes
 1 Based on the invited paper prepared for the 6th European Meeting of the
     Society for Scientific Exploration, Paris, 29-31 August, 2003.
 2 Thereby including such obvious additional subjects as astronomy or
     biochemistry, but explicitly excluding the social and behavioral sciences.
 3 This has long been well known to historians of science, yet it is still not
     common knowledge, for example one could read quite recently that "As Dick
     Teresi discovered [sic], the roots of much Western science reach back to
     India, Egypt, Mesopotamia and China" whereas the standard history of
     science "locates its birth around 600 B.C. in ancient Greece" (Hall, 2002).
 4 No matter how obvious it may seem that reductionism is unsustainable,
     prominent people continue to promulgate it, albeit more often implicitly
     rather than explicitly. Thus some physicists speak of seeking "Theories of
     Everything", thereby implying that such theories could entail all the laws of
     chemistry and all other sciences, tantamount to The Mind of God (Davies,
     1992) and explicating perhaps The Physics of Immortality (Tipler, 1994).
        The distinction is not always clearly made or adhered to, between
     materialism and reductionism. Reductionism treats human free will as an
     illusion; whereas materialism can contemplate the possibility of genuine free
     will as an emergent property made possible through the interactive orga-
     nization of the systems that make us human beings. (Of course this is a gross
     simplification, for the sake of emphasizing the distinction; philosophers
     recognize various shades and degrees of both materialism and reductionism.)
     But a physics-based materialism tends to be reductionist: "when materialists
     are up, physics is the 'model' and vitalists and idealists are down; when these
     last two are up, biology is strong and materialists muted" (Barzun, 2000:
     365). Barzun's description, that these two attitudes alternate "in seesaw
     fashion", is congruent with Stephen Brush's account of the historical
     alternation between Romanticism and Rationalism (Brush, 1978).
 5 "Better" not necessarily in the sense of fitting better the given corpus of data:
     It may fit those data about equally well while encompassing a greater range of
     phenomena. Thus Einstein's relativity theories are better than Newton's laws
     of gravity and of mechanics. Or considerations of aesthetics or range may lead
     to calling a theoretical treatment "better" even when its equations fit the data
     less well, as in the case of the theoretical chemist Dave (Bauer, 1992: 20).
 6 First described by sociologist Robert K. Merton, these ideals are often
     referred to as the Mertonian norms of science.
 7 At least within many Western cultures. In some societies, even fairly
     industrialized ones, social norms of deference to authority, to tradition, or to
    one's personal mentors have sometimes trumped the incisive, public
    critiquing that peer review calls for. In such totalitarian societies as Nazi
    Germany or the Soviet Union, ideology made scientific peer review
    essentially irrelevant.
 8 Not only through the atomic bomb, but also through development of radar,
    sonar, and many other technical advances, including the building of fore-
    runners of today's computers which made possible the breaking of previously
    invulnerable codes.
 9 A fine exposition is by Appleyard (1992). It has been much criticized by
    defenders of the status quo in science.
10 Consider for example that indiscriminate critic of anomalies, physicist Robert
    Park (Kauffman, 2001), or that hasty critic of cold fusion, physicist Frank
    Close (Bauer, 1991).
11 The International Society of Cryptozoology was founded at about the same
    time as the SSE.
12 Also initiated at the beginning of the 1980s was Correlation, the Astrological
    Association Journal of Research in Astrology. Interest in unorthodox science
    may be stimulated by the advance of established science (Bauer, 1986-87).
13 The History of Earth Sciences Society was founded in 1982.
14 See HYLE (International Journal for Philosophy of Chemistry), which grew
    out of the former bulletin of the group "Philosophie und Chemie", founded in
    Germany in 1993.
15 The International Society for History, Philosophy, and Social Studies of
    Biology (ISHPSSB) was founded in 1989.
16 Biology "enters the twenty-first century as the most dynamic and far-reaching
    of all the scientific disciplines" (Miller, 1999: 168); "Einstein's century was
    the century of physics . . .. Our century is likely to become the age of biology"
    (Andreasen, 2002).
17 Marjorie Grene (Depew & Grene, 2004) suggests that "if we take the
    biological sciences as our model for philosophy of science, we have a better
    chance of accepting a realist point of view . . . the hands-on realism of our
    everyday experience". One can easily question the reality of the "objects"
    with which physics deals-the likes of quarks or wave-functions-but "it is
    difficult for a biologist to deny the reality of living things".
18 Amount of Prostate Specific Antigen. High levels indicate enlargement of the
    gland (benign prostatic hyperplasia, BPH) that is merely a nuisance; rapid
    increases may be, but need not be, indicative of prostate cancer.
19 In other words, a biology-like "science" will be a better model for social
    scientists than is the physics-like "science" of today. A similar notion
    underlies the recent suggestion that historians should take as their model the
    historical sciences of biology and geology (Brinkley, 2002). One might
    equally argue that science should take as its model the best work done by
21 "The world itself exhibits values, or meanings: relations between perceivers
   and features of their environments that offer them goals to seek or avoid. An
   animal's world is, from the beginning, a world full of meanings, and evolution
   has endowed that animal with the potentialities to respond to such a world"
   (Miller, 1999: 168).
22 In discussion at the Paris meeting, Peter Moddel pointed out that a biology-
   based science could nevertheless be reductionist, whose implications would
   be even more dangerous than those of a reductionist physics-based science.
   The contemporary infatuation with genomics and molecular biology is indeed
   reductionist, and certain trials and experiments carried out on this basis do
   seem to me to be exceedingly hazardous. But recent recognition that genomes
   are dynamic systems and not linear arrays of fixed genes, and that humans
   have fewer "genes" than does corn and only 25% more than flatworms (Ast,
   2005), should put some crimp into biological reductionism.

                               Acknowledgments
   I am deeply grateful to the organizers of the Paris meeting for the invitation
that stimulated me to organize these thoughts, and to the many people at the
meeting-only some of whose names appear above-who made pertinent and
useful comments.
   The mind-body question is central in all this, and my thoughts on that and on
much else have been greatly influenced by what I have learned, through
interactions over more than two decades, from my valued friend and colleague
Robert Jahn. I had planned to include this essay in a Festschrift mooted by
another journal to acknowledge the achievements of the PEAR program as it
comes to its close; but as so often, too many potential contributors promised but
did not deliver, so those of us who did deliver are now publishing separately.

                                  References
Andreasen, N. (2002). Recorded remarks. Bulletin of the American Academy of Arts and Science,
   LVI #1, 18-21.
Appleyard, B. (1992). Understanding the Present: Science and the Soul of Modern Man. London:
   Pan Books (also New York: Doubleday, 1993).
Ast, G. (2005). The alternative genome. Scientific American, April, 58-65.
Barber, B. (1961). Resistance by scientists to scientific discovery. Science, 134, 596-602.
Barzun, J. (1964). Science: The Glorious Entertainment. Harper & Row.
Barzun, J. (2000). From Dawn to Decadence: 500 Years of Western Cultural Life-1500 to the
   Present. HarperCollins.
Bauer, H. H. (1986-87). The literature of fringe science. Skeptical Inquirer, 11 (#2, Winter), 205-210.
Bauer, H. H. (1991). Review of Frank Close, Too Hot to Handle: The Race for Cold Fusion.
   Journal of Scientific Exploration, 5, 267-270.
Bauer, H. H. (1992). Scientific Literacy and the Myth of the Scientific Method. Urbana: University of
   Illinois Press.
Bauer, H. H. (2001). Fatal Attractions: The Troubles with Science. New York: Paraview Press.
Bauer, H. H. (2004). Science in the 21st century: knowledge monopolies and research cartels.
   Journal of Scientific Exploration, 18, 643-660.
Brinkley, A. (2002). Review of John Lewis Gaddis, The Landscape of History: How Historians Map
   the Past. New York Times Book Review, 17 November, p. 50.
Brown, A. (1999). The Darwin Wars: The Scientific Battle for the Soul of Man. London: Simon & Schuster.
Brush, S. (1978). The Temperature of History. New York: Burt Franklin.
Davies, P. (1992). The Mind of God: The Scientific Basis for a Rational World. New York: Simon & Schuster.
Depew, D., & Grene, M. (2004). A History of the Philosophy of Biology. Cambridge University Press,
   concluding chapter.
Gross, P. R., & Levitt, N. (1994). Higher Superstition: The Academic Left and Its Quarrels with
   Science. Johns Hopkins University Press.
Haack, S. (1998). Manifesto of a Passionate Moderate. University of Chicago Press.
Haack, S. (2003). Defending Science-Within Reason: Between Scientism and Cynicism. Amherst,
   NY: Prometheus.
Hall, S. S. (2002). Mapping the heavens, curing dandruff (review of Lost Discoveries: The Ancient
   Roots of Modern Science-from the Babylonians to the Maya, by Dick Teresi). New York Times
   Book Review, 1 December, pp. 13-14.
Hirsh, A. E. (2003). The scientific method-Signs of life. American Scholar, 72 (summer), 125-130.
Jahn, R. G., & Dunne, B. J. (1997). Science of the subjective. Journal of Scientific Exploration, 11,
Kauffman, J. (2001). Review of Robert Park, Voodoo Science: The Road from Foolishness to Fraud.
   Journal of Scientific Exploration, 15, 281-287.
Knight, D. (1986). The Age of Science. Oxford & New York: Basil Blackwell.
Marks, J. (1983). Science and the Making of the Modern World. Oxford: Heinemann.
Miller, K. R. (1999). Finding Darwin's God. New York: Cliff Street Books.
Muller, R. A. (1980). Innovation and scientific funding. Science, 209, 880-883.
Polanyi, M. (1967). Life transcending physics and chemistry. Chemical & Engineering News,
   21 August, 54-66.
Sokal, A. (1996a). Transgressing the boundaries: Toward a transformative hermeneutics of quantum
   gravity. Social Text 46/47, 14 (#1 & 2, Spring/Summer), 217-250.
Sokal, A. (1996b). A physicist experiments with cultural studies. Lingua Franca, May/June, 62-64.
Tipler, F. (1994). The Physics of Immortality: Modern Cosmology, God and the Resurrection. New York: Doubleday.
Weinberg, S. (1993). Cited by John S. Rigden, "A reductionist in search of beauty (review of Steven
   Weinberg, Dreams of a Final Theory)". American Scientist, 82 (January-February 1994), 69.
Ziman, J. (1994). Prometheus Bound: Science in a Dynamic Steady State. Cambridge University Press.
Ziman, J. (2000). Real Science-What it is, and What it Means. Cambridge University Press.
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 157-161, 2007


                              Swords's Caution to Coleman

Peter Coleman (JSE 20 #2, 215-38) is to be congratulated for taking on
a difficult scientific mystery (ball lightning) and attempting to apply his insights
to the even more unspeakable anomalies of "ghost lights" and UFOs.
   I am a long-time student of the UFO phenomenon, and most of my colleagues
(and I) have generally felt that there are many reports of "nocturnal lights" in
our files that are not readily explainable and are also not part of the core UFO
phenomenon. Dr. J. Allen Hynek, long ago in 1952 when he was a meek, mostly
debunking, consultant for the Air Force's UFO project (Blue Book) felt brave
enough to give a talk to the Optical Society of America (later appearing in their
Journal) concerning his view that, no matter what one considered the majority of
the UFO phenomena, there were piles of unexplainable reports that pointed to
a "natural" phenomenon, which he tagged "nocturnal meandering lights"
(NMLs). NML cases co-exist alongside structured objects and close encounters
in ufological discussions, but many of us feel that in the usual cases we are
talking about two different things.
   Which brings me to my main point. Another "phenomenon" that ufologists
have endured regularly across the years is that of the New Comprehensive
Theory: the theory that will solve the UFO enigma at last. I am not saying that
Peter Coleman is claiming this. What I am saying is that his theory is precisely
the kind of idea that some people will use to dismiss the entirety of the remaining
"mysterious objects" reports that have bedeviled us to this day. I believe that
a good physical explanation of ball lightning could lead to a good physical
explanation of (apparently less energy-dense) "plasma" (or whatever) phenom-
ena, which could lead to a good physical explanation of certain experiences of
"ghost lights", "spook lights", "fairy balls", NMLs, etc. And, bravo to all that.
   But I can see the next step coming: "Therefore, all unexplained Balls of Light,
and all unexplained luminescent shapes, are explainable (now) by the Unified
Ball-of-Energy Theory". And (corollary) everything else people say about these
illuminated mysteries is the product of the unusual forces involved in their
emotions and on their brains. (All said with a respectful bow in the direction of
Laurentian University and Dr. Persinger.)
   Let me give one instance (absolutely ripe for someone to apply this new
theory to) wherein, I believe, the theorist will already take one step too far:
   The evening of November 2nd/3rd, 1957, Levelland (TX). Between about 11
p.m. and 3 a.m., the town was treated to a series of "UFO incidents" wherein
large, oval-shaped, brightly luminous objects (or one object moving about)
158                             Letter to the Editor

appeared on or over roads on the east, west, and north sides of the city, in each
case shutting down car or truck engines as well as their headlights. There were,
at a minimum, nine separate incidents, all with independent witnesses, and all
reported to the Levelland sheriff's office either that evening or the following
day. The described objects were in the range of 100 to 200 feet long. The
viewing times (from driver awareness to the object "taking off" and leaving)
were from several seconds to several minutes. No object just "disappeared". As
it (they) left the scene, all vehicles and their lights were again functional.
   The Air Force was happy to duck this case by writing in its explanations
column: "ball lightning" or perhaps "St. Elmo's Fire".
So, here, I suspect, we go again. If Peter Coleman, or someone else using his
model, wishes to begin an assault on UFO mysteries by taking on the Levelland
case, I would ask (at a minimum) the following:
  1. The model must accommodate extremely large objects that exist over an
     extended period of time in an environment that is not expressing any other
     particularly violent or energy-rich phenomena (visible vortices, lightning
     strikes, etc.).
  2. The model must accommodate a very repetitive series of nearly identical
     events occurring over four hours, and over several miles, in each case
     focusing directly on or over the roads.
  3. The model must accommodate a series of engine stoppages and headlight
     failures, despite the "coincident" object not being right next to or on top of
     the affected vehicles. When Ford Motor engineers Fred Hooven and David
     Moyers consulted with the University of Colorado Project about "vehicle
     interference" cases, their opinion was that condensed balls of electrical
     energy should not be able to seriously affect car engines unless that energy
      was right inside the mechanisms themselves. Also, the energy densities
      needed should have created appalling effects on the drivers' physiologies.
   I am aware that Peter Coleman is not (at least yet) claiming to be able to
explain away cases like Levelland. So why am I worried? On p. 230 of the
original article there are listed 9 properties characteristic of UFOs. The list is
from a much longer list of UFO attributes, and perhaps it is clear that these are
attributes only of the sub-set of UFO cases that consist of NMLs alone. That's
not quite obvious from the language, but to put things in the best light, that's
how I'll take it. As such, this new model is not to be construed to apply to UFO
reports that are not strictly and only NMLs. OK, so far.
   The list is stated as one that "will suffice". Yes and no. If we use the list to
sort cases of precisely these attributes from the rest of the UFO pile, well, sure-
all's fair there. But, as an example, note this: Levelland's object(s) made no
sound, didn't rotate, no wind rush reported, no flames exhausted, left no traces . . .
and had a bunch of characteristics not on the list. So, case by case, the list may
not "suffice". (Just a comment and a request for fair play-something we in the
field do not get often from the explainers.) The phenomena of sounds and wind
rush are, percentage-wise, quite rare in UFO cases.
   I am testing the limits of Letters-to-the-Editor and so must quit. I'll leave
other comments to our colleagues Marsha Adams, Erling Strand, Bruce
Maccabee, et al.

                                                               MICHAEL D. SWORDS
                                                                Professor Emeritus
                                                            Environmental Institute
                                                       Western Michigan University

                     Coleman's Response to Swords

First, I wish to thank Michael Swords for his challenging and thoughtful letter.
My primary aim was to find a defensible scientific theory of ball lightning
supported by experimental demonstration. As it happens, I am also claiming
that some UFO lights possess characteristics that are ball lightning-like and
might be applied to these events. I do not know what percentage of these UFOs
might be accounted for by the theory, but I have also made it clear that I do not
claim that the theory is valid for all UFO events, including the ET-like.
   It cannot be ruled out, however, that this hitherto unknown natural event might
be the root cause of some classic UFO encounters that have stirred up a media
flurry. As I made clear in the article, the theory could act as another filter to weed
out the natural causes; it is not 'debunking' genuine UFO cases. There might be
left a residue of cases that could point towards the ET hypothesis.
   I believe meteorology is missing some science connected with atmospheric
vortices that could explain a good proportion of the anomalous lights cited in my
paper. There is no compelling reason why the vortex fireball explanation of
these unknown lights should be immediately ruled out; the observations I have
collected suggest that vortex fireballs formed from the air flow of natural
vortices do exist. Combustion laboratory experiments support the idea, and the
science underpinning the theory is sound in principle.
   Some UFO investigators have dismissed ball lightning as a theory for some
UFOs because of their view of what constitutes ball lightning. Certainly the
belief that ball lightning is only a 20-30 centimeter fireball of electrical origin,
seen only in thunderstorms, would not fit with what UFO observers are seeing.
Yet ball lightning was invoked as an explanation (e.g., by Philip Klass) even with
this old definition of ball lightning. Both anti-UFOers and pro-UFOers need to be
aware of the current research on the ball lightning phenomenon, which clearly
shows it to be far more diverse in its properties (e.g., larger, longer-lived,
differently shaped, appearing in fine weather), as I made clear in my article.
There are UFO cases for which the vortex fireball theory might be an appropriate
explanation, and I have found several that seem to be consistent with the theory.
Michael Swords's choice of the 1957 Levelland UFO as being "absolutely ripe for
someone to apply this new theory to" is correct, despite the three problems he
stated. The engine stoppages and headlights going out do present a special
problem. However, even this difficulty
may be overcome in the future. This might involve an explanation based on how
the natural phenomenon is affecting the vehicle. One line of thought is to look at
how a hovering vortex with a strong electric field near the vehicle might transfer
charge to the metallic body of the vehicle connected to the positive battery
terminal. Research has shown that natural vortices can produce large electric
fields. Corona has been observed at the funnel tip (Montgomery, 1955).
Furthermore, spark discharges between the ground and the vortex have also been
reported (e.g., Vonnegut &amp; Moore, 1959). Incidentally, a spark discharge would
have enough energy to ignite a combustible gas. Repeated ignition is possible,
and this would explain the repeated on/off or blinking behavior of some events.
The funnel tip looks like an ideal location for ball lightning production because
sparks would ignite gas inside small vortices that are then shed from the parent
vortex and appear as fireballs. The theory postulates that the main phenomenon
involved in many events will be vortex combustion, but the electrical spark does
play a role in ignition, just as in a car, where a sparkplug is required for ignition
of the vapor/air cloud.
   The other two problems do not present any difficulty (in contrast to such
things as witnessed abduction by alien creatures). The theory can accommodate
objects with large diameters (several meters) and extended periods. The
environment could be energy-rich. Combustible gases could have been present
at the time. Combustible gases like natural gas are colorless. Vortices are
difficult to see unless tracers are present, and observation at night, which is
when the Levelland UFOs were seen (e.g., 11 p.m.-3 a.m.), would be doubly
difficult. Vortices can be repeatedly generated given the right air flow conditions.
However, instead of the proposition that a number of fireballs were seen, it
could have been a case of one fireball seen by several observers, as one
newspaper report suggests. The UFO predilection for roads may have to do with
properties of the road itself. One suggestion is the tendency of vortices to follow along
raised ridges, mountain sides, etc. Roads normally have a camber that might
effectively act as a ridge line. These fine-weather vortices may be large and not
the usual tornado connected to a supercell storm.
   Michael Swords stated: "But, as an example note this: Levelland's object(s)
made no sound, didn't rotate, no wind rush reported, no flames exhausted, left no
traces . . ." His statement contradicts an original newspaper report at that time
that was corroborated by an Air Force report. The Washington Evening Star
(4 November 1957) contains the testimony of the war veteran, Pedro Saucedo.
He stated on record: "When it got near, the lights on my truck went out and the
motor died. I jumped out of the truck and hit the dirt because I was scared. I called
to Joe, but he didn't get out. The thing passed directly over the truck with
a great sound and a rush of wind. It sounded like thunder and my truck rocked
from the blast. I felt a lot of heat. Then I watched it go out of sight toward
Levelland." Other witnesses also reported sounds. James Long saw the elliptical
UFO at 1:30 a.m., and it had a sound like thunder. In the same article, another
witness, Ronald Martin, said that a big ball of fire dropped on the highway. Police
patrolman A.J. Fowler said that Mr. Saucedo and about 14 others agreed that it
was around 200 feet long and shaped like an egg and lit up like it was on fire. In
other reports, Newell Wright, a college student, saw a glowing blue-green,
flat-bottomed oval object on the highway. The flat bottom could be a signature
indicating the funnel connecting to vortex breakdown, as I mentioned in my
paper. Hockley County Sheriff Weir Clem and his deputy saw a huge
football-shaped luminous object.
   The Levelland UFO is no different from many other UFOs that seem to fall into
the same category. For example, the large number of UFOs reported between
Long Beach and Santa Barbara appear to be connected with hydrocarbon
emission, both on shore and off shore seepage. Washburn and Clark from the
University of California measured hydrocarbon seepage fluxes in the Northern
Santa Barbara Channel. Preston Dennett, in the February 2006 Fate magazine,
collected more than 50 UFO sightings, including fireballs and cigar shapes from
the early 1950s. One such UFO seen at Playa Del Rey along the Pacific Coast
Highway was described as an egg-shaped object surrounded in blue haze.
Because of the frequency of sightings in the area, he pondered the question
of whether there was an underwater UFO base. In my opinion, the combusting
vortex fireball is a more likely explanation.
   The combusting vortex fireball is a novel meteorological phenomenon that
could explain several related enigmas, including ball lightning and some UFO lights.

1. Montgomery, F. C. (1955). Weather notes: Tornadoes at Blackwell, Oklahoma, May 25, 1955.
   Monthly Weather Review, 83(5), 109.
2. Vonnegut, B., &amp; Moore, C. B. (1959). Giant electrical storms. Recent advances in Atmospheric
Journal of Scientific Exploration, Vol. 21, No. 1, pp. 163-225, 2007    0892-3310/07

Psicologia Psicoanalisi Parapsicologia [Psychology Psychoanalysis Para-
psychology], by Giulio Caratelli. Rome: Sovera, 1996. 255 pp. L. 31.000 (paper).
ISBN 88-8124-015-7.
Del Hipnotismo a Freud: Orígenes Históricos de la Psicoterapia [From
Hypnotism to Freud: Historical Origins of Psychotherapy], by José María López
Piñero. Madrid: Alianza Editorial, 2002. 157 pp. 6.00 Euros (paper). ISBN 84-
The Bifurcation of the Self: The History and Theory of Dissociation and its
Disorders, by Robert W. Rieber. Springer, 2006. 304 pp. $69.95 (hardcover).
ISBN 0-387-27413-8.
Premiers Écrits Psychologiques (1885-1888): Oeuvres Choisies I [First
Psychological Writings (1885-1888): Selected Works I], by Pierre Janet (edited
by Serge Nicolas). Paris: L'Harmattan, 2005. 157 pp. 14.50 Euros (paper). ISBN
L'Hypnose: Charcot Face à Bernheim: L'École de la Salpêtrière Face à
l'École de Nancy [Hypnosis: Charcot Faces Bernheim: The Salpêtrière School
Faces the Nancy School], by Serge Nicolas. Paris: L'Harmattan, 2004. 149 pp.
14.00 Euros (paper). ISBN 2-7475-5971-8.
Hypnotisme, Double Conscience et Altérations de la Personnalité: Le Cas
Félida X (1887) [Hypnotism, Double Consciousness and Alterations of
Personality: The Félida X. Case (1887)], by Étienne Eugène Azam (edited by
Serge Nicolas). Paris: L'Harmattan, 2004. 284 pp. 28.50 Euros (paper). ISBN 2-
Nineteenth-century psychology and psychical research shared many things, such
as a common interest in the workings of the subconscious mind and in the study
of special cases, be they hysterics, the hypnotized, or mediums. This is clear in
the historical writings of Carroy (1991), Crabtree (1993), Plas (2000),
Shamdasani (1993), and others. The early work of such psychical researchers
as Edmund Gurney, Frederic W. H. Myers, and Charles Richet contributed to
this trend, as well as to ideas about dissociation (Alvarado, 2002). The six books
reviewed here, published in French, Italian, English, and Spanish, discuss
these issues and provide us with the general psychological
contexts in which psychical research flourished during the nineteenth century.
   A good place to begin discussing some of the historical relationships between
psychology and psychical research is Giulio Caratelli's Psicologia Psicoanalisi
Parapsicologia, a book published a decade ago in Italy that deserves to be better
known. Caratelli presents previously published essays that cover general issues
and specific investigators. The first type includes such topics as the induction of
sleep at a distance, early uses of hypnosis in Italy, dreams and telepathy,
poltergeists, and the psychology of testimony. The second type consists of the
psychical research work of individuals such as Sándor Ferenczi, Théodore
Flournoy, Sigmund Freud, and Carl Gustav Jung.
   The biographical essays are particularly interesting, exploring, for example,
Freud's interests in psychic phenomena and his interactions with Ferenczi and
Jung. The chapter on poltergeists touches on the discussions Freud had with
Nandor Fodor about this phenomenon.
   The essay on Flournoy presents much of what Caratelli refers to as the
"various functions of dramatization, personification and creative imagination"
(p. 69; this, and other translations, are mine). This is a reference to Flournoy's
unique contribution to psychology and psychical research, his work with the
mediumistic romances of the medium Hélène Smith (the pseudonym of Catherine
Élise Müller). Flournoy (1900) published a book based on this research that
became the most important psychological study of mediumship of its time. In
the book he studied mediumistic communications about previous lives in India
and France and about life on planet Mars. The latter included the subconscious
creation of paintings about Martian themes and of a Martian language.
   While there is no attempt to bring the essays together in a conclusion, the
work has much information on the interrelationship between psychology and
psychical research.
   The rest of the books reviewed here focus on the general nineteenth-century
interest in the subconscious mind and on the use of hypnosis and the concept of
dissociation, topics that have been discussed in many important studies (e.g.,
Crabtree, 1993; Ellenberger, 1970). A scholar who has made many contributions
to this literature is the Spanish historian of medicine José María López Piñero.
The book reviewed here, Del Hipnotismo a Freud: Orígenes Históricos de la
Psicoterapia, is designed to be a very short discussion of a vast literature on
hypnosis and the development of psychotherapy, which the author discussed in
much more detail in a previous publication that unfortunately is rarely cited
by non-Spanish-speaking historians of psychology or by psychologists
(López Piñero &amp; Morales Meseguer, 1970).
   Starting with selected pre-nineteenth-century developments, López Piñero
succinctly covers animal magnetism, which he considers the "immediate
antecedent of psychotherapy" (p. 29). In addition to Mesmer, the author
discusses other figures such as Armand J. de Chastenet, the Marquis de
Puységur. The author states that Puységur changed mesmerism when he induced
in one of his subjects "a state similar to, but different from, natural sleep, since he
talked and walked as if he were awake, obeying automatically the orders of the
magnetizer. Because of its similarity to spontaneous somnambulism, he named
it 'induced somnambulism'" (p. 34). But López Piñero also mentions other
phenomena observed by Puységur, such as the diagnosis of illnesses by apparently
non-sensory means.
   Mesmerism, López Piñero argued, originated Spiritualism because "the
'magnetized' could be an intermediary . . . to communicate with the spirits" (p.
37). Probably because the author wants to keep the account short, he does not
consider other influential factors, nor does he give particular examples to
illustrate his point. One case was Joseph W. Haddock's (1851) mesmeric subject
Emma. In addition to clairvoyance, in 1848 "Emma would, in the mesmeric
state, speak of the scenery and nature of the spirit world . . .." (p. 181). Haddock
also noticed that Emma seemed to be in contact with a lady who had been
related to him when she was alive (p. 187).
   Space is also devoted to the hypnosis work of figures such as José Custódio de
Faria, James Braid, William B. Carpenter, Daniel H. Tuke, Jean-Martin Charcot,
Ambroise-Auguste Liébeault, Hippolyte Bernheim, Joseph Babinski, Pierre Janet,
and Sigmund Freud, among others. The discussion not only covers the ideas and
works of these individuals, but also some of the controversies related to them.
   Charcot's work is discussed up to 1882, when he presented his famous paper
at the French Academy of Sciences (Charcot, 1882). In this paper, to quote from
López Piñero's translation of Charcot, hypnosis was seen to involve "various
nervous states . . .. According to my observations, there are three of these
nervous states: 1., the cataleptic state, 2., the lethargic state, and 3., the
somnambulistic state" (pp. 60-61). Unfortunately the author does not mention other
important aspects of Charcot's work, such as his work with metalloscopy and
metallotherapy. Charcot was involved in these studies in the 1870s; his work
involved a topic that, in the mind of many, bordered on psychical research, or at
least projected a strong image of the "marvelous" (Plas, 2000). He presided over
a commission sponsored by the Société de Biologie to study the phenomena,
publishing two papers with positive results (Charcot et al., 1877, 1878). The
omission of these studies is important because there is a literature that has
discussed this line of work as an important component in the development of
Charcot's ideas (see, for example, the well-known study of Gauchet & Swain
 [1997/2000]). During the course of these studies the members of the commission
found that the external and internal use of metals could cause such phenomena
as the arrest of paralysis and hysterical attacks. They also found that metals
could cause a transference from one side of the body to the other, so that
anesthesia disappeared from one side and appeared on the other. This line of work
was followed up later by others at the Salpêtrière, when Alfred Binet and Charles
Féré used magnets to induce the transference of such phenomena as
hallucinations and paralyses from one side of the body to the other (Binet &amp; Féré,
1885). In this work, and in later experiments in which the influence was extended
to go from one person to another (Babinski, 1886), it was assumed that
the influence was a purely physical one caused by the magnet. The work of
Babinski was described by López Piñero as one of the most colorful
"exaggerations of the ideas of Charcot published by one of his disciples"
(p. 86). However, while the work may be seen as an exaggeration of Charcot's
opinions in terms of influences from one person to another, it was in some ways
an extension of Charcot's earlier work and it was consistent with the theoretical
framework of the Salpêtrière school and the idea that hysterical attacks and
hypnosis could be induced using external stimuli affecting the nervous system,
of which the application of magnets was but one.
   López Piñero discusses many interesting aspects of Charcot's work, such as
the later views on the influence of ideas on paralyses. But he cautions the reader
against psychologizing Charcot's system too much. While this was seen as an
important step, Charcot still had a "'psychology without a subject,' according to
which 'ideas' act on the organism through a deterministic mechanism" (p. 81).
Bringing the person back to psychology, the author states, was the work of Janet
and Freud.
   There is much to recommend in this short book, particularly for psychologists
and psychiatrists who wish to have an outline of the topic in question. Among its
weaknesses, however, are the lack of a proper conclusion summarizing the book's
content and the many parts where clearer bibliographical references would
have been useful.
   Another book seemingly addressed to psychologists and psychiatrists is Robert
W. Rieber's The Bifurcation of the Self: The History and Theory of Dissociation
and its Disorders. The author states in the introduction that there are several
important aspects of the concept of dissociation, among them the notion that it is
a process active in everyone. Furthermore, Rieber states that dissociation is
a "mental activity which can be utilized by the individual for both creative as well
as destructive purposes" (p. 9). While many forms of dissociation are mentioned,
much of the discussion focuses on Dissociative Identity Disorder (DID).
   The first chapter is devoted to brief discussions of old cases of dissociation,
but the author also mentions students of the topic such as Abercrombie,
MacNish, Mitchell, and Wigan. Rieber states that "alienists and psychologists . . .
not only disputed the nature of the disorder but couldn't even come to any
agreement about the vocabulary that should be used to describe it" (p. 18).
Furthermore, the author presents summaries of the work and ideas of Pierre
Janet, Frederic W. H. Myers, Morton Prince, and Boris Sidis. However, I
disagree with some statements. Rieber contends that Janet was "the first person to
argue that people under hypnosis are not unconscious but rather have a kind of
divided consciousness" (p. 20). There were ideas of this sort before Janet
published his first papers on the subject, which appear in the compilation of his
early work discussed above. One such precedent situated close in time to the
work of Janet comprised the writings of Edmund Gurney (1884), in which
the stages of hypnosis were discussed, among other phenomena indicative of the
presence of a subconscious intelligence. Furthermore, while the author is to be
commended for including Myers in his discussion, we should not associate
Myers with the idea that "there are multiple centers of consciousness" (p. 24).
Instead, Myers defended the existence and action of a single subliminal self.
   After a chapter on Freud's use of hypnosis and aspects of his dynamic
psychology, Rieber discusses the Sybil case of DID in three chapters. In his view,
the case was overly publicized and misrepresented; in fact, it was, he says, "a
conscious misrepresentation of the facts" (p. 130). The author also presents short
summaries of old dissociation cases, such as those of Rachel Baker, Ansel
Bourne, Doris Fischer, Mollie Fancher, Mary Reynolds, Hélène Smith, and
Félida X., among others. In addition, Rieber presents a table summarizing
aspects of the cases, such as demographics, presence of trauma (by type), and
therapeutic success, among many other variables.
   In the concluding chapter the author summarizes some trends. This includes
the idea that DID is a "reflection of the society and epoch from which it arose"
(p. 183). The topic of iatrogenesis and the skepticism of many contemporary
clinicians, which are important aspects of the modern literature on the subject,
are also discussed. In Rieber's view discussion of DID "is a risky enterprise;
notwithstanding the formulation of the DSM-IV, no consensus has ever been
reached as to what the term means or indeed, what kind of psychological,
physical, or social states it applies to" (p. 185).
   This book will be useful to mental health professionals looking for an
overview of selected past ideas and cases of dissociation, particularly DID.
Nonetheless it has some problems that are particularly important to those of us
interested in the historical study of the topic. While Spiritualism and
mediumship, as well as the work of Charcot, are mentioned, the discussion hardly
analyzes the importance of these topics and figures for the study of dissociation.
There are also documentation problems. One looks in vain for primary sources
in the sections in which figures such as Braid, Charcot, Mesmer, Myers, and the
Marquis de Puységur are discussed, not to mention important French sources
such as Janet's L'Automatisme Psychologique (1889) (only briefly mentioned in
the text [p. 20], but not listed in the references for that chapter).
   Furthermore, the author misses many important developments. For example,
his overview of hypnosis does not include many examples of the appearance and
induction of secondary personalities in hypnotized individuals, a topic reviewed
by Crabtree (1993). Charcot did more than list what he believed were the
physiological stages of hypnosis, as seen in a lecture he presented in 1890 about
a patient showing instances of "doubling." This patient showed a "secondary
state" that presented different memories and physiological phenomena than
were exhibited in the primary state (Guinon, with Blocq et al., 1893: 171-176).
The discussion of this and other examples of hypnotic and hysterical secondary
personalities would have allowed Rieber to better defend what he referred to as
"animal magnetism and its links to MPD" (p. 43). Similarly, his interest in the
iatrogenic creation of DID could have been connected to the writings of
Delboeuf (1886) and Flournoy (1900) and to the criticisms Hippolyte Bernheim
presented of the hypnotic phenomena of the Salpêtrière researchers.
   Such ideas about iatrogenesis form an important part of L'Hypnose: Charcot
Face à Bernheim: L'École de la Salpêtrière Face à l'École de Nancy. Authored
by French historian of psychology Serge Nicolas, the discussion focuses on the
clashes between the Salpêtrière and Nancy schools of hypnosis led by
Charcot and Bernheim.
These clashes took place within a research tradition that saw hypnosis as a
particularly powerful technique to explore the human
psyche. One worker in the field compared the investigative power of hypnosis to
that of vivisection, qualifying the technique as a "moral vivisection" (Beaunis,
1887: 114). Both Janet (1889) and Richet (1883) actively used hypnosis in this
way. Nicolas states that Charcot "used hypnosis as an experimental technique to
study hysteria. In his clinical presentations of patients at the Salpêtrière he
reproduced their symptoms with hypnosis (experimental neurosis). Hypnotism
represented the experimental part of neurosis . . .." (pp. 15-16).
   Nicolas also covers metalloscopy, metallotherapy, and the controversial
transfer phenomena. Bernheim (1885), who is discussed in the second chapter,
ascribed these results to suggestion. He also questioned the relationship between
hysteria and hypnosis, as well as the stages of hypnosis that Charcot (1882) and
others postulated.
   In addition, Nicolas also discusses the writings of Joseph R. L. Delboeuf, who,
like Bernheim, was skeptical of the Salpêtrière approach to hypnosis. Delboeuf's
(1886) ideas are expressed to some extent in the title of a paper, "Influence of
Education and Imitation in Induced Somnambulism." He did not believe that either
the famous stages of hypnosis or the transfer phenomena were independent of
suggestion. Articles written by Delboeuf are reprinted in chapters four and five of
the book. The sixth chapter reproduces an article by Bernheim in which he
summarized his objections to the Salpêtrière doctrine. Unfortunately, Nicolas does
not reprint excerpts from the Salpêtrière school replying to these criticisms.
   While Nicolas's book is a good introduction to its topic, I feel that he could
have broadened the scope of his work somewhat to include more information
relevant to the iatrogenic ideas of Bernheim and Delboeuf. I am referring to an
older literature that discussed demonopathy as a condition that was "eminently
contagious" (Calmeil, 1845: Vol. 1, p. 86) and hallucinations as phenomena
influenced by social ideas, by the "force of example, by a true moral contagion"
(De Boismont, 1845: 308). Such concepts suggest that perhaps critiques of the
Salpêtrière work were part of a conceptual tradition that was older and wider in
scope than the one in the late nineteenth-century hypnosis literature.
   While Nicolas's focus is France, and this is not a criticism of his work, at this
point we need studies that extend the history of French hypnosis to the reception
of the ideas coming from the Salpêtrière and Nancy schools in other countries.
This could include, for instance, an analysis of discussions of the work with
metals by English physicians (e.g., Tuke, 1879). All in all, Nicolas is successful
in giving us a perspective on the battles between the rival schools and, more
importantly, on the development of a critical mentality in which the
experimenters were not mere observers and the experimental subjects were
not seen as merely passive instruments.
   Other work by Nicolas includes the preparation of classic French materials on
the topics here discussed. Under the title Premiers Écrits Psychologiques, he
presents the first articles Pierre Janet published between 1885 and 1888. Nicolas
includes an introduction to the volume presenting interesting biographical
details and comments about Janet that put the work reprinted here in context.
The first paper, published in 1885 and not in 1886 as is sometimes assumed, was
a report of tests to induce trance at a distance with Mme. B., whose real name
was Léonie Leboulanger, a topic discussed as well by Caratelli in the first book
reviewed here. The article, entitled "Note sur Quelques Phénomènes de
Somnambulisme," was a presentation made at a meeting of the Société de
Psychologie Physiologique in 1885. Written by Janet (who worked with
physician Joseph Gibert), the paper was presented by his famous uncle, the
philosopher Paul Janet, in a session presided over by Jean-Martin Charcot. The
context was decidedly mainstream, but the topic was not, consisting of a return
to phenomena observed by old magnetizers. The results of these experiments,
and of later ones reported in an 1886 paper also reprinted by Nicolas, became
instant classics and were widely discussed in the contemporary literature by
individuals from such different countries as England (Myers, 1886) and Poland
(Ochorowicz, 1887; Janet's articles have been translated into English: Janet,
1885/1968a, 1886/1968b). The second paper reports tests in which several other
people were present in addition to Janet and Gibert, including members of
Janet's family (his uncle Paul and his brother Jules) and Léon Marillier, Arthur
and Frederic W. H. Myers, Julian Ochorowicz, and Charles Richet. Soon after
the appearance of the first paper, others presented similar observations of effects
induced at a distance observed during the 1870s (e.g., Beaunis, 1886; Richet,
1886). It seems that Janet had taken the lid off a topic of great, but hidden,
interest for some people. However, this does not mean that everyone accepted
the research, something that is not discussed either by Caratelli or by Nicolas.
An example of negative opinion about this work can be found in French
philosopher Charles Renouvier, who wrote to William James on February 5,
1886: "It seems to me that the observations and experiments of Richet, Beaunis
and others are no more scientific, that they no more meet the condition of
verification and control, than do many of the accounts which fill the books on
animal magnetism" (Perry, 1935: Vol. 1, p. 700).
   Overall, research on mental suggestion in France was ultimately unsuccessful
in its quest to gain acceptance in psychology. While Janet kept his interest in
different aspects of psychic phenomena in later years (e.g., Janet, 1892), he
stopped his work on mental suggestion and distanced himself from those studies
(Janet, 1930), something that has puzzled many. Caratelli (p. 162) mentions
possible psychological reasons, and hints at Janet's possible worry about his
future career. It seems to me that the latter must have been an important factor,
and one that deserves archival research. Janet was in his late 20s when the first
paper was presented and, while his work seems to have been well received, the
general topic of mental suggestion in France had its critics, among them Georges
Gilles de la Tourette, who believed that, from the perspective of science, "the
phenomena of mental suggestion do not exist . . . or are not proven" (Gilles de la
Tourette, 1887: 167-168). Mental suggestion certainly was unorthodox and it is
doubtful that, had Janet continued this work, he would have been able to
establish himself as a respectable clinician and as an influential author able to
publish works of great importance such as L'Automatisme Psychologique (1889).
    Premiers Écrits Psychologiques is not limited to the above-mentioned papers.
It also includes other important articles, such as one entitled "Les Phases
Intermédiaires de l'Hypnose," published in 1886 in the Revue Scientifique,
a general science journal edited by Richet. The other three are very important
papers reprinted from Théodule Ribot's Revue Philosophique de la France et de
l'Étranger, in which Janet (1886, 1887, 1888) explored dissociation using
hypnosis. These articles included "Les Actes Inconscients et le Dédoublement
de la Personnalité Pendant le Somnambulisme Provoqué" (1886), "L'Anesthésie
Systématisée et la Dissociation des Phénomènes Psychiques" (1887), and "Les
Actes Inconscients et la Mémoire Pendant le Somnambulisme" (1888), which
included work with a patient referred to as L. (Lucie), described in the 1886
paper as a nineteen-year-old hysteric.
    The 1887 paper includes the first use of the term dissociation in Janet's
writings. Dissociation, Janet argued, is the main aspect of such conditions as
hysteria and hypnosis. A particular example of this process, and a phenomenon
observed with Lucie, was "systematic anesthesia." In this paper Janet also
reported using automatic writing to communicate with Adrienne, Lucie's
secondary personality. Adrienne, he stated, was an "instrument of observation"
(p. 90, this, and other pages, refer to the book under review), and automatic
writing was a method of psychological analysis showing the existence of
unconscious operations in the somnambulist. This, Janet believed, was the same
as the writings of mediums.
    Some of the material reprinted here, as Crabtree (2003) has discussed, gives
us a window into the influence of psychical research on Janet's studies of
dissociation. In the 1886 paper Janet refers to automatic writing as an example
of unconscious action. Janet wrote that this phenomenon had been "studied very
well by an English psychologist of merit, M. Fr. Myers, who is dedicated to the
difficult study of unknown psychological phenomena" (pp. 80-81). A footnote
to this sentence added that Myers's "ingenious work demands serious study,"
and, significantly, Janet added that in his article he was only mentioning facts
without getting into their interpretation (p. 81). If we turn to the article by Myers
(1885), we find that in addition to discussions of a subconscious self in charge
of automatic writing, Myers also discussed telepathy as the province of such
a hidden mind. Clearly Janet was not willing to follow Myers here. Instead he
"dissociated" Myers, taking up the more accepted idea of unconscious action and
remaining aloof from the supernormal. This shows a pattern common in the
use of ideas, the selective appropriation of aspects of the material cited, which is
particularly significant in this case because it illustrates the marginal status of
psychical research in general, and of ideas of telepathy in particular. A similar
point may be made about Janet's use of the mesmeric literature, such as his citation
of Deleuze (p. 92) in his 1887 paper.
    Janet's dependence on particular patients to discover, or to create, the
workings of the subconscious and of dissociation was typical of the French
clinical tradition explored by Carroy (1991), among others. This tradition is
illustrated in the studies of hysteria and hypnosis Charcot conducted at the
Salpêtrière. The photographs of hysterics from this hospital have left us with
many visual records of a variety of patients; it has been argued that these pho-
tographs created the phenomena in question as a function of the psychosocial
environment at the time (Didi-Huberman, 1982/2003). Included among these
patients is the famous Blanche Wittmann, immortalized by André Brouillet in
the painting in which she lies in Charcot's arms while being scrutinized by many
physicians (Signoret, 1983).
   Another important patient, and the main subject of the next book reviewed here,
was the Félida X. case of double consciousness, probably the most influential and
widely cited case of its time, which was described in Scientific American as
a "splendid chance for a sensational novelist" (Two Personalities in One Person,
1876). French physician Charles-Marie-Étienne Eugène Azam chronicled the
case in his classic 1887 compilation of previously published papers, Hypnotisme,
Double Conscience et Altérations de la Personnalité, reprinted under the direction
of Serge Nicolas as a facsimile of the original edition.
   Nicolas's introduction not only presents biographical information but also lists
Azam's published writings on the case and reprints an article and a review of the
book by philosopher Victor Egger, as well as Azam's 1890 paper that discusses
the "doubling of personality," and an excerpt from a revised edition of the book
(Azam, 1893). The book opens with a preface by Charcot. Praising the work of
his school, he affirms that hypnotism "has arrived thanks to the regular
application of the nosographic method" and has "definitively conquered a place
among the facts of positive science" (p. 5). Azam, Charcot argued, was one of
the persons who initiated such developments.
   The work has three chapters with several sections. The first chapter covers
many aspects of hypnosis. This includes Azam's use of hypnosis and spec-
ulations on hyperesthesia during the state. Presumably referring to clairvoyance
and other phenomena, Azam states that he has never seen the "marvels of
magnetism" (p. 13). Furthermore, he believes that hyperesthesia may be the
principle behind the "supposed magnetic fluid and its marvels . . . second sight,
etc." (p. 38). Although this is not pointed out in the introduction to the book,
Azam's ideas were consistent with many other nineteenth-century French at-
tempts to explain the phenomena of hypnosis using the idea of enhanced sensory
functions during the state, a topic studied by Bertrand Méheust (1999: 163-174).
However, Azam was more positive about thought-transference in a letter he
wrote after the book was published (the letter was published by Dufay, 1889).
   Related to my comments above, the phenomenon of contagion is seen by the
author to be behind hysteria and possession, among other phenomena. He points
out that if a woman has a hysterical attack in a hospital hall, it is common to see
says, is important in hypnosis. Interestingly, he did not connect this thought to
hysteria and hypnosis research at the Salpêtrière. But perhaps this may be
explained by the fact that Azam was being political, because Charcot wrote
a preface for the book.
   The second chapter, and the main part of the book, focuses on the celebrated
Félida X., a woman "tormented by an alteration of memory" producing "sort of
a double personality" (pp. 61-62). Early in life, and with no known cause, Félida
manifested what looked like sleep and awoke in a secondary state that lasted one
or two hours. Azam first saw her in 1858, when her changes were preceded by
a strong headache. In the secondary state "She raises her head and, opening her
eyes, smiling, greets the persons around her, as if they had just arrived; her
physiognomy, sad and silent before, is lightened and exudes gaiety . . ." (p. 67).
During the secondary state Félida got pregnant, but claimed she was not aware
of the facts leading to her new "state."
   The case is too complex to summarize here in all its details. Suffice it to say
that Azam saw on a few occasions a third state and observed Félida again in the
1870s, after having lost sight of her for many years, and then in 1882 and 1887.
He noticed that over time the secondary state became her main state, a process
that is illustrated graphically in the book (p. 119). However, writing about later
observations of Félida in 1890, Azam said that her "secondary conditions . . . do
not last but a few hours" (Azam, 1890: 140).
   Part of the second chapter was devoted to theoretical ideas. Azam believed
that Félida's changes were caused by a diminishment of the circulation of blood
in parts of the brain due to the hysterical state of the patient. Finally, in chapter 3
Azam discussed a variety of other issues related to alterations of personality,
such as the influence of morbid states and legal issues.
   All the works reviewed here are a reminder of the rich history of psychiatrists' and
psychologists' efforts to understand the subconscious and its phenomena. Whether
we focus on physiological, psychological, or social processes, on the interaction of
these studies with physiology or psychical research, on the conflicts of opposing
schools of thought, or on particular clinicians, researchers, or patients, the works
discussed here show the rich heritage of the old literature and the complexities of
historical research. Mimicking dissociation, the efforts to understand this past, or
at least some important contributing factors, have frequently been separated from
common knowledge, presenting a need for continuous explorations and
reconstructions using multiple approaches and points of view.

                                                              CARLOS S. ALVARADO
                                                    Division of Perceptual Studies
                                              University of Virginia Health System
                                                                 P. O. Box 800152
                                                         Charlottesville, VA 22908
Alvarado, C. S. (2002). Dissociation in Britain during the late nineteenth century: The Society for
    Psychical Research, 1882-1900. Journal of Trauma and Dissociation, 3, 9-33.
Azam, E. E. (1890). Le dédoublement de la personnalité et le somnambulisme. Revue Scientifique, 20.
Azam, E. E. (1893). Hypnotisme et Double Conscience: Origine de Leur Étude et Divers Travaux sur
    des Sujets Analogues. Paris: Félix Alcan.
Babinski, J. (1886). Recherches servant à établir que certaines manifestations hystériques peuvent être
    transférées d'un sujet à un autre sujet sous l'influence de l'aimant. Revue Philosophique de la
    France et de l'Étranger, 22, 697-700.
Beaunis, H. (1886). Un fait de suggestion mentale. Revue Philosophique de la France et de
    l'Étranger, 21, 204.
Beaunis, H. (1887). Le Somnambulisme Provoqué (2nd ed.). Paris: J.-B. Baillière.
Bernheim, H. (1885). L'hypnotisme chez les hystériques. Revue Philosophique de la France et de
    l'Étranger, 19, 311-316.
Binet, A., & Féré, C. (1885). L'hypnotisme chez les hystériques: Le transfert. Revue Philosophique de
    la France et de l'Étranger, 19, 1-25.
Calmeil, L.-F. (1845). De la Folie Considérée Sous le Point de Vue Pathologique, Philosophique,
    Historique et Judiciaire (Vols. 1-2). Paris: J.-B. Baillière.
Carroy, J. (1991). Hypnose, Suggestion et Psychologie: L'Invention de Sujets. Paris: Presses
    Universitaires de France.
Charcot, J.-M. (1882). Sur les divers états nerveux déterminés par l'hypnotisation chez les hystériques.
    Comptes-Rendus Hebdomadaires des Séances de l'Académie des Sciences, 94, 403-405.
Charcot, J.-M., Luys, J. B., & Dumontpallier, A. (1877). Rapport fait à la Société de Biologie sur la
    métalloscopie du Docteur Burq. Comptes Rendus des Séances de la Société de Biologie, 30, 1-24.
Charcot, J.-M., Luys, J. B., & Dumontpallier, A. (1878). Second rapport fait à la Société de Biologie
    sur la métalloscopie et la métallothérapie du Docteur Burq. Comptes Rendus des Séances de la
    Société de Biologie, 31, I-XXXII.
Crabtree, A. (1993). From Mesmer to Freud: Magnetic Sleep and the Roots of Psychological Healing.
    New Haven, CT: Yale University Press.
Crabtree, A. (2003). "Automatism" and the emergence of dynamic psychiatry. Journal of the History
    of the Behavioral Sciences, 39, 51-70.
De Boismont, A. B. (1845). Des Hallucinations ou Histoire Raisonnée des Apparitions, des Visions,
    des Songes, de l'Extase, du Magnétisme et du Somnambulisme. Paris: Germer Baillière.
Delboeuf, J. R. L. (1886). De l'influence de l'éducation et de l'imitation dans le somnambulisme
    provoqué. Revue Philosophique de la France et de l'Étranger, 22, 146-171.
Didi-Huberman, G. (2003). The Invention of Hysteria: Charcot and the Photographic Iconography
    of the Salpêtrière (A. Hartz, Trans.). Cambridge, MA: MIT Press. (Original work published 1982).
Dufay, Dr. (1889). La vision mentale ou double vue dans le somnambulisme provoqué et dans
    le somnambulisme spontané. Revue Philosophique de la France et de l'Étranger, 27, 205-224.
Ellenberger, H. F. (1970). The Discovery of the Unconscious: The History and Evolution of Dynamic
    Psychiatry. New York: Basic Books.
Flournoy, T. (1900). From India to the Planet Mars: A Study of a Case of Somnambulism. New York:
    Harper & Brothers.
Gauchet, M., & Swain, G. (2000). El Verdadero Charcot: Los Caminos Imprevistos del Inconsciente
    (M. I. Fontao, Trans.). Buenos Aires: Nueva Visión. (Original work published 1997).
Gilles de la Tourette, G. (1887). L'Hypnotisme et les États Analogues au Point de Vue Médico-Légal.
    Paris: E. Plon, Nourrit.
Guinon, G., with the collaboration of Blocq, [P. O.], Souques, [A.-A.], & Charcot, J.-B. (Eds.). (1893).
    Clinique des Maladies du Système Nerveux: M. le Professeur Charcot: Leçons du Professeur,
    Mémoires, Notes et Observations (Vol. 2). Paris: Bureaux du Progrès Médical.
Gurney, E. (1884). The stages of hypnotism. Proceedings of the Society for Psychical Research, 2.
Haddock, J. W. (1851). Somnolism & Psycheism; or, the Science of the Soul and the Phenomena of
    Nervation (2nd ed.). London: James S. Hodson.
Janet, P. (1886). Les actes inconscients et le dédoublement de la personnalité pendant le
    somnambulisme provoqué. Revue Philosophique de la France et de l'Étranger, 22, 577-592.
Janet, P. (1887). L'anesthésie systématisée et la dissociation des phénomènes psychiques. Revue
    Philosophique de la France et de l'Étranger, 23, 449-472.
Janet, P. (1888). Les actes inconscients et la mémoire pendant le somnambulisme. Revue
    Philosophique de la France et de l'Étranger, 25, 238-279.
Janet, P. (1889). L'Automatisme Psychologique: Essai de Psychologie Expérimentale sur les Formes
    Inférieures de l'Activité Humaine. Paris: Félix Alcan.
Janet, P. (1892). Le spiritisme contemporain. Revue Philosophique de la France et de l'Étranger.
Janet, P. (1930). Autobiography of Pierre Janet. In Murchison, C. (Ed.), A History of Psychology in
    Autobiography (pp. 123-133). Worcester, MA: Clark University Press.
Janet, P. (1968a). Report on some phenomena of somnambulism. Journal of the History of the
    Behavioral Sciences, 4, 124-131. (Original work published in 1885).
Janet, P. (1968b). Second observation of sleep provoked from a distance and the mental suggestion
    during the somnambulistic state. Journal of the History of the Behavioral Sciences, 4, 258-267.
    (Original work published in 1886).
López Piñero, J. M., & Morales Meseguer, J. M. (1970). Neurosis y Psicoterapia: Un Estudio
    Histórico. Madrid: Espasa-Calpe.
Méheust, B. (1999). Somnambulisme et Médiumnité (1784-1930): Vol. 1: Le Défi du Magnétisme
    Animal. Le Plessis-Robinson: Institut Synthélabo pour le Progrès de la Connaissance.
Myers, F. W. H. (1885). Automatic writing. Proceedings of the Society for Psychical Research, 3.
Myers, F. W. H. (1886). On telepathic hypnotism, and its relation to other forms of hypnotic
    suggestion. Proceedings of the Society for Psychical Research, 4, 127-188.
Ochorowicz, J. (1887). De la Suggestion Mentale. Paris: Octave Doin.
Perry, R. B. (1935). The Thought and Character of William James (Vols. 1-2). Boston: Little, Brown.
Plas, R. (2000). Naissance d'une Science Humaine: La Psychologie: Les Psychologues et le
    "Merveilleux Psychique." Rennes, France: Presses Universitaires de Rennes.
Richet, C. (1883). La personnalité et la mémoire dans le somnambulisme. Revue Philosophique de la
    France et de l'Étranger, 15, 225-242.
Richet, C. (1886). Un fait de somnambulisme à distance. Revue Philosophique de la France et de
    l'Étranger, 21, 199-200.
Shamdasani, S. (1993). Automatic writing and the discovery of the unconscious. Spring, 54.
Signoret, J. L. (1983). Une leçon clinique à la Salpêtrière (1887) par André Brouillet. Revue
    Neurologique, 139, 687-701.
Tuke, D. H. (1879). Metalloscopy and expectant attention. Journal of Mental Science, 24, 598-609.
Two personalities in one person. (1876). Scientific American, 35, 192.

A Casebook of Otherworldly Music: Vol. 1 of Paranormal Music
Experiences by D. Scott Rogo. San Antonio, TX: Anomalist Books, 2005. 176
pp. $12.95 (paper). ISBN 1-933665-03-3.
A Psychic Study of the Music of the Spheres: Vol. 2 of Paranormal Music
Experiences by D. Scott Rogo. San Antonio, TX: Anomalist Books, 2005. 176
pp. $12.50 (paper). ISBN 1-933665-04-1.

   Anomalist Books has re-released a number of books by the late para-
psychologist, D. Scott Rogo, including the first two books of his writing career.
Originally published in 1970 by University Books under the title, NAD: A Study
of Some Unusual "Other World" Experiences, the re-released and re-titled book,
A Casebook of Otherworldly Music: Vol. 1 of Paranormal Music Experiences
was written with the purpose of providing "enough case material to reinstate
celestial music as a phenomenon worthy of parapsychology's concern" (p. 146).
Rogo's efforts were followed up in 1972 with his second book, formerly titled A
Psychic Study of the Music of the Spheres: NAD Vol. 2, in which his purpose was
to relate paranormal music experiences to the general body of psychical
phenomena. In the re-titled release, A Psychic Study of the Music of the Spheres:
Vol. 2 of Paranormal Music Experiences, Rogo fleshes out his thesis and
examines the relation of celestial music to out of body experiences, survival
after death, and other psychic phenomena.
   D. Scott Rogo studied at the University of Cincinnati and San Fernando
Valley State College, from which he graduated in 1972 summa cum laude with
a B.A. in Music. He played the English horn for two seasons with the San Diego
Symphony and also performed occasionally with the Honolulu Symphony. He
played the oboe as well. Being both a musician and a student of psychical
research, Rogo was in a unique position to provide an original contribution to
the field, and did so by the age of twenty with the publication of the first volume
in this set. Much in the way that parapsychologists use the general blanket term
"psi" from the Greek alphabet to denote paranormal processes and causation,
Rogo chose the Sanskrit word NAD (also written NADA with the final "a"
being silent) as a blanket term to represent the subject of his study. Sometimes
the phenomenon is called psychic music, astral music, celestial music, or
transcendental music, but the term NAD simply expresses the idea of music that
is heard from no apparent source.
   There are a number of criticisms that could be leveled against these books.
First, Rogo reveals some naivety about what constitutes proof of survival as well
as the proper uses of certain statistical terms. Such writing comes across as
stilted at best or pseudoscientific at worst. Second, even though Rogo attempts
to maintain some neutrality by prefacing out-of-body experiences with the term
"ostensible", he makes no apologies for his survivalist beliefs and appeals to
the "psychic ether" as an explanatory framework for some of the phenomena.
These criticisms, however, will not be the focus of this review. Reading
chronologically through the rest of the Rogo series, as released by Anomalist
Books, one may witness the developing maturity of the author both in the sense
of his writing style and his methods of critical analysis, thus rendering any
extended discussion of these criticisms moot.
   Throughout both of these volumes, Rogo presents case material taken from
many different sources such as Phantasms of the Living by Gurney et al., books
by Ernesto Bozzano and Robert Crookall, as well as accounts of NAD
experiences as presented in the popular paranormal magazines of his day.
However, the bulk of material comes from personal correspondence between
Rogo and the experients of such phenomena, who had responded to his calls for
such accounts in the magazines Fate and Psychic News. Rogo states that upon
the commencement of the study in Vol. 1, he knew virtually nothing about where
the study would lead. His initial plan was to present cases quoted in toto, with points of
coincidence and recurrent patterns emphasized (p. 129). However, along the
way, he noticed a number of commonalities between NAD experiences and
OOBEs (out of body experiences), and this apparent relationship became the
focus of the books.
   By the concluding section of Vol. 2, Rogo affirms his belief that "tran-
scendental music is but another characteristic of the OOBE and is in no way
independent of it, even when the relationship is vague" (p. 96). According to
his content analysis, both experiences manifest during similar mental states,
and the type of music heard (i.e., choral vs. instrumental or melodic music vs.
music without a discernible melody) coincides with the type of OOBE (natural
vs. enforced) reported. Rogo's analysis also uncovers what he calls a "crescendo
effect" in the majority of the collected cases, in which experients report the
volume of mysterious music gradually being heard, rising to full power, and
receding again.
   As much as Rogo would like to say that his study was written without bias
(Vol. 1, p. 129), one may suspect that his prior interest in OOBEs as well as his
choice of secondary sources might have tempered his conclusions. Additionally,
a call for accounts of experiences concerning "astral music" (p. 27) is likely to
elicit reports from out-of-body experients. Still, it is admirable that someone so
young, without having yet completed a formal education, would not only have
the initiative to collect reports about little understood or discussed phenomena,
but also have something meaningful to say about them. Thanks to the re-release
of these books, which were long out of print, interest in NAD experiences might
be renewed, and D. Scott Rogo might not have the first and final word on them.

                                                              Columbus, Ohio

Poltergeists: Examining Mysteries of the Paranormal by Michael Clarkson.
Buffalo, NY: Firefly Books, 2006. 220 pp. $14.95 (paper). ISBN 1-55407-159-3

   The German word poltergeist (literally meaning "noisy spirit") has tradi-
tionally been used to both label and describe a short-lived series of anomalous
physical phenomena primarily involving the movement of objects without any
apparent force acting upon them, which occurs repeatedly and spontaneously
(knocking and rapping sounds are sometimes reported, as well). Parapsychol-
ogists instead use the term recurrent spontaneous psychokinesis (RSPK) based
on the observation that the phenomena tend to occur in the presence of a certain
individual (called the agent) and are, therefore, thought to involve an invol-
untary form of mind-matter interaction on the macroscopic scale. The phe-
nomena seem to symbolically reflect some aspect of the strained relationship
often found between the agent and others within their surroundings, further
suggesting a psychological aspect to the poltergeist (Roll, 1972/2004, 2003).
   The apparently "mysterious" nature of the poltergeist has captivated the
general public for years through various depictions in television, film, and the
print media, leading the public to continually raise questions on whether pol-
tergeists are genuinely anomalous or elaborately fraudulent. Michael Clarkson,
an investigative journalist from Toronto, aims this book toward the general
public in an attempt to help give it better answers. Clarkson is a healthy skeptic,
approaching his topic with caution (apparent in his frequent use of words such
as "reportedly" and "allegedly"), but in a way that is also open enough to
consider the available evidence for informed debate. He is also the author of
four previous books on stress and fear management, two topics that also happen
to tie in rather closely with the psychology of the RSPK agent, thereby leading
him to this "unconventional" topic.
   The book is divided into six main sections. The first section, consisting of four
chapters, summarizes the current issues and theories on poltergeist disturbances,
and discusses the possibility of fraud. The second section describes two rela-
tively obscure poltergeist cases, one of which Clarkson became familiar with
while working as a reporter in the Niagara Falls area. The third section describes
the Tina Resch case, investigated in 1984 by William Roll (Roll & Storey,
2004). Section four discusses poltergeist cases in the UK, including Enfield and
Sauchie. The fifth section describes cases from other parts of the world,
including the Rosenheim case and three notable cases also investigated by Roll
(1972/2004): Seaford, Olive Hill, and Miami. The last section of the book
briefly reviews cases of macroscopic psychokinesis, including mention of
well-known subjects such as Nina Kulagina, D. D. Home, Uri Geller, and
Matthew Manning. Clarkson also reviews ostensibly paranormal cases that
received a great deal of publicity (such as "The Amityville Horror" case, the
"Exorcist" case, and "The Entity" case), not all of which clearly involved
RSPK. He also lists popular films dealing with the paranormal, most of which
again are not clearly poltergeist-related.
   In general, Clarkson's book is a step up from other popular books on poltergeists
that have been written directly for the general public (e.g., Stander & Schmolling,
1996) in terms of the information it contains. It does not tend to confuse poltergeist
phenomena with haunting phenomena, which differ from each other in several
respects despite their apparent similarity on the surface (see the Appendix in Roll,
1972/2004), nor does it overemphasize the traditional discarnate agency approach
to the poltergeist, a bygone notion that should well be left behind. It also discusses
well-documented cases that have not been given adequate treatment or even any
mention in other popular books, such as the Tina Resch case. Labeling
poltergeists as "paranormal" (as in the subtitle of the book) is typical, but it
may also lend the poltergeist an air of spookiness that does not
seem fitting given what is currently known about its nature. This is only a minor
drawback and does not take away from the adequate introduction to the topic that
the book provides to the popular reader. Those readers with a more scholarly
interest in poltergeist phenomena should also read William Roll's (1972/2004)
classic book The Poltergeist.
   Some of the best evidence so far for an apparent interaction between mind and
matter comes from the three-decade microscopic psychokinesis (PK) database
with random event generators, though interpretation of some of this evidence is
still under debate (Bosch, Steinkamp, & Boller, 2006; Jahn et al., 1997; Radin &
Nelson, 1989, 2003; Radin, Nelson, Dobyns, & Houtkooper, 2006a, 2006b; Schub,
2006). Cases of recurrent spontaneous PK add another line of consideration to
the mind-matter issue from a much larger, macroscopic scale. If the cases pre-
sented in Clarkson's book are what they seem to be, then they may give the
general public some inkling of a possible relation between mind and matter that it
seems parapsychologists are only just beginning to understand on many
different scales.

                                                                         BRYAN J. WILLIAMS
                                                                  University of New Mexico
                                                                  Albuquerque, New Mexico

Bosch, H., Steinkamp, F., & Boller, E. (2006). Examining psychokinesis: The interaction of human
   intention with random number generators: A meta-analysis. Psychological Bulletin, 132(4),
   497-523.
Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of
   random binary sequences with pre-stated operator intention: A review of a 12-year program.
   Journal of Scientific Exploration, 11(3), 345-367.
Radin, D. I., & Nelson, R. D. (1989). Evidence for consciousness-related anomalies in random
   physical systems. Foundations of Physics, 19(12), 1499-1514.
Radin, D. I., & Nelson, R. D. (2003). A meta-analysis of mind-matter interaction experiments from
   1959 to 2000. In W. B. Jonas & C. C. Crawford (Eds.), Healing, Intention, and Energy Medicine:
   Science, Research Methods and Clinical Implications (pp. 39-48). Edinburgh, UK: Churchill
   Livingstone.
Radin, D., Nelson, R., Dobyns, Y., & Houtkooper, J. (2006a). Reexamining psychokinesis:
   Comment on Bosch, Steinkamp, and Boller (2006). Psychological Bulletin, 132(4), 529-532.
Radin, D., Nelson, R., Dobyns, Y., & Houtkooper, J. (2006b). Assessing the evidence for mind-matter
   interaction effects. Journal of Scientific Exploration, 20(3), 361-374.
Roll, W. G. (1972/2004). The Poltergeist. Garden City, NY: Nelson Doubleday. (Reprinted by
   Paraview Special Editions)
Roll, W. G. (2003). Poltergeists, electromagnetism, and consciousness. Journal of Scientific
   Exploration, 17(1), 75-86.
Roll, W. G., & Storey, V. (2004). Unleashed: Of Poltergeists and Murder: The Curious Story of Tina
   Resch. New York: Paraview Pocket Books.
Schub, M. H. (2006). A critique of the parapsychological random number generator meta-analyses
   of Radin and Nelson. Journal of Scientific Exploration, 20(3), 402-419.
Stander, P., & Schmolling, P. (1996). Poltergeists and the Paranormal: Fact Beyond Fiction. St. Paul,
   MN: Llewellyn Publications.

Ghost Hunters: William James and the Search for Scientific Proof of
Life After Death by Deborah Blum. New York: Penguin Press, 2006. 384 pp.
$25.95 (hardcover). ISBN 1-59420-090-4.

   I have a good friend who earned his Ph.D. in chemistry from Harvard. He's
a college dean and professor of oceanography at a name-brand U.S. university.
He's authored textbooks in his field of research. In short, he's the very model of
a modern, major-league scientist. He tolerates my membership in SSE, but has no
interest himself in joining or reading my JSE. He doesn't believe in ghosts and
scoffs at mediums who claim to contact the dead. He steadfastly refuses to look at
any evidence offered to the contrary. To him, it's all unscientific bunkum.
   He's also a practicing Catholic. In church every Sunday, he fervently recites
a creed that affirms his belief in scientifically impossible phenomena-a virgin
birth, the magical changing of bulk table wine into real blood. More to the point,
he believes all people rise from the dead (along with their actual physical
bodies), and he believes in the existence of an invisible world populated with
angels, devils and demons who share it with his deceased grandmother and
assorted others.
   The contradiction eludes him, and frustrates me.
   He believes in an afterlife but won't look for, or at, collected scientific
evidence suggesting its reality. Compartmentalization is his solution to the
triumph of science over traditional religion, a process that started with the Re-
naissance, accelerated in the Victorian age, and ended in dominance in the early
20th century. Reason rules unchallenged from Monday through Saturday, faith
on Sunday. His disconnect epitomizes the uneasy accommodation existing today
between faith and science. The two protagonists divide up territory like Mafiosi
and try to avoid interfering in each others' business.
   Harvard professor William James, the father of American psychology, to-
gether with a small band of exceptional, Nobel Prize-winning European
scientists and thinkers hoped to avoid this separate-boxes solution. They made
a valiant attempt at the turn of the last century to produce scientific proof for
religion's boldest assertion, which would allow faith and science to share
a common, consensual reality. Their melancholy story is told with admirable
skill by author and career science writer Deborah Blum in Ghost Hunters:
William James and the Search for Scientific Proof of Life After Death.
   Three years of serious research shine through these pages. A professor of
science journalism at the University of Wisconsin, Blum read widely; worked
Piper, Margaret Verrall and the controversial Eusapia Palladino, seminal
publications like Phantasms of the Living); serves up historical context (1842-
1910, the span of James' life, the early history of the Society for Psychical
Research (SPR) and American Society for Psychical Research (ASPR), the
unstoppable march of science); extensively footnotes her quotes and sources;
and narrates her story with a scholarly grace approaching today's gold standard
in historical writing, David McCullough.
       This is a book my Harvard scientist friend should read-not that he will. The
    fact that he and James share old school ties, or that James was a recognized giant
    in his field, won't be enough to entice them to spend a few hours together. Like
    too many scientists today, he lives and works in a confined mental cubbyhole,
with little time to read anything outside his academic field, even if he were so inclined.
       If he did, what would he think of the evidence of the afterlife produced by
    James and his apostles?
       I am familiar with most of the evidence (fairly compelling), but I came away
    with new facts, information and insights I might have uncovered in my own
    three years spent researching the best scientific evidence of life after death, but
    never did. Example? Mark Twain's personal run-in with the paranormal and his
    subsequent endorsement of "mental telegraphy" (telepathy) in the December
    1891 issue of Harper's. Twain skewered organized religion repeatedly and
    acidly in his lesser-known writings, but his beef with religion didn't close his
    mind. He remains a hero of mine.
       Blum came away changed in the way she thought. William James and his
    colleagues "questioned and explored possibilities so accurately that it was
    impossible not to reevaluate my assumption." Along the way, "I read reports by
    psychical researchers that I couldn't explain away. I thought all over again about the
    shape of the world, about the limits of reality and who sets them, illuminated by
    history, philosophy, theology as well as science. There were days when I could feel
    the hinges of my brain, almost literally, creaking apart to make room for new ideas."
She remains grounded in the current, consensual definition of reality, but adds,
    "I'm just less smug than I was when I started, less positive of my rightness."
       The melancholy part? James came away with a tenuous epiphany he tried but
    ultimately failed to share with his fellow scientists, whose downright pigheaded
prejudice and intellectual dishonesty allowed so few of them to look at, much
less fairly judge, the intriguing evidence he uncovered. At the end of his career,
    his brilliance and towering achievements in the infant field of psychology
    forever tainted in the public eye and press by what they judged unwise dabbling
    in supernatural hokum, James felt betrayed and bitter towards many of his
    scoffing colleagues. "Let them perish in their ignorance and conceit," he
       SSE members wishing to avoid being lumped in with that cursed lot should
on life after death, if they haven't already. You can't do better than Blum, and
James' ghost-should you subsequently decide it exists-will rest easier.

                                                             MICHAEL SCHMICKER
                                                               Honolulu, Hawaii

Leaps in the Dark by John Waller. Oxford University Press, 2004. 292 pp. $24.95
(hardback). ISBN 0-19-280484-7.

   Understanding the history of science may be of little if any practical use to
most scientists, who are engaged in the routine puzzle-solving that Kuhn
described as "normal" science; but history of science does have important
lessons for those whose work is in any way unorthodox. For people who take
a watching brief over science, a proper understanding of the history of science is
essential. This book offers a necessary corrective for common misunderstand-
ings about the history of science. It features detailed analysis of some fascinating
cases, and should be required reading for anomalists, scientific explorers, so-
called "skeptics", and other pundits of science. Anomalists in particular should
note the evidence that belligerence and stridency against entrenched power are
counterproductive.
   The chief point expounded by Waller is that scientific reputations, both high
and low, may be undeserved. Corollary points include:
      Like all history, the history of science is a complex story that becomes
      distorted when it is set out as a tale of heroes and villains.
      Since it is written by the victors, history may mislead in important ways.
      A proper understanding of the past requires that actions be assessed in the
      context of their own time.
    Waller points out, correctly, that there exists no infallible "scientific method"
(1). Science is always a matter of trial and error. When some counter-orthodox
claim is first made, the evidence for it is rarely conclusive, and the opposition to
it is not entirely misguided; "Those deemed right in the long run . . . have rarely
had a monopoly over reason and good sense" (p. 3).
    This book makes few demands on special background knowledge, apart from
occasional unelaborated references to "the Leaning Tower of Pisa experiment"
or the "tall poppy syndrome" (the latter is a common phrase in Australia). This
reviewer's chief reservation is that Waller-far from alone among pundits of
science-implicitly equates scientific theories with scientific knowledge, which
can be subtly misleading in various ways at various points (for example, pp. 239,
273, 274). Theories are always applicable only temporarily, and they are never
"true". Waller is also more sanguine than I am about self-correction in science:
empirical case has been made" (p. 6). One part of the problem is that new
theories do not enter a vacuum, and even well-supported and credible theories
have to overcome the inertia of prevailing views.
   A smaller reservation is that Waller concentrates on correcting the roles of
prominent people in science without remarking that history of science should
expound primarily the scientific issues (see, for example, pp. 166-7); though
he does point out that science can hardly ever be carved into neat segments
attributable to particular individuals (p. 238). An unfortunate factual error is
sending Waksman to Oslo rather than Stockholm for his Nobel Prize (pp. 259,
   Individuals whose low reputations are unwarranted include Joseph Glanvill,
Lazzaro Spallanzani, and Max von Pettenkofer; "all three erred largely because
of the extreme difficulties involved in studying the natural world", not because
of personal failings (p. 10).
   Glanvill has been painted retrospectively as a superstitious holdover in
an age of dawning science, because he maintained, in the heyday of the
Scientific Revolution and the blossoming of enlightened thought, that
witches exist. Waller shows, to the contrary, that Glanvill was empirically
oriented and that his approach to investigation was entirely consistent with that
of his colleagues in the Royal Society, people like Robert Boyle and Robert
Hooke. In those days it was by no means an indication of superstition to attempt
to decide as to witches as Glanvill tried to do, on the basis of data, albeit the
available data were anecdotes and testimonies. Waller reminds readers that
Leibniz could scoff at Newton's ideas about gravity because of the requirement
for some magical force acting at a distance, implying continuous divine
intervention in contrast to the non-superstitious Cartesian view of a clockwork
universe (p. 54).
   Spallanzani believed in preformation or ovism, that eggs contain infini-
tesimally small but fully formed precursors of living beings. Again, by dis-
cussing in detail the evidence then available and the observations and
experiments then being carried out, Waller shows that Spallanzani's views
were quite logical and empirically based. The opposing view, that eggs needed
to be fertilized by semen, was by no means obviously better at explaining the
data; moreover it encountered problems for which no good answers were then
evident. We have not necessarily progressed as to such fundamental issues. One
argument for ovism was that it was inconceivable that a complex being could
form itself from matter out of chaos (p. 57), not very different from Behe's
"irreducible complexity" or the claims made by Intelligent Designers.
   Pettenkofer had been unimpressed by Robert Koch's discovery of cholera
germs. He asked for a sample, drank it, and suffered no more than a slight bout
of diarrhea. Instead of being remembered as one of the heroes of medical science
who used himself as guinea pig, however, Pettenkofer is typically presented as
hindering the progress of medical science by opposing the germ theory of
disease. Waller points out that several of Pettenkofer's students also drank the
cholera infusion without serious effect, so there was strong empirical evidence
against this extract as the cause of epidemics in which many had died. In
retrospect, one presumes that what Koch had sent "must have been" a weakened
strain of cholera, but we presume it only because we believe the germ theory and
have come to know about attenuation and weakened strains. Pettenkofer had
a workable theory of moist soils as incubators of epidemics, a view by no means
unique to him, and able to accommodate more empirical data than the germ
theory could, in those days.
   Waller is also able to describe how Robert Koch benefited from contemporary
political and social circumstances. But as to substantive science, the
convergence of bacteriology, immunology, and epidemiology in the modern
science of public health "is as much the legacy of Max von Pettenkofer as it is of
Robert Koch" (p. 81).
   I am one of some unknown number of teachers who have told students about
Newton's conclusive demonstration that white light is composed of the rainbow
of colors. He split the light into a spectrum; showed that once split, it could not
be further split; and that the rainbow could be recombined into white light. What
could be more definitive?
   It turns out that Newton had not found it easy to repeat his experiments; that
prisms and lenses were often defective in a number of ways; that others had
often been unable to duplicate Newton's results; that some of the prismatic
deficiencies led to being able occasionally to "split further" the rays of a "single
color". Newton told his opponents that they would know when their apparatus
was working properly when they got the same results as he did, a circular and
unjustifiable argument (p. 107). On the other hand, what Newton's opponents
could not explain was why he was ever able to obtain his "decisive" results at
all, if his theory was not correct. But the chief point, again, is that at the time, the
plain evidence alone was not decisive. Newton's victory was owing in good part
to his status and role in the Royal Society, which enabled him to impose what I
have dubbed a "knowledge monopoly" (2).
   The supposed demonstration by James Lind that citrus fruit is the cure for
scurvy also turns out to be less simple than most stories would have it. Illnesses
were generally believed to have multifactorial causes, and single remedies-
"specifics"-were distrusted or even seen as quackery (pp. 124-5). Lind himself
thought scurvy could have a variety of causes and cures; hardly illogical, since
many people who rarely ate fresh fruit or vegetables remained free of scurvy.
Neither Lind nor anyone else could possibly have got it all right a century or two
before knowledge had accumulated about vitamins and the range of foods in
which vitamin C occurs in useful amounts.
   The martyrdom of Semmelweis, whose advocacy of hygiene among health-
care workers saved many women from dying in childbirth, offers several
lessons. His frustrated impatience with a stick-in-the-mud professional superior
proved his downfall because Semmelweis was also on the wrong side politically
in the 1848 revolution, while his boss was a loyalist and petty bureaucrat. One should
be reminded that a century later, Linus Pauling incurred professional difficulties
because of his activism against nuclear testing, being denied a passport by the
State Department and finding himself unwelcome at CalTech. But Pauling had
never offended his detractors personally, whereas Semmelweis called his
opponents murderers and assassins (pp. 151, 155). Once more, the limited state
of knowledge at the time meant that Semmelweis's experience of a drastically
lowered death rate in his own ward following the use of antiseptics was not
decisive, because it was not replicated in every other ward that tried it, and too
little was then known of the vagaries of the microbes concerned to explain
seasonal variations, for example, or the different experiences of different wards.
    Johann Weyer courageously confronted those who were hunting witches by
inquisitional torturing, urging that the accused be treated as perhaps deluded
rather than deliberately evil. But, being a man of his times, he did not deny the
possibility of Satanic possession and the like. According to Waller, that history
has come to see Weyer as a modern-type proponent of psychiatric treatment of
people deluded into thinking they have been possessed is owing to the fact that
this view served the purpose of the medical profession: the iconic Weyer
illustrates that witches (and the like) are in the purview of psychiatry and not of
religion, that the Church and its officers should keep their hands off this aspect
of human activity. If Waller is sound on this point, then there would seem to be
an analogy with modern-day "skeptics", whose crusades against superstition
often spill over into activist atheism.
   The case of Philippe Pinel, too, is seen by Waller as becoming iconic
because it served the interests of the psychiatric profession. Pinel was indeed
a humane individual who helped in abolishing the practice of chaining people
in asylums, but stories of his actually ordering the first breaking of chains
turn out to be an urban legend. Waller points out, too, that the sometime
success of medical treatments of the time was likely coincidental-blistering,
purging, etc., are not nowadays regarded as useful against mental illness. "In the
absence of controlled clinical trials, it was easy to interpret these recoveries as
due to the psychiatrist's efforts and failures as indicating that the case was too
serious or too deeply ingrained to respond to treatment". Are things so
different today? "With their professional existence at stake, . . . necessity
became the mother of delusion and circularity the lifebuoy of the desperate"
(p. 212). Or, as Bernard Shaw remarked, all professions are a conspiracy against
the laity.
   The concluding characters in this book are Robert Watson-Watt, who
deliberately campaigned to make himself seen, without adequate justification, as
the father of radar; and Selman Waksman, who progressively played down the
contributions of Albert Schatz, the co-worker whom he had originally
acknowledged as a full-fledged partner in discovering streptomycin, the cure
for tuberculosis, which remained the leading cause of death in the United States
as late as 1937 (p. 243).
   The last chapter of the book urges a sensibly balanced approach to history of
science in which social context is given its rightful but not excessive due. Then
for each chapter there are some suggestions for substantial further reading.
   Difficult as it may be to remain dispassionate when treated unfairly, we serve
ourselves best by sticking to the high ground of substantive discussion and leave
the mudslinging and the denigration of opponents to the other side and their
fellow-traveling "skeptics"-vide the sad cases of Semmelweis and Albert
Schatz. That is just one of the insights offered by this interesting, instructive,
thought-provoking book.

                                                                      HENRY H. BAUER
                                    Professor Emeritus of Chemistry & Science Studies
                                                    Dean Emeritus of Arts & Sciences
                                      Virginia Polytechnic Institute & State University

www.henryhbauer.homestead.com

1. Henry H. Bauer. 1992. Scientific Literacy and the Myth of the Scientific Method. University of Illinois Press.
2. Henry H. Bauer. 2004. Science in the 21st century: knowledge monopolies and research cartels.
   Journal of Scientific Exploration, 18, 643-660.

Quantum Enigma: Physics Encounters Consciousness by Bruce Rosenblum
and Fred Kuttner. Oxford, England: Oxford University Press, 2006. 211 pp.
$29.95 (hardcover). ISBN 0-19-517559-X.

One of the most instructive books that I have ever read (and also simply one of
the best reads, as well) is Arthur Koestler's The Sleepwalkers, a marvelous
history of the Copernican Revolution. Koestler argues (and convincingly) that
Copernicus, Brahe, Kepler, and Galileo did what they did without any real
understanding of what it was that they were doing. Scientists today could easily
be (and, I think, largely are) in precisely the same boat.
   In that most excellent book, Koestler also makes the point that Copernicus
and Galileo were extremely reluctant to have it publicly known that they held
unorthodox scientific beliefs, and he emphasizes that it was fear of ridicule, not
fear of the Church, that constricted them both. Copernicus only published his
great book on his deathbed for that precise reason, and Galileo lied to his
students for decades before "coming out" with his Copernican beliefs.
   It is more than 80 years since the discovery of quantum mechanics gave us the
most fundamental insight ever into our nature: the overturning of the Copernican
Revolution, and the restoration of us human beings to centrality in the Universe.
   And yet, have you ever before read a sentence having meaning similar to that
of my preceding sentence? Likely you have not, and the reason you have not is,
in my opinion, that physicists are in a state of denial, and have fears and agonies
that are very similar to the fears and agonies that Copernicus and Galileo went
through with their perturbations of society.
   A case in point is the book Quantum Enigma. This book is a result of a course
for non-science majors (at the University of California, Santa Cruz) on the
meaning of quantum mechanics, and in particular the authors seek the role, if
any, of consciousness. The authors bring out, in pretty good fashion, the
experimental facts that show the Universe to be drastically different in its nature
than almost anyone thinks (usually, even after they have studied quantum
mechanics in detail). And they do note, and quite correctly, that quantum
mechanics easily accounts for every single one of these bizarre facts, and that it
does so completely. And yet, are our two authors able to come to an actual
conclusion? No, they are NOT. Here is their concluding thought: "Does
quantum theory suggest that, in some mysterious sense, we are a cosmic
center?" The question is left hanging.
   In his Gifford lectures, very shortly after the 1925 discovery of quantum
mechanics, Arthur Stanley Eddington (who immediately, once quantum
mechanics was discovered, realized that this meant that the universe was purely
mental, and that indeed there was no such thing as "physical") said "it is
difficult for the matter-of-fact physicist to accept the view that the substratum
of everything is of mental character." What an understatement! On this
fundamental topic, physicists are mostly terrified wimps.
   And what are these "terrors" that prevent the acceptance of the obvious? I
think it is a combination of the fear of being ridiculed plus the fear of the
religious implications. Does that sound familiar?
   And yet, it is perfectly respectable for scientists to be religious. A notable
example is Charlie Townes, who is unapologetically Christian. So, what's there
to be afraid of?
   The authors are correctly emphatic that there is no controversy at all about
either the experimental facts, or quantum mechanics itself (that is, the
mathematical theory that completely accounts for the facts). Whether, like
me, you are convinced that the universe does not exist at all (except as mind), or
you are some kind of "realist," or you are that common creature the decoherence
evasionist, or whatever, you nevertheless do your calculations identically. It is
only when you attempt to articulate in English what quantum mechanics, and the
facts that it so easily explains, mean that you enter on dangerous grounds: the
authors give several quotes of nasty remarks by their departmental colleagues in
response to their teaching material of the kind that appears in their book.
public statement concerning the most important philosophical discovery, ever, in
the history of science, and I decided, therefore, that I must make such a public
statement myself-and I did so, in an essay in Nature, "The Mental
Universe,"-I knew that no such negative response could possibly occur in
my case, because of the fine character of my great university; and . . . indeed,
there was none.
   I did get one implicitly chiding email, from my Master's thesis adviser, who
mildly asked me what Einstein would have thought of my essay! Oh, poor, poor,
Einstein! If our individuality does survive death, well, my my; how poor
Einstein is blushing!
   I really do not understand how it can be that so little attention is directed to
what is acknowledged to be the deepest discovery ever in human intellectual
history: one that has changed our understanding of our own nature far more than
did the Copernican Revolution. Our two authors do address this important point,
saying that Copernicus (and Darwin) are "easier to comprehend-and much
easier to believe." (I don't agree with that. If you read Galileo's Dialogue you
will see that he himself found it almost impossible to believe that the Earth
rotated in 24 hours and went around the Sun annually. You don't find that
impossible to believe, and I don't; but he did. So, we owe him.)
   No, I think that the explanation lies in particular history. In the Copernican
case, the man of character and strong opinion, Galileo, came to the right
conclusion, and carried society with him. In the case of the quantum, the man of
character and strong opinion, Einstein, came to the wrong conclusion, yet
nevertheless, he carried society with him. Leaving me in the pickle that I find
myself in. As a person of iron integrity, I cannot participate in the dereliction of
social duty that is going on among scientists today. I must speak up, and, by
gum, I am!
   Despite the fact that I am heavily criticizing this book, above all for its
timidity, I do highly recommend it, if only because, except for Nick Herbert's
excellent Quantum Reality, it is about the only available book that clearly brings
out the amazing, the astounding, the utterly unbelievable simple facts. Although
quantum cryptography and quantum computing are gradually forcing people to
stop averting their eyes, there is still an amazing amount of ignorance about
these unbelievable experimentally established facts.
   "That's crazy" a physicist said to me just the other day, when I described the
quantum Zeno effect. Yet this physicist has worked lifelong in quantum-
intensive research!
   All I had mentioned was that, if you observe a quantum system with a short
half-life, it will not make the transition to the lower state. Your simply observing
it (not interacting with it in any way) causes it to remain in its higher-energy
state. (Just Google on "quantum Zeno effect," should it happen that you don't
believe me!)
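The freezing can be checked with a toy calculation (a sketch added here for illustration; the model, function name, and parameter values are assumptions, not taken from the book or the essay): between checks, a two-level system evolves coherently toward its lower state; each of N equally spaced projective measurements finds it, with high probability, still in the initial state, and the overall survival probability tends to 1 as N grows.

```python
import math

def survival_probability(n_measurements, total_time=1.0, rabi_freq=math.pi):
    """Probability that a two-level system is still found in its initial
    state after n_measurements equally spaced projective measurements.

    Between measurements the amplitude to remain in the initial state is
    cos(rabi_freq * dt / 2), so each check finds that state with
    probability cos^2(rabi_freq * dt / 2); successive checks multiply.
    """
    dt = total_time / n_measurements
    p_single = math.cos(rabi_freq * dt / 2) ** 2
    return p_single ** n_measurements

# rabi_freq * total_time = pi means an unwatched system has fully
# decayed by t = 1; frequent observation freezes it in place:
for n in (1, 10, 100, 1000):
    print(n, round(survival_probability(n), 4))
```

With a single final measurement the system is gone; with a thousand intermediate checks it survives with probability above 0.99, which is exactly the "it will not make the transition" behavior described above.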
   Quantum Enigma only mentions the quantum Zeno effect in passing, which
know darned well that mind is central-and nothing shows the truth of that more
clearly than does the quantum Zeno effect.
   The book does have one major defect, in my opinion, and that is that it does
not bring out why the world is quantum mechanical. There is no mystery about
this-it is because the observations have the character of numbers, and because
there are (for unknown reasons) symmetries in the observations-symmetries
that Emmy Noether taught us result in conserved quantities. Because these are
conserved, they give the impression of something "really being there," so when
we study them, we get the incorrect impression of a real Universe being "out
there." These simple facts result in quantum mechanics, as I showed in 1990 in
my paper "Quantum Mechanics Made Transparent," in the American Journal of Physics.
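The Noether link invoked here has a compact standard statement (textbook form, added for reference; it is not in the original review):

```latex
% Noether's theorem (classical form): if the Lagrangian L(q, \dot q) is
% invariant under the continuous transformation
%   q \to q + \epsilon\,\delta q,
% then the charge
Q \;=\; \frac{\partial L}{\partial \dot q}\,\delta q
% is constant in time along solutions of the equations of motion.
% Time-translation symmetry gives energy conservation; spatial
% translation gives momentum; rotational symmetry gives angular
% momentum: the conserved quantities the reviewer has in mind.
```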
   No, the mystery is not quantum mechanics. The mystery is our own existence.
Let me ask you: which would be easier to believe in (if you did not have
irrefutable evidence for one of them): life after death, or your own existence? I
do think that the latter is incomparably more improbable.
   So, what are your options? If you are not simply to be like a squirrel or
a rabbit, you must choose some quantum mechanics interpretation (as it is
called-it is not really "an interpretation," of course; it is your theory of yourself
and of your experience of observations). The authors offer nine choices. Let me
go through all nine, giving you my "take" on them. Our authors make the
important point that "while scientific theories must be testable, interpretations
need not be." I found the authors' discussion of these choices extremely
enlightening; in particular, I discovered that my own understanding of what
these various interpretations contend was in some cases quite defective. I hope I
can avoid major errors here, but I will be extremely brief:

  1. Copenhagen. The "majority" interpretation, for decades. Not really an
     interpretation at all, but rather a (clearly non-physical) segregation of the
     world into the microscopic (in which there is reality, but it is observer-
     created reality) and the macroscopic (which was taken to be real). A
human observer is not needed; a Geiger counter will do just fine. Our
     authors correctly point out that the advance of technology now forces
     retreat from this increasingly untenable "interpretation."
  2. Extreme Copenhagen. "In this view, there are no atoms" (attributed to
physics Nobel Prize winner Aage Bohr). The existence of the microworld
     is denied. And yet our authors say "This interpretation shows how far
     some physicists will go to evade the encounter with consciousness." I
     gather that the creators of this view identify reality with the world of
  3. Decoherence and Consistent Histories. Our authors correctly paint
     these as ineffective evasions of the real question. Decoherence is quite
  4. Many Worlds. Every observation with two possible outcomes results
       in the creation of an additional entire universe. Many observations have
       an infinite number of possible outcomes, so infinitely many universes
       (complete with a you in it) are made very often indeed. Our authors say
       "there is no single reality, which is essentially equivalent to no reality." At
       the Science and Ultimate Reality meeting in 2002 in Princeton, Bryce
       DeWitt (the most influential advocate of this interpretation) sat down next
       to me at lunch and told me that those other versions of the universe are as
       real as ours . . . and that in his opinion we will eventually communicate
       with them (I am not making this up). Many highly-regarded physicists
       accept "many worlds."
  5.   Transactional. A convoluted approach that "very much involves an
       encounter with a conscious observer."
  6.   Bohm. I had not appreciated that for Bohm "there is no physical world 'out
       there' separate from the observer." The authors bring out that Bohm did
       consider a role for consciousness. There is a "quantum potential" that has
       no role other than to allow this interpretation in which there is "a
       physically real, completely deterministic world."
  7.   GRW. Not an interpretation, as it proposes a change in quantum mechanics.
       Such a change could be tested, and it should be! But, don't invest your own
       money in such tests. Our authors quote Steven Weinberg, "the one part of
       today's physics that seems to me likely to survive unchanged in a final theory
       is quantum mechanics" and state that they share his intuition. Well, his was
       no "intuition!" Weinberg once attempted to change quantum mechanics, but
Polchinski showed him that it couldn't be done.
  8.   Ithaca. "Correlations have physical reality; that which they correlate
       do not." This is the interpretation that is advanced by David Mermin.
  9.   Quantum Logic. Change the rules of logic. Our authors are not happy
       with this approach.

   Finally, the authors note two additional "interpretations" that actually include
physical speculations involving consciousness.
   Do you find any of these interpretations satisfactory? I certainly do not.
And our authors clearly do not. So, let me offer the Henry interpretation: There
is no actually existing universe at all. The universe is purely mental.
   If you prefer to do so, you may call this the Eddington-Jeans interpretation.
   The only reason that it is difficult to accept the Henry interpretation is
that few except Henry believe it. We are social creatures, with a herd mentality.
But, Malcolm Gladwell has educated me that there can come a "tipping point,"
and I take it on myself to push toward broad acceptance of my simple thesis.
(Calendar reform is more difficult. There, I don't expect to succeed.)
   Let me ask my readers, does your own mind actually exist? Note that I am not
talking about your brain, I am talking about your mind. Well, of course it does!
Cogito ergo sum. After all our convoluted and ultimately entirely unsuccessful
of the observations (the so-called "universe"), here, first crack out of the box,
we have, with the Henry interpretation, a solid and irrefutable success!
Something that is real. And it is a success that you cannot arrive at from physics,
because physics does not include treatment of consciousness at all!
   But does the Henry interpretation actually say anything? Does it have any
meaning? It most certainly does! First, it means you can forget all the other
interpretations that are offered (and what a relief that is!). Second, once you
understand that there is no universe out there, you are forced to face up to your
personal responsibility. You now have a fundamental decision to make. You
know that other people do not exist. But, you must now decide whether
their minds exist, as yours unquestionably does. Physics cannot assist you
in this critical decision. Your stark choices are solipsism, or a leap of faith.
   Eddington was a Quaker, so the leap of faith was easy for him: "the stuff of
the world is mind-stuff. The mind-stuff is not spread in space and time; these are
part of the cyclic scheme ultimately derived out of it. But we must presume that
in some other way or aspect it can be differentiated into parts. Only here and
there does it rise to the level of consciousness . . .."
   For a person (such as me) who has never before been religious, this leap of
faith is not so easy. Indeed, I worry that my decision, which (let me relieve your
mind) is that the reader's mind does exist, is too much influenced by my
previous (but now seen to be utterly silly) belief that the reader's (as well as my
own) mind was created by real electrons.
   Physics does not require you to make the leap of faith. But, should you choose not
to leap, physics does then force you to believe that your mind alone is all that exists.
   What is it like, after taking the leap? Well, first, understand, what I say now
has nothing whatsoever to do with physics. Surely for, say, an Eddington, the
result was simply reinforcement of his Quaker beliefs (which needed no
reinforcement). For an atheist such as myself, the result is simultaneously
enormous, and minor. I have made the leap of faith that MY mind is not the
universe: well, you will not be surprised to learn that I sure don't accept that
YOURS is! So, I am forced to meet the Great omniscient Spirit, GoS. How do
you do! Pleased to meet you! I am here not at all joking; as I go for my hour of
walking each day, I not infrequently hold hands with GoS.
   You can see what I mean by "enormous." Of fundamental importance to me.
But minor at the same time, because that is the end of it. The first ten Presidents
of the United States were all Deists, not Christians. As was Lincoln. I join them
in that belief.
   The authors make the critical point that religious belief flowing out of
quantum mechanics does not in any way validate "intelligent design" (ID).
(Indeed, in my view ID is insulting to GoS, who is surely not, as the authors
emphasize, a tinkerer.)
   Let me return now to physics, and to the book Quantum Enigma. "Einstein
believed quantum theory denied the existence of the real world." "This seems to
deny the existence of a physically real world." "If unobserved atoms are
somehow not physically real things, what does it say of chairs, for example?"
"You're denying the existence of a physically real world." ". . . told his cat story
to show that quantum theory denied the existence of a physically real world."
All quotes from this book! Why, then, does the list of interpretations in this book
not include the Henry interpretation?
   Or perhaps I should call it the Rees interpretation; I have not read Martin
Rees's book, but the authors quote him: "The universe could only come into existence if
someone observed it. It does not matter that the observers turned up several
billion years later. The universe exists because we are aware of it."
   Does any of this matter? It most certainly does. The authors point out that
"Principia ignited the intellectual movement known as the Enlightenment." It
was the Enlightenment that inspired the American founding fathers to create the
Constitution, a landmark in human history. Note that Newtonian physics is
deterministic, and yet, nonetheless, the founding fathers were all Deists. They,
noble souls, had a much larger leap of faith to make than we do today (thanks to
quantum mechanics), yet they all managed it. Our authors ask "Can it be that out
there in our future there is a quantum impact on our worldview?"
   Bruce? . . . Fred? . . . Hello-o? Have you read your own fine book?

                                                           RICHARD CONN HENRY
                                             Professor of Physics and Astronomy
                                                   The Johns Hopkins University
                                                            Baltimore, Maryland

The Cult of Personality: How Personality Tests Are Leading Us to
Miseducate Our Children, Mismanage Our Companies, and Misunderstand
Ourselves by Annie Murphy Paul. New York: Free Press, 2004. 320 pp. $26.00
(hardcover). ISBN 0-7432-4356-0. Republished as The Cult of Personality
Testing: How Personality Tests Are Leading Us to Miseducate Our Children,
Mismanage Our Companies, and Misunderstand Ourselves. 2005. $14.00
(paper). ISBN 0-7432-8072-5.

  To understand others is a desire that has persisted throughout the ages. From
Hippocrates' four temperaments and Galen's corresponding body fluids, to
physiognomist Johann Kaspar Lavater's use of facial structures, expressions and
colorations and German physician Franz Joseph Gall's determination of char-
acter on the basis of the shape (and bumps) of the head (phrenology), cultures
have sought to capture the essence of individuals: to describe and classify
people and predict their behaviour.
  During the 20th Century, a belief in the ability of psychology to subject
human nature to classification and measurement took such a strong hold that
today we rely on personality testing to serve as a means to this end.
   The Cult of Personality, later republished in paperback with the content
unaltered but the title curiously (and without explanation) changed to The Cult of
Personality Testing, is Annie Murphy Paul's journalistic foray into this modern
version of an ancient inquisitiveness.
   As one might expect of a former senior editor of the popular magazine
Psychology Today, Paul's book is engagingly written. When she delves into the
personal lives and idiosyncrasies of the creators of an array of popular tests and
illustrates how these tests are overused or misused, she does so with a well-tuned
flair. For instance, she titillates the reader with details of how Isabel Myers's
marriage suffered from years of inattention as she became enraptured by the test
that bears her (and her mother's) name, and she shocks us with her portrayal of
Raymond Cattell as a eugenicist who sought to replace established religions with
his own invention, "Beyondism," which he claimed was "based on the prin-
ciple that evolution is good" (p. 181). As for the misuses, she tells us that Brad
Seligman, a Berkeley lawyer, has successfully won million dollar damage
awards in a number of cases involving corporate abuse of personality tests; that
the Rorschach was administered to Nazi officers awaiting trial in Nuremberg;
and that Rent-A-Center managers were required to complete the Minnesota
Multiphasic Personality Inventory (MMPI), with their results being used "to
determine the course of their careers" (p. 61).
   Such stories delivered, sometimes to excess, intermingled with tantalizing
tidbits and liberally peppered with Paul's own scathing comments make for an
entertaining read on a rainy weekend.
   My expectations of this book had, however, been higher. Because I laud both
her expressed objective: to ask "whether the answers [that the tests provide] are
correct-or just ones their users want to hear" (p. 14), and her promise, made in
the book's subtitle, to tell us "how personality tests are leading us to miseducate
our children, mismanage our companies and misunderstand ourselves," I had
looked forward to a more serious read. I had expected, or perhaps simply hoped
for, a thoughtful book that would engage readers in a critical examination of the
science and business of personality testing. Instead, what I found was a puz-
zlingly disturbing and disappointing book.
   Given the sensationalized use of the word "cult" in the book's title, I might
have anticipated my disappointment. While the term may be well chosen for
marketing purposes, it is a poor choice if one values accuracy. Cults are devoted
to beliefs or practices that society generally renounces; they are, by definition,
the antithesis of mainstream. Personality testing is, as people generally would
agree and as Paul both declares and demonstrates in her book, mainstream; so,
the title gets it wrong and that's hardly a promising beginning.
   This book is, as I noted above, engagingly written; however, what is good
about it, what makes it enjoyable to read, is also one of its major faults.
Instead of addressing the topic directly and displaying reasoned judgment, it
relies on a story-telling strategy that focuses more on the lives of the test creators
than on the tests themselves. At first, I assumed that this sometimes entertaining,
seemingly irrelevant, often salacious material was intended to denigrate
particular tests through the all-too-popular ploy of 'character assassination.'
For example, in the chapter on the Myers-Briggs Type Indicator (MBTI), Paul
repeatedly reminds us that Isabel Myers was merely a "housewife," implying
that her lack of professional credentials somehow diminishes her test as a
psychological instrument. Elsewhere, she reports at length how Henry Murray
conspired with his mistress in the creation of the Thematic Apperception Test,
suggesting that his adulterous behavior somehow makes the instrument suspect.
However, as I read on I discovered, as I shall explain momentarily, that Paul's
handling of this story was part of a larger and unstated agenda that runs through
the entire book.
   Another indication of this agenda appears in Paul's discussion of the tests she
chose to review. The core of her book comprises six chapters, each of
which is devoted to some particularly well-known test or to a familiar variety of
testing methods. When one puts aside the story-telling, anecdotal material in
these chapters, what one finds is remarkably little substance, and what there is of
substance demonstrates, at best, a superficial understanding and, at worst, a
sweeping misunderstanding of test construction and usage.
   I will refer briefly to one chapter that addresses the MMPI to illustrate this
point.1 While certainly it can be argued that the MMPI is grossly overused and
that all too often it is misused, I see no justification for Paul's outright con-
demnation of it. She describes it as "heartless," "potentially offensive" and
"without doubt one of the weirdest creations in the history of man's attempt to
understand himself." And she criticizes its items for their "flat, affectless tone
and careless alternation of the weightiest subjects with the most banal" (p. 53).
All of this suggests that Paul lacks an appreciation, or even a basic under-
standing, of the nature of actuarial tests (of which the MMPI might be con-
sidered the prototype). She fails to grasp that these test questions are statistically
selected according to their ability to contribute to the test's overall
discriminative ability and not for any cuddly, emotionally warm, feel-good
quality. It is easy to persuade a general readership that the test is somehow
offensive by citing such items as "I have never had any black, tarry-looking
bowel movements" or "There is something wrong with my sex organs." But the
real question of whether the test does what its creators say it does (i.e. to
differentiate personality factors) is lost in the sensational. Such pursuit of the
dramatic appears again in her accusation that the MMPI has "helped to create,
and continues to reinforce, a culture in which our unique and varied personalities
are subject to the petty tyranny of the average" (p. 71). While the accusation
itself is highly questionable, another hint of Paul's agenda is found in the words
"our unique and varied personalities." The author does not believe in a stable
and enduring personality that can be measured, the fundamental assumption of
personality testing. In her view, it is not that the tests don't work; it is that
they are doomed to failure because "there can be no
universal key to personality, only unique, particular personalities, and shifting,
evolving ones at that" (p. 219).
   What Paul does believe in is eventually revealed when she states that "the
vistas [psychological tests] afford are too restricted, obscured by the objectives
and agendas of others." To which point she asks "Is there another way?"
Presented as a rhetorical question, this would have been an effective final line,
but here it serves only as an opportunity for the author to give her own answer.
Rising up with an emphatic "yes," she starts to tell us about what she knows to
be a better way.
   That better way, the alternative for which she would have us discard
personality testing, is "the telling of one's own story." Presumably to inspire
us, she tells the arduous tale of Dodge Morgan, a 51-year-old man who sails off
alone on a voyage around the world in order to sort out what is important in his
own life. Each day he completes a different personality test and, on his return, no
thanks (apparently) to the results of these tests, he enthusiastically takes his
place within the human race, presumably having discovered his own (human)
nature by virtue of his own experience, his own story.
   Somehow she connects this mariner tale to the work of psychologist Gordon
Allport, to whom Paul attributes the discovery of the story-telling approach,
something that she believes should have joined the ranks of mainstream per-
sonality testing long ago. From Allport she moves to Dan McAdams, a con-
temporary psychologist she describes in heroic terms as championing this
worthy cause. She relies primarily, and I suspect exclusively, on his book,
The Stories We Live By: Personal Myths and the Making of the Self, to argue her
point. After stating that McAdams "has worked out an objective system of
coding the narratives" gained through a structured two-hour interview, she
confusingly describes the approach as, "in many ways, the un-test. It has no
norms; subjects are not assigned numbers or types . . . [the results are] almost
defiantly resistant to the requirements of institutions, just about useless for the
purpose of sorting and screening and labeling" (p. 219, emphasis added). While
such claims (or disclaimers) might rightfully leave the reader wondering what
purpose this life-story approach actually serves (and why it is even mentioned in
a book on personality testing), Paul justifies her eagerness by explaining that
psychological tests should aid "our advance toward self-discovery and self-
awareness" (p. 222). While this may be a popular goal in our psychologically
obsessed society, it is not the purpose of psychological testing, which is to
identify and classify personality factors in order to explain past actions or
predict future behavior. But for Paul, there is no stable personality to measure,
no factors to categorize, just stories to tell, impressions to make and the 'self' to discover.
   Paul's enthusiasm for 'life stories' explains her fascination with the personal
lives of the test creators and explains why this book unfolds as it does.
Unfortunately, this enthusiasm may tell us more about this author and her cultish
fascination with self-awareness than it does about the practice of personality testing.
   My review of her book might well have ended here had Paul not chosen to
interject, toward the end of her Epilogue, a brief, peculiar and seriously
misleading caveat: "When some kind of formal assessment is necessary," she
writes, "(as evidence in a court case, for example), personality tests are not the
only option" (p. 222). She proceeds to recommend the general use of "structured
interviews," "the collection of relevant biographical information" and
"behavioural observation" and to suggest that, for custody evaluations, one
should rely on such instruments as "the Parent-Child Relationship Inventory or
the Parenting Stress Index" (p. 222).
   How can she, one wonders, recommend these alternatives without bothering
to ask whether the answers they give are any better than those of the tests she
condemns? I am not a fan of personality testing, but neither am I a fan of any
unexamined psychological alternative.
   A critical examination of personality testing and its impact on our society is
long overdue. I would welcome a book that does what this one has failed to do.

                                                                    T ANA DINEEN
                                              Victoria, British Columbia, Canada

   1 For a more detailed critical examination of another chapter, the chapter on
   the MBTI, see Geyer, Peter. Glibly attractive: Reading Annie Murphy Paul's
   "The Cult of Personality." Available at: http://www.personalitypathways.
   com/MBTI-cult.html. Accessed 7 December 2006.
The Hundred Year Lie: How Food and Medicine Are Destroying Your
Health by Randall Fitzgerald. Dutton, 2006. vii + 294 pp. $24.95 (hardcover).
ISBN 0-525-94951-8.

  While not much more than a chemophobic, technophobic polemic, The
Hundred Year Lie (100YL) does come to a number of conclusions with which I
agree . . .
      Many of the chemical compounds in commercial foods, drugs, cosmetics,
      etc., have never been tested for long-term safety in humans.
      Since it is impractical to test all possible mixtures of compounds, we do
      not know how many or which combinations will affect health.
      Newly introduced prescription drugs are not necessarily safe or effective.
      Smaller-than-standard doses of most drugs would have a better overall
      effect in many people.
      The increase in lifespan in the U. S. in the 20th century from 40 to 76 years
      was accomplished by immunizations, better sanitation, better food pre-
      servation, and much better emergency care, not mostly by drugs.
      Antibiotics are prescribed too often and this leads to bacterial resistance.
      Many toxic compounds were banned only after damage had been done.
      Levels of important nutrients in our food, such as magnesium ion, have
      dropped to the point where taking them as supplements makes sense. This
      is from depletion of the nutrients in farmland.
      Fluoridation of municipal water supplies has been a colossal error.
      Testing compounds in lab animals often does not give the same result seen
      later in humans, and vice versa.
      Placebos really heal some fraction of people given them, and should be
      used deliberately more often.

   While the book is well-written, easy to read, very well-edited, and contains
an index, the citations are grouped by chapter, and not numbered in the text
by superscripts or in the Harvard system, as JSE is (Day, 1979: 39). As most of
you readers know, one of the great conventions of all time to communicate non-
fiction and avoid plagiarism has been the use of individually numbered or
Harvard system citations. Without this admittedly labor-intensive convention,
a mass of citations as endnotes cannot be checked effectively, since it is never
certain which one the author has used to back up a claim. Most of the references
are to newspaper and magazine articles, newsletters and websites, with very few
to peer-reviewed original research journals. So many claims in lOOYL are false,
as exemplified ad nauseum below, that this author's credibility must be doubted
and cannot be verified with a reasonable amount of labor.
   100YL makes dozens of claims on the horrors of modern life. Throughout the
book, claims are made that we in the U. S. and other industrialized countries are
losing lifespan because of gross chemical contamination, processed food with
additives, stress, and other features of industrialized life. The multi-thousand-
year tradition of herbal remedies, especially in India and China, is held up as
superior to synthetic drug use (p. 212ff), partly because these herbals are
complex mixtures (p. 214). Author Fitzgerald overlooks the many drug
combinations in use, such as those for tuberculosis, for lowering blood pressure,
and for treating HIV. Meatless diets and other facets of "detoxification" are
touted. Why, then, do Indians have a life expectancy that is 13 years less than in
the U. S., and the Chinese one that is 5 years less? As shown in Table 1, most
industrialized countries have life expectancies at birth in 2006 of 78 years or
more. The majority of Indians are vegetarians.
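The gaps cited here can be checked directly against the Table 1 figures below (77.85 years for the U.S., 72.58 for China, 64.71 for India); a quick sketch:

```python
# Verify the life-expectancy gaps quoted in the review against the
# 2006 CIA Factbook values reproduced in Table 1.
us, china, india = 77.85, 72.58, 64.71  # years at birth

print(f"U.S. minus India: {us - india:.1f} years")  # about 13 years
print(f"U.S. minus China: {us - china:.1f} years")  # about 5 years
```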
   The findings of Dr. Weston A. Price on which diets were the healthiest for
primitive peoples were said in 100YL to be diets characterized by the absence
of processed food (p. 98). In fact, besides eating whole and/or raw plant foods, the
healthiest groups prized animal fats, meats, fish, and dairy products (Fallon, 2001).
   Many of the topics in 100YL are also covered in Malignant Medical Myths
(MMM), by Joel M. Kauffman (2006), and by The Great Cholesterol Con
(GCC), by Anthony Colpo (2006). These books are well-referenced and will be
cited in order to keep the total number of citations to a minimum.
   A typical theme of 100YL, very common in pseudoenvironmentalist circles,
is that any substance shown to be toxic at any high level in any organism is
automatically said to be toxic in humans at any low level (p. 4ff). Fitzgerald
does not realize that modern instruments can detect substances in parts per
billion and trillion. Such small "lifetime body burdens" may not be toxic. Fur-
thermore, he does not recognize hormesis, the tendency of some small dose of
almost anything to be a benefit and not a detriment to health. This includes the
well-documented radiation hormesis (MMM: 178-200). He is unaware of The
International Hormesis Society and its journal, now called Dose-Response.

                                       TABLE 1
                      Rank Order-Life Expectancy at Birth by Country

Country                     No. of years                Country                  No. of years

Singapore                                           United Kingdom              78.54
Hong Kong                                           United States               77.85
Japan                                               Thailand                    72.25
Sweden                                              China                       72.58
Australia                                           Vietnam                     70.85
Canada                                              World                       64.77
Italy                                               India                       64.71
France                                              Afghanistan                 43.34
Spain                                               South Africa                42.73
The Netherlands                                     Swaziland                   32.62 (lowest)
Note: Available at: http://www.cia.gov/cia/publications/factbook/rankorder/2102rank.html. Updated
   Fitzgerald states: "Most vitamins and supplements sold in the United States
that are advertised as natural are actually synthetic chemical concoctions that
contain coal tar, preservatives, artificial colorings, and a vast range of other
potentially harmful substances" (p. 8). Coal tar is a black viscous mixture of
compounds, many of which are odorous. Does your vitamin C contain coal tar?
Mine contains cellulose, magnesium stearate and stearic acid (all of plant origin)
and silica.
   "If you have mothballs in your closet, you are exposing yourself to the
carcinogenic pesticide dichlorobenzene . . ." (p. 19). Mothballs are naphthalene,
a different compound that is one of the major odorous compounds in coal tar, but
it is not very toxic.
   "If your clothing contains synthetic fibers, you are being exposed to a form of
[horror!] plastic, and the newer the clothing, the more it off-gases molecules
of plasticizer fumes" (p. 19). The only common polymer, a far better all-
encompassing name for high molecular weight materials from repeating small
units, that contains volatile plasticizer is poly(vinyl chloride), PVC, and this is
not used in cloth made from fibers.
   "From 1950 to 2001 the incidence for all types of cancer in the United States
increased by 85%, and that was the age-adjusted rate, which means the increase
has nothing to do with people living longer" (p. 30). The incidence for two of
the most common types of cancer, breast and prostate, actually appeared to leap
in the 1980s. This was the effect of widespread adoption of mammography and
the Prostate-Specific Antigen (PSA) test. About one third of these early
diagnoses were incorrect (MMM: 224, 229). There is no cancer epidemic
(Logomasini, 2002).
   Silent Spring, by Rachel Carson, 1962, is viewed as a ". . . watershed event in
public policy . . ." (pp. 36, 70, 171), which it certainly was. There was no
attempt to address the successful attempts to show its lack of accuracy and eco-
overkill (Bethell, 2005; Logomasini, 2002).
   "Molecules of lead leach from paint . . ." (p. 47). In 100YL there is no
understanding of the difference between an element, a compound and a mixture
as taught in the most elementary chemistry courses. The word "molecule" is
used many times instead of the correct word "compound"; the term "a single
molecule" is used instead of "a single compound." Lead is an element, not
a compound. It is found in old paint as basic lead carbonate (lead white),
a compound. It poisoned children who ate paint peels. Along this thought line,
aspartame was said to contain three components: methanol, phenylalanine, and
aspartic acid (p. 106). Here again, Fitzgerald shows no understanding that as-
partame is not a mixture of these three compounds, but rather a compound
derived from them, and therefore, is different in properties, including toxicity.
His comments are as foolish as telling people not to eat salt because it is
a mixture of sodium and chlorine, a pair of very toxic elements, which change
drastically on forming their ions to make salt.
   The Toxicity Questionnaire (pp. 53-61), with 65 questions, has you declaring
yourself more toxic if you answered yes to questions such as "Do you drink
nonorganic coffee?", "Are you often irritable?" and "Do you sometimes feel . . ."
   Fitzgerald's example of one part per billion was an aspirin tablet in 1,000,000
gallons of water (p. 153). My calculations show this to be off by a factor of over
ten; it should have been in 100,000 gallons of water.
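The reviewer's correction is easy to verify. The sketch below assumes a standard 325 mg aspirin tablet and US gallons (3.785 L); neither figure is stated in the review, so both are assumptions:

```python
# How much water makes one aspirin tablet a one-part-per-billion
# (by mass) contaminant?  Assumes a 325 mg tablet and US gallons.
TABLET_MG = 325.0
GALLON_L = 3.785                  # liters per US gallon

water_mg = TABLET_MG * 1e9        # water mass giving 1 ppb by mass
water_l = water_mg / 1e6          # 1 L of water weighs about 1e6 mg
gallons = water_l / GALLON_L

print(f"{gallons:,.0f} gallons")  # ~86,000, i.e. roughly 100,000
print(f"Fitzgerald's 1,000,000 gallons is {1e6 / gallons:.0f}x too large")
```

With these assumptions the answer is on the order of 100,000 gallons, and the book's 1,000,000-gallon figure is indeed off by a factor of more than ten.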
   Lowering serum cholesterol levels by 10% with a natural product is con-
sidered a boon (p. 198), despite overwhelming data that there is no health
advantage to doing so (GCC; MMM: 78-104). A mixture of compounds from
red yeast on rice is compared favorably with lovastatin with no knowledge that
the most active component in the rice is lovastatin (MMM: 88).
   "Human bodies weren't designed to absorb synthetic chemicals" (pp. 28,
219). Fitzgerald was not aware of the low toxicity of dimethyl sulfoxide
(DMSO), which is less toxic than alcohol. "Once he was convinced that it was
non-toxic, Dr. Stanley Jacob has taken an ounce of DMSO orally every day; as
of the year 2000, that is 40 years" (Haley, 2000). Synthetic vitamin C is well
tolerated. Our bodies are very capable of oxidizing unwanted compounds, often
to alcohols, and esterifying them with glucuronic acid to get a water-soluble form
that will go out in the urine.
   "At the level of molecules seen under the electron microscope, synthetic and
natural vitamins may look similar to some chemists, but they don't assimilate in
the human body" (p. 137). Small molecules like those of vitamin C cannot be
seen under an electron microscope; they are too small. Synthetic vitamin C, at
least, is identical to the natural version (Pauling, 1986). Chemists use infrared
and nuclear magnetic resonance spectroscopy to identify compounds.
   "Naturally occurring substances seem to contain a 'lifeforce' that synthetics
cannot duplicate" (p. 138). Thus, 100YL "disposes" of two centuries of Organic
Chemistry, which began as a separate discipline about the time that synthetic urea
identical to the natural product was synthesized from the inorganic ammonium
cyanate in 1828, putting a crack in the "vital force" or "vitalism" theory. Oxalic
acid was made from cyanogen in 1824, and the death of "vitalism" came in 1845
with the synthesis of trichloroacetic acid from the elements (Noller, 1951). Many
other natural products from quinine to progesterone have been synthesized.
   Fitzgerald's main push for the use of more natural products avoids mentioning
curare, tetrodotoxin and coral snake venom, etc. We must be just as careful of
natural chemicals as of synthetic. Among the problems with natural chemicals
as drugs recognized in the mid-20th century, besides lack of patentability, was
the variability of concentrations of the desired chemicals (or mixtures) in plants.
The age, location, microclimate, soil condition, season of harvest and other
variables made and make it difficult to duplicate the dose and composition of the
active compounds without prohibitively expensive processing. On the other
hand, despite over-promotion and overdosing of many drugs, who among us
would want to give up antibiotics that still work, anesthetics, human insulin from
   Fitzgerald's implied contention that we are exposed to a greater variety of
toxins in 2006 fails to note the greater quantity of them in 1906 (pp. 62-87). Our
great-grandparents breathed wood smoke, coal smoke, paint fumes, kerosene
fumes, ozone and NOx from early electric motors, as well as barnyard fumes.
    100YL is an example of how not to use science to guide decisions. From
non-specifically cited references written by non-scientists, to incomplete
literature searches, to rank chemophobia leading to rampant errors, a scattered
dozen of almost accidentally valid conclusions does not, in my opinion, make
this book worthwhile.

                                                                     JOEL M. KAUFFMAN
                                                       Professor of Chemistry Emeritus
                                              University of the Sciences in Philadelphia

Bethell, T. (2005). The Politically Incorrect Guide to Science. Washington, DC: Regnery, pp. 73-85.
Day, R. A. (1979). How to Write and Publish a Scientific Paper. Philadelphia, PA: ISI Press.
Fallon, S. (2001). Nourishing Traditions. Washington, DC: New Trends Publishing, p. xi.
Haley, D. (2000). Politics in Healing. Washington, DC: Potomac Valley Press, pp. 167-207.
Logomasini, A. (2002). Chemical warfare: Ideological environmentalism's Quixotic campaign against
    synthetic chemicals. In Bailey, R. (Ed.), Global Warming and Other Eco-Myths (pp. 149-177).
    Roseville, CA: Prima Publishing.
Noller, C. R. (1951). Chemistry of Organic Compounds. Saunders, p. 3.
Pauling, L. (1986). How to Live Longer and Feel Better. New York, NY: W. H. Freeman, p. 54.

Sasquatch: Legend Meets Science by Jeff Meldrum. New York: Tom Doherty
Associates, 2006. 297 pp. $27.96 (hardcover). ISBN-13: 978-0-765-31216-7;
ISBN-10: 0-765-31216-6.

  Sasquatch: Legend Meets Science, by Professor Jeff Meldrum, is essential
reading for mammalogists, wildlife biologists, and other zoologists interested
in increasing their awareness of the evidence supporting the existence of
the Sasquatch as an extant North American mammal. But all scientists
and scientific-minded readers will benefit from Jeff Meldrum's scientific
approach to the subject, which has rarely been treated either impartially or scientifically.
  During the 1990s, Sasquatch investigators and researchers met in Harrison
Hot Springs, British Columbia, for an annual forum sponsored by Stephen
Harvey. Scientists were represented for a number of years only by Grover
Krantz of Washington State University, Henner Fahrenbach, and myself. Con-
sequently, when anatomist Jeffrey Meldrum first participated in 1996, his
contribution was a welcome addition to ongoing attempts to make sense of
unexplained aspects of reported Sasquatch anatomy. His participation and
collegiality provided a much-needed stimulus and credibility to other investigators.
As he continued to participate in "Bigfoot" meetings, conferences, and sym-
posia in the ensuing years, he was responsible for significant contributions to
the accumulating body of knowledge, especially by providing the anatomical
basis for physical features not otherwise easily understood.
   It is not just Meldrum's academic background and qualifications as an anat-
omist and expert on the subject of bipedalism that are important. Such qual-
ifications would be of little value to the discovery process were it not for his
willingness to lend his name and reputation to a subject long considered not
just controversial but categorized as pseudoscience. In assembling and archiving
a collection of Sasquatch track casts for scientific scrutiny in his Idaho State
University laboratory, he has provided a much-needed repository of physical
evidence for the Sasquatch, long demanded by scientists.
   But this is merely background to his more recent contribution to our knowl-
edge of this misunderstood North American mammal. He has followed up
participation in a number of "Bigfoot" conferences and television documen-
taries with this book, Sasquatch: Legend Meets Science, in which he addresses
clearly and at length the evidence supporting not just the existence of the
Sasquatch, but what it is. Attempting to satisfy both a popular and a scientific
readership in a single book is a tall order, and it is sometimes unclear which is
being targeted. The author is at his best when he is addressing details of
Sasquatch anatomy and aspects of anthropology and paleontology. He also pro-
vides helpful information interpreting DNA analysis and other related tech-
nologies used in testing the evidence. To his credit, he has also courageously
and proficiently addressed the popular widespread perception that hoaxes have
been "proven" to explain all Sasquatch observations and tracks.
   In recent years, it has become apparent that the Sasquatch problem is both
a scientific and philosophical problem, one which goes beyond merely describ-
ing the anatomy, behavior, and ecology of the animal. It is a problem that
involves the need to understand and address scientific resistance to a subject that
is perceived as scientifically taboo. By treating the Sasquatch as a subject of
scientific research, the book is a major step forward in overcoming the
designation of the Sasquatch as a subject of pseudoscience and in bringing it
solidly into the realm of science where it belongs.

                                                         JOHN BINDERNAGEL
                                                    Courtenay, British Columbia

John Bindernagel is the author of North America's Great Ape: the Sasquatch

Junk Science: How Politicians, Corporations, and Other Hucksters Betray
Us by Dan Agin. New York: Thomas Dunne Books (St. Martin's Press), 2006.
323 pp. $24.95 (hardback; cloth). ISBN 0312352417.

   According to Wikipedia, "The term 'junk science' was first coined in 1973
by Paul C. Giannelli in the Journal of Criminal Law and Criminology in ref-
erence to experts who use their expertise to mislead juries or lawmakers." It
is "a pejorative term used in political and legal disputes in the United States
to describe scientific data, research, analyses or claims which are alleged
to be driven by political, religious, financial or other questionable motives"
(http://en.wikipedia.org/wiki/Junk_science, accessed 16 November 2006).
   A book about junk science should then consider how the legal system tries
to establish what is reliable science and what is not. Inevitably one would
consider whether the voluminous literature about pseudo-science might offer
some guidance. But this book does none of these things. The topics treated are
a grab-bag of items, some of which are not junk science under the accepted
definition. For example, there is a chapter on fraud in science, which begins by
pointing out (p. 22) that this is a separate matter from junk science and yet
concludes with an instance of fraud annotated with "As far as understand-
ing junk science is concerned, the important aspect is . . ." (p. 39). Chapter 9,
about the need for universal health insurance, is even less a matter of junk
science.
   I would rather not review a book at all than write a negative review, but the title of this
one demands attention from anomalists and others concerned with gauging the
trustworthiness of science. So be warned that this is not an intellectual
disquisition, it is a rant whose title might better have been, The Commercialized
Society and All the Things I Don't Like About It. It is reminiscent of the
compendia on "pseudo-science" published by self-styled "skeptics", where an
array of beliefs and attitudes is castigated without benefit of demonstrating why
the castigation is justified in each individual case. As with those compendia,
readers who agree with the author on a given topic may well be delighted while
the others are outraged.
   Many sweeping assertions are banal and questionable, say, "Every civili-
zation has the choice of either controlling its own destiny or remaining pas-
sive as it awaits its future" (p. 1); "science and technology . . . [are] the two
prime generators of political, social, and economic progress" (p. 2); we "have
apparently evolved to enjoy eating" and now commit "slow suicide aided and
abetted by the media and food industry" (p. 45). The insistence (chapter 4) that
genetically modified foods are safe is disingenuous and speciously argued. The
simplistic dogmatism becomes unbearable at places like "Evolutionary theory
                                  Book Reviews                                 203

is no more a guess than atomic theory" (p. 198), or "all of science is a single
evidentiary fabric of interdependent observations" (p. 203). The author should
have heeded his own advice: "Some rationalists need to be reminded that
premises are often based on what is currently known, and what is yet to be
known is more than can be imagined" (p. 205). In his castigation of President
Bush and a "cabal" of opponents of stem-cell research, Agin acknowledges that
stem cells would ideally be from the individual to be treated (p. 221), without
noticing that this is a strong argument for research on adult stem cells as
opposed to embryonic stem cells; and there is no opposition at all to research on
adult stem cells.
   Many attempted criticisms are muddled, as of the "basic assumption . . . that
if a trait recurred in families over several generations it must be genetic"
(p. 15); much respectable work is based on that assumption, for instance, seek-
ing genetic predispositions to various diseases such as breast cancer. Readers
of this journal in particular will find ill-based the critique of alternative med-
icine in comparison to "standard evidence-based scientific medicine" (p. 113),
especially given that "evidence-based medicine" is so recent a fad (see
www.ebmny.org/). As so often with pseudo-skeptics, Agin insists that science
actually is the way he would like it to be, for instance, that "standard drugs . . .
are immediately withdrawn as soon as . . . toxicity is discovered" (p. 116); one
might wish!
   I am unhappy over the perceived need to be so negative about what is clearly
a well intentioned, sincerely felt work by an author who does see some things
correctly and clearly, albeit without applying to his own views the standards of
rigor that he demands of the view of others. So here are some of the general
points on which he is very good: the future is unpredictable (p. 3); history should
be taught "as a cautionary tale" (p. 9); outside their specialty, scientists are lay
people (p.16); an article published in a reputable scientific journal is not
necessarily a reliable source of evidence (p. 23); much of what is disseminated
about "nutritional genomics" is rubbish (p. 51); evolution does not select for
long life-spans (p. 81), and evolution is a separate question from that of the
origin of life (p. 211); "non-profit" does not mean no one is making a profit
(p. 286). Chapter 17 on sociobiology and evolutionary psychology, and Chapter
18 on race and IQ, are sensible and well reasoned, especially the disconnect
between an average theoretical construct such as "general intelligence" and the
actual characteristics of any given individual (p. 262). But then Chapter 19 goes
overboard with caricatures of Milton Friedman's views and of actual business
practices under American capitalism.
   As with most compendia, one is likely to learn some new things. I hadn't
known about several of the instances of fraud, or that the notorious diet-drug
fen-phen had been approved only for psychiatric uses; prescribing it for losing
weight was an "off-label" prerogative, which physicians have (p. 52).
   A recurring stylistic annoyance is the misuse of
hyphens, as in "infrared radiation-blocking gases" (p. 187), which can only
mean gases that are both radiation-blocking and infrared.

                                                               HENRY H. BAUER
                             Professor Emeritus of Chemistry & Science Studies
                                             Dean Emeritus of Arts & Sciences
                               Virginia Polytechnic Institute & State University

The End of Suffering: Fearless Living in Troubled Times, by Russell Targ and
J. J. Hurtak. Charlottesville, VA: Hampton Roads Publishing, 2006. 180 pp.
$15.95 (paper). ISBN 1-57174-468-1.

   During the Vietnam War, I taught English at a Buddhist monastery school in
Thailand as a Peace Corps Volunteer. I led a spartan but happy life, personally
untouched by the immense suffering experienced by thousands of my peers and
millions of Vietnamese just an hour's flight from Bangkok. I returned home
three years later with a great appreciation for Buddhist meditation techniques
and their ability to help me live in the present. But I skipped the opportunity to
delve into the teachings of Buddha regarding suffering and how to overcome it.
When you're healthy and 21, you don't need comforting.
   Russell Targ did. He personally suffered cancer and the untimely loss of his
daughter, Elizabeth Targ. And he found comfort and answers in the teachings of
a famous disciple of Buddha, Nagarjuna, who offers a therapy for people caught
up in suffering.
   Personal happiness and how to achieve it is not a typical target for SSE inves-
tigation. But the nature of "reality" is, and this is where The End of Suffering and
science intersect. Robert Jahn's exploration of microscopic psychokinesis
(micro-PK) and Larry Dossey's investigations into mind-body medicine both
raise profound questions about the model of reality and consciousness proposed
by Western science. Buddhism offers up an Eastern model of reality that avoids
the materialistic absolutism embedded in Cartesian, either-or thinking and the
psychological problems Targ associates with this Western view of reality. He
points out the compatibility of this alternative, non-dualistic model of reality
with modern quantum physics's view of reality (e.g., light is neither a wave nor
a particle but can be manifest as either). He's not alone in exploring this linkage.
B. Allan Wallace, president of the Santa Barbara Institute for Consciousness
Studies, is a former Buddhist monk who earned a doctorate in religious studies
at Stanford and has studied under the Dalai Lama. His new book, Contemplative
Science: Where Buddhism and Neuroscience Converge, offers additional
intriguing insights into the compatibility of Buddhism and science. The point?
It's possible Buddhism has it right. And if we can accept this non-Western
model of reality, new ways emerge to view and overcome personal suffering.
This in a nutshell is Targ's and Hurtak's argument. It's not unreasonable.
    I don't have the expertise to judge the authors' claim that Nagarjuna "stands
out in global history as an unprecedented teacher of the highest order." It seems
excessive. And the authors' liberal use throughout the book of Hindu words/
concepts (which requires a six-page glossary) makes it read at times more like
a philosophy textbook than your typical nirvana-in-nine-minutes, self-help
handbook. But chronic worriers, negative thinkers, and fundamentally unhappy
folk in slow psychological or spiritual melt-down with the time and willingness
to walk east a few hundred steps may find the exit they've been seeking from
their unhappiness.

The Universe Wants to Play: The Anomalist #12, edited by Patrick Huyghe
and Dennis Stacy. Jefferson Valley, New York: Anomalist Books, 2006. 202 pp.
$12.95 (paper). ISBN-10 1933665149.

   This is an engaging anthology of widely diverse essays from the distant
borders of science. The following six articles, in my view, were the most
intriguing: "A Heretic for Our Times: A Visit with Rupert Sheldrake," "Elusive
Telekinesis: The Rudy Schneider Story," "London's Monster Scares," "Microbe
Sailors of the Starlight" (on panspermia), "The Strangest of the Strange" (UFO
encounters), and "The Perch Lake Mounds Mystery" (curious ancient New York
mound complex).

                                                               WILLIAM CORLISS
                                                           Glen Arm, Maryland

AIDS, Africa and Racism by Richard C. & Rosalind J. Chirimuuta. London:
Free Association Books, 1989 (first published 1987). 192 pp. (paper). Out of
print. ISBN 1-5343-072-2.

What is AIDS? by F. I. D. Konotey-Ahulu. Watford, Hertfordshire, England:
Tetteh A'Domeno Co., 1989 (reprinted 1996). 227 pp. (paper). Out of print.
ISBN 0-9515442-3-3.

   The central point of these books: there is no evidence that AIDS originated in
Africa, and considerable evidence against the notion.
   Since the mistaken view of an African origin of AIDS remains the conventional
wisdom, these books remain worthy of attention. Moreover, without questioning
the dogma that HIV causes AIDS, the books document such wide-ranging
incompetence on the part of the early Western researchers in Africa as to throw
doubt on HIV/AIDS theory as a whole. It is far from irrelevant to note that several
of those (for example, Kevin de Cock and Thomas Quinn) who were quite wrong
about African AIDS remain leading lights in the HIV/AIDS establishment, most
strikingly perhaps Peter Piot, who is now executive director of UNAIDS.
   Western researchers jumped to conclusions on the basis of ill-founded studies
whose results were soon invalidated. They had taken positive "HIV" tests as
proof that human immunodeficiency virus was widespread, but it soon became
clear that "HIV" tests are confounded by cross-reactions with many medical
conditions, including such very common ones in Africa as malaria and tuber-
culosis. Furthermore, these "carpet-bagger" researchers were ignorant of trop-
ical medicine and categorized as "AIDS" a host of conditions whose clinical
symptoms are characteristic of endemic parasitic and infectious diseases.
Moreover, it had long been known to medical science that many tropical
diseases bring about a suppression of the immune system, in other words, they
bring about an "acquired immune deficiency syndrome", that is, AIDS; as does
malnutrition, which has also long been endemic in many parts of Africa (e.g.,
Chirimuuta, pp. 38-39).
   Based on nothing but doubtful anecdotes, leading Western journals published
wild speculations about the sexual attitudes and behavior of Africans, and
fanciful suggestions of rituals and practices by which a precursor of HIV might
have jumped from monkeys or apes to infect humans. Perhaps even more
farfetched was the speculation that some Africans could be asymptomatic car-
riers of HIV infection, sometimes coupled with suggestions that HIV is an old
virus that has only recently mutated into virulence; this is difficult to square with
many facts, including that purported epidemics of AIDS could rage in some
parts of sub-Saharan Africa while leaving neighboring areas unaffected
(Chirimuuta, pp. 27, 37).
   The Chirimuutas present plausible evidence that almost every interpretation
ventured in those early years stemmed from racist bias unsupported by empirical
evidence, for example, how AIDS among Haitians was viewed (Chirimuuta,
chapter 2). Imagined scenarios included that Haitians who had worked in Africa
might have served as the conduit that brought AIDS to the Americas; even
though the clinical symptoms of the first Haitian "AIDS" patients in the United
States were distinctly different from the clinical symptoms of other AIDS
patients (Chirimuuta, p. 12).
   The author of one book, and a co-author of the other, are Africans. They differ
sharply in sociopolitical viewpoint, but agree wholeheartedly on the essential
facts about HIV, AIDS, and Africa. Richard Chirimuuta was born in Zimbabwe,
was active in the movement for independence, and wrote and spoke about Africa
while resident in Britain; Rosalind Chirimuuta is Australian by birth and a con-
sultant specialist in ophthalmology. Their book is flavored by a strong, even
strident, leftist political slant. Felix Konotey-Ahulu, Ghanaian and a committed
Christian, has practiced medicine in Africa as well as Britain and is distin-
guished for his work on sickle-cell diseases in particular. His book is flavored
by a traditional attitude toward what constitutes acceptable behavior and good
manners, what used to be called good breeding and proper upbringing. The first
chapter of Konotey's book sets its tone: "tafracher" is the expression tradi-
tionally inserted before a speaker says something that should not be said in
polite company, and Konotey finds ample opportunities to deploy it as he refers
to matters that Western media describe in unrestrained detail.
   Both books fully document primary sources. The most striking thing, which
really ought to have settled the matter once and for all, is that AIDS had
a significant impact, indeed caused panic, in the United States long before
purported cases had been reported from anywhere in Africa. Throughout the
 1980s, more than half of all AIDS cases in the world had shown up in the United
States (1), and the rest of the world viewed AIDS as an American phenomenon
(Chirimuuta, p. 7). These early American cases were predominantly white
males, and none of them had reported any contact with Africa (Konotey, p. 36).
Furthermore, if AIDS had come out of Africa, surely it would have emigrated
first not to America but to the ex-colonial countries of Europe whose
populations had been in intimate contact with Africans for many decades:
Belgium, Britain, France, Germany (Chirimuuta, p. 36).
   A retrospective report later suggested that, in the mid-1970s, a Danish
surgeon had shown the clinical symptoms later called AIDS. That she had
worked in Zaire immediately led to the inference that she must have been
infected there. On the other hand, the first putative AIDS case described in
Germany, from around the same time, was a homosexual who had never visited
Africa, Haiti, or the United States. The Danish anecdote has continued to be
a mainstay of stories about the origin of AIDS, while the contradictory German
case is ignored (Chirimuuta, pp. 24-5). Africans in Europe who became ill with
AIDS-like illnesses were presumed to have contracted the disease in Africa even
when they had been resident in Europe for many years (Chirimuuta, p. 26 ff.).
   Konotey-Ahulu spoke with many physicians and researchers while visiting
16 countries and 26 cities in sub-Saharan Africa in the 1980s; he also
interviewed people involved in sex trade. He found that AIDS was clearly an
imported phenomenon. In Ghana, it had been brought home by sex workers who
had plied their trade in the Ivory Coast and elsewhere, catering to European and
American sex-tourists in exchange for precious foreign currency.
   Both books explain why inferences based on clinical diagnosis (actual
symptoms) should take precedence over inferences based on seroepidemiology
(blood tests); the point is made with particular authority by Konotey-Ahulu on the
basis of his first-hand experience and expertise, with copious descriptions of
actual cases. (A related point has been emphasized at book length by
the toxicologist Mohammed Ali Al-Bayati (2): the inference of "AIDS" should
never be drawn before a process of careful differential diagnosis, because so many
medical conditions and toxic substances can mimic the immune deterioration and
consequent opportunistic infections seen in AIDS; yet such differential diagnosis
is conspicuous by its absence: anyone in Africa with AIDS-like symptoms is
immediately said to have AIDS, and outside Africa anyone in a "high-risk" group
with AIDS-like symptoms is immediately said to have AIDS.)
   Relevant to the "AIDS-out-of-Africa" shibboleth is the matter of vested
interests. In the United States, federal agencies were in cahoots with gay activists
in a successful propaganda campaign to portray AIDS as a universal threat rather
than a phenomenon restricted to a segment of gay communities and drug abusers
(3). By contrast, no influential group was able to draw the media's attention, and
thereby public attention, to the fact that all the allegations about an African
origin of AIDS were (and remain) unfounded (Chirimuuta, chapter 1).

                                                               HENRY H. BAUER
                             Professor Emeritus of Chemistry & Science Studies
                                             Dean Emeritus of Arts & Sciences
                               Virginia Polytechnic Institute & State University

 1. By October 1988, the World Health Organization had reported a global total of
 124,114 cases of AIDS: 88,233 from the Americas, most of them (76,670) in
 the United States, and only 19,141 from Africa; Western Europe counted
 15,251, Eastern Europe 89, Asia 281, and Oceania 1,119 (Chirimuuta,
 appendix B). By August 1989 (Konotey, appendix G), the distribution was
 much the same: world total 177,965, Americas 119,662 (United States,
 100,885), Africa 31,146, Asia 413, Europe 25,219, Oceania 1,525.

 2. Mohammed Ali Al-Bayati, Get All the Facts: HIV Does Not Cause AIDS.
    Toxi-Health International (150 Bloom Dr., Dixon CA 95620), 1999. ISBN
 3. A. Bennett and A. Sharpe, AIDS fight is skewed by federal campaign
    exaggerating risks, Wall Street Journal, 1 May 1996, pp. A1, 6.

From Alchemy to Chemistry in Picture and Story by Arthur Greenberg.
Hoboken, NJ: Wiley-Interscience, 2006. 637 pp. $69.95 (hardback; cloth). ISBN

   This is a coffee-table book in style and heft. It covers an enormous range,
chiefly by short anecdotes, and is profusely illustrated. The author is master of
his material and obviously enjoyed giving a personal perspective, often with
a light touch.
   This would not be easy reading for people without a good background in
chemistry: most of the discussions and descriptions are too succinct to paint in
the background knowledge; but there are many references for further
reading. The index is likely to be reasonably useful, but it is not as com-
prehensive as the coverage of people and topics in the book itself.
   Those who need an introduction to this subject must look elsewhere, but those
who already have a background of information about it will find much of
interest here, and will enjoy browsing and skimming in the book.

                                                              HENRY H. BAUER
                            Professor Emeritus of Chemistry & Science Studies
                                            Dean Emeritus of Arts & Sciences
                              Virginia Polytechnic Institute & State University

"Cold Fusion without Electrochemistry", interview with Claus Rolfs by H.
Muir. New Scientist, 192, 2574, 23 December 2006, 36-39.

   In this interview in New Scientist, Claus Rolfs, a physicist at the Ruhr
University in Bochum, Germany, spoke of hearing about some curious results
from a lab in Berlin. A team of researchers reported on the fusion of deuterons
inside different materials, including the metal tantalum when it was highly
loaded with deuterium. Compared with fusion rates in a gas, rates in metals were
consistently higher, especially at low temperatures (Muir, 2006). The phrase
"cold fusion" was never used.
  In the original report about these results from the Technical University of
Berlin, "cold fusion" was mentioned quite matter-of-factly, with citation of two of
the old references (Czerski et al., 2001). Rolfs's students were able to repeat the results.
   Anathema though it may be to most "hot fusion" physicists, it will not
surprise readers of JSE (Bauer, 2001; Kauffman, 2001).

                                                                      JOEL M. KAUFFMAN
                                               Department of Chemistry & Biochemistry
                                               University of the Sciences in Philadelphia

Czerski, K., Huke, A., Biller, A., Heide, P., Hoeft, M., & Ruprecht, G. (2001). Enhancement of the
   electron screening effect for d+d fusion reactions in metallic environments. Europhysics Letters,
   54(4), 449-454.
Bauer, H. H. (2001). Book review of Excess Heat: Why Cold Fusion Research Prevailed by Charles
   G. Beaudette (2000). Journal of Scientific Exploration, 15(1), 147-153.
Kauffman, J. M. (2001). Book review of Voodoo Science by Robert Park (2000). Journal of
   Scientific Exploration, 15(2), 281-287.
Muir, H. (2006). Half-life heresy. New Scientist, 192(2574), 23 December 2006, 36-39.

"The North Atlantic ice-edge corridor: a possible Palaeolithic route to the
New World" by Bruce Bradley & Dennis Stanford, World Archaeology, 36:4,
2004, 459-478.
   A recent NOVA program, "Ice Age Columbus", suggested trans-Atlantic
voyages during the last Ice Age, citing in support a 17,000-year-old flint found
in Virginia that resembled European Solutrean artifacts. This article reviews that
suggestion. Among its points:
       Archaeological evidence is lacking for the generally accepted migrations
       from Asia.
       Clovis tools represent highly developed flint-knapping techniques.
       Precursors of these should be found in the original homelands of the
       Clovis people.

                                                                        HENRY H. BAUER
                                                                     e-mail: hhbauer@vt.edu
                                     Book Reviews                                       211

"'Dark matter' seen? Look again!" Astrophysical Journal, 648, 2006, L109-

  Astronomers claim to have found the first direct evidence for the existence of
"dark matter". Their reasoning is as follows:

Gravitational potentials of galaxy clusters are too deep to be caused by the detected
baryonic mass and a Newtonian gravitational force law. One proposed explanation of this
mystery invokes dominant quantities of non-luminous "dark matter". The other invokes
alterations to the particles' dynamical response to the gravitational force law. The actual
existence of dark matter can only be confirmed either by laboratory detection or, in an
astronomical context, by the discovery of a system in which the observed baryons and the
inferred dark matter are spatially segregated. An ongoing galaxy cluster merger is such
a system.
   Next, assume that stars make up just 1-2% of the total mass [an assumption valid only if
dark matter exists], and that plasma makes up 5-15% of the total mass [also valid only if
dark matter exists]. But during a merger of two clusters, galaxies behave as collisionless
particles, while the fluid-like X-ray-emitting intra-cluster plasma experiences ram pressure.
Therefore, in the course of a cluster collision, galaxies spatially decouple from the plasma.
   Such an effect is clearly seen in the unique cluster 1E 0657-558. Two galaxy
concentrations that correspond to the main cluster and the smaller sub-cluster have
moved ahead of their respective plasma clouds that have been slowed by ram pressure. This
phenomenon provides an excellent setup for our simple test. In the absence of dark matter,
the gravitational potential will trace the dominant visible matter component, which is the X-
ray plasma. If, on the other hand, the mass is indeed dominated by collisionless dark
matter, the potential will trace the distribution of that component, which is expected to be
spatially coincident with the collisionless galaxies. Thus, by deriving a map of the
gravitational potential, one can discriminate between these possibilities.
   Weak gravitational lensing of background galaxies shows an observed displacement
between the bulk of the baryons and the gravitational potential, which proves the presence
of dark matter for the most general assumptions regarding the behavior of gravity.
(emphasis added; bracketed material mine)

However, this argument needs to assume what it is trying to prove, because it
must assume the existence of dark matter and make inferences about the light
distribution based on that assumption. But the converse is equally true. If we
assume no dark matter, then light and gravity are expected to coincide, just as
observed, because the plasma is a relatively minor contributor to overall
mass. In most mature galaxies, most of the mass is in already-formed stars, not
in gas and dust. And the dominant visible matter component is likewise the stars.
Just because the dust becomes X-ray bright during a collision does not make it
massive. See the italicized sentence above, where the opposite was assumed. So
the conclusion of this paper is invalid because it uses interpretations valid only if
dark matter exists to argue that dark matter exists.

   Nobel Prize Awarded To Big Bang Proponents As Evidence Vanishes
In early October, the Nobel Prizes for 2006 were announced. The prize in
physics was awarded to John C. Mather and George F. Smoot for the discovery
of the blackbody character of the microwave radiation in space with the COBE
satellite. The significance of this finding, according to the citation, read as
follows: "The COBE results provided increased support for the Big Bang
scenario for the origin of the universe, as this is the only scenario that
predicts the kind of cosmic microwave background radiation measured by
COBE. These measurements also marked the inception of cosmology as a
precise science."
   However, as is now well known outside of Big-Bang-dominated circles, the
simplest explanation of the microwave radiation is the "temperature of space",
as correctly calculated by Eddington in 1926 and verified with greater accuracy
by later authors: 23°K. This is the minimum temperature that anything bathed in
the radiation of distant starlight can reach. No Big Bang proponent ever came
close to predicting the correct temperature of this radiation, its dipolar
asymmetry, or the tiny size of its fluctuations. A glance at a 2002 article
entitled "The top 30 problems with the big bang", published in Meta Research
Bulletin and Apeiron, shows 30 of the ever-increasing list (now over 50) of fatal
problems for the Big Bang theory. The article is replete with citations, including
those for both correct and incorrect microwave temperature predictions. [MRB
11:5-13 (2002); http://metaresearch.org/cosmology/BB-top-30.asp; Apeiron 9
(2002): http://redshift.vif.com/JournalFiles/V09NO2PDF/V09N2tvf.PDF.]
   The blackbody character of the microwave radiation was an important obser-
vational finding, and its discoverers deserve credit for that (despite trying to
attach religious significance to it themselves). But the claimed significance of the
finding clearly weighed heavily with the Nobel committee in deciding which discovery
was the most important. Because the committee's justification contains egregious errors
(alternative explanations work better, and true support for the Big Bang is
almost non-existent), the award tends to devalue the prestige of the entire Nobel
process and make it appear to have become just another propaganda wing of
mainstream science. So we include this Nobel Prize award as another example
of "Specious Science"-trumped-up support for failing theories advocated by
mainstream authorities to keep funding channels open.
   As if that were not bad enough, the following new results about the mi-
crowave radiation were just released in September [http://www.physorg.com/
news76314500.html; ApJ 648:176 (2006)]:
The apparent absence of shadows from galaxy clusters where shadows were expected to
be is raising new questions about the faint glow of microwave radiation once hailed as
proof that the universe was created by a 'Big Bang.' In a finding sure to cause con-
troversy, scientists at the University of Alabama in Huntsville found a lack of evidence of
shadows from 'nearby' clusters of galaxies using new, highly accurate measurements of
the cosmic microwave background. . . . Up to now, all the evidence that the micro-
wave radiation originated from as far back in time as the Big Bang fireball has been
circumstantial. However, if you see a shadow, it means the radiation comes from behind
the cluster. If you don't see a shadow, then you have something of a problem. Among the
31 clusters studied, some show a shadow effect and others do not. Taken together, the
data shows a shadow effect about one-fourth of what was predicted-an amount roughly
equal in strength to natural variations previously seen in the microwave background
across the entire sky. So either it (the microwave background) isn't coming from behind
the clusters, which means the Big Bang is blown away, or . . . there is something else
going on. Maybe the clusters themselves are microwave emitting sources. But based on
all that we know about radiation sources and halos around clusters, this kind of emission
is not expected, and it would be implausible to suggest that several clusters could all emit
microwaves at just the right frequency and intensity to match the cosmic background.

The shadow effect is better known as the Sunyaev-Zel'dovich effect, or "S-Z
effect" for short. Just over a year ago, published results of another study using
WMAP data looked for evidence of "lensing" effects that should have been seen
(but weren't) if the microwave background was a Big Bang remnant. So
evidence continues to mount that the microwave radiation is a relatively local
effect, such as Eddington's "temperature of space".

                                                                    TOM VAN FLANDERN
                                                                          Meta Research

Both of the above pieces were excerpted from Meta Research Bulletin 15:41-42.

"Stimulating Illusory Own-Body Perceptions" by Olaf Blanke, Stephanie
Ortigue, Theodor Landis, and Margitta Seeck, Nature, 419, 2002, 269-270.

"Does the Arousal System Contribute to Near Death Experience?" by Kevin
R. Nelson, Michelle Mattingly, Sherman A. Lee, and Frederick A. Schmitt.
Neurology, 66, 2006, 1003-1009.

   The first article, which has been widely reported in the lay press, described an
epileptic patient in whom out-of-body experiences (OBEs) were repeatedly elic-
ited by temporal lobe stimulation, confirming a similar report almost a half cen-
tury ago by Canadian neurosurgeon Wilder Penfield. The authors of this paper
demonstrated that an illusion of being out of the body could result from elec-
trical stimulation of the right temporal lobe, and they speculated about the role
of vestibular processing in the experience of feeling dissociated from the body.
   Many of the lay accounts of this research, however, have made the un-
warranted jump from this observation to the assumption that spontaneous out-
of-body experiences must also be illusions due to temporal lobe activity. That is
a reasonable hypothesis, particularly in light of evidence linking spontaneous
OBEs to absorption, hypnotizability, and dissociation, but it remains at this point
an untested hypothesis.
   On empirical grounds, the induced out-of-body sensations elicited by tem-
poral lobe stimulation resemble spontaneous OBEs, but are not identical to
them. For example, OBEs induced by electrical stimulation are accompanied (in
Blanke's patient and in Penfield's) by vestibular and complex somatosensory
responses, such as bizarre distortions of one's body image, which do not occur
in spontaneous OBEs. On the other hand, OBEs induced by electrical stim-
ulation do not include accurate perceptions of the environment from a spa-
tial perspective distant from the body, which are reported in some
spontaneous OBEs. Given the phenomenological differences (and the differ-
ences in psychological aftereffects for the experiencer), it is premature to
assume that the mechanism of electrically induced OBEs also applies to spon-
taneous experiences.
   Logical argument does not require that all OBEs be caused by temporal lobe
activity just because some are. In Penfield's patient, electrical stimulation of
the right temporal lobe also elicited the illusion of hearing an orchestra
playing. However, we do not conclude that all sensations of hearing an orchestra
are illusions due to temporal lobe activity; rather, we allow that some such
sensations may be accurate perceptions of a real orchestra that exists outside the
patient's brain. By the same reasoning, we cannot conclude, from the fact that
electrical stimulation of the temporal lobe can induce OBE-like illusions, that
all OBEs are illusions due to temporal lobe activity.
   The second article, also widely reported in the lay press, suggested an
association between near-death experiences (NDEs) and rapid eye movement
(REM) intrusion-the intrusion into waking consciousness of mentation typical
of REM sleep. The authors conducted a brief survey of an NDE group and
a comparison group that asked four questions about symptoms of REM
intrusion: visual or auditory hypnagogic or hypnopompic hallucinations (seeing
or hearing things as you fall asleep or wake up), sleep paralysis (finding yourself
partially awake but unable to move), and cataplexy (sudden buckling of the
legs). Their hypothesis was creative, but it is premature to draw etiological
conclusions from their correlational study.
   Their NDE sample, drawn from volunteers who shared their NDE on the
Internet, may be atypical of most NDE experiencers in their willingness to
acknowledge unusual experiences publicly. Moreover, it is plausible that sleep
paralysis questions imbedded for the NDE sample in an Internet survey of
unusual experiences would elicit more positive responses than identical
questions presented in face-to-face interviews to the control sample. Further-
more, the control group, "recruited from medical center personnel or their
contacts," may have had reservations about endorsing hallucinations and related
symptoms they would likely identify as pathological. This suspicion is bolstered
by the control group's endorsement rate of only 7% for hypnagogic hal-
lucinations, about one-fourth of that in the general population.
   Data arguing against the contribution of REM intrusion to NDE include
many features, such as fear, typical in sleep paralysis but rare in NDE, and the
occurrence of typical NDE under general anesthesia and other drugs that inhibit REM.
   Finally, a correlation between REM intrusion and NDE would not establish
that REM intrusion contributes to NDE. This study did not explore REM
intrusion that had occurred prior to the NDE. It is equally plausible that NDE
enhances subsequent REM intrusion. REM intrusion is increased in post-
traumatic stress disorder (PTSD), and PTSD symptoms are increased following
NDE. In light of these concerns, the association of REM intrusion and NDE is
still speculative, and any causal role of REM intrusion in NDEs debatable.

                                                                   BRUCE GREYSON
                                                   Division of Perceptual Studies
                                             University of Virginia Health System
                                                                   PO Box 800152
                                                  Charlottesville, VA 22908-0152

The PEAR Proposition: Scientific Study of Consciousness-Related Physical
Phenomena: A Quarter Century of Princeton Engineering Anomalies
Research produced by Strip Mind Media, 2005. 2 DVD and 1 Audio CD
multimedia set; approximately 520 minutes; $50, plus $12 shipping and
handling, www.icrl.org/contributions.php

   Robert Jahn, Brenda J. Dunne, and all who have been involved in the
programs of Princeton Engineering Anomalies Research (PEAR) are to be
commended for making the effort to provide the rest of us with an account of
their work. In anticipation of the closing of their laboratory after its more than
27-year career, Jahn and Dunne have collaborated with independent filmmaker
Aaron Michels of Strip Mind Media to produce this comprehensive video tribute
and retrospective. But this multimedia set (2 DVDs, 1 CD) is less about
nostalgia and anecdote than it is about getting their message out to as many
people as possible: PEAR needs people to pick up the torch and continue its
lines of research. On this point, the set is a success. It is not for a night of
popcorn and escape; it may not even merit the label of "edutainment." But it has
many uses for the classroom, lab and conference and as a model for those who
wish to pursue similar scientific research.
   Since 1979, PEAR has been involved in three projects: 1) experiments in
human-machine interaction; 2) experiments in remote perception and 3) the
production of analytic, conceptual and theoretical frameworks within which to
understand the anomalous phenomena that resulted from these experiments.
Much of the first DVD is devoted to explaining the nature of the first two
projects. Disc One comprises three video files and a set of data files readable on
a computer. These data files are a generous treasure trove of additional infor-
mation, containing supplementary lecture slides to a class given by Jahn and
a talk by Dobyns at an SSE meeting, as well as sixteen pdf files of published
articles by PEAR members. The first video, "PEAR Synopsis" (22:30), intro-
duces Jahn as PEAR's Project Director, Brenda Dunne as the PEAR Laboratory
Manager and York Dobyns as PEAR's Analytical Coordinator. It goes on to
explain the results of their studies and the nature of the criticism they have
received. The second track, "Lecture Introduction" (14:00), is a recording of the
first in a series of lectures Jahn gave in the spring of 2005 to undergraduate
course participants at Princeton on human-machine interaction. Jahn covers the
history of modern physics and explains the limitations science currently faces.
As he puts it: "even hoity-toity physicists are now talking about consciousness,
which used to be a dirty word thirty years ago . . . [but] we have a problem with
subjectivity; we have a problem with qualia; intention; resonance; values; truth;
love; they are human experiences, but where are they in the physical models of
the world?" The third track, "Lab Tour" (91:45), is a visual tour of the entire
lab, filmed from the point of view of a potential visitor. Dunne gives extended,
in-depth demonstrations and explanations of the human-machine interaction
experiments: "Murphy" the marble machine; the mechanical "coin-flipper," the
field REG, the automatic drum and competitive co-operator games. All of
these are variations on a theme-tests to see how well the intention of the
operator can influence the direction of the output: high vs. low, quick vs. slow,
left vs. right. Dunne also walks through the remote perception experiments with
an assistant.
    The second DVD consists of the rest of the series of Jahn's course lectures
(each lasting 70 minutes on average) and a collection of additional materials
including a lecture by York Dobyns on PEAR research design; a series of
interviews and commentary given by the various students, visitors and operators
of the lab; and a slideshow of snapshots from points in PEAR's history strung
together beneath a montage of impressions people have had of PEAR: "a
home;" "a challenge;" "sometimes beautiful, always mysterious;" "a labora-
tory, a dream, an experience . . . a bit like a family;" "PEAR sets a precedent that
most people in the scientific community are afraid to get anywhere near." Jahn's
lectures are definitely the highlight of this disc. His wit and wry eloquence make
the discussion a pleasure to follow irrespective of its intrinsically interesting
content. The last two of Jahn's lectures cover the third, theoretical dimension of
PEAR's project. Jahn speaks about the historical course of science and the need
for a major "change in the rules" of science. He reviews the various conceptual
models he and Dunne have constructed and published, such as the M5 model,
the subliminal seed regime and the "filters" concept.
   The third disc is an audio recording of a conversation between Jahn and Dunne,
a very small excerpt of which is included in the additional materials on Disc Two.
Recorded in February 2005, the conversation lasts just over an hour and moves
through such topics as uncertainty as the raw material out of which is constructed
an anomalous result; the complementarity in methodologies and disciplinary
perspectives necessary for future scientific research; the historically original and
unique expression of subjective properties in scientific and objectively de-
monstrable results and the problems this poses; and the possibility for a future
science that is able to speak of such dimensions of consciousness as love and spirit.
The conversation as a whole is an invaluable and thoughtful addition to the set.
One point of substance is worth making, however. Whereas PEAR has taken
great care to vary and document the physical parameters (e.g., the type of random
device used, spatial and temporal distance) in their study of human-machine
interactions, the examination of psychological variables (e.g., consciousness,
intention, resonance) has been studiously avoided. There has been no actual
documentation of the subjective features of participants' experiences. The reasons
for avoiding psychological variables have been given in the "Lab Tour" segment of
the multimedia set. But these are not good reasons. For example, it is mentioned
that the PEAR program does not assume that one person's consciousness is the
same as another person's consciousness, and that, therefore, PEAR considers
psychological measures to be inappropriate. On the contrary, it is precisely because
there are individual differences that psychological measures would be helpful in
disentangling the data (e.g., to shed light on the serial effect and negative psi).
   This is not a frivolous recommendation and it is made in the face of the
enormous respect that many of us have for the outstanding quality of the PEAR
work. It also grows naturally out of the purpose and direction already taken
in the research program. For example, examination of the effect of gender
differences on operator effectiveness has revealed some interesting results.
However, the causal agents in those cases are unlikely to be so simple as the
chromosomes or the physiological effects of prenatally circulating androgens,
i.e., sex, as such, but are rather the psychological characteristics (which would
need to be specified) associated with gender identity.
   And why stop there? Future researchers could build on the PEAR program by
looking at constructs such as openness, absorption, and transliminality-
psychological variables that have been implicated in the occurrence of other
anomalous phenomena. Indeed, the applicability of the M5 model depends upon
the interface between the Conscious Mind Module and the Unconscious Mind
Module. Measures of transliminality could be used to assess the permeability of
that interface, thus permitting tests of the model using the PEAR human-
machine protocols. Over the course of such investigations it may be possible to
find psychological variables that could predict the outcomes of operators' efforts
in remote viewing and human-machine interaction experiments. This could lead
to greater clarity regarding the PEAR data.
No doubt, many viewers will neither notice nor care about some of the
nuances of the production choices made-the unmotivated use of handheld
cameras and jumpcuts; mixed lighting sources; wind in the microphone;
violation of the "180-degree" axis rule-but other technical aspects might spark some
annoyance. Audio levels are inconsistent in parts, prompting one to manually
raise or lower the volume as needed. Jahn's voice is cut out mid-phrase at the
end of his introductory lecture. More importantly, though, it would have been
helpful to have included an introduction to the multimedia set so that the viewer
could properly orient herself to the material. Without an introduction, a viewer
could have the expectation that this material would have an overall narrative
structure. However, a considerable repetition of material (e.g., the history
of the PEAR research program is recounted three times on the first DVD)
supports a modular format that allows individual videos to be shown on
their own depending on the user's needs. Some of the individual videos
also need introductions. For example, the video of Dobyns' talk at an SSE
conference needs some explanation with regard to its relevance for the PEAR program.
   In all, what we have here is a concerted, well-thought-out effort to
disseminate knowledge about the PEAR project and a call for others to
continue research into these aspects of a "science of the subjective." Well in
excess of this promotion and advertising, however, is the informational and
educational value of these DVDs. Librarians, booksellers, senior high school
students and young undergraduates-many audiences can benefit from this
multimedia set. Perhaps most importantly, the videos display a sense of the
quarter-century's worth of consistent research and collaboration among Jahn,
Dunne and colleagues and will hopefully facilitate the future recognition of their
impressive and original contribution to science.

                                                           SHANNON FOSKETT
                                      Committee on Cinema and Media Studies
                                                       University of Chicago
                                                            Chicago, Illinois

                                                                IMANTS BARUŠS
                                                    Department of Psychology
                King's University College at The University of Western Ontario
                                                     London, Ontario, Canada

Gravity, Version 1.1, Various 2003-2006 by Tom van Flandern. Medium:
Windows format CD. Sequim, WA: Meta Research. $19.00. http://www.

   This CD is a collection of HTML documents, PowerPoint presentations and
various supporting material, including short animations. It is authored by Tom
van Flandern (henceforth TVF) and distributed by his company, Meta Research.
The (13) PowerPoint presentations are not very useful for discerning TVF's
message and will not be reviewed here. The index file claims there are 30
document files, though I could find only 18, 4 of which are USENET
discussions of TVF's ideas. In the remaining 14 there are informal discussions of
the Lorentzian Relativity (LR) approach to special relativity (SR), the twin
paradox, the speed of gravity, operation of the GPS system, and a shadow model
of gravity mediated by superluminal 'gravitons' and differing substantially from
general relativity (GR). Amongst these files is an article by Victor J. Slabinski
giving some calculations based upon TVF's graviton model of gravity (see
below) and a review by TVF of the book Pushing Gravity: New Perspectives on
Le Sage's Theory of Gravitation, edited by Matt Edwards and published by
Apeiron. That book has a foreword by Halton Arp, who appears to endorse TVF's
view that GR does not respect causality (see below).

                            Lorentzian Relativity
   In the essays "Lorentzian Special Relativity" (the filename on the CD is
'GSRL-dok') and "Lorentz Contraction" TVF explains the Lorentz interpreta-
tion of relativistic length contraction, namely that material bodies contract
according to their absolute speed through an ether, envisaged as a real
mechanical contraction viewed from the perspective of an absolute static frame.
Subsequently Ives contributed to the development of this approach, culminating
in a theory-LR-that is fully compatible with SR; the absolute frame is never
visible due to the non-availability of absolute clocks and rods. Though from the
SR standpoint the ether is redundant, some people find the ether-based
explanations more attractive. To some degree this seems to depend on an
ingrained prejudice as to which one is 'really true'-no doubt either explanation of
the twin paradox can be made to look contrived from the standpoint that the
other is the more correct one. TVF claims, however, that there are some
differences that go beyond predisposition and parsimony. In his article "Does
the Global Positioning System Need Relativity?" TVF states that the two
postulates of SR (equivalence of inertial frames and universal constancy of the
speed of light) have not been and cannot be tested. By this he appears to mean that
they cannot be tested independent of the Einstein clock synchronization method,
which, if true, would mean that the best one could hope for is self-consistency
(of postulates with the clock-synchronization method). As TVF points out, the
speed of light is not invariant if measured with clocks that keep universal time
(e.g. synchronize moving clocks to a clock static with respect to the Cosmic
Microwave Background (CMB)). The objection seems to be that modern
relativists unfairly bias presentation of the physics in favor of SR over LR. TVF
points out that GPS clocks are set to run more slowly than earth-bound clocks
prior to launch so that they become synchronized when in orbit, constituting an
effective realization of a ('local') universal time and providing support at least
for the utility of LR over SR.
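The pre-launch rate offset TVF refers to can be checked against the standard relativistic prediction. The sketch below (Earth and orbit parameters are assumed nominal textbook values, not taken from the CD) combines the gravitational blueshift with the orbital time dilation and recovers the familiar figure of roughly 38 microseconds per day:

```python
# Back-of-envelope estimate of the GPS satellite clock-rate offset.
# All parameter values below are assumed nominal figures.
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8         # speed of light, m/s
r_earth = 6.371e6        # mean Earth radius, m
r_gps = 2.6562e7         # GPS orbital radius (~20,200 km altitude), m

# Gravitational blueshift: the orbiting clock runs fast relative to ground.
grav = GM / c**2 * (1.0 / r_earth - 1.0 / r_gps)

# Special-relativistic time dilation: orbital speed slows the clock.
v2 = GM / r_gps          # v^2 for a circular orbit
kinematic = v2 / (2.0 * c**2)

seconds_per_day = 86400.0
net = (grav - kinematic) * seconds_per_day
print(f"net offset: {net * 1e6:.1f} microseconds/day")
```

Because the net effect makes the orbiting clock run fast, the satellite clocks are deliberately set slow before launch, exactly as the review describes.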
   TVF is less solid in his position on superluminal motion, both in the piece cited
above on the GPS and in another entitled "Is Faster-Than-Light Propagation
Allowed by the Laws of Physics?" (Some of the text is common to both). He states
that boosts from subluminal to superluminal motion are impossible in SR because
proper clocks cease to advance at light speed. He contrasts this with the possibility
that such boosts are permitted in LR because therein proper clocks have no special
status whilst the (preferred) universal clock does not behave in a singular manner.
But the singular behavior of the light-speed proper clock, as seen from a stationary
observer, does not imply singular behavior in the time as witnessed on-board the
light-speed craft. SR tells us- LR must agree-that though one can describe
the kinematics of moving objects with either clock, only the on-board proper clock
describes the passage of time in the manner experienced by on-board observers.
And those clocks tick quite normally even as the craft approaches light speed.
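The distinction can be made explicit with the standard SR time-dilation relation (quoted here for reference; this is textbook notation, not TVF's):

```latex
\frac{d\tau}{dt} \;=\; \sqrt{1 - v^{2}/c^{2}}
```

As v approaches c the stationary observer sees the moving proper clock's rate dτ/dt go to zero, but the on-board observer still experiences one second per second; the singular behavior lives in the mapping between frames, not in on-board time.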
   Having greater merit is TVF's objection to superluminal boosts in SR based
upon the fact that objects with rest mass require infinite energy to pass through
light speed, leading to the notion of inconvertibility of tardyons (or bradyons)
and tachyons1. TVF claims that this difficulty does not exist in LR. He offers as
an analogy the behavior of a propeller-driven airplane wherein any amount of
power supplied to the engines will not take it beyond the speed of sound. The
implication is that i) the relativistic mass increase is to some extent illusory; and
ii) superluminal speeds are possible and only await discovery of the appropriate
technology. This analogy perhaps warrants further thought; however, it
fails to make a discriminating case in favor of LR over SR.2 If a superluminally
propagating force is discovered, both SR and LR would require radical
reformulation, with neither necessarily the winner.
   TVF's claims in these documents that LR is superior to SR must rest,
therefore, on parsimony rather than on physically distinct predictions. At present
most favor SR since it makes fewer assumptions, even if it is less intuitive. In
the event that, for example, a superluminal force is discovered, and the new
physics is more easily understood from the standpoint of the ether concept, then
the situation will change and LR will be the more parsimonious.

                                 Shadow Gravity
  TVF is perhaps best known as a champion of a non-standard model of gravity.
The model is motivated by a particular position he takes that all forces must be
conveyed by particles having momentum, including the ordinary Newton-type
attraction of massive bodies. The reasoning behind this position is described in
detail in the documents "Gravity" and "21st Century Gravity", the latter being
the more recent3. Some background is required here: The Newtonian force of
gravity is known to be un-aberrated-the direction of attraction between two
uniformly moving bodies is in the direction of the instantaneous relative
position, regardless of the velocities involved4. This is to be contrasted with, for
example, radiation pressure, which is aberrated. At small speed v of the receiver,
the aberration of the force is at (radian) angle v/c relative to a line between the
instantaneous positions of source and receiver (c is the speed of light). These
issues are well-illustrated by several animations on the CD, and are discussed in
"21st Century Gravity" and in detail in the document "Propagation delay vs
Aberration". However, any force conveyed by classical structureless, spinless
particles in the manner supposed by TVF will be aberrated at angle v/v_g, where
v_g is the speed of the TVF graviton. (Though we are unsure how to apply the SR
transformations in the case of superluminal speeds, we can suppose that this
expression holds at least when v << v_g.) Given this, in order to be compatible
with observation, TVF claims that the graviton speed must be much greater than
that of light: v_g ≥ 2 × 10^10 c.
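The logic of the bound can be sketched compactly (an outline of the argument, not TVF's full derivation): a classical carrier of speed v_g acting on a receiver of speed v produces an aberration angle of roughly

```latex
\theta \;\approx\; \frac{v}{v_{g}}
```

so an observational upper limit on any aberration of gravity at solar-system orbital speeds translates into a lower bound v_g ≳ v/θ_max. It is this translation of null aberration measurements into a speed floor that yields the enormous quoted multiple of the speed of light.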
   In the case of Newtonian gravity, TVF calls the force carrying particles
'gravitons'. These are not the quanta of a GR radiation field; rather, they are
classical, spinless and structureless, and so are able to impart only repulsion.
Accordingly, in order to achieve attraction, TVF posits a 'shadow' theory of gravity
in the style of Georges-Louis Le Sage, 1724-1803. Therein, massive objects are
assumed to be absorbers of 'gravitons' from an omnipresent background. If one
additionally assumes that subsequent re-emission of energy (which there must be
in order that the mass of an object remains constant) is isotropic5, it then follows
that two nearby objects will shadow each other from the background and so will be
pushed towards each other. Thus one arrives at a force of attraction mediated by
particles that impart only a repulsive force. By supposing that the 'mass' of an
object is proportional to its TVF-graviton scattering cross-section, one then has
a force that is compatible with Newtonian gravity (the 1/r^2 fall-off from a point
source follows from a simple geometric consideration).
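The geometric consideration can be written out. If each body absorbs gravitons with cross-section σ from an isotropic flux Φ, then body 2 blocks a fraction of the background seen by body 1 proportional to the solid angle it subtends at distance r, giving a net push (the symbols here are illustrative, not TVF's notation):

```latex
F \;\propto\; \Phi\,\sigma_{1}\,\frac{\sigma_{2}}{4\pi r^{2}}
```

Identifying σ with mass then recovers the Newtonian form F ∝ m₁m₂/r².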
   A black hole is so called because the gravitational binding is sufficiently
strong that neither light nor an object launched at light speed can escape its pull.
Accordingly, after collapse of matter sufficiently dense to form a black hole, any
radiation contained therein will not be visible to an observer outside the 'event
horizon'. In the essay "The Speed of Gravity-What the Experiments Say" TVF
finds support for his thesis (that the force of Newtonian gravity is carried by
superluminal particles) from the fact that the static gravitational field of a black
hole remains intact after collapse. That is, the static gravitational field of a black
hole remains 'visible' to observers outside the event horizon. The same applies
to the electric field if the hole is charged, and also to the magnetic dipole field in
the event that the hole is charged and rotating. Consequently, the claim that the
static gravitational field is carried by superluminal particles must necessarily be
extended to apply to static electric and magnetic fields in general.

   TVF presents his shadow theory of gravity, and ultimately the existence of
superluminal force carriers, as mandated by the observational facts. I shall now try
to give a brief explanation of why it is not mandatory. In this context it is sufficient
to work with classical weak-field (linearized) GR, wherein the components of the
metric tensor (the gravitational potentials) obey a wave-equation sourced by stress-
energy, just as in Maxwell theory electromagnetic potentials obey a wave equation
sourced by electric current. In both cases the potentials propagate at the speed of
light. There are no other speeds appearing in these equations, so no other speeds are
expected to appear in the solutions for the potentials. Yet, solving for the potentials
of a charge or mass in uniform motion one obtains the (well-known6) result that the
force (which is computed from derivatives of the potentials) points towards the
instantaneous position of the moving body-as if there were no propagation delay!
The important thing to notice here is that conventional theories (SR/Maxwell, GR,
and QED) already predict the observed fact that uniformly moving bodies interact
along instantaneous lines of force, without invoking superluminal force carriers. It
must be concluded that the disagreement between theory and observation alleged
by TVF does not exist. The mathematics underlying this prediction permit different
interpretations, depending on the gauge (which relates potentials to forces). In the
case of electromagnetism (EM) and in the 'Lorentz gauge' only a static scalar
potential remains if the particle is static. Obviously there is no retardation, and so
the corresponding electric force vector is directed towards the charge. However, if
the particle is moving uniformly there are two non-zero potentials: the scalar
potential and the component of vector potential in the direction of motion. Both of
these enter into the expression for the electric force. Though they are both retarded,
the effects of retardation cancel in the expression for the force, with the result that
the force is directed towards the instantaneous position of the moving charge. That
is, the direction of propagation of the potentials is no longer parallel to the direction
of the force being propagated. Thus, the quandary alleged by TVF does not exist in
standard classical theory: for sources in uniform motion potentials propagate at
light speed but forces remain pointing towards the instantaneous position. Similar
reasoning applies to the Newtonian force of gravity, and a good approximation can be
extended to cover the case of non-relativistic circular motion, with predictions that
accord with observation7.
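The standard result behind this claim can be made explicit (this is textbook electrodynamics, not material drawn from the CD itself). For a charge q moving uniformly with velocity of magnitude βc, the retarded Liénard-Wiechert potentials yield the electric field

$$\mathbf{E}(\mathbf{r},t) \;=\; \frac{q}{4\pi\varepsilon_0}\,\frac{(1-\beta^{2})\,\hat{\mathbf{R}}}{R^{2}\,\bigl(1-\beta^{2}\sin^{2}\theta\bigr)^{3/2}},$$

where R is the vector from the charge's instantaneous (present) position to the field point and θ is the angle between R and the velocity. The field is exactly radial from the present position, even though the potentials from which it is derived are retarded.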
    One now sees the cause of TVF's difficulty: he is a priori committed to
describing forces as resulting from the exchange of classical particles, a position
based in part on a philosophical objection to the characterization of forces in terms
of derivatives of potentials (as discussed in the essay "Does the Universe Have
a Speed Limit?") and in particular on the notion that causes must be local and
                                   Book Reviews                                 223

Principles"). However, forces resulting from the exchange of structureless and
spinless particles must always be parallel to the direction of their 'propagation'8.
Consequently TVF's model will be incompatible with observation for as long as
these particles are limited to light speed. In at least one frame the correspondence
with classical theory and observation can be restored provided the force carriers
are supposed to travel at infinite speed. But the important question is whether the
TVF model can be made compatible with observation in other frames. I have not
looked to see whether this can be done, though it does not seem likely to me.

  This CD is an unstructured collection of related and often overlapping essays
and supporting material giving a comprehensive view of the ideas of Tom van
Flandern on gravity and Special Relativity. Though sometimes one-sided, the
essays are very clearly written. They make a reasonable case for Lorentzian
Relativity. The case made for the proposed alternative view of gravity is less
persuasive. The mathematical details are missing, and it seems unlikely that the
ideas can be made compatible with Lorentz Invariance in the weak field limit.

                                                                    MICHAEL IBISON
                                           Institute for Advanced Studies at Austin
                                                                     Austin, Texas

    E. C. G. Sudarshan famously likened the situation to two civilizations
    separated by the Himalayas, neither aware of the other's existence. In this
    context Murray Gell-Mann's 'Totalitarian Principle' that "Everything which is
    not forbidden is compulsory" would seem to apply (though the origin of this
    phrase is T. H. White's "The Once and Future King"). Not unrelated, the same
    argument should apply to the existence of magnetic monopoles, the missing
    cousins of electric charge, which, however, have not been found.
    If a superluminally propagating force is discovered, the speed of light would
    remain as a universal constant only in a more restricted sense. The speed of
    light would be the same in all inertial frames, as measured by instruments
    constructed exclusively from 'ordinary' matter held together with forces
    propagating at speed c.
    Therein, the fact that the particles exchanged are structureless and spinless is
    only implicit.
    Ignoring motion about the barycenter, the Earth is attracted to (and therefore
    orbits around) the instantaneous position of the Sun. The force of attraction is
    not directed towards the visible position of the Sun, which is about 500
    seconds old and is correspondingly aberrated.

 See, for example, the lucid discussion by Feynman in his "Lectures on
 Physics" series.
 A thorough treatment and discussion of relevant TVF claims can be found on
 the arXiv at physics/9910050.
 TVF appears not to have considered the possibility of exchange of particles with
 spin, which might, perhaps, be a more fruitful approach to achieving compati-
 bility with observation whilst remaining faithful to his philosophical position.

Chemical Heritage; quarterly publication of the Chemical Heritage Founda-
tion. Free one-year subscription available to individuals in the USA at www.
chemheritage.org/pubs/pub-nav2-subscribe.html; permanent subscription upon
donation to the Foundation.

I began to receive this magazine after I had retired and discovered the Chemical
Heritage Foundation as a good resting place for books of historical chemical
interest, including old textbooks. Since then, Chemical Heritage has become one
of my half-a-dozen favorite periodicals. The latest issue to arrive (24 #3, Fall
2006) spurred me to mention it to readers of JSE. Among the articles is one on the
development of a chemical-instrument business by an individual entrepreneur,
another tracing the history of artificial dyes, a third surveying the advances in
microscopy through the electron microscope and beyond. I was reminded of how
fascinating is the history of science, how very recent are so many of the things we
tend to take quite for granted, and the truth of an assertion that was a discussion
question on my final exam half-a-century ago: "Every advance in science is an
advance in method or technique or instrumentation" (or words to that effect).
   It occurred to me that this dictum is worth sharing with people interested in
anomalies. It perhaps explains in part why the work of the PEAR group is so
widely acknowledged as cutting edge; why Mikel Aickin's columns on statistical
techniques and approaches are so valuable; why Mike Swords, in a review of 25
years of ufology (JSE, 20.4, pp. 545-589), so regrets the lost opportunities for
intensive study of "physical trace" events. Every new approach, it seems, can take
us just a bit further before its benefits are exhausted and further work along the
same lines is just more of the same. That, I think, is the underlying (or
overarching?) reason for many of the manuscript rejections by JSE's reviewers
and editors: pieces that give personal witness to extraordinary events add nothing
to an already bulging literature of similar items.
   This recent reading also reminded me again of how grateful I've long been that
my first intellectual love was chemistry. We call it "the central science" because
that's where it lies, between physics and biology, and, along with materials science,
chemical engineering, and the like, at the foundation of most if not all modern
technologies. What I gained most from studying chemistry, I believe, is a great

penchant for empiricism, a sense that sound knowledge is empirical and that
scientific theories are temporary expedients. Which is why I have tended to be
unfriendly to manuscripts that are strictly theoretical or speculative without
appropriate evidential support; those who have novel ideas cannot usefully leave
it to others to do something concrete with them.

                                                              HENRY H. BAUER
                            Professor Emeritus of Chemistry & Science Studies
                                            Dean Emeritus of Arts & Sciences
                              Virginia Polytechnic Institute & State University
                                             www.henryhbauer.homestead.com

Anthropology of Consciousness. Editor: Charles Flowerday, University of
California Press, ISSN 1053-4202.

   This journal (AOC) covers topics that are of interest to many readers of the
JSE: altered states of consciousness, possession, trance, dissociative states and
shamanistic, mediumistic, and mystical traditions. For example, a current article
is "Advances in the Study of Spirit Experience: Drawing Together Many
Threads." "Adaptive and Maladaptive Dissociation: An Epidemiological and
Anthropological Comparison and Proposition for an Expanded Dissociation
Model" appears in an earlier issue.
   AOC is a publication of the Society for the Anthropology of Consciousness
(SAC), a Section of the American Anthropological Association (AAA). It is
necessary to join the AAA in order to join SAC, but subscriptions to AOC are
available to non-members. AOC is published semiannually.
   The SAC brochure lists further interests in addition to those above for AOC.
These include psychic phenomena, reincarnation, near-death experiences, ethno-
pharmacology, and psychopharmacology. The SAC holds an Annual Spring
Conference. The 2007 Conference will be April 4-7 in San Diego.

                                                               P. D. MONCRIEF
                                                           Memphis, Tennessee

Readers are encouraged to submit for possible inclusion here titles of articles,
preferably from peer-reviewed journals (typically ones that do not focus on
anomalies), that are relevant to issues addressed in JSE. A short commentary
should accompany each. The articles may be in any language, but the title should
be translated into English and the commentary should be in English.

Pilkington responds to Roll:

   I would like to thank Prof. William Roll for his kind words and his detailed
review of my book, The Spirit of Dr. Bindelof: The Enigma of Séance Phe-
nomena. However, I must point out several inaccuracies or misunderstandings
in it.
  1. Gil Roller's mother, Olga, was a talented singer-actress who, though
     temperamental and somewhat possessive, doted on her son. There is noth-
     ing to suggest that she was "abusive" as the review states.
  2. The only "direct-voice" communication the group obtained was in one
     of their last sessions. The earlier work with the trumpet produced sounds
     but no words. The message "Leo in" was one of the written messages.
  3. In regard to the Eisenbud/Serios work, a devised method using a prepared
     "gismo" that simulated Serios-like photos demonstrates only how some
     photos might be faked under extremely lax conditions. Since many of the
     photos were produced with an on-the-spot, researcher-made gismo or no
     gismo at all, with Serios sometimes feet from the camera, the prepared
     gismo theory falls apart.
  4. There was never any proof that Sir William Crookes had an affair with the
     teenage medium Florence Cook or that "Katie King" was a hoax. Cook's
     "confession," like that of Maggie Fox, was never proved to be valid. The
     "affair" smear was seized upon by the skeptics of the time who con-
     veniently ignored the fact that other respected and experienced researchers
     had also observed and verified her phenomena under controlled conditions.
     Sir William, thirty years after his investigations of Home and Cook, stood
     by his work and expressed confidence that science would eventually
     unravel the mysteries of psychic phenomena.

                                                         ROSEMARIE PILKINGTON
                                                           30 Donna Court #5
                                                Staten Island, New York 10314
Journal of Scientific Exploration, Vol. 21, No. 2, p. 229, 2007


Apologies once again to three sets of authors whose manuscripts are still being
delayed in publication; the backlog will likely be exhausted in the next two issues.
   Aickin's column should be taken to heart not only by those who write but also by
those who read material that dallies with statistics: the column alerts us to ways
in which we may be misled by the manner in which results are reported or not
reported. I also recommend a piece posted on Aickin's website, "The Mystogram" (at
www.ergologic.us), which sets out very nicely how misleading certain ways of
presenting data graphically can be. Some classic books on why statistics is
sometimes described as a way to lie are Best (2001, 2004) and Huff (1954/1993).
   The written version of Peter Sturrock's Dinsdale Award lecture certainly fulfils the
promise of the earlier oral presentation. It underscores the authentic understanding of
scientific activity that inspired Peter to found the Society for Scientific Exploration. I
wish I had thought of his cogently descriptive and self-explanatory terms "OK
anomalies", "not-OK anomalies", and "sleeping anomalies"; they will allow future
authors to avoid some of the jargon that has often made discussions of these matters
opaque to non-specialists. A substantive point that Peter has often emphasized, but
that I have not seen put so plainly and forcefully elsewhere, is the need to draw up
a complete set of hypotheses.
   The four research articles in this issue feature three reports that directly illustrate
why this Society and this Journal are needed: careful investigations are described on
matters whose discussion is excluded from mainstream scholarly and scientific
periodicals. The fourth article, by Dobyns with commentary by Aickin, goes to the
heart of issues of validity in studies of these kinds.
   There is some instructive to-and-fro in the Letters section, and the Book Reviews
offer, as usual, informative descriptions of and comments on a rich variety of books.
   I am already anticipating the enjoyment and stimulation to be experienced at our
next annual meeting, less than a month away as I write. Those of you who have not
yet been to one of these meetings don't know what you are missing, and you should
make every effort to attend. It's quite addictive.

1. Best, Joel. 2001. Damned lies and statistics: untangling numbers from the media, politicians, and
   activists. Berkeley: University of California Press.
2. Best, Joel. 2004. More damned lies and statistics: how numbers confuse public issues. Berkeley:
   University of California Press.
3. Huff, Darrell. 1954. How to lie with statistics. W. W. Norton (reissued 1993).
Journal of Scientific Exploration, Vol. 21, No. 2, pp. 231-239, 2007


                       Inference and Scientific Exploration

                                               MIKEL AICKIN
                                    University of Arizona, Tucson, AZ

A person closely associated with JSE posed the following question. When
people report statistics in an article, what should they include? This is an area
in which there is obviously enormous room for opinion, which immediately
induced me to try to provide an answer. As you will see, there is no perfectly
right answer, but there are several partial answers that I think frontier scientists
might want to keep in mind. Everything said here represents my views, not
necessarily those of anyone else associated with JSE, so prospective authors
should not take any of this as being prescriptive.

                                  Answer 1: Show All the Data
   Perhaps due to their academic training, many scientists seem to believe that
reduced data are more prestigious than raw data. It becomes important to them,
therefore, to show their mastery of statistical methods by reducing their data
reports to the near vanishing point, their work richly justified by obscure
symbols decorated with statistical jargon. In extreme cases the report may
consist of nothing but p-values. I will admit that when I see this, my immediate
impression is that it comes from a lack of understanding of how data analysis
works, or possibly that there is something to hide, but this may depend on my
mood of the day.
   In many cases, frontier research studies involve rather small datasets. In these
cases, it can actually take up less journal space to publish the entire dataset than
it does to roll out the statistical analysis tables. I recall reviewing an article for
another journal in which the authors filled pages and pages with largely
meaningless analysis of variance (ANOVA) tables, obviously copied literally
from the computer output, which shed almost no light on their results. When I
checked the sample sizes, I figured they could just publish their data in much
less space than the tables took up. Since some analytic tables are certainly useful
in such publications, the recommendation is probably better phrased as
publishing the entire dataset in addition to a small number of analyses.
   Another reason for full publication is that it is now easy to scan a journal page
into a document and then copy and paste it into a text document, from which it

can be read into any decent statistical analysis program. This provides an
invaluable opportunity for readers to try out their own analysis on the actual
data, in order to see whether they agree with the analyses put forward by the
authors. My experience has been that in any dataset of reasonable complexity,
there are several stories about what the data mean, which are for all intents and
purposes equally supported by the raw data. Sometimes these stories differ in
major ways, leading to incompatible interpretations of the same underlying data.
My feeling is that in these cases the scientific path is to give expression to these
stories, even though they might vary. The problems are that (1) conventional
science journals do a poor job of tolerating ambiguity in experimental results,
and (2) in order to tell the different stories you have to actually be able to find
them, and here is where having different people scrutinize the same data
becomes useful.
   Of course the third problem is that authors generally do not want their raw
data re-analyzed by others. In some cases, I believe, it is because they are not
fully confident in their own ability to analyze data, and they would prefer not to
advertise any deficiencies in their approach and technique. Although it may
seem harsh, I have little sympathy for this attitude. Impeachable analyses that
cannot be checked constitute friction in the system of science, in the sense that
they slow down real progress.
   As an example of full disclosure, I have used an article from JSE (Bunnell,
1999, Journal of Scientific Exploration, 13, 139-148) in my classes to illustrate
linear regression, in part because the author published all the data. The
experiment was designed to determine whether healing with intent had an effect
on the rate of pepsin enzyme activity in vitro. When I plotted the differences in
reaction rates (healed minus paired unhealed) against the time of the experimental
replications, it was clear that there was a downward trend in the healing effect,
a trend which all but vanished by the time of the last experiment. When I first
saw this, I suspected that what was really going on was something that we see
often in biomedical research: initially published results become successively
weaker in subsequent publications. One prominent university in the eastern US
has become famous for aggressively analyzing data, then skimming off the
interpretable and statistically significant results for prestigious publication. A
time-trend of subsequent work in the area shows the same kind of decline as in
the healing intent experiment. In this case the eastern researchers are probably
exploiting random variation for professional purposes, instead of trying to
minimize chance influences. More generally, it is usually true that medical drug
studies overestimate the beneficial effects of the drugs, in that later studies, in
more representative patients, almost automatically show poorer results. In any
case, when I presented the healer example to a class of alternative medicine
practitioners, one of them pointed out that it was common knowledge that even
very successful healers tend to get "tired" when they are called on to repeat
results, thereby becoming less effective over time, and so my exploitation-of-
chance interpretation was by no means the only explanation. I think this

                          Fig. 1. Histograms of two samples.

illustrates the value of full publication, because I was able to see something that
the author had missed, and by showing it to others I was able to see an
interpretation that I had missed.
   There is another very important point in this example. No individual study
ever pronounces the final word on a scientific issue. It is cumulative experiments
within a program of research, and replication by others, that finally lead to
scientific acceptance. It then becomes critical to learn from prior experimen-
tation how to design future experiments, both to avoid problems and to correct
for difficulties that were previously found but unanticipated. The nearly
universal rule today seems to be that authors publish almost no information that
would help anyone else to design future research. Whether this is culture or
perversity is irrelevant, because simply making the raw data available would go
a long way toward solving the problem.

                              Answer 2: Graphics
   Of course in many cases Answer 1 is of no use, due to the size of the dataset.
It has always seemed to me that graphical options are then rather attractive.
Perhaps the easiest graphic is the histogram. Figure 1 shows two histograms of
a measurement carried out in two groups. It clearly shows that Group 1 has
larger values than Group 2, and in particular, the extreme positive values in
Group 1 are not found in Group 2. This is an important fact, one that is hard to
see without a histogram. I will, however, point out that while histograms are
much used in elementary statistics texts, and although good data analysts employ
them frequently, they appear rather rarely in scientific publications. My opinion
is that histograms usually show how much natural variability there is in various
measurements, and scientists often regard this as an embarrassment, which they
should therefore avoid publishing. Because I think natural variability is
important, I encourage the publication of histograms.
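As a concrete sketch of the idea, here is a minimal text histogram in Python; the numbers are invented stand-ins for the two groups of Figure 1, not data from any study discussed here.

```python
from collections import Counter

def text_histogram(values, bin_width=1.0, lo=0.0):
    """Bin values into intervals of width bin_width and count each bin."""
    counts = Counter(lo + bin_width * int((v - lo) // bin_width) for v in values)
    return dict(sorted(counts.items()))

# Invented data standing in for the two groups of Figure 1.
group1 = [2.1, 2.4, 2.9, 3.3, 3.8, 4.5, 5.2]   # includes extreme positive values
group2 = [0.8, 1.1, 1.4, 1.6, 1.9, 2.2, 2.5]

for name, grp in (("Group 1", group1), ("Group 2", group2)):
    print(name)
    for b, count in text_histogram(grp).items():
        print(f"  [{b:.1f}, {b + 1.0:.1f}): " + "#" * count)
```

Even this crude display makes the point in the text: the extreme positive values of Group 1 have no counterpart in Group 2.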
   Another way to portray the same data is in a dotplot (Figure 2). This simply
amounts to turning the histograms on their sides, but it facilitates visual
comparison, and often works well for multiple groups. It is perhaps worth

                          Fig. 2. Dotplots of the data in Fig. 1.

pointing out that one could reconstruct the original data (at least approximately)
from the dotplot. When the dotplot is too messy, one can turn to the boxplot
(Figure 3). Here the upper and lower edges of the box are at the 75th and 25th
percentiles of the data, and the horizontal bar in the box denotes the median. The

                    Fig. 3. Box-and-whisker plots of the data in Fig. 1.

extensions above and below the box are calculated as follows. Go out to the most
extreme value that is within 1.5 box-heights from the upper (or lower) edge of
the box and draw the horizontal line (then connect it to the box). Any values
beyond this are denoted by circles, since they are conventionally interpreted
to be outliers. The boxplots also show that Group 1 tends to have larger values
than Group 2, although some of the detail is obscured. I have found many
publications in which the interpretation of the boxplot is mangled (the upper and
lower extensions are often said to be 95th and 5th percentiles, and at least one
statistics package actually computes them this way).
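The construction just described can be sketched in a few lines of Python (the standard library's quantile routine uses one of several percentile conventions in circulation, so the box edges may differ slightly from other software):

```python
import statistics

def boxplot_summary(data):
    """Tukey-style boxplot numbers: quartiles, whisker ends, and outliers."""
    q1, median, q3 = statistics.quantiles(data, n=4)  # 25th, 50th, 75th percentiles
    iqr = q3 - q1                                     # the "box height"
    inside = [x for x in data
              if q1 - 1.5 * iqr <= x <= q3 + 1.5 * iqr]
    whiskers = (min(inside), max(inside))   # most extreme values within 1.5 IQR
    outliers = [x for x in data if x < whiskers[0] or x > whiskers[1]]
    return {"q1": q1, "median": median, "q3": q3,
            "whiskers": whiskers, "outliers": outliers}

print(boxplot_summary([1, 2, 3, 4, 5, 6, 7, 8, 9, 100]))
```

Note that the whiskers run to the most extreme data values within the fences, not to the fences themselves, and certainly not to the 5th and 95th percentiles.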
   Graphics can sometimes be very useful for portraying values of individual
experimental units as they evolve over time. It is customary to plot the time-
trajectories of the means in these cases, but what usually happens is that almost
no individual time trajectory ever follows the pattern of the mean trajectory.
Thus, stories about what happened to individuals come to be confounded with
stories about how the group mean varied. In some of our studies we have
segmented our sample into those who did well, moderately, or poorly, and have
shown the individual trajectories in three panels. By cutting down the number of
individuals, the distracting variation is reduced. Again, I am not recommending
graphics as a complete replacement for analysis and tables, but rather as a useful
adjunct to give the reader a sense of what happened in a more immediate way.

                    Answer 3: Small Number-Summaries
   If one wanted, the boxplot could actually be reduced to five numbers: median,
25th and 75th percentiles, and the upper and lower effective ranges (the
horizontal lines on the extensions). In data presentations, however, one more
frequently sees means and standard deviations (SDs), and occasionally ranges
(either the largest and smallest values, or the difference between them).
   The SD deserves some comment. Although any probability distribution has
a SD, it is most interpretable when the distribution is at least approximately
symmetric and sort of lump-shaped. Thus, giving the SD (as opposed to the 25th
and 75th percentiles) for a skewed distribution is not enormously helpful. From
the mathematical viewpoint, the SD is important because it plays a role in the
large-sample distributions of many statistical estimators. The most familiar is the
mean (or average) of a sample. The sample mean has a probability distribution
(due to the fact that the sample was obtained by some chance mechanism), and
under the usual assumptions its distribution has a standard deviation, which is
SD/√n (where n is the size of the sample and SD is the standard deviation of the
population from which the sample is selected). This leads to a terminological
problem; there is a standard deviation both for the population distribution (SD)
and for the probability distribution of the mean (SD/√n). Early on in the
statistical literature, these were distinguished by calling SD/√n the standard
error, abbreviated SE. Most definitions you can find for the SE give it this way.
Now the plot thickens. There are lots of estimators other than the mean that have

large-sample distributions that are approximately Normal and whose standard
deviations are thus important for statistical inference. I use SDE (standard
deviation of the estimate) for these standard deviations of estimates. The mean is
only one example, where SE and SDE are the same. But in other cases (like
regression coefficients, for example), the formula for the SDE is not the same as
for the SE. Overwhelmingly, people still use SE for SDE, even though it is
technically incorrect. On almost any output from a computer package what is
labeled SE is actually SDE.
   The practical problem is, which should be reported, SD or SDE? I think
the answer depends on the purpose. If you want to characterize how variable the
population is with respect to what you are measuring, then SD will do it (if the
distribution of that measurement is roughly symmetric). If, on the other hand,
you want to say how precise some estimate (like a mean or regression coeffi-
cient) is, for the purpose of statistical inference, then SDE will do it. This is
because the large-sample distribution of these estimates is Normal, which is
symmetric. In some cases both purposes are important, and so reporting both
is useful, especially if they are labeled correctly.
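The distinction amounts to a single line of arithmetic; a sketch with invented numbers:

```python
import math
import statistics

sample = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2]   # invented measurements
n = len(sample)

sd = statistics.stdev(sample)   # how variable the measurements themselves are
se = sd / math.sqrt(n)          # how precise the sample mean is: SE = SD/sqrt(n)

print(f"mean = {statistics.mean(sample):.4f}, SD = {sd:.4f}, SE = {se:.4f}")
```

For estimators other than the mean the SDE has its own formula, but the reporting rule is the same: SD describes the population, SDE (however the package labels it) describes the precision of the estimate.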
   In a different but related direction, one often sees experiments in which
changes are the issue. We have two groups, we treat them differently, then we
look to see how the changes in the two groups compare. This paradigm is
extremely common in biomedicine. It is, however, almost never true, even in
biomedicine, that researchers report a complete set of statistics. One complete
set includes mean before intervention, mean after intervention, SD before
intervention, SD after intervention, and correlation between pre- and post-
intervention measurements. There are other sets of five estimates that are
equivalent to these, but for simplicity it would seem that reporting at least these
five should be a bare minimum. Another measure (which can be derived from
the above five) is the SD of a change score. I think this is important because
people tend to give too much attention to the mean change, as if everyone
changed according to the mean, and the SD of change appropriately points out
that changes did actually vary, and by how much.
   If we go beyond means and SDs, in general statisticians use mathematical
models to analyze more complex data. The idea is that the model incorporates
features that we expect to be present, and so an analysis guided by the model
will be appropriate for the actual data. The issue is not (as commonly thought)
that the model must be a perfectly correct accounting for how the data were
generated. It only needs to embody the most important features, which would
result in biases if they were ignored. One example is when one has measure-
ments on multiple people, but there is reason to suspect that the measures will be
correlated (sometimes influences happen to groups, not just to individuals in an
independent fashion). In cases such as these there are models that allow for
the intercorrelation, and models that do not can produce biased estimates, or
overstate the precision of estimates, or both. In all of these cases the model
has one or more parameters (unknown constants) that reflect something of

importance in the experiment. Indeed, designing a good analysis depends on
setting up parameters to capture the effects of interest. Then, the computer
output estimates the parameters and provides their SDEs. Reporting the
estimates and their SDEs then seems like the most sensible way to proceed.
Examples of this type are multiple regression (for measured outcomes), Poisson
regression (for counts of events), exponential regression (for times until a target
event happens), and logistic regression (for yes/no outcomes). In each case, the
parameter of interest intends to measure how much influence some explanatory
variable has on the outcome, if all other explanatory variables could (magically)
be held fixed.
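For the simplest case, ordinary least-squares regression of y on a single x, the parameter estimate and its SDE can be written out directly; this bare-bones sketch is for illustration only, since any statistics package will report the same two numbers (usually labeling the SDE as 'SE'):

```python
import math
import statistics

def ols_slope_with_sde(x, y):
    """Slope of the least-squares line and the SDE of that estimate."""
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    sde = math.sqrt((sse / (n - 2)) / sxx)   # standard deviation of the estimate
    return slope, sde

# Invented data: y rises with x, plus a little noise.
slope, sde = ols_slope_with_sde([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1])
print(f"slope = {slope:.4f}, SDE = {sde:.4f}")
```

Reporting the pair (estimate, SDE) conveys both the size of the effect and the precision with which it was estimated, which is exactly what a bare p-value does not.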
   Parameter estimates and the SDEs may seem remote from the actual under-
lying data, because they concentrate the information in the experiment so com-
pletely on the theoretically important issues. Therefore, it does not seem out of
place to ask for both some representation of the actual data (as I have suggested
above) and estimates of crucial parameters.
   While there is no single definitive reference for statistical summaries and
presentations, one that is well worth considering is Statistical Rules of Thumb, by
Gerald van Belle (Wiley, 2002). I very strenuously disagree with some of his
advice, but on balance the good ideas seem to me to much outweigh the not so
good ideas.

                               Methods to Avoid
   Because inference is an activity that is widely dispersed across multiple
disciplines, various conventions tend to arise in different areas, take hold, and
become dominant. Some of these are actually contrary to the spirit of inference,
but since they never go through any formal review process, they establish
themselves nonetheless.
   One of these arcane practices is the over-reliance on p-values and "statistical
significance". I have seen data presentations in which only p-values were
presented, with no actual estimates of the parameters from the underlying
models, which were designed to capture the effects of interest. Actually, I have
seen a lot of such presentations. At one point in my career I began supervising
a group of data analysts who would not report parameter estimates to the
scientists who paid them unless the corresponding p-values were below 0.05. It
is now a widely established practice in conventional journals to require the
proliferate display of p-values, irrespective of whether they are appropriate or
useful. This abuse naturally leads to others, such as reporting a parameter
estimate and then decorating it with one or more asterisks to indicate whether its
p-value is below 0.05, 0.01, or 0.001 (or some such scheme). One can only
imagine that the next step will be to eliminate all estimates and just report
the asterisks.
   Connected to this is the reporting of numbers that are at most ancillary to the
analysis. Examples are t-statistics and F-statistics, which are in fact merely
intermediate figures necessary to compute p-values. The fact that computer
output displays them seems to promote the idea that they are important and that
they belong in publications. I see them only as numbers that take up space in
data tables, space that is more deserved by other numbers.
   I have said above that whenever possible one should show all data, or graph-
ical representations of all the data, along with something like the five-number
summary. In this latter case, a method to be avoided is the reporting of the one-
number summary, a mean or median with no indication of variability. Equally
unrecommended is the three-number summary consisting of the median and the
smallest and largest values. The problem is that the extreme values are in-
fluenced both by variability and the size of the sample, whereas the sample size
measures the size of the sample and the SD or 25th and 75th percentiles measure
variability.
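A five-number summary of the kind recommended above is easy to produce; here is a minimal sketch in Python, in which the data are invented for illustration and the interpolating quartile method is an assumption:

```python
import statistics

def five_number_summary(x):
    """Minimum, lower quartile, median, upper quartile, maximum."""
    xs = sorted(x)
    # "inclusive" quartiles interpolate between order statistics,
    # matching the common linear-interpolation definition.
    q1, med, q3 = statistics.quantiles(xs, n=4, method="inclusive")
    return {"min": xs[0], "q1": q1, "median": med, "q3": q3, "max": xs[-1]}

# Invented illustrative measurements
data = [2.1, 3.4, 3.9, 4.2, 4.4, 4.7, 5.0, 5.3, 5.9, 6.8, 9.5]
summary = five_number_summary(data)
```

Reporting these five numbers, together with the sample size, conveys location, spread, and skewness; none of that survives in a bare mean.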
   Despite my preference for graphics, there is one extremely frequent graphic
that has always seemed ludicrous to me. Instead of a boxplot, a solid bar (as in
a histogram) is erected to connect zero with the mean of some measurement,
and then a little TV antenna (like the upper part of a boxplot) is stuck onto the
top. I have called this a "mystogram" because it does more to mystify than
clarify. Readers who enjoy statistical satire may want to visit my website (www.
ergologic.us) to read my entire (multiply rejected) article on the mystogram.
   Another large area in which I think researchers have been misguided is the
ANOVA. This encompasses a large number of techniques, many of which were
invented by R. A. Fisher, the premier statistician of the 20th century. Social
scientists are especially highly trained in this arena, and so they tend to want to
look at every problem of statistical analysis through an ANOVA lens. From my
standpoint, their chief failing is that they spend far too much time citing
significant p-values for things such as "main effects", "interactions", and
"time-by-group interactions", without giving proper attention to what actual
effects have been estimated. Thus, a significant "time-by-group interaction" is
very frequently taken to be a successful result, without consideration of whether
the actual group effects over time are beneficial or uninterpretable and whether
this procedure has reasonable statistical power (the conventional computation
does not take into account the direction or pattern of the results). Even more
damaging, in most approaches to ANOVA the numbers of individuals in
experimental cells determine the definition of the parameters, so that experi-
ments with different cell numbers are actually incomparable, because they apply
to different parameters. In virtually every case, there is a (generalized) re-
gression model that will do far better than the ANOVA model it replaces. See an
earlier column of mine for suggestions on assessing change using a regression
model (Aickin, M. [2004]. Journal of Scientific Exploration, 18, 361-367).
Along this line, it may be worth pointing out that John Tukey (inventor of the
boxplot) once formed a "committee for the suppression of the correlation
coefficient". This was because the correlation coefficient removes the scales on
which the underlying variables are measured, whereas including them (using
regression coefficients, for example) is the real purpose of science. Evidently
Tukey's committee did not succeed.
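The point about regression is that it returns the actual effect estimates that an ANOVA table's p-values leave unreported. A minimal sketch, in which the two-group longitudinal design, the true coefficients, and the noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented design: two groups of 20 subjects, five visits each.
time = np.tile(np.arange(5.0), 40)        # visit number, 0..4
group = np.repeat([0.0, 1.0], 100)        # treatment indicator
true_interaction = 0.5                    # extra slope for group 1, units per visit
y = (1.0 + 0.2 * time + 0.3 * group
     + true_interaction * time * group
     + rng.normal(0.0, 0.5, size=time.size))

# Ordinary least squares for y = b0 + b1*time + b2*group + b3*time*group.
# The coefficient b3 is the "time-by-group interaction" as an interpretable
# quantity: the difference in slopes, in outcome units per visit.
X = np.column_stack([np.ones_like(time), time, group, time * group])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Reporting the interaction coefficient (with its standard error) says how much faster one group changes than the other; a bare asterisk on an interaction p-value cannot.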
   Yet another strange practice that has arisen lately is the reporting of "effect
sizes". This is an estimate of an effect on some measurement divided by the SD
of that measurement (for example, difference between post- and pre-means,
divided by SD of the pre-measurement). The problem with this practice is
immediately obvious: the pre-measurement SD will vary from population to
population, so that one is actually measuring the effect with a ruler that changes
its length, depending on the study. How this ever came to be considered
a sensible strategy completely escapes me. (In fact, "effect size" is an invention
to produce plausible sample-size calculations in grant applications when one
does not know the relevant SD; in other words, it covers up ignorance, which is
completely unnecessary after one knows the SD.) The situation is made
considerably more murky by the fact that in some models (unlike ordinary
regression) the SD is functionally related to the effect, rendering "effect size" all
but meaningless for the purposes of inference.
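The changing-ruler complaint is easy to make concrete; the raw effect and the two population SDs below are invented for illustration:

```python
# One and the same raw improvement of 5 outcome units ...
raw_effect = 5.0

# ... divided by the pre-measurement SD of two different populations
# (assumed values): the "effect size" disagrees between them even
# though the effect itself is identical.
sd_population_a = 10.0
sd_population_b = 25.0

effect_size_a = raw_effect / sd_population_a
effect_size_b = raw_effect / sd_population_b
```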

                               Why Is This Hard?
   One of the advantages of working in a narrow scientific area is that
researchers can agree on their basic definitions, what paradigms they use to view
experiments and reality, and how they communicate with each other. Statistical
inference belongs to all empirical sciences in which there is any appreciable
amount of unexplained variability. Thus there are many voices competing to
have their views of statistical inference taken seriously, and perhaps even to
dominate practice in other fields. It is, therefore, no wonder that there will be
a certain amount of confusion about the question, "What statistical data should
I present?"
   I have not exhausted the problems or possibilities here, but the question is
a good one, and probably I will have an opportunity in other columns to continue
to give my thoughts on it. I re-emphasize that the opinions here are mine, and not
necessarily shared by anyone else connected with JSE.

   The problem of deciding what data or statistics to present in a research article
is not easy to resolve. Above all, honesty is paramount, and whatever fairly
reflects what was found is on the right track. Sometimes it will be better to
present all the data, other times careful graphical displays will be best, and
inevitably at other times only succinct, jejune summaries can be provided. In the
end, good inference tells a story about what happened, and perhaps how or why
it happened, and so long as the numbers selected for presentation are motivated
by doing this well, they will be appropriate.
Journal of Scientific Exploration, Vol. 21, No. 2, pp. 241-260, 2007             0892-3310/07

              The Role of Anomalies in Scientific Research

                                    Peter A. Sturrock
                               Center for Space Science and Astrophysics
                           Stanford University, MC 4060, Stanford, CA 94305
                                     e-mail: sturrock@stanford.edu

      Abstract-Anomalies play a key role in science, in calling into question
      some established belief: an anomaly is an anomaly only with respect to some
      hypothesis, theory, or belief system. Some anomalies (OK Anomalies) are
      greeted with interest and investigated vigorously, some (Not-OK Anomalies)
      are avoided or viewed with suspicion, and others (Sleeping Anomalies) may
      for some time go unnoticed. In this article, anomalies are viewed from the
      perspective of scientific inference. This requires that we compare the anomaly
      with a logically complete set of hypotheses, and that assessments of the evidence
      for the anomaly, and of its compatibility (or incompatibility) with various hy-
      potheses, be expressed in terms of probabilities. Some anomalies may present
      a challenge to our "model of reality." (These are normally viewed as "Not-OK.")
      Identifying our "standard model of reality" makes it possible (and necessary) to
      identify alternative models so as to form a logically complete set of hypotheses.
      Keywords: anomalies, inference, probabilities

The word "anomaly," according to Webster, is derived from the Greek "an"
[="not"] and "homalos" [="even"] and signifies a "deviation from the common
rule," or "something out of keeping, especially with accepted notions of fitness
or order." In referring to an anomaly in science, we think first of the former,
manifestly intellectual definition-a result in scientific research that does not
conform to expectations based on the prevalent theory. However, members of
this Society will be well aware that anomalies also have a sociological import-
they may be "out of keeping with accepted notions of fitness or order." Some
anomalies may be viewed primarily as intellectual challenges, but other anom-
alies may be in part a political challenge, in that the weight given to an anom-
aly depends on the status within the scientific community of the person
proposing the anomaly. (Yes, dear reader, there are heresies and heretics in
science, as well as in religion [Mellone, 1959].)
   Anomalies should be the life-blood of science. Niels Bohr once said that
"progress in science is impossible without a paradox," and Richard Feynman
(1956) once remarked that "The thing that doesn't fit is the thing that is most
interesting." More recently, Jahn and Dunne (1997) have written ". . . good
science, of any topics, cannot turn away from anomalies; they are the most
precious resource, however unrefined, for its future growth and refinement."
   The first thing to note about "anomaly" is that it is a relative concept, not an
absolute concept. A result is an anomaly only with respect to a given theory or
hypothesis. In scientific research, it would be an experimental or observational
result that is not in accord with current theory. Therein lies its importance. An
anomaly provides a test of a theory. As Feynman's remark implies, it is much
more important to search for facts that do not agree with current theory than to
find further facts that do agree with that theory. If a certain fact, which is
incompatible with a given theory, can be firmly established, then that theory
must be modified or abandoned.
   We can consider one or two historical examples. In 1909, Ernest Rutherford
gave one of his assistants, E. Marsden, the task of studying the scattering of
alpha particles by a gold foil (Whittaker, 1953: 20). According to the prevalent
"plum pudding" model, an atom was composed of electrons immersed in a blob
of positively charged matter. According to this model, alpha particles should
suffer only slight deflection in traveling through gold foil. Rutherford was
astonished to find that some alpha particles were backscattered from the foil. He
said "It was as if you fired a 15-inch shell at a piece of tissue paper and it came
back and hit you." It took Rutherford over a year to digest the implications of
that anomaly. He finally concluded, correctly, that the positive charge in an atom
is concentrated in a very small space at the center of the atom.
   But not everyone responds to an anomaly in such a direct and productive
manner. Roentgen recognized an anomaly when a piece of paper painted with
barium platino-cyanide fluoresced when current was passed through an adjacent
Crookes tube (Whittaker, 1951: 357). However, that discovery was missed by
several physicists. For instance, Frederick Smith, an Oxford physicist, when told
by an assistant that photographic plates kept near a Crookes tube were found to
be fogged, told his assistant to keep them somewhere else. (Whittaker, 1951:
358). [We may all smile on reading this, but can every one of us be quite certain
that he or she is not now failing to recognize an anomaly in his or her research?]
   Different anomalies evoke very different responses from the scientific com-
munity. I suggest that there are at least three different categories of anomalies:
"OK Anomalies," "Not-OK Anomalies," and "Sleeping Anomalies."
   An "OK Anomaly" is one that has been discovered by an established
scientist, preferably using expensive equipment, and which appears to be an
anomaly that scientists can cope with.
   A "Not-OK Anomaly" is one that is not obviously resolvable and presents an
unwelcome challenge to established scientists, possibly (but not necessarily)
because it has been discovered by a non-scientist.
   A "Sleeping Anomaly" is one that has not yet been recognized as an anomaly.
   As examples of OK anomalies, I cite two from astronomy: (1) Quasars are
objects that, when first identified by Maarten Schmidt of the Mount Wilson and
Palomar Observatories, were anomalous in that they appeared to be star-like but
had redshifts similar to, or larger than, those of typical galaxies (see, for
instance, Shu [1982: 315 et seq.]). Quasars have subsequently been determined
to be distant galaxies containing a massive black hole. (2) Pulsars are radio
sources that pulse with periods of seconds or less (see, for instance, Shu [1982:
p. 131 et seq.]). When discovered in 1967 they were an anomaly since all
previously known radio sources were essentially constant or varied only errat-
ically on much longer timescales. Pulsars have subsequently been determined
to be rotating neutron stars with very strong magnetic fields.
   It is worth noting that claims of both of these astronomical discoveries were
made by established astronomers using powerful optical or radio telescopes. The
discovery of pulsars led to Nobel Prizes for Professor Anthony Hewish, who
was in charge of the research project, and for Professor Martin Ryle, director
of the observatory (the Mullard Radio Observatory at Cambridge, England).
However, the initial discovery was actually made by Miss Jocelyn Bell, then
a research student. It is also interesting to note that the first records of pulsars
were kept secret, due initially to the possibility that they may have been
emissions from intelligent life forms in other "solar" systems, but later to some
other motive-possibly noble, possibly "Nobel."
   Both anomalies were viewed as due more to limitations in our astronomical
knowledge than to errors in astronomical or physical theory.
   A classical example of a Not-OK Anomaly is that of meteorites. These objects
fall from the sky and may be discovered by any citizen, educated or not.
Moreover, no specialized equipment is necessary. They are now known to enter
the atmosphere from outer space, originating in a vast cloud of such objects in
the solar system. However, their nature was unknown until the 18th Century,
when E. F. F. Chladni published a small book on them in 1794. Twenty-two
years earlier, in 1772, French academicians had ruled that these objects could
not have fallen from the sky, since there are no stones in the sky to fall.
According to Sears (1978), "The scientific community . . . made merry over the
credulity of people who imagined the stones to have fallen from the heavens."
[Over which genuine topics are present-day scientists now making merry?]
The authenticity of meteorite falls was established by the distinguished
scientist Jean-Baptiste Biot, who was sent by the President of the National
Institute to investigate a particularly large meteorite fall (over 3,000 stony
meteorites) that occurred at L'Aigle on April 25, 1803.
   A list of current Not-OK Anomalies contains topics that are generally
dismissed as bogus by the scientific community: precognition, telepathy, psy-
chokinesis, reincarnation, "flying saucers," etc., etc. The distinguished English
astrophysicist Malcolm Longair (1984) warns young scientists that "it is dif-
ficult to be taken very seriously as a scientist if you mix up real science with
quasi-scientific pursuits such as spoon-bending, parapsychology, unidentified
flying objects, extrasensory perception, etc." However, the list also contains
topics studied by scientists with a good track record of scientific research, such
as the proposal by Halton Arp that the redshift of quasars may contain
a contribution other than the usual cosmological redshift (see, for instance, Arp
and Sulentic [1985]), and the proposal by Martin Fleischman that nuclear
processes may be influenced by electrochemical processes (Fleischman et al.,
1989; see also Storms, 1996).
   The close geometrical match between the west coast of Africa and the east
coast of South America may be regarded as a "Sleeping Anomaly." Although
this fact had been noted by Francis Bacon, Antonio Snider-Pellegrini, Benjamin
Franklin and others, it was not generally recognized as a challenge to un-
derstanding until Alfred Wegener pointed out, early in the 20th Century, that
geologic features of the West African Coast would accurately line up with
similar features on the East Coast of South America when the two continents
were juxtaposed and proposed an interpretation. Wegener attributed the cor-
respondence to the breakup of one large continent (referred to as "Pangaea")
and the progressive separation of the parts by a process he called "continental
drift." This proposal was ridiculed for many years. The distinguished geo-
physicist Sir Harold Jeffreys once remarked to me, with a smile, that there
is no force inside the Earth that is strong enough to move continents. Members
of one scientific community (in this case, geophysicists) seldom welcome with
applause a proposal made by a scientist from another community. (Wegener
was a distinguished scientist, but he was a meteorologist, not a geophysicist.)
The tide turned when geophysicists found that the magnetic signatures were
effectively mirror-imaged on the two sides of the Mid-Atlantic Ridge, showing
that it was the spreading center and providing a mechanism for what became the
new theory of plate tectonics.
   We now know that the scientific community was in error in its response to the
challenge of meteorites and to that of continental configurations. Can we be sure
that scientists of the 21st century are not making similar errors in their responses
to some current phenomena? To pursue this question, we need to give a little
thought to the nature of science. Richard Feynman (1956) remarked succinctly
that "The essence of science is doubt." Three and a half centuries earlier, Francis
Bacon (1603) had written "If a man will begin with certainties, he shall end
with doubts; but if he will be content to begin with doubts, he shall end with
certainties." These precepts are fully in accord with the rules of scientific in-
ference, as developed by Jeffreys (1973), Good (1950), Jaynes (2004), and others.
According to this theory, it is advisable to proceed along the following lines:
  We should
   1. Think in terms of probabilities, not certainties.
   2. Consider a complete set of hypotheses, not a single hypothesis.
   3. Examine our initial beliefs, and represent them by "prior probabilities."
   4. List the relevant items of evidence, and estimate the credibility of each.
   5. In this way, estimate the "weight of evidence" that each item gives for
      each hypothesis.
   6. Combine the "weights of evidence" with the prior probabilities to arrive
      at our post-probabilities.
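The six steps amount to multiplying prior probabilities by per-hypothesis weights of evidence and renormalizing over the complete set. A minimal sketch with invented numbers (this is bare Bayes' rule, not the fuller procedure of Sturrock [1973, 1994]):

```python
def posterior(priors, weights):
    """Combine prior probabilities with per-hypothesis weights of
    evidence, then renormalize over the complete hypothesis set."""
    combined = [p * w for p, w in zip(priors, weights)]
    total = sum(combined)
    return [c / total for c in combined]

# Three mutually exclusive hypotheses, equal priors, and invented
# illustrative weights of evidence for a single item of evidence.
priors = [1 / 3, 1 / 3, 1 / 3]
weights = [0.01, 1.00, 0.02]
post = posterior(priors, weights)
```

Because the hypothesis set is logically complete, the post-probabilities always sum to one; an anomaly cannot leave us with "no admissible hypothesis."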
We adopt a logically complete set of hypotheses to be sure that the anomaly can
be compatible with one of our considered hypotheses. The epigram "Anything
that does happen, can happen" is attributed to the distinguished astronomer and
physicist Robert Leighton. Jeffreys (1961) wrote "It is sometimes considered
a paradox that the answer depends not only on the observations but on the
question; it should be a platitude."
   In order to clarify this procedure, it is helpful to consider a simple example.
The above procedure need not be restricted to scientific questions. We consider,
as an example, the authorship of the "Shakespeare" plays. It is surely an
anomaly that the plays show a detailed knowledge of Italy, yet Shakespeare
never left England. We may analyze this anomaly as follows, using a procedure
based on the principles of scientific inference that have been described
elsewhere (Sturrock, 1973, 1994). First, we need a complete and mutually ex-
clusive set of hypotheses. We adopt the following:
H1: The author of the Shakespeare plays was William Shakespeare;
H2: The author of the Shakespeare plays was Edward de Vere, Earl of Oxford;
H3: The author of the Shakespeare plays was somebody else.
We begin by giving these three hypotheses equal prior probabilities:

                    P(H1|Z) = P(H2|Z) = P(H3|Z) = 1/3,

where Z denotes "zero-order" or background information. One can usually
ignore the term Z, unless one runs into difficulties (such as finding that none of
the specified hypotheses is compatible with the evidence), in which case one
needs to consider what is really being implied by this term.
  We now need one or more "items" that can be part of an interface between
data and theory. For present purposes, we adopt just one item, which comprises
two exclusive statements:
F1: The author had first-hand knowledge of Italy;
F2: The author did not have first-hand knowledge of Italy.
We need to assign probabilities to these statements based on the hypotheses, and
based on the relevant evidence (the plays).
   We know that Shakespeare had no first-hand knowledge of Italy and that de
Vere did. Whether a hypothetical "somebody else" had knowledge of Italy is
problematical. Ordinary actors and theater managers would not have had that
knowledge. On the other hand, some noblemen and perhaps some merchants may
have had extensive stays in Italy. Let us suppose that there is a one percent chance
that the unknown author might have had first-hand knowledge of Italy; then

                 P(F1|H1,Z) = 0,      P(F2|H1,Z) = 1,
                 P(F1|H2,Z) = 1,      P(F2|H2,Z) = 0,
                 P(F1|H3,Z) = 0.01,   P(F2|H3,Z) = 0.99.
  Finally, we need to assess these options on the evidence of the plays.
Personally, I find it hard to believe that a playwright who had no first-hand
knowledge of Italy would have had the knowledge and motivation to write in
such detail about Italy, but I will allow that possibility a chance of one percent.

Then some formal manipulations (Sturrock, 1973, 1994; the relevant equations
are reproduced in Appendix A) lead to the following post-probabilities that
combine our thoughts about the hypotheses and about the relevant evidence:

                               P(H1|F,Z) = 0.005,
                               P(H2|F,Z) = 0.980,
                               P(H3|F,Z) = 0.015.

We see that this item of evidence strongly favors de Vere, and even favors
"somebody else" over Shakespeare.
   Of course, this is just one piece of evidence, and most people will start out
with the presumption that the "Shakespeare" plays were in fact written by
Shakespeare. Let us suppose that we start out feeling 99 percent confident that
the author was indeed Shakespeare, but allowing a 0.5 percent chance that it may
have been de Vere and a 0.5 percent chance that it may have been somebody else. Then,
if (using the procedure given in the articles just cited) we fold this initial
assessment together with the above assessment that was based on the probable
familiarity with Italy, we arrive at
                              P(H1|F,Z) = 0.074,
                              P(H2|F,Z) = 0.803,
                              P(H3|F,Z) = 0.123.
We see that de Vere still comes out ahead, and Shakespeare still comes in last.
   We are really interested in the application of these procedures to anomalies in
scientific research. Hopefully (but not necessarily) these assessments can be
somewhat more objective (or, as Ziman [1978] would say, "consensible") in the
realm of science than in the realm of historical literary speculation. However, the
merit of this procedure is not so much that it leads to definite answers, as that it
will typically lead to definite questions.
   I now wish to describe briefly three anomalies that have turned up in my own
scientific research in recent years. One of these comes from "mainstream"
science, and the other two are from topics that Longair (1984) warns young
scientists not to get involved in.
   In considering an anomalous experimental result or observation which ap-
pears to contravene current theory, we need to be able to estimate the probability
that the result could have occurred "by chance" on the basis of that theory. One
way to do this is to consider a wide range of similar results so that we can say
        Fig. 1. The power spectrum to be expected from the middle-C note on a piano.

"If this particular result occurred by chance, then many other similar results
should also have occurred by chance" or "very few similar results would
have occurred by chance." That is to say, it is helpful to have some way of
"scanning" a wide range of possibilities, of which the result in question is
simply a special case.
   Indeed, an anomaly in scientific research is typically an unexpected result or
observation that follows, or is accompanied by, many results or observations that
occur as expected. For instance, in the Rutherford experiment mentioned earlier,
for every alpha particle that was backscattered by the gold foil, many more were
only slightly deflected. Sometimes the anomalies are associated with particular
values of some parameter. In this case, it is obviously helpful to "scan" the
result as a function of that parameter.
   One scanning procedure that is often helpful is known as "power spectrum
analysis." (See, for instance, Jenkins & Watts [1968]). One searches for periodic
modulations of a measurement as a function of frequency. If one were to record
the sound of middle-C on a piano and then carry out a power-spectrum analysis,
one would find a peak in the display corresponding to a frequency of 262 cycles
per second, as in Figure 1. There would also be peaks corresponding to
"harmonics" of this frequency, at 524 cycles per second, 786 cycles per second,
etc. Of course, these patterns are not anomalies; they are expected. However,

Fig. 2.   The power spectrum that might be found, allowing for harmonics and a possible 120-hertz
          background hum.

one might find one piano in a thousand for which the power spectrum also shows
a peak at 120 cycles per second, as in Figure 2. This would be regarded as an
anomaly, until one found that the piano contained a piece of electrical equip-
ment, when it would no longer be an anomaly. (The sound produced by 60
cycles per second electrical power is predominantly at 120 cycles per second.)
Now suppose that the recording is not that of a musical instrument but the noise
of a large room full of chattering people. Then one is likely to obtain a very
ragged power spectrum, as in Figure 3. In this case, it would be an anomaly to
find a sharp peak, as in Figure 4. On investigation, one might find that a security
alarm had been triggered somewhere in the building. Once that was discovered,
the peak would no longer be an anomaly.
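The piano example can be reproduced numerically: synthesize one second of middle C (262 cycles per second, with weaker harmonics at 524 and 786) and read the fundamental off the power spectrum. The sampling rate and harmonic amplitudes below are assumptions:

```python
import numpy as np

fs = 8192                              # assumed sampling rate, samples per second
t = np.arange(0, 1.0, 1.0 / fs)        # one second of signal
# Middle C plus two weaker harmonics; amplitudes are invented
x = (np.sin(2 * np.pi * 262 * t)
     + 0.5 * np.sin(2 * np.pi * 524 * t)
     + 0.25 * np.sin(2 * np.pi * 786 * t))

power = np.abs(np.fft.rfft(x)) ** 2            # power spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)      # axis in cycles per second
peak = freqs[np.argmax(power)]                 # strongest spectral line
```

An unexpected extra line, such as the 120 cycles-per-second hum described in the text, would stand out against this known pattern in exactly the same display.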
   This brings me to some of my recent research. I have been studying
measurements of the solar neutrino flux. The process producing neutrinos (the
thermonuclear conversion of hydrogen into helium, etc.) is generally believed to
be constant, and the experimental teams analyze their data on the assumption
that the flux is in fact constant. The only expected periodic modulation would be
a small variation, with a period of 1 year, due to the fact that the Sun-Earth
distance varies in the course of a year. This modulation has been detected.
   However, I have been interested in the possibility that the physical pro-
   Fig. 3. The power spectrum that might be found from a recording made in a noisy room.

cesses generating the solar neutrino flux may not be spherically symmetric. In
this case, one might find fluctuations corresponding to the frequency of solar
rotation as seen from Earth, about 13.5 cycles per year, corresponding to
a period of 27 days. Many other forms of radiation from the Sun (X-rays, etc.) do
vary in this way, since they are influenced by the Sun's magnetic field, which
typically has a very complex structure. According to some theories (Chauhan &
Pulido, 2005), neutrinos might be influenced by a magnetic field, in which
case the measured neutrino flux might be found to vary with a period of about
27 days.
   The Super-Kamiokande collaboration has made available an extensive com-
pilation of solar neutrino measurements (Fukuda et al., 2003). My colleagues
and I have carried out several power-spectrum analyses of this dataset, the most
recent of which (Sturrock & Scargle, 2006) is shown in Figure 5. This does not
show a peak at the solar rotation frequency, but neither does the power spectrum
of the disk-center magnetic field. One of the main features in the magnetic-field
power spectrum is a peak at the second harmonic of the rotation frequency
(three times the rotation frequency) at 39.60 ± 0.42 yr^-1. The power spectrum
of the solar neutrino data shows a peak in this frequency band, at 39.28 yr^-1. It
shows a stronger peak at 9.43 yr^-1 which is due, we believe, to a mode of
Fig. 4. The power spectrum that might be found from a recording made in a noisy room that shows
        an anomalous sound at 900 hertz.

internal oscillation of the Sun. These features in the power spectrum of solar
neutrino data represent an anomaly since, on the basis of standard neutrino
theory, the flux should be constant and the power spectrum featureless.
   The next example is closer to the interests of this society. I have carried out an
analysis of a catalog of 12,100 UFO reports taken from a catalog compiled by
Larry Hatch (Available at: http://www.larryhatch.net; Sturrock, 2003). Figure 6
shows a power spectrum of the events. We see that there is a prominent peak at
1 yr^-1, which is not unexpected, since we spend more time outdoors in summer
than in winter. Hence this peak does not tell us anything new: it is certainly not
"anomalous." However, we can carry out an analysis that is a little more
complicated, which searches for evidence of a rotating pattern of modulation. A
modulation associated with the location of the stars will show up as a peak with
frequency 1 yr^-1. A rotation with the same frequency, but in the opposite
direction, would show up as a peak with frequency -1 yr^-1. The result of this
"running-wave" analysis is shown in Figure 7. We see that there are exceedingly
strong peaks for forward waves with frequencies 1 yr^-1 and 2 yr^-1, and only
weaker peaks for reverse waves at those frequencies. This result provides very
strong evidence for what is called a "local sidereal time" effect: the probability
of a UFO event is related to which stars are overhead at the time of the event.
             Fig. 5. Power spectrum of Super-Kamiokande solar neutrino data.

Unless one can show that most UFO events are due to misperceptions of certain
astronomical objects, in a restricted range of local sidereal time, this comprises
an anomaly.
   The third example is taken from research recently carried out in
collaboration with James Spottiswoode (Sturrock & Spottiswoode, in press). We
have applied the two procedures used in the two previous examples to a catalog
of 3,325 free-response anomalous cognition experiments. The results of the
simple power-spectrum analysis are shown in Figure 8. The strongest feature in
this power spectrum occurs at ν = 24.65 yr⁻¹, quite close to twice the synodic
lunar frequency (24.74 yr⁻¹). When the data are analyzed in terms of rotating
frames, as in our UFO analysis, we obtain the result shown in Figure 9. In this
case, the reverse-wave peak is stronger than the forward-wave peak, but this is
consistent with an association of the results of the experiments with the
position of the moon. Hence this analysis provides quite strong evidence for an
anomaly: a lunar effect on anomalous cognition experiments.
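As a quick check on the quoted frequencies, twice the synodic lunar frequency can be recomputed from two standard astronomical constants (the constants are supplied here; the text gives only the resulting 24.74 yr⁻¹):

```python
# Mean synodic (new-moon-to-new-moon) month and tropical year, in days.
SYNODIC_MONTH_DAYS = 29.530589
TROPICAL_YEAR_DAYS = 365.2422

synodic_freq = TROPICAL_YEAR_DAYS / SYNODIC_MONTH_DAYS  # lunar cycles per year
double_synodic = 2 * synodic_freq

print(round(synodic_freq, 2))    # 12.37 yr^-1
print(round(double_synodic, 2))  # 24.74 yr^-1, cf. the observed peak at 24.65 yr^-1
```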
   There is of course a vast literature of studies of psi and UFO data, and of many
other similarly curious "anomalous phenomena" (see, for instance, The Source-
book Project, available at: http://www.science-frontiers.com/sourcebk.htm). Paul
Kurtz (1983), a philosopher, refers to a wide range of such phenomena as
           Fig. 6. Power spectrum formed from a catalog of 12,100 UFO events (frequency in yr⁻¹).

"paranormal." He writes "[The term] 'paranormal' . . . is applied to anomalous
data that supposedly transcend the limits of existing science and are due to
unknown or hidden causes. The paranormal world view . . . contravenes the model
of the universe derived from the physical and behavioral sciences." Kurtz's
approach is prototypical of the self-styled "skeptical" community, which I prefer
to refer to as the "pseudo-skeptical" community.
   However, practicing scientists do not regard our current scientific knowledge
as absolute and immutable. Sagan (1973) wrote "I would like to return to the
question of possible new or alternative laws of physics. [Maybe] there are new
laws of nature to be found even under familiar circumstances. I think it is a kind
of intellectual chauvinism to assume that all the laws of physics have been
discovered by the year of our meeting." The Russian physicist Vitaly
Ginzburg (1973) wrote "Science of course never ends. There will always be new
laws and clarifications. When we say some law of physics is valid, we always
bear in mind that it is true within certain limits of applicability." And Edgar
Mitchell (1993) wrote "There are no unnatural or supernatural phenomena, only
very large gaps in our knowledge of what is natural. . . . We should strive to fill
those gaps of ignorance."
   Hence, a major challenge in the study of anomalous phenomena is to identify

Fig. 7. Running-wave power spectrum formed from a catalog of 12,100 UFO events (frequency in
        yr⁻¹). Forward waves are shown with positive power and reverse waves with the negative of
        the power.

the basic assumptions of our current "weltanschauung," "world view," or
"model of reality" with which these phenomena are incompatible. This
important question could and should be the topic of a major research project. In
the present discourse, I look only for a very simple model.
   It is my impression that the following three hypotheses form the basis for the
usual rejection of evidence for such phenomena:
      Any topic which is incompatible with physical theory, as it is now known,
      is impossible.
      Consciousness is simply a brain activity.
      No "superior beings" have any influence, or have had any influence, on
      events and developments on Earth.
  I suggest that, if we wish to study such phenomena, we should consider not
only these three assumptions, but also the possibility that one or more of these
assumptions may be incorrect.
   To formalize this procedure, we may introduce the following three pairs of
hypotheses:
   Ordinary Physics (OP). The world is governed by (and restricted by) laws of
physics as they are now known.
254                                     P. A. Sturrock

Fig. 8.    Power spectrum for the frequency range 0–30 yr⁻¹, formed from Z-values derived from
           3,325 free-response anomalous cognition experiments.

   Extraordinary Physics (EP). The world is also subject to laws of physics of
which we now have no knowledge and which make possible phenomena that are
now inconceivable.
   Ordinary Consciousness (OC). Consciousness is a brain activity and is
therefore localized in time and space.
   Extraordinary Consciousness (EC). Consciousness has an existence in-
dependent of the brain and is not limited in either time or space.
   No Intelligent Intervention (NII). There is not now, and never has been, any
intervention by non-human intelligent beings in events and developments on
Earth.
   Intelligent Intervention (II). There is or has been intervention by non-human
intelligent beings in events and developments on Earth.
   In terms of this set of options, the "Standard Model of Reality" comprises OP,
OC, and NII. In almost all scientific research, the standard model of reality is
built into the zero-order information Z. In the current study, it is likely that there
are other assumptions built into Z, which are unrecognized and therefore
unquestioned. For instance, there may be phenomena which are real, but which
cannot be verbalized, for which it would therefore be difficult to enunciate the
underlying hypotheses.

Fig. 9. Running-wave power spectrum for the frequency range 20–30 yr⁻¹, formed from Z-values
        derived from 3,325 free-response anomalous cognition experiments. Forward waves are
        shown with positive power and reverse waves with the negative of the power.

   Now that we have identified what we regard as the "standard model," we can
immediately list seven non-standard models of reality: EP, OC, NII; OP, EC,
NII; OP, OC, II; EP, EC, NII; EP, OC, II; OP, EC, II; and EP, EC, II. It is
convenient to refer to these as "Model of Reality Version 000," etc., or, briefly,
"MOR000," etc. Then the set of models becomes what is outlined in Table 1.

                                        TABLE 1
                                     Models of Reality

                                 MOR000 = {OP, OC, NII}
                                 MOR100 = {EP, OC, NII}
                                 MOR010 = {OP, EC, NII}
                                 MOR001 = {OP, OC, II}
                                 MOR110 = {EP, EC, NII}
                                 MOR101 = {EP, OC, II}
                                 MOR011 = {OP, EC, II}
                                 MOR111 = {EP, EC, II}
Note: MOR = Model of Reality; OP = Ordinary Physics; OC = Ordinary Consciousness; NII = No
Intelligent Intervention; EP = Extraordinary Physics; EC = Extraordinary Consciousness; II =
Intelligent Intervention.

   If we wish to study anomalous phenomena according to the principles of
scientific inference, we should consider all eight of these possible models of
reality, not just the standard model.
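Since the eight models are just the Cartesian product of the three binary choices, the labels of Table 1 can be generated mechanically; a small sketch (the variable names are my own):

```python
from itertools import product

# The three binary choices, in the order used by the MORxyz labels:
# digit 0 selects the "ordinary"/null option, digit 1 its alternative.
PAIRS = (("OP", "EP"), ("OC", "EC"), ("NII", "II"))

models = {
    "MOR" + "".join(str(b) for b in bits): tuple(pair[b] for pair, b in zip(PAIRS, bits))
    for bits in product((0, 1), repeat=3)
}

print(models["MOR000"])  # ('OP', 'OC', 'NII') -- the standard model of reality
print(models["MOR111"])  # ('EP', 'EC', 'II')
```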
   An important question that now arises is whether we should regard these three
choices as independent, or whether the probability of one choice is likely to
depend on one or two of the other choices. My own view is that the choice OP/
EP will have an important influence on the other two choices. If "extraordinary
consciousness" is real, it probably cannot be understood in terms of ordinary
physics, so the prior probability that we assign to OC or EC will depend on
whether we are associating it with OP or EP. Similarly, one possibility for
intelligent intervention is that beings from another "solar system" are visiting or
have visited Earth. Travel from other stars seems virtually impossible if we think
in terms of ordinary physics but, for all we know, it may be comparatively
easy in terms of some form of extraordinary physics.
   If we regard the OC/EC choice and the NII/II choice as independent of each
other, then we can proceed to organize the eight prior probabilities as follows:
We first assign prior probabilities to OP and EP: P(OP|Z) and P(EP|Z). Note
that, in setting these prior probabilities, we should ignore all the experimental
and observational results that support OP: since EP must contain OP as a special
case, it follows that any result that is consistent with OP will also be consistent
with EP.
   We next consider the choice OC/EC, but relate the prior probabilities to our
choice of OP or EP: P(OC|OP, Z), P(OC|EP, Z), P(EC|OP, Z), P(EC|EP, Z).
Based on OP, the probability of OC will be high, and that of EC will be small.
Based on EP, the probabilities of OC and EC may be comparable. Similar
considerations apply to the choice NII/II.
   Based on these assumptions, the prior probabilities for the eight possible
models of reality may be listed as follows:

               P(MOR000) = P(OP, OC, NII) = P(OC|OP, Z) ×
                           P(NII|OP, Z) × P(OP|Z),                            (6)
               P(MOR100) = P(EP, OC, NII) = P(OC|EP, Z) ×
                           P(NII|EP, Z) × P(EP|Z),                            (7)

   The prior probabilities of our eight possible models of reality are formed
from combinations of the following ten probabilities: P(OP|Z) and P(EP|Z);
P(OC|OP, Z), P(OC|EP, Z), P(EC|OP, Z), and P(EC|EP, Z); and P(NII|OP, Z),
P(NII|EP, Z), P(II|OP, Z), and P(II|EP, Z).
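Under the independence assumptions just described, the products of Eqs. (6) and (7), and their six siblings, can be sketched as follows; the numerical assignments are purely illustrative, since the essay deliberately leaves them to the reader.

```python
def model_priors(p_ep, p_ec_given, p_ii_given):
    """Prior probability of each model of reality, per Eqs. (6)-(7).

    Assumes, as in the text, that the OC/EC and NII/II choices are independent
    of each other given the OP/EP choice.
      p_ep        -- P(EP|Z); P(OP|Z) = 1 - p_ep
      p_ec_given  -- {"OP": P(EC|OP,Z), "EP": P(EC|EP,Z)}
      p_ii_given  -- {"OP": P(II|OP,Z), "EP": P(II|EP,Z)}
    """
    priors = {}
    for i, phys in enumerate(("OP", "EP")):
        p_phys = (1 - p_ep, p_ep)[i]
        for j in (0, 1):  # 0 = OC, 1 = EC
            p_cons = (1 - p_ec_given[phys], p_ec_given[phys])[j]
            for k in (0, 1):  # 0 = NII, 1 = II
                p_int = (1 - p_ii_given[phys], p_ii_given[phys])[k]
                priors[f"MOR{i}{j}{k}"] = p_phys * p_cons * p_int
    return priors

# Illustrative numbers only -- not assignments endorsed by the essay.
priors = model_priors(p_ep=0.5,
                      p_ec_given={"OP": 0.01, "EP": 0.3},
                      p_ii_given={"OP": 0.01, "EP": 0.3})
print(round(priors["MOR000"], 5))  # 0.49005 = P(OC|OP,Z) P(NII|OP,Z) P(OP|Z)
```

Because the three choices are exhaustive and exclusive pair by pair, the eight priors always sum to one, whatever values are assigned.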
   However, we are most interested in estimates of P(EP|Z) and of the posterior
probabilities for EC and II, which are given by

               P(EC|Z) = P(EC|EP, Z) P(EP|Z) + P(EC|OP, Z) P(OP|Z),
               P(II|Z) = P(II|EP, Z) P(EP|Z) + P(II|OP, Z) P(OP|Z).

   In considering our assessment of EP, we should bear in mind that our basic
laws of motion and gravity are only 300 years old and that relativity and
quantum mechanics are only 100 years old. What is the probability that we have
already discovered virtually all of physics? What is the probability that, even if
we continue research for the next million years, there will be no further devel-
opments as revolutionary as relativity and quantum mechanics? It would be hard
to justify a very small value for P(EP|Z). Indeed, it would not be unreasonable to
adopt a value larger than 0.5.
   On the other hand, most scientists are probably of the opinion that P(EC|OP,
Z) and P(II|OP, Z) are small. Hence, the above equations may be approximated as

               P(EC|Z) ≈ P(EC|EP, Z) P(EP|Z),
               P(II|Z) ≈ P(II|EP, Z) P(EP|Z).

    Appendix B lists the prior probability and the two conditional probabilities
that need to be assigned in order to arrive at estimates of the probability of the
most interesting non-standard models of reality.
   The first terms on the right-hand side of these equations represent assessments
of very speculative possibilities on the basis of unknown physics. To give these
quantities very small or very large values would be an act of faith. It appears
that, on the basis of our present knowledge (and ignorance), we cannot assert
that extraordinary consciousness and intelligent intervention are either very
likely or very unlikely.
   The key assessment is the prior probability for EP. If this is considered to be
very small, then all the models of reality will be unlikely, except the standard
model. However, if the prior probability for EP is thought to be non-negligible,
then (since EP is beyond our present comprehension) assessments of the prior
probabilities for the four models that involve EP are likely to be non-negligible.
   In order to obtain an informed range of estimates of P(OP|Z) and P(EP|Z), it
would perhaps be reasonable to consult a number of theoretical physicists, but
it is not at all obvious to which intellectual communities one should turn for
estimates of P(OC|OP, Z), P(OC|EP, Z), P(EC|OP, Z), and P(EC|EP, Z) or of
P(NII|OP, Z), P(NII|EP, Z), P(II|OP, Z), and P(II|EP, Z). However, the
assessment of the prior probabilities for the eight possible models of reality is
not essential for progress to be made. More important is the change in our
assessments of the probabilities of these models when we examine the relevant
evidence, since scientists should be able to agree on the weight of evidence, even
if they differ widely in their prior probabilities. The crucial point is that we will

no longer refuse to examine a phenomenon because it appears to contravene the
standard model. We will simply estimate the "weight" which each piece of
evidence contributes to each of the eight possible models (and any other models
that might be proposed) and then keep track of the accumulated weight of
evidence for each model of reality.
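The bookkeeping suggested here, accumulating a "weight" of evidence per model, can be sketched in Good's (1950) log-likelihood sense; the two-model setting and all numbers below are invented for illustration.

```python
import math

def accumulate(log_weights, item_logliks):
    """Add one item's log-likelihood under each model to its running weight.

    "Weight of evidence" is used in Good's (1950) sense: differences between
    accumulated log-likelihoods are log Bayes factors between models.
    """
    return {m: log_weights[m] + item_logliks[m] for m in log_weights}

def posterior(priors, log_weights):
    """Combine prior probabilities with the accumulated weights of evidence."""
    unnorm = {m: p * math.exp(log_weights[m]) for m, p in priors.items()}
    total = sum(unnorm.values())
    return {m: u / total for m, u in unnorm.items()}

# Two observers with very different priors examine the same three items of
# evidence, each item twice as likely under model B as under model A:
weights = {"A": 0.0, "B": 0.0}
for _ in range(3):
    weights = accumulate(weights, {"A": math.log(0.5), "B": math.log(1.0)})

skeptic = posterior({"A": 0.99, "B": 0.01}, weights)
believer = posterior({"A": 0.50, "B": 0.50}, weights)
```

The two observers end with different posteriors but have registered the same weight of evidence (a Bayes factor of 2³ = 8 in favor of B), which is the point made in the text: scientists can agree on the weight of evidence even while differing widely in their priors.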
   Note that this approach leads to a different interpretation of the terms
"paranormal" and "super-natural." These terms can now simply be interpreted
as indicating that certain phenomena are (or appear to be) incompatible with the
standard model of reality. It is then an open question, to be investigated, whether
these phenomena are compatible with a non-standard model of reality. This
interpretation can therefore lead to productive scientific research, whereas the
standard approach of the pseudo-skeptical community leads to very little.

¹ This essay is based on my Dinsdale Lecture presented on June 8, 2006, to
  the 25th Annual Meeting of the Society for Scientific Exploration in Orem, Utah.

  This research has profited from conversations with many colleagues, but I
wish to acknowledge my special debts to Henry Bauer, John Derr, Federico
Faggin, Bernie Haisch, Bob Jahn, Jeff Scargle, Jacques Vallee, and Ron
Westrum, and to the late Ed Jaynes, the late Marcello Truzzi, and the late Ian

Arp, H., & Sulentic, J. W. (1985). Analysis of groups of galaxies with accurate redshifts. Astrophysical
    Journal, 291, 88.
Bacon, F. (1603). Advancement of Learning (1603 ed.). Book 1.
Chauhan, B. C., & Pulido, J. (2005). LMA and sterile neutrinos: A case for resonance spin flavour
    precession? Journal of High Energy Physics, 6, 8.
Feynman, R. (1956). The relation of science and religion. Lecture Series on Engineering and Science,
Fleischmann, M., Pons, S., & Hawkins, M. (1989). Electrochemically induced nuclear fusion of
    deuterium. Journal of Electroanalytical Chemistry, 261, 301.
Fukuda, Y. (2003). Status of solar neutrino observation at Super-Kamiokande. Nuclear Instruments
    and Methods in Physics Research, Section A, 503, 114.
Ginzburg, V. L. (1973). Communication With Extraterrestrial Intelligence. MIT Press, p. 208.
Good, I. J. (1950). Probability and the Weighing of Evidence. London: Griffin Press.
Jahn, R. G., & Dunne, B. J. (1997). Science of the subjective. Journal of Scientific Exploration,
    11, 201.
Jaynes, E. T. (2004). Probability Theory: The Logic of Science. Cambridge University Press.
Jeffreys, H. (1961). Theory of Probability (3rd ed.). Oxford University Press.
Jeffreys, H. (1973). Scientific Inference (3rd ed.). Cambridge University Press.

Jenkins, G. M., & Watts, D. (1968). Spectral Analysis and its Applications. San Francisco: Holden-Day.
Kurtz, P. (1983). Science and the Paranormal: Probing the Existence of the Supernatural. Scribners,
    p. vii.
Longair, M. S. (1984). Theoretical Concepts in Physics. Cambridge University Press, p. 23.
Mellone, S. H. (1959). Heresies. In Encyclopedia Britannica (Vol. 11, p. 499).
Mitchell, E. (1993). Newsletter of the Institute of Noetic Sciences. 14 November, 1993.
Sagan, C. (1973). Communication With Extraterrestrial Intelligence. MIT Press, p. 206.
Sears, D. W. (1978). The Origin and Nature of Meteorites. Oxford University Press, p. 2.
Shu, F. H. (1982). The Physical Universe. An Introduction to Astronomy. Mill Valley, CA: University
   Science Books.
Storms, E. (1996). Review of the "cold fusion" effect. Journal of Scientific Exploration, 10, 185.
Sturrock, P. A. (1973). Evaluation of astrophysical hypotheses. Astrophysical Journal, 182, 569.
Sturrock, P. A. (1994). Applied scientific inference. Journal of Scientific Exploration, 8, 491.
Sturrock, P. A. (2003). Time-series analysis of a catalog of UFO events: Evidence of a local-
    sidereal-time modulation. Journal of Scientific Exploration, 18, 399.
Sturrock, P. A. (2006). Power-spectrum analysis of Super-Kamiokande solar neutrino data, taking into
    account asymmetry in the error estimates. Solar Physics, 237, 1.
Sturrock, P. A., & Scargle, J. D. (2006). Solar Physics, 237, 1.
Sturrock, P. A., & Spottiswoode, S. J. P. (2007). Time-series power spectrum analysis of performance
    in free response anomalous cognition experiments. Journal of Scientific Exploration, 21, 47-66.
Whittaker, E. (1951). A History of the Theories of Aether and Electricity (2nd ed., Vol. 1). London:
    Thomas Nelson and Sons.
Whittaker, E. (1953). A History of the Theories of Aether and Electricity (2nd ed., Vol. 2). London:
    Thomas Nelson and Sons.
Ziman, J. (1978). Reliable Knowledge. An Exploration of the Grounds for Belief in Science.
    Cambridge University Press.

                                        APPENDIX A
   This appendix reproduces equations derived elsewhere (Sturrock, 1973, 1994)
and which are used in this essay.
   We consider a complete set of hypotheses Hi, i = 1, . . ., I and assign them
prior probabilities P(H1|Z), . . ., P(HI|Z), where Z indicates "zero-order" or
background information. For each item of evidence E we introduce a set of
statements Sn, n = 1, . . ., N and then estimate the probability that each statement
follows from each hypothesis, P(Sn|Hi, Z), and from the evidence E, P(Sn|E, Z).
Then the posterior probabilities are given by

   If we need to combine results from more than one item of evidence, say
E1, . . ., EA, the result is given by

                                 APPENDIX B
   This appendix lists the principal prior probabilities that one needs to estimate
in order to assign prior probabilities to the most interesting models of reality
specified in this essay.
   The first estimate one needs to specify is the probability that there is
extraordinary physics still to be discovered:

                               P(EP|Z) = ___,

where EP indicates Extraordinary Physics and Z indicates "zero-order" or
background information. (Remember that each probability estimate must be
larger than zero and less than unity.)
   Then, if we ignore the probability that Extraordinary Consciousness (EC) and
Intelligent Intervention (II) are compatible with Ordinary Physics (OP), the
important estimates to make are

                               P(EC|EP, Z) = ___,
                               P(II|EP, Z) = ___.

   Then the posterior probabilities, related to the most interesting alternative
models of reality, are given to good approximation by

          P(EP, EC, NII|Z) ≈ P(EC|EP, Z) P(EP|Z) = ___,
          P(EP, OC, II|Z) ≈ P(II|EP, Z) P(EP|Z) = ___,
          P(EP, EC, II|Z) ≈ P(EC|EP, Z) P(II|EP, Z) P(EP|Z) = ___,

where NII indicates No Intelligent Intervention and OC indicates Ordinary
Consciousness.
                            The Yantra Experiment

                    Princeton Engineering Anomalies Research Laboratory
                          School of Engineering and Applied Science
                        Princeton University, Princeton NJ 08544-5263
                                e-mail: rgjahn@princeton.edu

    Abstract: Qualitative and analytical observations of consciousness-related
    anomalies in random event generator (REG)-based experiments suggest that
    direct conscious feedback regarding experimental performance may impede rather
    than facilitate anomalous effects. The Yantra experiment tests this hypothesis by
    providing no outcome-related feedback to the operator. Feedback is replaced by
    a visual and auditory environment expected to be conducive to anomalous
    performance. This environment allows a number of options which operators can
    adjust to suit their personal taste, or to explore alternative conditions. The lack of
    feedback intrinsic to the program is reinforced by an experimental policy that
    forbids an operator to receive feedback before completing 10 experimental
    sessions or declaring an inability to return for further data collection.
       Data analysis assumes that individual operators perform idiosyncratically; that
    populations distinguished by gender and previous experimental experience may
    perform differently; and that operator performance may depend on the
    environmental parameters of the protocol. All of these dependencies are found
     to exist. The most general test for distinctive individual behavior, a χ²
     constructed from the Z-scores for each segment in which intention, operator, and
     environment are held constant, produces χ² = 629.05 on 558 degrees of freedom,
     p = 0.020. The effect appears to be asymmetric and driven by changes in the
    high intention data alone. Gender differences in differential success rates are
    comparable to those seen in earlier experiments and are statistically significant
    (Z = 2.213). Analysis of subgroups distinguished by both gender and previous
    experience shows that previously experienced female operators produce
    individually consistent performances regardless of the imposed environment
    (although variable between individuals), while all other operator subpopulations
    show strong sensitivity to environmental conditions. Overall, the effect size, as
    measured by local mean shifts, is approximately four to five times that seen in
    earlier REG experiments, suggesting that similar no-feedback, environmentally
    supportive protocols may be fruitful for future research.
    Keywords: human-machine anomalies; consciousness-related anomalies;
              PEAR; REG; psychological correlates; subjectivity; individual
                                     1. Introduction
The Princeton Engineering Anomalies Research (PEAR) program has studied the
effect of human intention on microelectronic random event generators (REGs) in
experiments dating back to 1979 (Jahn & Dunne, 2005; Jahn et al., 1987, 1997,
262                             Y. H. Dobyns et al.

2000a). Various modes of performance-related feedback have been used over that
time. In the original experiment, feedback was automatic unless the operator went to
some effort to avoid it, since a large and conspicuous front panel on the REG device
displayed both the current trial value and a running average for the current collection
of trials. Moreover, since a final run mean was displayed for the operator to record in
a logbook, the "no-feedback" condition was maintained only for the duration of the
current trial sequence. Subsequent remote experiments with the same equipment
were run in a genuine no-feedback condition. Alternative modes of graphical
feedback were introduced in the late 1980s, and proved popular with operators.
   The initial introduction of graphical feedback seemed not to have significant
consequences for the effect size, except for some operators on an individual basis
(Nelson et al., 2000). Later experiments, however, suggested that this might not be
a universal generalization. An experiment designed specifically for its appealing
feedback produced no significant results by overall outcome measures (Jahn et al.,
2000b), while in the extensive replication effort of the IGPP consortium, graphical
feedback (chosen as the default mode) actually seemed counterproductive, with
two of the three participating laboratories reporting statistically significant
differences of performance in which graphical feedback proved inferior to other
feedback modes (Jahn et al., 2000a, table M.2). In addition, anecdotal reports
indicated that at least some operators found outcome-related feedback, with its
implications of evaluation and judgment, to be objectionable and preferred to work
without feedback of any kind.
   These considerations led to the design of an experiment that would provide no
feedback regarding experimental outcomes. This design was facilitated by the
availability of a new generation of REG sources without front panel displays. With
the computer screen relieved of the necessity for a feedback display, it was decided
to use the screen to present an image that it was hoped would be conducive to
anomalous performance. The specific choice of image was motivated by the
experience of the "ArtREG" experiment (Jahn et al., 2000b). In that experiment,
operators were presented with two superimposed pictures, initially in a "double
exposure," with half the pixels on the screen coming from each picture. The
balance between the two images varied under the control of an REG input, and
the operator's intentional task was to make the chosen target image dominate
the screen. While the results of the experiment as a whole were non-significant,
there seemed to be a substantial effect size associated with a subset of the images.
These images were deemed "numinous," containing significant religious or
spiritual imagery from a number of different traditions. After some deliberation
it was decided to use a mandala design known as the "Sri Yantra" (see Figure 1) as
a numinous visual display to accompany the new experiment.

                          2. The Yantra Environments
   The environmental parameters presented by the Yantra experiment include
options for both visual and auditory components intended to facilitate

                            Fig. 1. The Sri Yantra mandala.

a meditative state of mind and suppress analytical focus. The Sri Yantra proper is
the pattern of interlocking triangles at the core of Figure 1, a symbol which is
supposed to represent the interpenetration of spirit and the material world. The
remainder of the design consists of a series of traditional framing elements
commonly used to surround the Sri Yantra, which also appear frequently in other
mandala designs.
   Operators have three choices of visual environment. The Sri Yantra mandala
can be presented as shown in Figure 1, as a static picture on the computer
monitor (in white lines on a blue background screen). Alternatively, sectors
defined by the various radial boundaries (the surrounding box, the internal
circles bounding the "lotus blossom" patterns, and the Sri Yantra itself) can be
presented in differing background colors, with the colormap changing in
a steady rhythm driven by arrival of REG trials at the computer. (The values of
the trials have no effect on this; only their reception by the computer is relevant.)
The pattern of specific color changes is chosen by a pseudo-random process
unconnected to the experimental data. As a third alternative the monitor can
simply be left blank.
   Similarly, operators are offered several options for audio environment. By
means of a servomotor controlled from an output port and connected to
a drumstick, the computer can beat a large Native American drum in the
experiment room. The default audio operation is for the drum to beat once with

each data reception event (that is, in the same rhythm as the changes in the video
if changing video is in use). An alternative rhythm beats the drum twice, quickly,
with each trial, producing a pattern of quick double beats separated by slightly
less than a second, strongly reminiscent of a heartbeat. A third option is silence,
and a fourth allows operators to bring their own music CDs or other recording
media to play any soundtrack that appeals to them while doing an experiment.
   In addition to these various environmental options, another experimental
parameter carried as a variable is the instructed versus volitional assignment of
intention deployed in most of PEAR's REG-based experiments. There are thus
twenty-four possible combinations of intentional assignment, visual environ-
ment, and audio environment. These are chosen freely according to the opera-
tor's preferences, although operators who explore more than one environment
are encouraged to generate substantial databases in each.
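The count of twenty-four conditions is easy to verify; the option labels below are paraphrases of the environments described above, not names used by the experiment itself.

```python
from itertools import product

intentions = ("instructed", "volitional")
visual_env = ("static mandala", "changing colors", "blank screen")
audio_env = ("single drumbeat", "heartbeat double-beat", "silence", "own recording")

conditions = list(product(intentions, visual_env, audio_env))
print(len(conditions))  # 24 combinations, as stated
```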

                            3. Experimental Protocol
   In Yantra, as in most PEAR REG experiments, the primary variable is
operator intention: operators actively attempt to shift the REG output dis-
tribution in the high and low directions, in a balanced design. The basic unit of
data collection is a trial of 200 random bits, summed to produce a random
integer with theoretical mean 100 and standard deviation √50 ≈ 7.071. Trials
are produced at a rate slightly faster than 1 per second. Sequences of 100 trials
are generated automatically as runs. The basic unit of operator participation is
a series in which an operator completes two runs in the high intention and two
runs in the low intention. This requires approximately 10 minutes in a typical
case. Unlike the standard REG protocol, Yantra is bipolar rather than tripolar,
with no baseline intention. The assignment of intentions to runs may be made by
the operator, or determined by the computer. In the latter case the determination
is made by a pseudo-random process seeded by the time at which the program is
started. For both volitional and instructed data, the program enforces the
constraint that a series contains exactly two runs of each intention.
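The trial statistics can be checked with a null simulation (no intention effect is modeled; the seed and structure below are illustrative only):

```python
import math
import random

def reg_trial(rng):
    """One simulated Yantra trial: the sum of 200 unbiased random bits."""
    return sum(rng.getrandbits(1) for _ in range(200))

print(round(math.sqrt(50), 3))  # 7.071, the theoretical trial standard deviation

# A simulated series: four runs of 100 trials each (two per intention in the
# real protocol; this null simulation treats all runs identically).
rng = random.Random(2006)
series = [[reg_trial(rng) for _ in range(100)] for _ in range(4)]
```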
   Since series are quite short, many operators chose to generate multiple series in
one session. While operators could, in principle, generate as many or as few series
as they cared to, the experimental protocol provides no feedback on their
performance until they either (a) complete at least 10 series, or (b) declare that
they will not generate any further Yantra data. This policy has the beneficial side
effect of assuring that small databases from short-term operators could not be
subject to optional stopping, since operators had no information about the
outcome of their efforts. While operators could, if they wished, receive feedback
after their 10th series, several of those who continued to larger databases chose not
to be given feedback until they had completed their entire Yantra involvement.
   When the Yantra experiment was launched it was decided that it would be
closed after 1000 series had been generated. Practical considerations having to
do with the availability and enthusiasm of operators, and the desirability of large

operator databases, led to a slight relaxation of this condition, to the stipulation
that the experiment would run at least 1000 series and that after the 1000-series
mark the experiment would be kept open only for the benefit of operators who
were attempting to complete previously declared commitments regarding
personal database size. Once these outstanding commitments were completed
Yantra had generated a total of 1017 formal series. Space precludes the
presentation of the raw data in the current article, but they can be found in the
Appendix to the Technical Note on the Yantra experiment (Dobyns et al., 2006).

                            4. Data Analysis Methods
    Yantra analysis was designed from the outset under the assumption that
operators would produce individual and idiosyncratic results. Of course,
individual Z-scores for operators have always been computed in PEAR
experiments; individual variability becomes relevant only when constructing an
overall "bottom-line" evaluation for the population of operators. The standard
pooled, weighted Z-score test used in earlier experiments is clearly not
acceptable under this hypothesis. It is tantamount to assuming that all operators
are interchangeable. While it is the most sensitive possible test for detecting
a consistent universal effect, individual variations are averaged out and become
invisible.
    Given the hypothesized situation of effect sizes that will vary among
 individuals in an unpredictable manner, there is no one statistical test that is
 optimally sensitive for all conditions; sensitivity depends on the model of
variation. A test that is very broadly useful, however, is a χ² test based on the
Z-scores of components. This is computed by simply squaring the Z-scores of all
component databases and summing the squares; the number of degrees of
freedom (d.f.) of the χ² is equal to the number of components. Two features of
this test make it particularly useful and versatile. First, χ² values follow an
addition rule: the sum of two χ² values is another χ² value with a number of d.f.
equal to the sum of the d.f. in the two contributions. Second, if the composite Z
mentioned above is squared and subtracted from the overall χ², the result is
again χ² distributed* with one fewer d.f. This secondary χ² is driven solely by
the variation between subsets, the mean effect having been removed by the
Z-score subtraction. To express these three quantities mathematically, if there
are a total of N subsets, with the ith subset comprising n_i data units and having
an aggregate Z-score of Z_i, the composite Z, raw χ², and variability χ² can be
written:

   Z = (Σ_i √n_i Z_i) / √(Σ_i n_i),
   χ² = Σ_i Z_i²             (N d.f.),
   χ²_var = χ² − Z²          (N − 1 d.f.)

* This is not a general subtraction property for χ²; the difference of two χ² is not
  in general χ² distributed. It can be shown, however, that in this specific case,
  the residual, after subtracting the mean Z² from a χ², is in fact χ² distributed.
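In code, the three quantities reduce to a few lines (an illustrative sketch, not the PEAR analysis software; the subset sizes and Z-scores below are made-up numbers):

```python
import math

def composite_z(n, z):
    """Pooled Z over subsets: each subset's Z weighted by sqrt(n_i)."""
    return sum(math.sqrt(ni) * zi for ni, zi in zip(n, z)) / math.sqrt(sum(n))

def raw_chi2(z):
    """Sum of squared subset Z-scores; d.f. = number of subsets."""
    return sum(zi * zi for zi in z)

def variability_chi2(n, z):
    """Raw chi-square minus the squared composite Z; d.f. = N - 1."""
    return raw_chi2(z) - composite_z(n, z) ** 2

# Hypothetical subsets: sizes n_i and aggregate Z-scores Z_i
n = [100, 150, 250]
z = [1.2, -0.4, 0.9]
print(raw_chi2(z))            # ~2.41
print(variability_chi2(n, z)) # raw chi-square with the mean-effect term removed
```

Note that the variability χ² is always smaller than the raw χ², since the squared composite Z is nonnegative.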
266                                   Y. H. Dobyns et al.


[Figure 2 plot; horizontal axis: Number of Segments (Both Intentions Pooled), 0 to 500.]

Fig. 2. Cumulative plot of (χ² − d.f.) for all data segments distinguished by operator, environment,
        and intention in the Yantra experiment.

These equations provide the basic tools for most of the Yantra analytical
treatments. In addition to inter-operator variability, previous experiments led to
an expectation that operators might either individually or collectively vary in
their responses to the 24 operating environments, and display distinct effects in
high and low intentions. Moreover, it is expected from previous REG observa-
tions that if the operator pool is divided into subtypes by gender and previous
experience, different patterns of performance appear in the subtypes. Analyses
for all of these factors are obviously necessary for the interpretation of the
experimental results.

                                            5. Results
   The total database of 1017 series was contributed by 61 different operators. At
least some exploration of each of the 24 possible environments was conducted.
The extreme form of the idiosyncratic-effects hypothesis is that a different effect
may be seen in any data subset generated by a different operator, in a different
environment, in a different intentional effort. The set of all data generated by
a single operator in a single intention and environment will be referred to hereafter
as a segment. There are 558 such segments in the formal database, 279 in each
intention. These segments have a raw χ² = 629.04, p = 0.020. Figure 2 illustrates
the outcome in the closest possible analog of PEAR's traditional cumulative
deviation, with the excess of χ² over its theoretical expectation (i.e., the number of
d.f.) plotted against the number of segments accumulated.
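The quoted tail probability can be checked with the Wilson-Hilferty cube-root approximation to the χ² distribution (a standard-library sketch; a statistics package such as scipy would give the exact value):

```python
import math

def chi2_sf_wh(x, k):
    """Approximate upper-tail p of a chi-square with k d.f.
    (Wilson-Hilferty cube-root normal approximation)."""
    z = ((x / k) ** (1 / 3) - (1 - 2 / (9 * k))) / math.sqrt(2 / (9 * k))
    return 0.5 * math.erfc(z / math.sqrt(2))  # standard normal upper tail

# 558 segments with raw chi-square 629.04:
print(round(chi2_sf_wh(629.04, 558), 3))  # 0.02
```

The approximation is excellent for large d.f., and reproduces the p = 0.020 quoted above.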

[Figure 3 plot; horizontal axis: Number of Segments in Each Intention.]

Fig. 3. Cumulative χ² plot of operator × environment segments, with segments in the high intention
        and low intention shown separately.

   Figure 3 illustrates the result of separating the segments according to operator
intention. The high segments have χ² = 357.36 on 279 d.f., p = 0.0010. The low
segments, in contrast, have χ² = 271.68 (p = 0.612). The effect is thus driven by
the large mean shifts observed in the high intention alone, a result similar to that
seen in other non-feedback experiments (Dunne & Jahn, 1992). One may also
construct the population of Z-scores for the intentional difference: Z_Δ = (Z_H −
Z_L)/√2, for each matched pair of segments (i.e., the segments run in the high and
low intentions by a given operator in a given environment). Not surprisingly, this
produces an intermediate result: χ²_Δ = 321.59 on 279 d.f., p = 0.040.
   These results are almost purely driven by inter-segment variation. The overall
pooled Z results are 0.0307 and −0.2070 in the high and low intentions,
respectively; the pooled Z_Δ = 0.1681. Subtracting out this average effect
yields variability-driven χ² values (all with 278 d.f.) of 357.36 (p = 0.00091) in the
high intention, 271.64 (p = 0.596) in the low, and 321.56 (p = 0.037) in the
high−low difference.
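The subtraction of the mean effect described in section 4 can be verified directly from the quoted figures:

```python
# Verify the variability chi-squares by subtracting the squared pooled Z
# from each raw chi-square (all values taken from the text above).
cases = {
    "high":  (357.36,  0.0307),
    "low":   (271.68, -0.2070),
    "delta": (321.59,  0.1681),
}
for name, (chi2_raw, z_pooled) in cases.items():
    chi2_var = chi2_raw - z_pooled ** 2
    print(f"{name}: {chi2_var:.2f}")  # 357.36, 271.64, 321.56 (278 d.f. each)
```

Because the pooled Z values are so small, the mean-effect term removes almost nothing from the raw χ², confirming that the effect lies in the inter-segment variation.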

5.1. Individual Operators and Operator Subtypes
   The effects are less impressive when operators are considered singly, without
regard to environmental differences. Figure 4 shows a scatterplot of the 61
operator performances in the two intentional conditions. These contributions
produce overall χ² values of 76.165 (p = 0.091) in the high, 68.150 (p = 0.247)
in the low, and 56.060 (p = 0.655) in the delta condition.
   There is, nevertheless, evidence of anomalous performance in the operator-

[Figure 4 plot; vertical axis from −4; horizontal axis: Number of operators, 0 to 60.]

Fig. 4. Scatterplot of all individual operator performances, not divided into environment segments.

by-operator database as well. The largest Z-score attained by any operator
(marked by a square in Figure 4) is Z = 3.833 (p = 1.26 × 10⁻⁴, two-tailed). After
Bonferroni correction for having 122 such scores to examine, this remains
a conventionally significant value of p = 0.015. Nor is this performance alone;
datasets by three different operators show |Z| > 3, an overpopulation that is a
p = 0.0046 event.
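Both the Bonferroni correction and the overpopulation probability can be reproduced with a short calculation (standard-library sketch; the two-tailed single-score p is recomputed from Z = 3.833, and an exact binomial tail is used for the |Z| > 3 count among the 122 scores):

```python
import math

# Two-tailed p for a single Z = 3.833:
p_single = math.erfc(3.833 / math.sqrt(2))
# Bonferroni correction over the 122 operator-intention Z-scores:
p_corrected = 122 * p_single
print(round(p_corrected, 3))  # 0.015

# Overpopulation test: chance of 3 or more |Z| > 3 among 122 scores.
p3 = math.erfc(3 / math.sqrt(2))  # P(|Z| > 3) = 0.0027, two-tailed
p_over = 1 - sum(math.comb(122, k) * p3**k * (1 - p3)**(122 - k)
                 for k in range(3))
print(round(p_over, 4))  # 0.0046
```

The 122-score denominator (61 operators times two intentions) is inferred from the text; both results match the quoted values.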
   This would seem, at face value, to indicate that some operators produce
consistent, individual effects, though the population as a whole does not. This
can be clarified by examining Figure 5, which shows the operator-based χ²
values for each of the four subsets resulting when the operators are segregated
according to their gender and previous experience with REG-type experiments.
More specifically, for readier visual comparison this figure shows the ratio of χ²
to d.f., so that the horizontal line at 1 shows the chance expectation for each test.
The plotted letters show the χ²/d.f. value for the high and low intentions. The
dotted lines show the 95% confidence limits for the χ²; they are at different
heights in the different subsets because χ²/d.f. has different quantiles for
different d.f., even though its expectation is always 1. It is clear from Figure 5
that the experienced female operators have highly significant individual effects
in both high and low intentions; all of the other operator subtypes show no such
effects. It is worth noting that all three of the |Z| > 3 databases were produced by
such previously experienced female operators. In contrast, Figure 6 shows the
segment-based (or operator × environment) χ² for these same operator
subpopulations. Here we see the interesting outcome that the females with
previous experimental experience have a non-significant result, while each of

Fig. 5.   Operator-based χ² values separated by operator gender and experience. The plotted letters
          show the ratio χ²/d.f. for each of the two intentions; the solid line at 1 is thus the theoretical
          expectation. The dotted lines show the p = 0.05 confidence limit; they are at different
          heights in different sections due to different numbers of d.f.

the other operator populations produces a χ² in the high intention that exceeds
the 95% confidence limit for chance variation. The implications may be clearer
if the numeric data are presented in tabular form, as in Table 1.
   The first row of each section of Table 1 gives the number of d.f. in the χ² for
that column. It should be noted that for the operator χ², the d.f. do not add up to
61 because five of the "operators" in the full dataset are actually male-female
co-operator pairs and cannot be assigned to a specific gender. Below the d.f.
entry is the cutoff value for p < 0.05 significance in a χ² with that number of d.f.
   Of particular interest in Table 1 is the comparison between operator-only and
operator × environment χ² for the experienced female operators. These 20
operators produce an excess χ² of 15.48 above expectation (p = 0.018) in the
high intention, and 18.06 (p = 0.0087) in the low, when an operator-based χ² is
computed for each operator's total performance. When the data are further
subdivided by environment, the number of d.f. increases from 20 to 89, while the
χ² values increase from 35.483 to 96.687 and from 38.064 to 106.526 in the high
and low intentions, respectively. Put another way, the further subdivision of the
data adds 69 d.f., while adding 61.204 and 68.462 to the two χ² values. We thus
see that the previously experienced female operators show strong evidence for
an effect when their total databases are examined, but the subdivision by
environments increases the χ² only by amounts such as would be expected from
the increase in d.f., that is, consistent with these operators displaying only

Fig. 6. Operator × environment χ² values separated by operator gender and experience. Cf. Figure 5.

random variation between environments. We may thus conclude that they show
characteristic personal effects which are unaffected by the operating environment.
   In contrast, the other operator subgroups (females without prior experience,
and both experienced and inexperienced males) show only the expected level of
random variation in their overall personal performances, but they show variation
far beyond chance levels when their data are subdivided by environment. (This
is evident in the high intention, as is obvious from Figure 6; the pooled high data
from Table 1 for these operators have χ² = 250.63 on 181 d.f., p = 0.00047. The

                                             TABLE 1
                                   Analysis by Operator Subtype

           Subtype                 New Female         New Male         Exp Female         Exp Male

Operator d.f.
p < 0.05 cutoff for this d.f.
  Op χ², HI
  Op χ², LO
  Op χ², Δ
Op × env d.f.
p < 0.05 cutoff
  Op × env χ², HI
  Op × env χ², LO
  Op × env χ², Δ
Note: Exp = experienced; Op = operator; HI = high intention; LO = low intention; env = environment.

low data are at chance levels, but even pooled across both intentions the high
results drive a marginally significant outcome: χ² = 407.681 on 362 d.f.,
p = 0.049, for high and low intentions combined.) We may conclude from this
that all operators, except experienced females, produce anomalous effects that
are not only personally idiosyncratic, but also strongly influenced by the
operating environment.
   This analysis by gender does not include the co-operator subset, for which
a meaningful assignment of gender cannot be made. While previous analyses
have suggested that co-operators display interesting gender-like effects
according to their status as same-sex or opposite-sex pairings (Dunne, 1991),
the co-operator database in Yantra is too small and homogeneous to extract
meaningful results from such a breakdown. There are five co-operators, all
opposite-sex pairs, contributing operator-based χ² values of χ²_HI = 4.217,
χ²_LO = 5.974, and χ²_Δ = 4.914, none of which are significant. They contribute 9
of the 279 segments in each intention, for segment-based χ² of χ²_HI = 10.045,
χ²_LO = 8.104, and χ²_Δ = 8.703, all likewise nonsignificant.

5.2. Gender Analysis
   While the above subdivision into types has been instructive, it differs from the
gender analysis performed by Dunne (1998), which found striking gender-based
differences in a much simpler statistic, namely, the rate of differential success by
operator gender. That is, if one simply counts, for each gender, how many
operators "succeed" in their intentional effort (have a higher mean in the high
intention than in the low), one finds different success rates for male and female
operators.
   This effect has been exactly replicated in Yantra, as shown in Figure 7, where
20 of the 31 male operators succeed in the direction of intention, while only 9 of
the 25 females do so. (For completeness, we may note that 4 of the 5 co-
operators do so.) The difference is equally present in the data of experienced and
new operators; only the smallness of the database prevents it from achieving
statistical significance among the new operators. It would appear that despite the
numerous distinctions between Yantra and other REG-type experiments, a basic
gender-based difference in response remains pervasive.

5.3. Operating Conditions
  The strongest effects in the Yantra database are the excess of variation seen in
the high intention, when the data are split into segments according to both
operator and environment, and the consistent personal performances of
experienced female operators. All of this has been established from a viewpoint
that individual operator performance is primary, and that operating conditions
provide extra sources of variation within a particular operator's database. This is
not, however, the only way the Yantra data segments can be organized. We may
ask equally well whether there are characteristic patterns of operator performance

[Figure 7 plot: differential success according to gender, shown for All Operators (Z = 2.213), New Operators (Z = 1.305), and Experienced Operators (Z = 2.260).]
Fig. 7. Differential success rates in direction of intention by operator gender and previous experience.

in particular operating environments, and how much inter-operator variation
occurs within a fixed environment.
   These questions can be answered directly by computing an overall composite
Z for all of the segments produced in a given operating environment. The sum of
the squared Z-scores of all segments in the environment is the basic χ² for that
environment. As discussed in section 4, when the squared composite Z is
subtracted from this we are left with a χ² showing the degree of inter-operator
variability. It has, of course, one less d.f. than the number of segments in that
environment.
   Adding up the inter-operator variability χ² for each of the environments yields
a χ² with 279 − 24 = 255 d.f., driven by the amount of inter-operator variability
that exists when environmental conditions are held constant. Similarly, the sum
of all of the squared composite Z values for the 24 environments is a χ² derived from any effects that are
consistent within environments, although they may vary between environments.
From the construction of these two values it is obvious that they must add up to
the same total segment-based χ² presented in earlier analyses, with the same
total d.f. This is why the calculation is referred to as an alternative way of
organizing the Yantra data. Instead of partitioning the list of segments by
operators and then examining within-operator variability from environmental
conditions, here we are partitioning the segments by environments and then
examining within-environment variability from operators.
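The decomposition identity, that the within-environment variability χ² and the squared composite Z values together recover the total segment χ², can be demonstrated on synthetic data (an illustrative sketch with equal segment weights; the numbers are random, not Yantra data):

```python
import math
import random

random.seed(1)

# Synthetic segment Z-scores grouped into hypothetical environments
# (equal segment sizes for simplicity, so the composite Z is the sum of
# the segment Z's divided by the square root of their count).
envs = {e: [random.gauss(0, 1) for _ in range(10)] for e in range(5)}

total_chi2 = sum(z * z for zs in envs.values() for z in zs)

consistent = 0.0   # sum over environments of the squared composite Z
variability = 0.0  # sum over environments of the residual chi-square
for zs in envs.values():
    zc = sum(zs) / math.sqrt(len(zs))  # composite Z, equal weights
    consistent += zc * zc
    variability += sum(z * z for z in zs) - zc * zc

# The two components recover the total segment chi-square exactly:
print(abs(consistent + variability - total_chi2) < 1e-9)  # True
```

The identity holds by construction: within each environment the variability χ² is defined as the raw χ² minus the squared composite Z, so the two partitions must sum to the same total.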
   Figure 8 shows the results of this partitioning. Somewhat surprisingly, it
indicates that both inter-operator variation and consistent mean shifts contribute

[Figure 8 plot: Operator × Environment χ², sources. Bars show the χ² from ALL SOURCES of variation (279 d.f.), from INTER-OPERATOR VARIATION within environments (255 d.f.), and from CONSISTENT MEAN SHIFTS within environments (24 d.f.).]
Fig. 8. Sources of the operator × environment χ²: consistent performance in conditions vs. operator
        variation within conditions.

significantly. In addition to the expected inter-operator variability component,
there is also a significant contribution from consistent performance across
operators within each environment. Indeed, considered in terms of effect size
(the proportional increase in the χ² over its expectation), the latter is more than
twice as large as the more highly significant effect of inter-operator variation.
   Figure 9 plots the 24 operating environments individually against these two
measures of anomalous effect. The three-letter codes indicate the three features of
the environment: assignment of intention (volitional [V] or instructed [I]), type of
visual display (changing [C], static [S], or none [N]), and type of audio
environment (single beat [S], heartbeat [H], none [N], or other [O]). This plot
shows only the high intention, since the low intention data are indistinguishable
from chance in this representation. The vertical axis is the composite Z for that
environment, the pooled Z-score for all data run under those environmental
conditions. The horizontal axis is constructed by converting the inter-operator
variability χ² for that condition to its equivalent Z-score (specifically, by applying
the inverse normal distribution to the p-value calculated for the χ²). The dotted circle shows
the 95% confidence bounds for the null hypothesis in such a plot; if the points
are distributed according to two independent, normally distributed variables, 95%
of them should fall within the circle. Five of the 24 points are clearly well outside
this circle (the three-letter labels are centered over the exact points); in fact, a sixth
(the VCH condition at upper left) also falls just outside the boundary. Thus, 6 of
the 24 environments exceed the p < 0.05 criterion for their distribution along these
two parameters of consistent internal effect and inter-operator variation; this
overpopulation is itself a p = 0.00096 event by exact binomial calculation.
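The exact binomial calculation can be reproduced as follows (standard-library sketch):

```python
from math import comb

# Exact binomial tail: probability that 6 or more of 24 environments
# exceed a p < 0.05 criterion by chance alone.
n, p = 24, 0.05
p_tail = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(6))
print(round(p_tail, 5))  # 0.00096
```

With an expectation of only 1.2 significant environments out of 24, six exceedances is a strong departure from chance.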

[Figure 9 scatterplot: Inter-Operator Variability vs. Consistent Mean Shifts, HI. Horizontal axis: Z-Equivalent of Variability χ², −3 to 3; points labeled by environment codes such as INS, VNO, ICH, ISO, VSH, ICS, VNH, ISS.]

Fig. 9. The 24 conditions, plotted against their consistent effect (vertical axis) and inter-operator
        variation (horizontal axis). Dotted circle shows 0.95 confidence limits of null hypothesis;
        95% of plotted points expected to fall within this circle. High intention data only.

   This figure also provides potential insights into which environments actually
are more conducive to producing anomalous yields, either in a global or an
operator-specific mode. Of the six individually significant outliers, three used
instructed assignment and three volitional, indicating no preference. All
involved some form of audio stimulation; the no-audio environments are all
well within the circle. Moreover, three of the six involve specifically the single-
drumbeat audio. Since each of the four audio options appears in 6 of the 24
possible environments, this means that fully half of the single-drumbeat
environments show individually significant anomalous performance.
   To determine the effects of the particular environmental parameters
individually on the anomalous yield, a segment-wise χ² may be computed on those
segments containing those parameters. Different parameter values can then be
compared by an F-ratio test. Table 2 summarizes these results for the high
intention only, again because only the high intention results display an overall
anomalous effect.
   While it appears that instructed assignment is driving the effect and the
volitional condition contributes little, this assessment must be made with
caution. The F-ratio test between these two χ² values is 1.267 on 198 and 81
d.f., p = 0.111. There is thus a reasonable likelihood that the instructed and
volitional databases are samples from the same underlying distribution, and the

                                        TABLE 2
                           Individual Environmental Parameters

   Parameter Value                  d.f.                   χ²                p-Value

Assignment of Intention
  Instructed (I)
  Volitional (V)
Video Environment
  Changing (C)
  Static (S)
  None (N)
Audio Environment
  Single beat (S)
  Heartbeat (H)
  Other (O)
  None (N)

lack of significance in the volitional segments is a combination of happenstance
and smaller database size.
   For the video environment, the face-value conclusion is that the changing
video offers no anomalous yield, while the static video contains a strong effect.
In contrast to the previous case, this is confirmed by an F-test between the two:
F = 1.633 on 77 and 157 d.f., p = 0.0051. Even after a factor-of-three
Bonferroni correction to allow for the fact that there are three ways to pick two
comparison sets out of a group of three, this remains clearly significant at
p = 0.015. The no-feedback condition is intermediate between the two in both
effect size and significance, and F-tests confirm that it cannot be distinguished
reliably from either.
   For the audio environment, both of the drum-based environments show
robustly significant effects. The "Other" environment, indicating an operator-
provided audio background, appears to contain comparably strong effects,
although its small size precludes statistical significance. In contrast, the no-audio
condition is clearly null. Unfortunately, this distinction, while highly suggestive,
may also be subject to overinterpretation, since the comparison of the no-audio
condition with the pooled results of the active audio conditions still only
achieves a marginal F-ratio of 1.358 on 199 and 80 d.f., p = 0.0585. The factor-
of-four Bonferroni correction required reduces this almost-significant result to
nonsignificance, indicating that although the anomalous yields appear to be
present only when audio feedback is used, we cannot claim statistical confidence
that this correlation is not coincidental.
   As a final note on environmental effects, it is worth recalling that the
environment of every experimental series is chosen by the operator to suit his or
her current mood and preferences. Despite this, many of the environments seem
to produce no anomalous yield. Statistical scrutiny of the environmental
components confirms that the most popular choice of video display is associated

with a null result that can validly be distinguished from that of the rest of the
experiment. It thus would seem that the aesthetic preference for a particular
environment is no guarantee of its facilitation of anomalous performance, even
for the particular operator expressing the preference. We may observe that this is
consistent with the outcome of the ArtREG experiment (Jahn et al., 2000b),
wherein most operators reported that they found the experience enjoyable, but
which nevertheless produced no overall anomalous yield.

5.4. Miscellaneous Observations
   Three operators, all males lacking previous experience, performed a sub-
experiment within the main Yantra experiment. These operators were all
practitioners of a Japanese healing discipline known as Johrei. They
intentionally employed Johrei techniques in exactly half their Yantra data,
and refrained from using Johrei in the other half. These data have been reported
in more detail elsewhere (Jahn et al., 2006). The results may be summarized by
noting that these operators produced strong segment-based responses in their
Johrei data and null results in their non-Johrei data. Since the Johrei condition
was not part of the formal definition of Yantra segments, this distinction has
been diluted in the current analysis. Taking Johrei use into account as a fourth
"environmental" condition would slightly increase the statistical significance
of the segment-based analysis for the overall data and for the inexperienced
male operators, but it would produce no qualitative change in the conclusions
drawn thus far.
   The overall effect size in the Yantra experiment appears to be larger than that
seen in the original REG studies. Since Yantra expects, and uses statistical tests
for, idiosyncratic effects that vary in both size and direction between operators,
direct comparisons with the overall average effect size seen in the original REG
experiment are somewhat problematic. In terms of the size of the mean shift
driving the anomalous effects, however, we may note that the χ² for the pooled
high and low data is 629.0425 on 558 segments, indicating a mean Z² on the
segments of 1.1273. Since the mean length of segments is 729 trials, if the
excess in Z² is (as hypothesized) driven by consistent mean shifts within
segments, this mean effect amounts to a Z of 0.01321 per trial, or a mean shift of
0.0932. Although this is an estimate resting on several assumptions about the
nature of the effect, it may be compared to the observed mean shift of 0.0208 in
the original REG experiment; the Yantra figure is approximately 4.5 times
larger. It is notable that the only other fully non-feedback experiment in the REG
repertoire, the remote database, shows an effect size that is indistinguishable
from the original REG, although it displays the same high/low asymmetry as
Yantra (Dunne & Jahn, 1992).
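The arithmetic behind this estimate can be retraced as follows (a sketch under the stated assumptions; the final conversion from per-trial Z to mean shift additionally requires the per-trial standard deviation, which is not given in this excerpt and is therefore omitted):

```python
import math

chi2_total, n_segments, mean_trials = 629.0425, 558, 729

mean_z2 = chi2_total / n_segments        # ~1.1273, mean squared segment Z
excess = mean_z2 - 1.0                   # excess over the chance expectation of 1
z_per_segment = math.sqrt(excess)        # ~0.357, if driven by a consistent mean shift
z_per_trial = z_per_segment / math.sqrt(mean_trials)
print(round(z_per_trial, 4))             # 0.0132
```

Multiplying the per-trial Z by the per-trial standard deviation of the REG output recovers the 0.0932 mean shift quoted above.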
   If anomalous performance in the high and low intentions were independent,
or were present only in one intention, we would expect the χ² in the Δ
condition (the high minus low difference for a given segment) to be

intermediate between the χ² for high and low. This is in fact observed for the
operator × environment tests. In contrast, if effects were symmetric (that is, an
operator attained the same mean shift in the direction of intention regardless of
the sign of the intention), the χ² on the Δ condition would be substantially
larger than that in either intention. This is not seen in any of the Yantra
analyses. Instead, the operator-based χ² (without regard to environment)
consistently shows a Δ value smaller than the χ² in either the high or the low
intention. This is especially pronounced for the experienced female operators,
where both intentions have significant χ² while Δ does not. This odd behavior
suggests that there may actually be a correlation between the two intentions at
the level of operators' complete databases, or a tendency for operators to
produce mean shifts in the same direction in both high and low intention,
regardless of environment. Indeed, of the 61 operators, 37 have the same sign in
their overall high and low databases, vs. 24 who produce opposite signs in high
and low. The correlation coefficient between the high and low intentional
results, operator-by-operator, is ρ = 0.2201 (p = 0.044, one-tailed).

                                6. Conclusions
  The analysis of the primary intentional data in the Yantra experiment leads to
the following conclusions:
   1. As a whole, the operators have anomalously shifted the means of the
      intentional data, although the mean shift is asymmetrical between
      intentions and its direction varies unpredictably among operators and
      among environmental conditions.
   2. Female operators who have previously participated in REG experiments
      show consistent individual anomalous performance in both high and low
      intentions, regardless of environment, although the performance still
      varies unpredictably among operators.
   3. Female operators new to REG experimentation, and male operators in
      general, show strong sensitivity to environmental conditions, and
      collectively produce effects only in the high intention.
   4. Despite the fact that operators choose environments that appeal to them,
      certain environments are apparently conducive to anomalous yield while
      others are not. This suggests that an environment's ability to foster
      anomalous effects may not correlate with its aesthetic appeal, as was noted
      in the ArtREG experiment.
   5. The gender-based patterns of differential success seen in earlier experi-
      ments are replicated in Yantra, on very similar scales.
   6. Examination of the individual components of the environments suggests
      that instructed assignment of intention is more conducive to anomalies
      than is volitional assignment, and that drumbeat accompaniment is more
      conducive than is silence. However, the statistical confidence of these
      conclusions is modest.

   7. Examination of the video component of the environment, in contrast,
      shows that the static Sri Yantra mandala produces strong anomalous
       yields, while the changing mandala does not, a distinction that is
       statistically robust. The state involving no visual stimulus at all is
       intermediate between the two and cannot be resolved statistically from
       either.
   8. The overall Yantra effect size can be estimated to be between four and
      five times the effect size seen in the original REG experiments. Given that
      the effect seems to be concentrated in certain conducive subsets, the actual
      increase in effect size in those cases may be even larger.

   These observations provide valuable hypotheses for future research. For
example, would experiments focusing on the conditions found to be conducive
in Yantra in fact produce larger yields? Despite the fact that the experiment as
a whole produced unpredictable anomalous mean shifts with considerable inter-
operator variation, some environments showed consistent mean shifts in the
direction of intention, while others showed consistent mean shifts contrary to
that direction. Can such tendencies be used to foster more consistent intentional
performance among operators? What are the implications, in this context, of the
gender-related differences in differential intentional success? We may conclude
that, while the Yantra experiment resoundingly confirms the basic hypothesis
that anomalous human-machine interactions may take place in the complete
absence of feedback, and while it displays numerous intriguing structural
features that hint at the nature of the anomalous effect, it ultimately raises
more questions than it answers.

                                  Acknowledgments

  The PEAR laboratory gratefully acknowledges the support of Sekai Kyusei
Kyo, the Hygiea Foundation, the Institut für Grenzgebiete der Psychologie und
Psychohygiene, and numerous private philanthropists. PEAR also expresses its
gratitude to the many uncompensated volunteer operators without whom these
data could not have been collected.

                                     References

Dobyns, Y. H., Valentino, J. C., Dunne, B. J., & Jahn, R. G. (2006). The Yantra experiment. Technical
   Note PEAR 2006.04. Princeton Engineering Anomalies Research, Princeton University,
   Princeton, NJ.
Dunne, B. J. (1991). Co-operator experiments with an REG device. Technical Note PEAR 91005.
   Princeton Engineering Anomalies Research, Princeton University, Princeton, NJ.
Dunne, B. J. (1998). Gender differences in human/machine anomalies. Journal of Scientific
   Exploration, 12, 3-55.
Dunne, B. J., & Jahn, R. G. (1992). Experiments in remote human/machine interaction. Journal of
   Scientific Exploration, 6, 311-332.
Jahn, R., Dunne, B., Bradish, G., Dobyns, Y., Lettieri, A., Nelson, R., Mischo, J., Boller, E., Bosch,
   H., Vaitl, D., Houtkooper, J., & Walter, B. (2000a). Mind/Machine Interaction Consortium:
   PortREG replication experiments. Journal of Scientific Exploration, 14, 499-555.
Jahn, R. G., & Dunne, B. J. (2005). The PEAR Proposition. Journal of Scientific Exploration, 19,
Jahn, R. G., Dunne, B. J., & Dobyns, Y. H. (2006). Exploring the possible effects of Johrei techniques
   on the behavior of random physical systems. Technical Note PEAR 2006.01. Princeton
   Engineering Anomalies Research, Princeton University, Princeton, NJ.
Jahn, R. G., Dunne, B. J., Dobyns, Y. H., Nelson, R. D., & Bradish, G. J. (2000b). ArtREG: A random
   event experiment utilizing picture-preference feedback. Journal of Scientific Exploration, 14,
Jahn, R. G., Dunne, B. J., & Nelson, R. D. (1987). Engineering anomalies research. Journal of
   Scientific Exploration, 1, 21-50.
Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of
   random binary sequences with pre-stated operator intention: A review of a 12-year program.
   Journal of Scientific Exploration, 11, 345-367.
Nelson, R. D., Bradish, G. J., Dobyns, Y. H., Dunne, B. J., & Jahn, R. G. (1996). FieldREG anomalies
   in group situations. Journal of Scientific Exploration, 10, 111-141.
Nelson, R. D., Jahn, R. G., Dobyns, Y. H., & Dunne, B. J. (2000). Contributions to variance in REG
   experiments: ANOVA models and specialized subsidiary analyses. Journal of Scientific
   Exploration, 14, 73-89.
Nelson, R. D., Jahn, R. G., Dunne, B. J., Dobyns, Y. H., & Bradish, G. J. (1998). FieldREG II:
   Consciousness field effects: Replications and explorations. Journal of Scientific Exploration, 12,
Journal of Scientific Exploration, Vol. 21, No. 2, pp. 281-293, 2007

        An Empirical Study of Some Astrological Factors in
        Relation to Dog Behaviour Differences by Statistical
        Analysis and Compared with Human Characteristics

                                       SUZEL FUZEAU-BRAESCH
                                    Université de Paris, Orsay, France
                                          fuz.bra@wanadoo.fr

                                         JEAN-BAPTISTE DENIS
                                       INRA, Jouy-en-Josas, France

      Abstract-A survey of 500 pedigree dogs was carried out in the Paris region.
      For each dog, six behavioural traits were determined and ten of their astro-
      logical traits were retained. A statistical interpretation of the possible relation-
      ships between the two sets of traits was performed based on permutation tests.
      Two strong associations were detected between the angular positions of Jupiter
      and the Sun, and the extraversion dominant trait. There were indications of
      other associations. These associations have a remarkable resemblance to the
      standard associations usually proposed in "human" astrology.
      Keywords: behaviour (dogs and humans); permutation test; astrology; survey

For an empirical study, the dog is an appropriate subject for the investigation of
possible relationships between birth time and the position of sky elements of the
solar system. The precise aim of this study is to see whether behavioural differences
attributable to these positions appear in two-month-old dogs. There are of course dif-
ferences between animals and humans, but it seems reasonable to describe a
dog's behaviour with the usual descriptions employed by breeders even if these
seem anthropomorphic. Dogs also stand in a recognizably closer psychological
relationship to humans than do other animals (e.g. the cat, the rabbit, or the
snake).
   First, the position of the Sun in its ecliptic course and, at the same time, the
positions of the Moon and planets, the rising (Ascendant) and setting
(Descendant) points, and the highest (Mid Heaven) and lowest (Nadir) points
within the 24 hours of a day were defined. This applied one of the classical tools
of astrology, according to which a sky element situated at one of the four
described points (= "angular", within ±10°) is particularly important in determining be-
haviour. It must be emphasized here that, so far, almost no scientific con-
firmation has been sought for this. Other classical tools of astrology, such as
signs of the zodiac related to the seasons, are impossible to investigate due to the
irregular fertility of females during the year (most births take place in spring and
summer).
   The results obtained for dogs are then compared with those classically
described in human astrology.


Organisation of the survey
   A population of 500 pedigree dogs was identified by one of the researchers
(S.F.B.). Pedigree dogs were used because breeders are always particularly
attentive to the conditions of birth, given the potential value of the pups. Thus,
when a female begins to give birth, a breeder will stay patiently by the mother
day and night, ready to take the pups, note the time, individual colours and so on.
When they sell the young dogs they need very precise information to answer the
buyer's questions. Purchasers frequently want to know the time of birth, the
order of births in the litter (was my dog first, second, or last? and so on . . .), how
the pup behaved in its first few days and weeks of life. As the pups must live
with their mother and cannot be sold until they are two months old, their
behaviour is very well documented over this period. Every breeder of pedigree
dogs keeps a very precise diary, where all this information is carefully entered
for each animal, the individuals being identified either by colour differences
(zones, patches, spots and so on) or in the case of uniform coloration, by means
of a cropped area of the coat. (The official book, called "LOF" in France,
records pedigrees and births.)
   It was decided to use different breeds of pedigree dogs to prevent any bias
linked to a given breed. They were: Bearded Collie, Belgian Shepherd, King
Charles Spaniel, Chihuahua, Coton de Tuléar, French Bulldog, German Shepherd,
Labrador, Lhasa Apso, Malinois, Poodle, Shar Pei, Shih Tzu, Tibetan Spaniel, and
Yorkshire Terrier. Geographically, the kennels were all in the Paris area to
ensure easy contact with the breeders.
   The breeders who agreed to participate have no special knowledge of, or
interest in, astrology. Over a period of five years, a total of 100 litters were
investigated, from two to eight pups in each, for a total of 500 pups. Twelve
breeders participated (see acknowledgments).

Recorded traits
   For behavioural traits, data from the breeders were used. They noted all
behavioural characteristics in detail during the first two months of the pups'
lives. The breeders' notes were freely written in ordinary language. Information
collected for the experiment was summarized according to Eysenck's method
(Eysenck & Wilson, 1975) by expressing behaviour under "Extraversion" and
"Neuroticism", giving six well-defined items. They are detailed in Table 1, and the transcription

                                           TABLE 1
 Description of the Six Behaviour Traits: Codings and Associated Distributions for the 500 Dogs

Behaviour trait                    Coding               Presence (+)               Absence (-)

Extraversion active                    EA                    237                        263
Extraversion dominant                  ED                    120                        380
Extraversion reserved                  ER                    137                        363
Neuroticism affective                  NA                    194                        306
Neuroticism nervous                    NN                     43                        457
Neuroticism steady                     NS                    182                        318

from the free description is given in the Appendix. The different items are
scattered over the entire range of births in the litters. There are many personality
theories and various systems of behavioural description; we have chosen
Eysenck's method as most appropriate to classify the very detailed observations
of breeders because of its simplicity and non-subjectivity. For example,
a dominant dog and a dominant human demonstrate the same characteristics -
except, of course, for the absence of those involving speech. Whereas one can
describe a dominant human as being a "powerful speaker" or being able to
"capture his audience's ear" etc., these qualities would hardly be adaptable to
a canine subject (without changing the meaning).
   Numerous methods exist for the study of personality. Eysenck's was chosen
for this study largely because of the arguments of its creator, summarised as
follows: "To find out the laws according to which this may happen, and to
isolate the major dimensions along which we can classify people, seems to me
a fundamental and critically important part of psychology [. . .]. These three
major dimensions (P-E-N = psychoticism, extraversion, neuroticism) emerge
from practically any large-scale analysis of traits published in the literature"
(Eysenck, 1990).
   The most important element is the group of behaviours attributed to each major
trait derived from the PEN and these are easily recognised in the descriptions
given by the breeders. The six major traits retained for this study are thus not just
abstract characteristics but the result of pragmatic observations (see Appendix).
   Insofar as the question of ascribing human traits to animals is concerned, the
issue was addressed by Eysenck himself for whom this transposition was not
only valid but an objective criterion: ". . . another criterion for the acceptability
of major dimensions of personality, namely that they should be apparent not
only in humans, but also in animals . . ." (Eysenck, 1990). McFarland (1990)
makes the same point.
   Finally, the age of the dogs, two months at final evaluation, was considered
satisfactory on the one hand because of the difference between the lifespan of
dogs and of humans, and on the other because all breeders agree that behavioural
structures of pups are formed very early in the context of the social group con-
sisting of the bitch and her litter.

                                           TABLE 2
                 Codings for the Ten Planets and Distributions of the 500 Dogs

Astrological Trait                Coding                   Angular (+)                   None (-)

Note: Angular (+) = rising, setting, upper and lower culminations of the sky elements.

   For astrological traits, the following ten sky elements were considered: Sun,
Moon and eight planets of the solar system (Mercury, Venus, Mars, Jupiter,
Saturn, Uranus, Neptune, and Pluto). All are usually defined as "Planets" in
traditional astrology.
   As described above in the introduction, the unique astrological criterion
applied is the "angular" position of the sky elements, that is, rising, setting and
highest and lowest points at the place and time of the birth. As the earth rotates,
each element rises, sets and reaches its highest and lowest point every 24 hours
either in the visible sky or, in the case of the lowest point, sometimes in that part
of the sky which is invisible. The whelping of a bitch is always a slow process
and the intervals between the birth of successive pups can vary between 15
minutes and two hours. This factor makes dogs particularly appropriate for this
study.
   The distribution of the 500 dogs is given in Table 2 (program "Astropc" from
Aureas, 30, rue Cardinal Lemoine, 75005 Paris, France).

                                    Statistical Analysis
   The objective was to explore possible links between behavioural traits and
astrological traits. Rather than use sophisticated multivariate approaches such as
correspondence factorial analysis, which are not always easily interpreted and
from which it is not appropriate to draw inferences, it was decided to apply
simple and well-known non-parametric tests to each of the 60 behaviour trait
× planet trait combinations.
   As an example, let us consider the 2 × 2 frequency table associated with
Jupiter (Ju) and extraversion dominant (ED), which is a sub-table of Table 3. 44
pups are (Ju+,ED+), 65 are (Ju+,ED-), 76 are (Ju-,ED+) and the majority of
them, 315, are (Ju-,ED-). To assess the degree of association between the two
traits, we used the proportion of the ED+ dogs positive for the planet. That is
44/120 = 0.367. It is worth mentioning that given the total margins of the table (109,
                                            TABLE 3
      Joint Distributions of the Dogs for Each Combination of Astrological Traits (in Rows)
                                 and Behaviour Traits (in Columns)

       EA+     EA-     ED+     ED-    ER+     ER-     NA+     NA-     NN+     NN-     NS+     NS-

Note: Each 2 × 2 sub-table comprises all 500 dogs. The two most significant sub-tables are in bold.

391, 120, 380) this statistic is equivalent to any score one can imagine to
measure the link between the two traits (one degree of freedom is involved). For
instance, the odds of the behavioural trait among the Ju- dogs (76/315 = 0.241)
can be expressed as (120[1 - 0.367])/(391 - 120[1 - 0.367]).
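As a numerical check of the proportion and of the equivalent odds expression, the counts of the Jupiter × ED sub-table above can be worked through directly; the following is a minimal Python sketch with variable names of our own:

```python
# Counts from the Jupiter (Ju) x extraversion dominant (ED) sub-table.
ju_ed = 44      # (Ju+, ED+)
ju_only = 65    # (Ju+, ED-)
ed_only = 76    # (Ju-, ED+)
neither = 315   # (Ju-, ED-)

ed_total = ju_ed + ed_only            # 120 ED+ dogs
ju_minus_total = ed_only + neither    # 391 Ju- dogs

# Proportion of ED+ dogs that are positive for the planet: 44/120.
p = ju_ed / ed_total
print(round(p, 3))  # 0.367

# Odds of ED+ among the Ju- dogs, expressed through the same proportion:
# (120 * [1 - p]) / (391 - 120 * [1 - p]) == 76/315.
odds = (ed_total * (1 - p)) / (ju_minus_total - ed_total * (1 - p))
print(round(odds, 3))  # 0.241
```

This illustrates why, with the margins fixed, the single proportion carries all the information in the table: the odds (and likewise the Chi-square statistic) are deterministic functions of it.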
   For the same reason, the Chi-square statistics of independence can be ex-
pressed as a function of this proportion, so our permutation test is equivalent to
the Chi-square test. The advantage of the proportion is that the type of asso-
ciation (positive = high proportion; negative = low proportion) is preserved.
   Once a proportion has been computed, the existence of a significant asso-
ciation between the two traits must be tested. To this end the classical procedure
of permutation tests (Good, 2004) was used. The principle is simple: under the
null hypothesis of no effect, a large number of similar samples of data (having
the same margins) are simulated. For each of them the proportion is computed,
providing an empirical distribution in which no effect is present. This must be done
for a sufficient number of simulations, say N, with respect to the level of the test,
say a, one wants to perform. Finally the observed proportion is compared to this
distribution, and if it is outside the (α/2 quantile, [1 - α/2] quantile) interval,
then the effect is declared significant.
   To perform the random permutations, the elementary data set can be seen as
a matrix of 500 rows by two columns, where rows correspond to dogs and
columns to the two traits. A (1,1) row means that the corresponding dog is
positive for both traits; a (0,1) row means that the corresponding dog is negative
for the Ju trait but positive for the ED trait; and so on. The number of (1,1) rows is 44,
the number of (0,1) rows is 76, and so on. If there is no link between the two
columns, we can permute the first column without consequence, giving rise to
different numbers of (1,1), (0,1), (1,0), (0,0) dogs but keeping 120 ED+ dogs,
380 ED- dogs, 109 Ju+ dogs and 391 Ju- dogs. A new proportion can be
calculated and stored. This is done N times.
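This permutation scheme can be sketched in a few lines of Python (function and variable names are our own; the study used N = 1,000,001 draws, reduced here for speed):

```python
import random

def permutation_test(n11, n10, n01, n00, n_sims=2000, seed=1):
    """Permutation test for a 2 x 2 table: repeatedly shuffle the first
    trait's column, which keeps all margins fixed, and recompute the
    proportion of trait-2-positive subjects that are trait-1-positive."""
    trait1 = [1] * (n11 + n10) + [0] * (n01 + n00)          # e.g. Ju+/Ju- per dog
    trait2 = [1] * n11 + [0] * n10 + [1] * n01 + [0] * n00  # e.g. ED+/ED- per dog
    n2_pos = n11 + n01
    observed = n11 / n2_pos
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        rng.shuffle(trait1)
        both = sum(a & b for a, b in zip(trait1, trait2))
        if both / n2_pos >= observed:
            hits += 1
    return observed, hits / n_sims  # right-tail empirical P-value

# Jupiter x ED counts: 44, 65, 76, 315.
obs, p_right = permutation_test(44, 65, 76, 315)
print(round(obs, 3), p_right)
```

The paper's two-sided decision rule compares the observed proportion with the α/2 and 1 - α/2 quantiles of the simulated distribution; the right-tail P-value above is the simpler one-sided summary of the same simulation.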
   Another point deserves some consideration: the level α at which the tests were
performed. The traditional level is 5%: α = 0.05. However, in the present case, 60
tests were carried out on the same set of data. If this level were used and no links
existed between any of the pairs of traits, we would nevertheless expect to see
about three (= 0.05 × 60) significant tests. To avoid this inconvenience, the 5%
level was used globally, dividing it by 60 (using α/2 = 0.0004) according to
a bound known as the Bonferroni inequality. This resulted in a substantial
decrease in the probability of declaring significant effects, that is, a very
conservative procedure. The less stringent correction proposed by Benjamini &
Hochberg (1995) was also used. To obtain sufficient precision for such extreme
quantiles, N = 1,000,001 permutations was chosen; in this case the number of
simulated values beyond the target quantile of α/2 = 0.0004 is about 400. This is
standard statistical practice. P-values were computed for each test, giving the
significance at every level: if the P-value is 0.02, then the corresponding test is
significant at higher levels (e.g. 5%) and not significant at lower levels (e.g. 1%).
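The two corrections can be sketched as follows; the P-values below are hypothetical stand-ins for illustration, not the study's 60 actual results:

```python
# Bonferroni: run each of the 60 tests at the global level divided by 60.
alpha, n_tests = 0.05, 60
per_test_level = alpha / n_tests  # ~0.00083 per test (about 0.0004 per tail)

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: find the largest rank k such
    that p_(k) <= (k/m) * alpha, and reject the k smallest P-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    return {order[r] for r in range(k_max)}

# Hypothetical example: two very strong effects among 58 null results.
pvals = [0.000001, 0.00002, 0.012, 0.3] + [0.8] * 56
print(round(per_test_level, 5))            # 0.00083
print(sorted(benjamini_hochberg(pvals)))   # [0, 1]
```

With such a pattern the two procedures agree, mirroring the situation reported below where Bonferroni and the less stringent step-up correction identified the same two significant tests.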
   It is worth noticing that a possible litter effect is not taken into account and
could bias the planet effect under study. But because whelping takes several
hours, pups belonging to the same litter have different planetary positions, and
the consequence of neglecting such an effect is an attenuation of the planet
effect.
   Using this approach, two planets were found to have an effect on the same
behaviour trait. It was therefore decided to examine the possible interactive
effect of the planets. To this end, the planet 1 × planet 2 × behaviour trait table
(2 × 2 × 2) was considered as a 4 × 2 table, with four rows associating the
combinations of the planets and two columns for the behaviour trait.
   This provided a Chi-square of independence with three degrees of freedom
that was further broken down, nesting the two planets' effects according to the
two possibilities.

                                        Results

   The distribution of the dogs over all combinations of behaviour trait and
astrological trait is given in Table 3. The main results of the statistical tests are
presented in Table 4 and, for trait ED, in Figure 1. For the global level of α = 0.05,
Hochberg's correction and Bonferroni correction gave identical results: two
significant tests out of the 60. These are the associations between ED and Jupiter

                                           TABLE 4
Indications for Each Behaviour Trait Detected by Statistical Analysis and Classical Signification
                       Attributed in Traditional Astrology for Humans

                            Associated planet
Behaviour trait      (with P-values of significance)        Traditional interpretation for humans

EA (active)          Jupiter in excess (-, 0.069)          Active, extravert, sociable, charismatic
                     Saturn in deficit (-, 0.099)          Not reserved, not introvert
ED (dominant)        Jupiter in excess (***, 0.000)        Active, extravert, sociable, charismatic
                     Sun in excess (***, 0.00002)          Strong personality
                     Mercury in excess (*, 0.012)          Communicative
                     Pluto in deficit (-, 0.112)           ?, various interpretations
ER (reserved)        Jupiter in deficit (*, 0.009)         Non-dominant, non-charismatic
                     Sun in deficit (-, 0.12)              Non-sociable, weak personality
                     Moon in excess (-, 0.059)             Sensitivity
NA (affective)       Moon in deficit (*, 0.042)            Insensitive
                     Neptune in excess (*, 0.019)          Dreamy
                     Saturn in deficit (-, 0.059)          Unthinking
NN (nervous)         Mars in deficit (*, 0.047)            Lacking in force
                     Saturn in excess (*, 0.038)           Introvert
NS (stable)          Pluto in deficit (-, 0.095)           ?, various interpretations

Note: Effects are indicated as follows: planet effects detected (***); strongly suggested (*); and
suggested (-). By detected, we mean that it is considered significant at 0.0002; by strongly
suggested, that it is considered significant at 10%; and by suggested, that it is considered the
strongest effect among the ten planets, or almost 10% significant.

and between ED and the Sun, and they are amazingly strong. The drastic
significance level we used for the Bonferroni test was cleared by a wide margin:
strikingly, not one of the 1,000,001 proportions computed for Jupiter was greater
than the observed value. Some other, much less impressive, associations are
suggested and these are
shown in Table 4.
   Concerning the effect of Jupiter and the Sun on the same behaviour trait (ED),
possible interaction was analysed in the (2 × 2) × 2 table (Table 5). No addi-
tional effect was found among dogs positive for both Jupiter and the Sun. Both
planets have a strong effect but it does not appear to be cumulative.

                               Discussion and Conclusions
   This empirical study demonstrates that some relationships exist between the
moment of birth of dogs, characterized by the "angular" positions (i.e. rising,
setting and upper/lower culminations) of astrological planets, and indepen-
dently assessed behaviour traits. They appear particularly strong in the case of
dominant dogs influenced by the Sun, Jupiter and, to a lesser extent, Mercury.
   The effects must be compared with one of the tools of classical human as-
trology concerning the relationship described (Fuzeau-Braesch, 2004; Lewis,
2003) for births with the Sun and Jupiter in these "angular" positions. Humans
in this category are generally described as charismatic, dominant, strong, socia-
ble and influential in a group. This is obviously comparable with the canine

                     [Figure: ED (120/380 +/- dogs), 1,000,001 simulations]

Fig. 1. For the ED behaviour trait, the proportion of positive dogs for each of the ten planets is
        displayed (dots). The lines show the empirical distribution computed by the permutation
        tests: the (heavy) solid line is the median, the dashed lines are respectively, from bottom to
        top, the quantiles 0.0001, 0.0004 (solid), 0.001, 0.01, 0.05, 0.95, 0.99, 0.999, 0.9996 (solid),
        0.9999. Planets have been ordered according to their P-values.

equivalent where the corresponding pup holds a dominant position among its
peers during its first two months of life. According to the breeders, it is always
the first to eat, and this is accepted by the entire group; it will push the others
away with impunity to get the attention of human attendants or just to move
around. This parallel is remarkable and cannot be due to chance.
   Other effects are no more than suggestions; probably a larger sample of dogs
would be necessary to detect them statistically with greater confidence. Never-
theless, there are striking similarities with traditional human astrology indicated
in Table 4. Notable among them are those concerning the Sun, the Moon,
Mercury, Mars, Jupiter, Saturn and Neptune. A "nervous" (NN) dog is often
born with Saturn in an "angular" position, which may result in a tendency to

                                           TABLE 5
 Distribution of the 500 Dogs According to Jupiter, the Sun and the Extraversion Dominant Trait

                                                     ED+                                        ED-

Ju+   and   Su+                                       10                                          12
Ju+   and   Su-                                       34                                          53
Ju-   and   Su+                                       32                                          53
Ju-   and   Su-                                       44                                         262
introversion. A lack of Mars can be a weakening influence and this, too, can
result in a sensitive and timid animal.
   The results for the "reserved" (ER) animals must also be considered here:
they show Jupiter and the Sun in deficit (non-dominant, non-sociable) and the
Moon in excess (sensitive), which is also remarkably similar to classical
interpretations for humans. An ambiguity must also be noted in the "affec-
tionate" (NA) case. This term is always used by breeders for dogs which like
being picked up and are happy to be handled; this is difficult to interpret. No
convincing results have been obtained for "stable" (NS).
   It may be underlined that the results are all the more convincing in that the
tools we applied (description of behaviour, classical astrology) are not
commonly in use for dogs.
   The similarity between observations of dogs and human astrological
descriptions can only be explained by the existence of a physical causal effect,
so far unknown. Dogs seem to react in a very similar way to that which would be
predicted by one of the classical astrological rules for humans, the "angular" sky
elements. This eliminates the argument frequently advanced to "explain" this
astrological tool; the fact that the human mother, knowing the birth chart of her
children, influences her child in the "right" direction. Clearly no such cultural
factor can occur in dogs. It is also difficult to invoke a factor of a hereditary
nature: for such a factor to be effective, all pups of a given litter would have to
be born under the same planetary positions, which is not the case due to the
duration of whelping.
Indeed, pups coming from the same litter have different behaviours and different
sky positions.
   Thus it must be supposed that a causal physical influence exists. It is worth
recalling here various studies on the reception of waves emanating from sky
elements, particularly the Sun and Jupiter. It is well known that in short wave
radio, for example, receivers must be retuned at the rising, the culmination and
the setting of the Sun, this being a result of the ionosphere acting as a plasma
(Soloviev, 1998). Jupiter has also been much studied for its own waves which
reach the Earth in spite of its magnetic environment (Rogers, 1995; Rosolen
et al., 2002). Planetary magnetospheres of the various elements of the solar system
are now a subject of new and vigorous research with spacecraft observation.
They are very dynamic objects (Blanc et al., 2005) and it is not inconceivable
that the time may be ripe to consider interdisciplinary work between
astrophysics and astrology.
   These observations in dogs must be followed up by much further similar
research, in the search for more insight into the veracity and the limits of
astrology. This is all the more necessary as so very few studies of the subject,
anywhere in the world, have been so far recognized as scientific (Dean &
Mather, 1977), with the exception of those of Gauquelin (1973, 1982) on angular
planets and professions.
   In future, studies may also concern the cognitive sciences linked to the
organization of behavioural differentiation of individuals.

                                  Acknowledgments

   Thanks are due to the following breeders of pedigree dogs who kindly agreed
to participate in this study: Mesdames and Messieurs Calais, Cattelain, Carillon,
Falchi, Gora, Jenny, Ladiray, Lalliot, Le Borgne, Lepoudère, Morisset and
Reinard. We would also like to thank the reviewers for their interesting
comments, especially the indication concerning Hochberg's correction for
simultaneous testing.

                                     References

Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful
   approach to multiple testing. Journal of the Royal Statistical Society B, 57, 289-300.
Blanc, M., Kallenbach, R., & Erkaev, N. V. (2005). Solar system magnetospheres. Space Science
   Reviews, 116, 227-298.
Dean, G., & Mather, A. (1977). Recent Advances in Natal Astrology: A Critical Review 1900-1976.
   London, UK: Astrological Association.
Eysenck, H. J. (1990). Rebel with a Cause. London, UK: Allen.
Eysenck, H. J., & Wilson, G. (1975). Know Your Own Personality. Penguin Books.
Fuzeau-Braesch, S. (2004). Astrologie, connaissance de soi [Astrology, self-knowledge]. Palaiseau,
   France: Ed. Agamat.
Gauquelin, M. (1973). Le dossier des influences cosmiques [The dossier of cosmic influences]. Paris,
   France: Ed. Denoël.
Gauquelin, M. (1982). Psychology of the Planets. San Diego, CA: Astrocomputing Service.
Good, P. (2004). Permutation, Parametric, and Bootstrap Tests of Hypotheses. Springer.
Lewis, J. R. (2003). The Astrology Book: The Encyclopedia of Heavenly Influences. Canton, MI:
   Visible Ink Press.
McFarland, D. (1990). Dictionnaire du comportement animal [Dictionary of animal behaviour].
   Paris, France: R. Laffont.
Rogers, J. H. (1995). The Giant Planet Jupiter. Practical Astronomy Handbook. Cambridge
   University Press.
Rosolen, C., Lecacheux, A., Gerard, E., Clerc, V., & Denis, L. (2002). High dynamic range
   interference-tolerant digital receivers for radioastronomy: Results and projects at Paris and
   Nançay Observatory. In The Universe at Low Radio Frequencies, Proceedings of IAU Symposium
   (pp. 00-00). City, India: Pramesh Rao Ed.
Soloviev, O. V. (1998). The low frequency radio waves propagating in the perturbed Earth-ionosphere
   waveguide with a large-scale three-dimensional irregularity. Radiophysics and Quantum
   Electronics, 41, 5.

                                      Appendix

   List of words used by the breeders (translated terms with original French
terms) to describe the behaviour of pups, and how the words were associated
with the six behaviour traits in the study.
   ACTIVE (actif)
   Active - actif
   Bold - audacieux
   Rascally - coquin
   Daring - culotté
   Curious - curieux
   Clever - débrouillard
   Bright - dégourdi
   Impudent - effronté
   Wide-awake - éveillé
   Expressive - expressif
   Frisky - exubérant
                  Astrological Factors and Dog Behaviour

Go-getter - fonceur
Cheerful - gai
Noisy - gueulard
Playful - joueur
Crafty - malin
Responsive - réactif
Animated - remuant
Spontaneous - spontané
Lively - vivant
Roguish - voyou
Vivacious - vif
DOMINANT (dominant)
Aggressive - agressif
Belligerent - bagarreur
Strong character - caractère fort
Boss of the litter - chef de la portée
Determined - décidé
Dominant - dominant
Shameless - effronté
Strong - fort
Greedy - gourmand
Eats well - mange bien
Snappy - mordant
Doesn't give in - ne cède pas
Gets what he wants - obtient ce qu'il veut
Afraid of nothing - peur de rien
Knows what he wants - sait ce qu'il veut
Happy everywhere - se plaît partout
Beguiling - séducteur
Sociable - sociable
RESERVED (réservé)
Aloof - à l'écart
A little silly - bêta
Always gives in - cède toujours
Timorous - craintif
Discreet - discret
Distant - distant
Dominated - dominé
Sleepy - dormeur
Not dominant - non dominant
Unaggressive - pas agressif
Unplayful - pas joueur
Timid - réservé
Self-effacing - s'écrase devant les autres
Solitary - solitaire
Touchy - susceptible
Shy - timide
292                     S. Fuzeau-Braesch & J.-B. Denis

AFFECTIVE (affectueux)
Affectionate - affectueux
Likes petting - câlin
Confident - confiant
Gentle - doux
Tender - tendre
NERVOUS (nerveux)
Sensitive - sensible
Diffident - effacé
Impressionable - impressionnable
Nervous - nerveux
Easily frightened - peureux
Whiny - pleureur
Whimperer - pleurnichard
Wild - sauvage
Restless - agité
STAID (stable)
Staid - stable
Compliant - adaptable
Pleasant - agréable
Friendly - aimable
Demonstrative - avenant
Cool-headed - bien dans sa tête
Relaxed - décontracté
Balanced - équilibré
Good character - heureux caractère
Independent - indépendant
Not dominant - non dominant
Not afraid - pas craintif
Calm - pas nerveux
Not shy - pas timide
Fits in anywhere - s'adapte à toute situation
Sedate - sage
Sure of himself - sûr de lui
Quiet - tranquille


The reviewers raised several points that readers should note.
   First, are there any independent data to justify applying to dogs a scheme
developed for humans? Any precedents? Moreover, this scheme is only one
among many that have been proposed; there is no mainstream consensus.
   Second, the classification of descriptors would have benefited from input
from some disinterested outsiders, as a way of avoiding subjectivity. As it

stands, one wonders whether "dominant" really should subsume all of
shameless, greedy, sociable, and happy.
   Third, in trials of human astrology, some have suggested that one should not
use young subjects because traits have not had enough time to be clearly
expressed. Might not the same concern apply here? The subjects were 2-month-
old puppies.
Journal of Scientific Exploration, Vol. 21, No. 2, pp. 295-317, 2007

        Exploratory Study: The Random Number Generator
                      and Group Meditation

      Department of Physical Medicine and Rehabilitation, Biomedical Engineering Institute,
         MMC 297, and Bakken Medical Instrumentation and Device Lab, University of
                              Minnesota, Minneapolis, MN 55455
                             "e-mail: Lynnemasonl08@yahoo.com

              Institute of Noetic Sciences, 101 San Antonio Road, Petaluma, CA 94952

      Abstract-Experiments using truly random number generators (RNGs) have
      reportedly demonstrated anomalous deviations in various group settings. To
      explore these claims, group meditation (averaging 261 females and 398 males)
      was tested as a venue for possibly inducing these deviations using a true
      RNG located in
      a large meditation hall. A total of 94 hours and 33,927 trials, each trial consisting
      of 1,000 random bits collected in 10-second periods, were recorded during
      meditation (Transcendental Meditation and advanced techniques). Cumulative
      deviation results were in accordance with chance expectation for baseline data,
      but showed significant non-randomness for the first (p < 0.00001) and second set
      of meditation data (p < 0.00001). A sub-section of the meditations, known as
      "yogic flying," showed significant deviations for both the first (p < 0.000001)
      and the second data sets (p < 0.000001). Results at a second test location known
      as the Vedic Observatory were significant for the first (p < 0.01) and second data
      collections (p < 0.05). All results were analyzed for any possible mean drift by
      subtracting differences in the pre- and post-test baseline slopes. After the
      adjustment for any drift, the direction and the experimental results were still
      significantly atypical, with a greater number of zeros being generated than ones.
      The use of non-exclusive-or-ed methods to eliminate drifts of the mean of the
      random data is discussed as well as the use of RNGs for measuring changes in
      collective consciousness associated with standardized meditation.
      Keywords: random number generator; random event generator; group
                consciousness; global consciousness; meditation; Transcendental
                Meditation; human/machine interactions

Truly random number generators (RNGs) have been used to measure the putative
anomalous influence of global and group consciousness in a variety of settings,
including meditations, meetings, ceremonies, sports events, and tragedies
(Bierman, 1996; Jahn et al., 2000; Nelson, 1997; Nelson, 2001; Nelson et al.,
2002; Nelson et al., 1998; Radin, 1997, 2002, 2006; Radin et al., 1996). RNGs
have also been used with
296         Lynne I. Mason, Robert P. Patterson, and Dean I. Radin

individuals and pairs to study the effect of human intention and human/machine
interactions (Dunne, 1998; Jahn et al., 1997; Nelson et al., 1998; Radin &
Nelson, 1989). Nelson has reported that RNGs or random event generators in
group situations were found to act non-randomly with significant deviations of
the means (or in some cases, variance) in situations involving "calm but
unfocused subjective resonance" and those "that foster relatively intense or
profound subjective resonance" (Nelson et al., 1998: p. 425).
    Of the various contexts tested thus far, perhaps group meditations are closest to
Nelson's (Nelson et al., 2002) prescription for the optimal environment to produce
deviations in the RNG outputs. Previous research has suggested that time-
synchronized as opposed to non-synchronized meditation appears to influence the
RNG to a greater extent. That is, a meditation involving a large number of people
worldwide practicing an assortment of types of envisioning, prayer and meditations
at the same time reached significance (p = 0.047) (Nelson et al., 1998), as did another
group meditation with a coordinated time (p = 0.012) (Nelson, 2002a). However,
a third group meditation with a non-synchronized time yielded a non-significant
result (Nelson, 2002a). It should be noted that the meditations in these tests included
a wide variety of mental activities, from casual and celebratory to formal meditation
techniques. The present study explored whether the group consciousness effect
might be enhanced by using a single, standardized form of meditation practiced by
hundreds of people at the same time and place.
   Radin (2001, 2002) reported the effects of the violent events of 9/11/01 in the
U.S.A. on a collection of international RNGs from the Global Consciousness
Project (GCP) (Nelson, 2002a), which became significantly non-random with
increasing variance. May and Spottiswoode (2001) have presented a reanalysis of
that data and contest the original interpretation of the results. In contrast to May
and Spottiswoode (2001), four researchers independently report significant
anomalies in the data (Nelson et al., 2002).
   In response to 9/11/01, over 1700 practitioners of Transcendental Meditation
gathered together from 9/23/01 to 9/27/01 at Maharishi University of
Management (MUM) in Iowa. Additional meditations and extended group
meditations with varying numbers of participants were organized in addition to
their normal meditation schedule. The normal daily schedule called for group
meditations to begin at 7:05 AM CST (except on Sundays, which were to begin at
7:35 AM) and at 5:20 PM CST.
   RNG data from the GCP were analyzed from 37 RNGs located at different
locations around the earth, but not including Iowa (Nelson, 2002c). Significant
deviations from chance were not achieved when evaluating all 735 minutes of
data collected over the five days of meditations. On the day of the peak number of
meditators (over 1800), there was an exploratory significant result (p = 0.0012). A
trend was also reached for a specific section of the meditation period known as
"yogic flying" when cumulated over the 5-day period. Nelson reported that the
relatively small number of days, five, ruled out further analysis of the yogic flying
deviations, underscoring the need for a longer multi-day study to allow for more

extensive investigation. Nelson (2002c) noted that during the yogic flying portions
of the meditations the significant deviation was in the direction opposite to that
observed in the majority of data from the Global Consciousness Project and from
Princeton University's Princeton Engineering Anomalies Research (Jahn, 2002).
This atypical direction result was also reported by Nelson during a Silent Prayer
on 9/14/01, Full Moon ceremonies, sacred sites in Egypt, and a prayer vigil
(Nelson, 2002a; Nelson et al., 1998). Because of these directional effects, Nelson
(2002c) discussed including directional predictions in future meditation studies.
Nelson (2006) stated "that a little more than half the events for the GCP that are
somewhat like meditation show the downward trend."
   There is an independent body of experimental evidence suggesting that large
groups of meditators practicing a single type of meditation (Transcendental
Meditation and advanced meditation practices) at a synchronized time decrease
violence, crime, car accidents, hospital admissions, and alcohol consumption
(Dillbeck, Landrith, & Orme-Johnson, 1981; Hagelin et al., 1999) and war
casualties (Orme-Johnson et al., 1988), and improve stock market performance
(Cavanaugh, Orme-Johnson, & Gelderloos,
1989). A time lag or carryover effect that diminishes over months has been
measured in studies evaluating the effect of group meditation on societal
indexes (Dillbeck, 1990; Dillbeck et al., 1987; Hagelin et al., 1999; Orme-
Johnson et al., 1988). The effects of these group meditations appear to involve
a distance factor, with the effect being greater in the vicinity of the meditation
groups (Hagelin et al., 1999). Similarly, RNG research has shown potential
distance effects with peak effects closer to the source of large global events as
measured by hemispheres, continents, country and region (Radin, 2001), but
further research is necessary because distance effects in research involving
intention typically have not been found when experimenting with individual
subjects (Radin, 1997; Jahn and Dunne, 1987). The use of both local RNGs as
well as distant RNGs with meditation groups would be necessary to conduct
a systematic study of the role of distance.
   The objective of the present study was to extend the previous research on
RNGs and meditation by 1) expanding the number of meditation sessions, 2)
incorporating a local RNG at the site of interest, 3) measuring a standardized
type of meditation practiced at coordinated times, with a precise count of par-
ticipants, and 4) taking note of the direction of the non-random nature of the
response. Three predictions were made:
   a) groups of people practicing the same meditation simultaneously in one
location would result in a significant departure from chance expectation (50%
ones and 50% zeros, a 0.5 expectation), specifically that the cumulative deviation
of the percent zeros would be greater than chance expectation obtained on a local
RNG as measured over the whole meditation; b) this would hold particularly for
a specific subsection of the meditation known as yogic flying; and c) the direction
of the non-randomness would show a decrease in ones and thus an increase in zeros.

                            Equipment and Methods
   A laptop computer and a truly random number generator (Orion V1.2) were
employed. The Orion RNG uses noise-based analog signals that are converted
into random bit streams. These bits are transmitted in the form of random bytes
to a standard RS-232 serial port. According to the manufacturer's manual (Orion,
2006), the baud rate is 9600, and the device is capable of supplying about 960
random bytes, or 7600 random bits, per second. Co-author Radin notes that
transmission of a byte in the context of serial communications takes 10 bits, not 8,
so the Orion provides about 9600 random bits per second.
   The RS-232 port was tested for accurate minimum voltage (> 5V) with the
actual voltage at 8.9 volts. The field recordings used a battery source for the
laptop and a time-stamped marker for recording sections of interest.
   Additionally, a second type of RNG (Mindsong, Inc. Research, microREG)
was used for a limited time. The Mindsong is described by the manufacturer as
incorporating, "Brownian movement of electrons using a Junction Field Effect
Transistor (JFET) in a high gain circuit that generates the Noise signal"
(Haaland, 2003). Non-deterministic randomicity is assured by the electron noise
in this JFET circuit. According to the Mindsong's manufacturer (Haaland, 2003),
the bits are transmitted to a standard RS-232 serial port with a baud rate of 9600
characters per second and a 2600 bits per second sampling rate. The majority of
research involving RNGs and consciousness has been done with additional
software to apply exclusive-or (xor) logic to the data. In the xor technique, the raw
data from the random number generator is "masked" or "exclusive ored" (xored)
either against a pseudo-random byte or a regular 0/1 sequence. According to the
RNG manufacturers, the advantage of xoring is to ensure randomness with less
chance of a bias; specifically, it eliminates systematic drifts in the mean. A
disadvantage of xoring the data against a fixed mask is that the output is no longer
raw binary data, and it may constrain long-term changes in the mean numbers of
ones and zeros. Scargle (2002) has proposed using non-xored data. Scargle (2006)
explains that using a logical xor operation and reversing some of the data in the
bit stream may, according to the design philosophy, totally eliminate anomalous
effects and all physical effects in consciousness research. Nelson (Nelson et al.,
2002) emphasizes that the existing large database of RNG studies that use xoring
and show significant experimental effects contradicts Scargle's viewpoint.
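The xor masking described above can be illustrated schematically (variable names are illustrative), here using the fixed alternating 0/1 mask that the text mentions as one of the two mask types:

```python
import itertools

def xor_mask(bits, mask):
    """Xor each raw bit against the corresponding mask bit."""
    return [b ^ m for b, m in zip(bits, mask)]

# A raw stream with an excess of ones (6 of 8), standing in for a drifting RNG.
raw = [1, 1, 1, 0, 1, 1, 0, 1]

# A regular 0/1 mask: every other bit of the raw stream gets inverted.
mask = list(itertools.islice(itertools.cycle([0, 1]), len(raw)))

masked = xor_mask(raw, mask)
# A constant bias toward ones is split between inverted and non-inverted
# positions, so the mean shift cancels -- the same property that, per the
# Scargle argument above, could also erase a genuine mean-shift effect.
print(sum(raw), sum(masked))  # 6 4
```

The masked stream's ones count moves back toward the 50% expectation even though the source was biased, which is exactly why the study ran its experimental data without this software mask.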
   The RNGs manufactured by Mindsong have additional hardware xoring
(Bradish et al., 1998; Haaland, 2003). As stated in the patent, "the analog output
of this random signal is converted to a random binary stream . . . and further
treated with a selective inverter that inverts some but not all of the series of data
values according to a pseudo-random sequence mask. The selective invertor
coupled to a sampler that inverts some, but not all, of the series of digital data
values to produce a selectively inverted series of digital data values is an
essential feature of the patent and our device. One of the benefits of this is the
prevention of baseline drift."

   In conclusion, a comparison of software-xored and non-xored data was
initially conducted at baseline. This was followed by the use of non-xored data
(no additional software xoring) for the rest of the experiments, based on the
design specification of the RNG: to avoid altering the original data with a mask
(Scargle, 2002).
   One trial with the Orion and the Mindsong RNGs consisted of 1000 bits
collected every 10 seconds, with 1000 trials per run (approximately 10,000
seconds/run, or 1,000,000 bits/run), for 2.78 hours. This configuration was chosen
for its capacity to capture one complete meditation period (approximately 120
minutes, well within the 2.78-hour limit) in one run, and within the capacity of
the laptop batteries. Acquisition and analysis software (Watson, version
MREGOOsl, 2001) provided the total number of bits counted and the calculation
of the deviation of ones (likewise zeros) from the RNG. Count mode was set for
ones to indicate a positive, increasing direction, and conversely for increased
zeros to indicate a decreasing, negative direction. An RNG with increasing ones
is less random because it generates more ones than zeros; in the acquisition
software this was designated by an increase in the positive, upward direction on
the analysis graphs. Increasing zeros means less randomness due to generating
more zeros than ones, represented by an increase in the negative, downward
direction of the graphs. This was a mean-shift analysis, not an analysis of
variance. The cumulative deviation (in reference to 50% ones, a 0.5 expectation)
of the ones counted from the trials consisting of 1000 bits every 10 seconds was
used for the statistical analysis.
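The trial structure just described can be sketched as follows; `random.getrandbits` stands in for the hardware bit source, so this mimics only the data format of a run, not the device itself:

```python
import random

BITS_PER_TRIAL = 1000   # bits collected per 10-second trial
TRIALS_PER_RUN = 1000   # ~10,000 s, i.e. about 2.78 hours per run

def cumulative_deviation(n_trials=TRIALS_PER_RUN, seed=0):
    """Cumulative deviation of the ones count from the expected 500 per trial."""
    rng = random.Random(seed)
    total, series = 0, []
    for _ in range(n_trials):
        ones = sum(rng.getrandbits(1) for _ in range(BITS_PER_TRIAL))
        total += ones - BITS_PER_TRIAL // 2   # expected 500 ones per trial
        series.append(total)
    return series

series = cumulative_deviation(n_trials=100)
# A persistent downward trend in `series` would correspond to the excess
# of zeros (negative direction) reported in the experiments.
```

For a fair source the series wanders near zero; the study's graphs plot exactly this kind of cumulative trace against parabolic chance-criterion envelopes.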
   Data analysis followed procedures previously described (Jahn and Dunne,
1987) for single RNG use and with Z-scores (Radin, 2002). Specifically,
sequential samples of 25 bytes were collected from the RNG; since each byte
consists of 8 bits, each sample yields 200 bits of ones and zeros. The number of
ones beyond 100 (the theoretically expected mean) was counted in each sample
and added to the previously accumulated number. The total number of bits
counted was calculated, the percent deviation of ones was calculated, and the
sums of the deviation were calculated. Z scores based on 95% confidence levels
were calculated as Z = (x − μ)/σ, where x is the sample value, μ is the mean, and
σ is the standard deviation. All tests are reported as one-tailed.
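Under the binomial model implied above (200-bit samples, expected mean 100, hence σ = √(200 · 0.25) ≈ 7.07), the Z score and its one-tailed p can be sketched as:

```python
import math

N_BITS = 200                       # 25 bytes x 8 bits per sample
MU = N_BITS * 0.5                  # expected number of ones: 100
SIGMA = math.sqrt(N_BITS * 0.25)   # binomial standard deviation ~= 7.071

def z_score(ones_counted):
    """Standard score of an observed ones count in one 200-bit sample."""
    return (ones_counted - MU) / SIGMA

def one_tailed_p(z):
    """One-tailed p for a deviation in the predicted (negative) direction."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

z = z_score(90)        # ten fewer ones than expected: z ~= -1.414
p = one_tailed_p(z)    # about 0.079
```

Applied to the terminal scores reported later, e.g. `one_tailed_p(-8.434)` gives roughly 1.7 × 10^-17, the order of the p value reported for experiment A.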
   Please note that the term "xored" as used here refers to additional software
xoring. Likewise, "non-xored" refers to not using additional software for xoring.
No changes were made internally to any of the RNGs, which, as described above,
employ internal xoring techniques.
   Four main tests were run (see Chart 1):
   - a pre-test baseline "control" period in our laboratory comparing xored
     data with data using no additional software xoring (non-xored) for the
     Orion RNG, as well as a pre-test baseline "control" period in our
     laboratory for the Mindsong RNG with no additional software xoring
     (non-xored);
   - experiment A, consisting of recordings of a meditation group, a
     subsection of the group meditation known as yogic flying, and also a site
     known as the Vedic Observatory, for the Orion and Mindsong RNGs;
   - experiment B, consisting of a replication of experiment A but for a
     longer period, using the Orion RNG;
   - a post-experimental "control" period in our laboratory/offices using the
     Orion RNG.

                                                      SLOPE      (DIFFERENCE)
  "CONTROL"                 IN LABORATORY             -0.20
  EXPERIMENT A              1. IN MEDITATION HALL     -2.18        (-1.08)
  6/16/02-6/27/02           2. YOGIC FLYING           -9.52        (-8.42)
                            3. VEDIC OBSERVATORY      -2.09        (-0.99)
  EXPERIMENT B              1. IN MEDITATION HALL     -2.03        (-0.93)
  7/15/02 - 8/1/02          2. YOGIC FLYING           -9.30        (-8.20)
                            3. VEDIC OBSERVATORY      -1.81        (-0.71)
  "CONTROL"                 1. IN LABORATORY          -1.34
  8/6/02-1/12/03            2. SHIELDED RNG           -1.10

Chart 1:   Shows a summary of the procedures of the experiments and the resulting slopes. Slope is
           the number of excess ones beyond expectation when random, per one thousand bits.

   No formal predictions were made for the post-experimental "control" results.
The experimenters in this exploratory research decided they did not have enough
information to precisely predict a possible lag effect, but a lag effect was
considered, as previous studies had reported "carryover" effects after the
meditations had ended (Dillbeck, 1990; Dillbeck et al., 1987; Hagelin et al., 1999;
Orme-Johnson et al., 1988). The effects do not appear to end immediately when
the group meditation is over, just as the music does not immediately end when
a coherent orchestra stops playing. We hear the music for a few moments after
the musicians have stopped playing, the sound lags or carries over. Likewise the
effects of the meditation as recorded by the RNG may not end when the medi-
tation recordings are over but also carry over or lag.
   The control condition in this experiment was defined in purely operational
terms, i.e., data collected while not "exposed" to meditation. Nelson et al.
(1998, p. 452) notes "that even in laboratory experiments there is evidence
that traditional control data may not be immune to anomalous effects".

The Laboratory Pretest Baseline Control Comparison of
Xored and Non-xored Data
   The Orion RNG was run in our university device lab in a small office, 5' × 7',
used occasionally as a library, 3' from the first author's desk. Data collection
consisted of 30 hours xored, 30 hours non-xored and an additional 89 hours non-
xored with the Orion RNG. As noted previously, non-xored in this paper refers
to no additional software xor masking of data. These were considered the pre-
test "control" baseline samples for an inactive period. A Mindsong RNG non-
xored was also run in our lab.

Experiment A in the Meditation Hall for Group
Meditations Including Yogic Flying
   The RNG was located at Maharishi University of Management (MUM) in
Fairfield, Iowa, a rural university town with a population of approximately
10,000, with two meditation halls 0.25 km apart, designated by gender. RNG
recordings took place in the first hall with an average of 261 female meditators
(range 178-356, sd = 42). Attendance numbers are methodically tallied before
each meditation for the purposes of future research at MUM. Collection during
the summer was practical due to our laboratory's overall research schedule, even
though fewer meditators were present because of their summer vacation sched-
ules. It should be noted that in addition to the meditators in the first hall where
the RNG was located, there were male meditators in the adjacent meditation hall
(average 398) during the recording periods on the same time schedule practicing
the same meditation techniques. Only the first two authors and a research officer
(from MUM) were aware of the recordings taking place. The meditation group
was not aware of the experiment, and the RNG located in the women's
meditation hall was not visible to the participants. Since the participants were
not aware of the RNG experiment, they did not use intention to attempt to
influence the results.
   Meditation recording sessions lasted approximately 2 hours each and began at
7:05 AM CST (Sundays at 7:35 AM) and 5:20 PM CST. Our schedule and budget
estimates allowed us to collect data in two trips, for a total of 94 hours of
meditation data. This would expand the previous research done at MUM of 58.75
hours over 5 days (Nelson, 2002c). It would also exceed the RNG meditation
research conducted elsewhere that used multiple meditation techniques, including
a single 3-minute period, a single 10-minute period, and a single 1-hour period
(Nelson).
   Extended and additional smaller group meditations, scheduled before and
after the main group meditations (7:05 AM CST and 5:20 PM CST) in the
meditation halls, made planned comparisons before or after the meditation periods

impractical. Furthermore, others have discussed a residual effect or lag effect of
the group meditation predicted to last even after the daily meditation time is
finished (Oates, 2002). The meditation hall has been used for meditation for over
20 years, 365 days a year and accordingly may not qualify as a neutral non-
active control site even during non-meditation periods of the day.
   Analysis was planned for the entire meditation period as a whole and then,
following Nelson (2002c), for a specific section of the meditation known
as yogic flying. Phenomenological reports of yogic flying include descriptions
of waves of bliss (Alexander and Langer, 1990). Yogic flying is based on the
ancient Yoga sutras of Patanjali (1978) and is predicted to create peace in the
collective consciousness (Hagelin et al., 1999).
   Experiment A in the Vedic Observatory. Within 5 kilometers of the medi-
tation hall is an open-air site known as the Maharishi Vedic Observatory. It
consists of ten precisely designed and positioned astronomical instruments based
on ancient designs of sundials or "yantras," each about 2 meters high (Global
Vedic Observatories Corporation, 1996). Observing the instruments is predicted
to create psycho-physiological balance (Global Vedic Observatories Corpora-
tion, 1996) as well as development of "peak experiences" (Maslow, 1962) and
stabilized "higher states of consciousness" (Alexander & Langer, 1990; Mason
et al., 1997; Travis et al., 2002).
   Each recording session using the RNG located in the center of the Vedic
Observatory was approximately 90 minutes long. The majority of sessions
involved only the first author at the Vedic Observatory site, although there were
short periods of unscheduled visitors in a minority of sessions. The majority of
sessions were recorded between 1:00 PM CST and 3:00 PM CST with a few
exceptions due to weather and schedule conflicts. None of the recordings at the
Observatory were made during the same time as the meditation recordings.

Experiment B
  Replication of experiment A with increased data collection for 18 days.

In the Laboratory Post-experimental "Control"
  Non-xored recordings in our laboratory repeated the pre-test baseline
"control" recordings.


Laboratory Pre-test Baseline "Control" Comparison of
Xored and Non-xored Data
   The baselines for both xored and non-xored data in the laboratory setting
were random for the RNG (Orion). Results show no significant terminal (end-of-
interval) non-randomness in the 30 hours, 10,883 trials, or 10,883,000 bits of
xored data (Z = 0.834, p = 0.798) (Figure 1A). Likewise, there was no significant

Fig. 1A. A pre-test control period showing non-significant deviations from an RNG. Pre-test RNG
         xored shows 30 hours of data, 10,883 trials, 10,883,000 bits of xored data. Pre-test RNG
         non-xored shows 30 hours, 10,883 trials, 10,883,000 bits of non-xored data using the same
         equipment, RNG, and location in our lab. As expected, the RNGs did not reach terminal
         significance at the p = 0.05 level in the control period.
Fig. 1B. An additional pre-test control RNG non-xored period of 89 hours, 32,000 trials,
         32,000,000 bits of non-xored data from an RNG in our lab. As expected, there is no
         significance for the control period at the p = 0.05 level. The cumulative deviation plots
         show parabolic lines for one standard deviation and p = 0.05 as a chance criterion as
         a function of increasing trials. The jagged solid lines show the cumulative deviations over
         all the trials. SD = standard deviation. RNG = Random Number Generator.

terminal non-randomness for 30 hours, 10,883 trials, 10,883,000 bits of non-xored
data (terminal Z = -1.33, p = 0.091) (Figure 1A) and for 89 hours, 32,000 trials,
32,000,000 bits of non-xored data (terminal Z = -1.138, p = 0.127) (Figure 1B).

Experiment A in the Meditation Hall for Group Meditations
   Results included significant anomalies (terminal Z = -8.434, p = 1.697 ×
10^-17) for nineteen group meditation sessions from experiment A, totaling 32
hours, 11,360 trials, 11,360,000 bits of data. As described below, data were
reanalyzed to take into account a possible mean drift and still maintained
significance (terminal Z = -4.726, p = 1.1449 × 10^-6) (Figure 2A). The results
were in a decreasing direction, indicating increasing cumulative zeros.
   The reanalysis for a possible mean drift involved finding the linear regression
slope of the cumulative pre-test data (using the Orion) and subtracting it from the
slope of the cumulative post-test data. This reanalysis was performed because
a difference was found between the pre- and post-data terminal Z scores. Slope
was calculated on the cumulative deviation scores for both the pre-test data and
post-test data non-xored. The comparison involved the non-xored pre-test data of
89 hours and the non-xored post-test data of 89 hours. The slope was determined
by using regression data analysis in Excel 2000 with a zero intercept. The input
ranges were the cumulative deviation in bits and the number of trials. The output
variable represents the slope. The difference between the pre- and post-test
slopes was subtracted from each cumulative bits deviation score for each trial
during the experimental phase. Specifically, in this case the difference in the
pre- and post-test slopes was subtracted from the meditation cumulative
bits-per-trial scores.
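One plausible reading of the drift adjustment just described can be sketched as follows (function names are illustrative; the original analysis used Excel's regression with a zero intercept, and the numbers here are made up for illustration):

```python
def zero_intercept_slope(trials, cum_dev):
    """Least-squares slope through the origin: sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(trials, cum_dev)) / sum(x * x for x in trials)

def drift_adjust(trials, cum_dev, pre_slope, post_slope):
    """Subtract the post-minus-pre slope difference, accumulated per trial,
    from each cumulative deviation score."""
    drift = post_slope - pre_slope
    return [d - drift * t for t, d in zip(trials, cum_dev)]

# Illustrative numbers: a series drifting at -3 bits per trial, with a flat
# pre-test baseline (slope 0) and a post-test baseline sloping at -3.
trials = [1, 2, 3, 4]
cum_dev = [-3, -6, -9, -12]
pre, post = 0.0, zero_intercept_slope(trials, cum_dev)  # post = -3.0
adjusted = drift_adjust(trials, cum_dev, pre, post)
print(adjusted)  # [0.0, 0.0, 0.0, 0.0]
```

A series whose entire downward trend matches the baseline drift is flattened to zero, while any deviation beyond the drift survives the adjustment, which is why the meditation results could remain significant after this correction.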

Experiment B in the Meditation Hall for Group Meditation
   Data collection consisting of 32 group meditation sessions totaling 63 hours,
22,567 trials, and 22,567,000 bits of data from experiment B is also significant
(terminal Z = -9.068, p = 6.126 × 10^-20). The data were reanalyzed for a possible
cumulative drift and remained significant (terminal Z = -3.872, p = 5.397 ×
10^-5) (Figure 2A). The results were in a decreasing direction, indicating more
zeros than ones.

Experiment A in the Meditation Hall for Yogic Flying
   The yogic flying portions of the meditations are also highly significantly non-
random (terminal Z = -14.046, p = 4.12 × 10^-45) for the first set of data from
experiment A, consisting of 5 hours of data, 1728 trials, 1,728,000 bits. It
maintains significance (terminal Z = -12.600, p = 1.061 × 10^-36) after reanalysis
for a possible cumulative drift (Figure 2B).

Experiment B in the Meditation Hall for Yogic Flying
   The data for the yogic flying portions of the meditations from experiment B
are also significant (terminal Z = -14.774, p = 1.087 × 10^-49) for 8 hours, 2,971
trials, 2,971,000 bits, and maintain significance (terminal Z = -12.639,
p = 6.471 × 10^-37) after reanalysis for a possible cumulative drift (Figure 2B).
The direction for the yogic flying data for experiments A and B is an atypical
decreasing direction, indicating more zeros than ones. The yogic flying slopes
(-9.52 and -9.30) are steeper (see Chart 1) than the slopes of the meditations as
a whole (-2.18 and -2.03).

Mindsong RNG
   Figure 2C represents trials with a second type of RNG, by Mindsong. The
pre-test Mindsong RNG data used no additional software to xor the data. The
pre-test data was taken in our laboratory's office and was non-significant
(Z = 0.222, p = 0.5878) for 23.5 hours of data, 8470 trials, and 8,470,000 bits.
The Meditation RNG Mindsong data in Figure 2C was acquired without software
xoring and recorded in the meditation hall, and is significant (terminal
Z = -5.248, p = 7.6951 X 10^-8) for 23.5 hours of data, 8470 trials, and
8,470,000 bits. The Mindsong RNG stopped functioning after 8000 trials
and was unable to output data. There was no graphical display or numerical
data display from the RNG.

Experiment A in the Vedic Observatory
  The Vedic Observatory recordings for experiment A, consisting of 24 hours of
data, 8,918 trials, and 8,918,000 bits, are significantly non-random (terminal
Z = -5.950, p = 1.378 X 10^-9) and were significant after reanalysis for a possible
cumulative drift (terminal Z = -2.64, p = 0.004) (Figure 3).

Experiment B in the Vedic Observatory
  Experiment B consisted of significant Vedic Observatory recordings for 31
hours of data, 11,271 trials, and 11,271,000 bits, which were significantly non-
random before (terminal Z = -5.440, p = 2.664 X 10^-8) and after reanalysis for
a possible cumulative drift (terminal Z = -1.75, p = 0.040) (Figure 3).

The Laboratory Post-experimental "Control"
    Possible cumulative drift. The RNG (Orion), after approximately 480 hours
of data collection (80-90 hours per month over 6 months), did not appear to
behave as it did before recording in the meditation hall and the Vedic
Observatory. Specifically, when the RNG was rerun non-xored in our laboratory,
it showed a possible cumulative linear downward drift as compared to the pre-test
non-xored control baseline in our laboratory. The slope was calculated as the
excess of ones over the number expected if random, per 1000 bits; a negative
number indicates an increased proportion of zeros. At post-test, 89 hours of
post-test control data collection, for 32,000 trials and 32,000,000 bits, was
significant (terminal Z = -7.28, p = 1.70 X 10^-13) with a slope equivalent to
-1.3 excess ones/1000 bits (Figure 4).
At baseline before the experiment there was not a significant terminal Z score
306     Lynne I. Mason, Robert P. Patterson, and Dean I. Radin

[Figures 2A-2C: cumulative deviation plots vs. number of trials; captions below.]

downward trend (Figure 1B). At the end of the post-test period the RNG had not
returned to pre-test baseline behavior. However, the terminal Z scores were not
significantly different pre and post when run xored.
   Electronic devices are more likely to fail or develop drifts in the early part of
their operating life, then level off, and then fail more often again as they age
(US government inspector's technical guide, 1987). The new RNG (Orion) could
have developed a cumulative drift with use. However, the manufacturer reported
testing the device for randomness prior to shipping. Co-author Radin reports
continued randomness after 5 years of use with the same type of RNG. It is
conceivable, but not likely, that both of the independent random data streams
that are xored together inside the Orion RNG developed similar biases, resulting
in more zeros than ones.
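The xoring point can be illustrated numerically. For two independent streams with P(one) = p and q, the xored stream has P(one) = p + q - 2pq, so individual biases largely cancel unless both streams drift in a similar way. A small simulation (illustrative only, not the Orion's actual hardware logic):

```python
import random

def xor_streams(a, b):
    """Elementwise XOR of two bit streams."""
    return [x ^ y for x, y in zip(a, b)]

random.seed(42)
n = 100_000
# Two independently biased streams, each with P(one) = 0.55.
a = [1 if random.random() < 0.55 else 0 for _ in range(n)]
b = [1 if random.random() < 0.55 else 0 for _ in range(n)]
c = xor_streams(a, b)

# Expected P(one) for the xored stream: 0.55 + 0.55 - 2*0.55*0.55 = 0.495,
# i.e. a residual bias of 0.005 instead of the original 0.05.
print(sum(a) / n, sum(b) / n, sum(c) / n)
```

This is why a persistent bias in the xored output would require both internal streams to develop similar biases at once.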
   No formal predictions were made for the post-experimental results. The
experimenters in this exploratory research decided they did not have enough
information to precisely predict a possible lag effect. A lag effect was considered
as previous studies with this type of meditation had reported a carryover or lag
effect on the experimental measurements even after the experimental period of
meditation had ended (Dillbeck, 1990; Dillbeck et al., 1987; Hagelin et al.,
1999; Orme-Johnson et al., 1988).
   Regardless of the reason for the possible mean drift, the difference of the slope
of the post-test data (-1.3 excess ones/1000 bits) from the pre-test control data

Fig. 2A. Shows significant deviations from randomness (p = 1.1449 X 10^-6 adjusted) for
         Meditation RNG Experiment A during 19 group meditation sessions for a total of 32
         hours, 11,360 trials, 11,360,000 bits of data. The 2-hour sessions of group practice of
         Transcendental Meditation and advanced practices involved an average of 261 females
         and 398 males. The results were replicated (p = 5.397 X 10^-5, adjusted) as shown in
         Meditation RNG Experiment B with 32 group sessions for a total of 63 hours of
         meditation, 22,567 trials, and 22,567,000 bits of data. RNGA = Random Number
         Generator Experiment A, RNGB = Random Number Generator Experiment B.
Fig. 2B. Shows the concatenated accumulated deviations from a more advanced section of the
         meditations known as yogic flying. Yogic Flying RNG Experiment A shows nineteen
         15-minute sections, 4.7 hours, 1728 trials, and 1,728,000 bits. Yogic Flying RNG
         Experiment B shows 32 sections, 8 hours, 2,971 trials, and 2,971,000 bits. The yogic flying
         portions of the meditations are highly significantly non-random for both experiments
         (p = 1.061 X 10^-36 and p = 6.471 X 10^-37, adjusted, respectively) and the slopes (Chart 1)
         are eight times more significant than the meditation data (Fig. 2A). The direction of the
         data is an atypical decreasing direction indicating increasing zeros. RNGA = Random
         Number Generator Experiment A, RNGB = Random Number Generator Experiment B.
Fig. 2C. Shows a second type of RNG by Mindsong. Pre-test RNG Mindsong shows a control
         period of 23.5 hours of non-xored data collection, 8470 trials, and 8,470,000 bits recorded
         in our laboratory and, as expected, it is not significant. Meditation RNG Mindsong shows
         23.5 hours of non-xored data collection, 8470 trials, and 8,470,000 bits taken during
         meditation that are significant (p = 7.69 X 10^-8, adjusted). The cumulative deviation plots
         show parabolic lines for one standard deviation and the p = 0.05 chance criterion as
         a function of increasing trials. The jagged solid lines show the cumulative deviations over
         all the trials. SD = standard deviation, RNGMP = Random Number Generator Mindsong
         Pretest, RNGM = Random Number Generator Mindsong.

Fig. 3. Data collected at the Vedic Observatory for RNG experiment A shows 24 hours of 8,918
        trials and 8,918,000 bits, and for RNG experiment B 31 hours, 11,271 trials, and
        11,271,000 bits. Both experimental results are significantly non-random (p < .01 adjusted
        and p < .05 adjusted, respectively).
         The cumulative deviation plots show parabolic lines for one standard deviation and the p =
         0.05 chance criterion as a function of increasing trials. The jagged solid lines show the
         cumulative deviations over all the trials. SD = standard deviation, RNGA = Random
         Number Generator Experiment A, RNGB = Random Number Generator Experiment B.

(-0.2 excess ones/1000 bits) was calculated, and this slope was subtracted from the
original data for the meditation, yogic flying, and Vedic Observatory (Chart 1).
The difference in the slopes was subtracted from each cumulative bits deviation
score for each trial during the experimental phase. The reanalyzed data takes into
consideration a possible mean drift and is presented in Figures 2A, 2B and 3.
   Malfunctioning and equipment failure were checked for by substituting an alter-
nate laptop and connector post-test. This did not change the post-test results,
which continued to have a significant terminal Z score. There was no indication
of computer or connector failure. Bierman (2002) offered the opinion that if the
RNG is terminally significant for some runs and non-significant for others in the
upward direction, and terminally significant for some and non-significant for
others in the downward direction, this would not indicate a malfunction. In his
viewpoint, if the RNG were malfunctioning, all the individual runs would be
expected to be similar, as opposed to showing a variety of results across runs.
However, non-symmetrical distributions could affect outcomes.
   Additional testing was performed for electrical and magnetic interference by
running the RNG without additional software for xoring in an electrically

Fig. 4. Shows post-test RNG xored, 89 hours of xored data collection, 32,000 trials, 32,000,000 bits
        in our lab. The results are not significant. Post-test RNG non-xored, shows 89 hours of non-
        xored data collection, 32,000 trials, and 32,000,000 bits in our lab. The non-xored results are
        significant and could indicate a cumulative or mean drift or a lag effect of the meditation.
         The cumulative deviation plots show parabolic lines for one standard deviation, and p = 0.05
         plot of chance criteria as a function of increasing trials. The jagged solid lines show the
         cumulative deviations over all the trials for data set 1 and data set 2. SD = standard deviation,
         RNG = Random Number Generator.

shielded isolated room for 48 hours post-test. The RNG (Orion) was run for 48
hours in a Faraday cage for electrical shielding, with mu foil for magnetic
shielding, and then placed for 48 hours in a cylindrical Faraday cage with
a height of 25 centimeters and a diameter of 12 centimeters for electrical
shielding, with a 37-centimeter connector cord to distance the RNG from the
laptop. The results using the Faraday cage (without additional software xoring)
had a slope of -1.10, similar to the experimental post-test slope of -1.34
without the Faraday cage. The concern of electrical and magnetic interference
was not supported.

Z Score Analysis
   Following Radin (2002), percentages of significant Z scores at the 0.05 level for
the pre-test, meditation, and yogic flying sessions of experiments A and B were
calculated to test for outliers. The purpose was to examine whether many of the
group meditation sessions and yogic flying sessions contributed significantly to
the outcome, rather than just a few highly significant meditation or yogic flying
sessions skewing the results. A window of 115 minutes was selected because it is the length of


Fig. 5. Z scores using a window of 15 minutes for all the yogic flying sessions (experiments A and
        B combined). This indicates that a majority of the yogic flying sessions were significant at
        the 0.05 level.

a typical meditation session. A reanalysis to control for a possible cumulative drift
was performed and is presented in parentheses below. In the pretest control, 8% of
the non-xored data were significant at the 0.05 level, versus 58% (28%) for the
Experiment A meditation sessions, 44% (26%) for the Experiment B meditation
sessions, and 19% of the shielded post-experimental control data. Using a 15-minute
window (the length of a typical yogic flying session), the yogic flying section of the
meditation was 79% (68%) significant for Experiment A and 61% (61%) for
Experiment B. The percentages for the yogic flying indicate a majority of the Z
scores for individual sessions were significant at the 0.05 level (Figure 5).
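The windowed outlier check can be sketched as follows. The helper below is illustrative (not the authors' code) and assumes, as in the paper, a fair-coin null hypothesis and a two-tailed 0.05 criterion:

```python
import math

def fraction_significant(bits, window_bits, z_crit=1.96):
    """Split a bit stream into consecutive fixed-length windows and
    return the fraction of windows whose binomial Z exceeds the
    two-tailed 0.05 criterion."""
    hits = windows = 0
    for start in range(0, len(bits) - window_bits + 1, window_bits):
        w = bits[start:start + window_bits]
        # Binomial z for the window under H0: P(one) = 0.5.
        z = (sum(w) - window_bits * 0.5) / math.sqrt(window_bits * 0.25)
        windows += 1
        hits += abs(z) >= z_crit
    return hits / windows if windows else 0.0
```

A large fraction of significant windows indicates the overall result is distributed broadly across sessions rather than driven by a few extreme ones.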

Discussion
   As predicted, the meditation and yogic flying data are significantly anomalous
(meaning more zeros than ones in the random binary stream) even after sta-
tistically controlling for a possible cumulative drift. The meditation data con-
sisted of a total of 94 hours of standardized group meditation (average 261
females, average 398 males in adjacent meditation hall) recorded in two exper-
iments, at uniform times, collected over a total of 30 days with an RNG on-site in
the females' meditation hall. The two experiments are both significant and
therefore the second experiment offers a replication of the anomalous results.
Our results extend and support previous work (Nelson, 2002c, 2006) involving
the same type of group meditation. The Vedic Observatory data was also
significant after reanalysis for a possible cumulative drift for the two
experiments, but less so than the meditation. Future work could include subjects with the Vedic
Observatory recordings as a more appropriate test of the putative influence.

Direction of Results
   The direction of the anomalies in our work is similar to RNG research
involving prayer, full moons and sacred sites in Egypt, but unlike that typically
observed in the majority of work involving tragedies as seen in the Global
Consciousness Project data (Nelson et al., 1998) and Princeton University's
Princeton Engineering Anomalies Research labs. Nelson et al. (1998) note that
often in past RNG research the focus has been on the variance and mean shift
direction is ignored, so the methods in certain studies may make it inappropriate
to infer any meaning from the direction. Our work adds support to the premise
that activities with "calm but unfocused subjective resonance" (Nelson et al.,
1998, p. 425) or those that foster transcendental experiences (Alexander &
Langer, 1990; Mason et al., 1997; Orme-Johnson et al., 1988; Travis et al.,
2002), or "flow experiences" (Csikszentmihalyi, 1990) may reflect a more de-
creasing directional trend for the RNG. Specifically there is less randomness due
to generating more zeros than ones. This can be represented by an increase in the
negative downward direction of the graphs. By contrast, events that "foster
relatively intense or profound subjective resonance" (Nelson et al. 1998, p. 425)
involving emotionally laden environments, such as tragedies, may result in less
randomness with deviations in the increasing direction.
   Nelson (2006) notes that "Despite the accumulations of more than 200 events
over the past 8 years we can not definitively interpret the negative versus
positive slopes; in either case there is a change toward less randomness." Many
of the 200 events used variance measures so caution is advised in generalizing to
studies involving mean shifts. If the present preliminary findings are supported
with further formal confirmatory research in the future, this could lead to the use
of RNGs as a potential means of measuring the intentional "direction" for
collective consciousness.

Alternative Explanations of Results
   What possible alternative explanations of the anomalous data could be
responsible for the results? The following is an examination of other potential
explanations including an experimenter effect, temperature bias, a trial density
bias, insufficient number of sessions, non-xor influence, equipment failure,
electro-magnetic interference, statistical bias, related factors, and lag effects.
   Experimenter effect. In regard to the experimenter effect, various RNG
experiments (Jahn & Dunne, 1987; Jahn et al., 1997; Nelson et al., 1998; Radin &
Nelson, 1989) have shown a significant effect of individual intention on the RNG.
It is possible that the conscious or unconscious intention of the experimenters
influenced the results (Wiseman & Schlitz, 1997). This study could be replicated and
designed specifically to test for experimenter effect including using other

experimenters with pre-registered intentions. However, an exploration with RNGs
(Nelson, 2002b) found no definitive evidence for experimenter effects in a situation
where the experimenter had a personal involvement in the subject matter and
expectations about the outcome. Significant outcomes were not reached and the
author concluded in this single study that there was no clear evidence of an
experimenter effect for this deeply important personal event (Nelson, 2002b). A
previous pilot study (Nelson, 2002c) conducted by a non-meditator testing the same
meditation technique occurring in the same meditation halls, as the present study, did
attempt to control for experimenter bias. Those data were collected first without any
predictions, then before data analysis predictions were made by a meditator blind to
the data. Those results were significant and do not lend support to an experimenter
bias explanation for the present results. Likewise, the participants of the present
study were not aware of the experiment and therefore had no specific intentions or
subject bias for the results. The experimenter effect cannot be completely ruled out
as the first two authors were aware of the time of the recordings but previous studies
(Nelson, 2002a, 2002c) do not support this alternative explanation.
   Temperature. Temperature biases do not appear to be a likely alternative
explanation, as all recordings were within the 4° to 32° Celsius range prescribed
by the manufacturer's specifications. Furthermore, the recording temperatures
for the pre-test control, meditation, yogic flying and post-test were all similar but
the results vary for these different venues and cannot be explained by tem-
perature effects. The alternative explanation of temperature being responsible
for the results does not appear to be supported.
   Trial density. A trial density of 1000 bits per trial was selected in order to
capture the whole meditation period in 1000 trials. A trial density of 1000 has
been previously used for RNG research without any reported concerns for trial
density influencing the results (Nelson et al., 1998). The research on trial
number bias density issues is still limited (Ibison, 1998), and future experiments
could directly compare 1000 bits to other density levels to test the influence of
particular random processes on statistical outcomes. The alternative explanation
of trial density being responsible for the results does not appear to be supported.
   Number of sessions. Were there sufficient meditations to accurately measure
an effect? Other meditation research has involved multiple RNGs but the length
of meditation ranges from a single 3-minute period (Nelson, 2002a) to 58.75
hours over 5 days of recording (Nelson, 2002c). In comparison to other medi-
tation recordings the present study is longer, with a total of 94 hours of medi-
tation sessions (51 meditation sessions), and appears sufficient within the
context of the literature. The alternative explanation of the number of sessions
being responsible for the results does not appear to be supported.
   Non-xoring. Non-xor (no additional software for xoring) was used, as there
was no evidence in our baseline control tests to support using additional software
to xor the data. In the baseline control tests (Figures 1A and 1B) there was no
significance for either the xored (xor refers to additional xoring software) or the non-xored

data. Further exploration of the advantages and disadvantages of using non-
xored data appear to be warranted.
   Assuming a cumulative drift exists, if non-xoring is responsible for the drift it
would be expected to equally affect the non-xored control baseline data, and
non-xored meditation test data. This is not what was found in the results. The
pre-test baseline control data terminal Z score is not significant while the
meditation data is significant, even though both are not xored. The validity of
the baseline recordings can also be examined. However, the pre-test baseline
control recordings (Figures 1A and 1B) are typically random as expected for
RNGs and appear valid. Hence, the alternative explanation that the absence of
additional software xoring is responsible for the results does not appear to be
supported. Nonetheless, we have reanalyzed the data for any possible cumulative
drift and it remains significantly anomalous.
   Equipment Failure. The pre-test non-significant baseline data is more
random than the significantly non-random test data recorded during meditation
and the post-test results. It is conceivable that our relatively new RNG was
experiencing an electronic "burning-in period" that resulted in a difference in
the pre-test control baseline with the post-test (US government inspector's
technical guide, 1987). If the results were completely due to a linear burn-in
there would not be significance after controlling for pre and post differences, but
there is. Also if equipment burn-in was responsible for the results we would not
expect the different results for the meditation, yogic flying and Vedic Obser-
vatory that were taken on the same days with the same RNG.
   While the post-test software xored data is similar to the pre-test xored baseline
data, the post-test non-xored data is clearly different from the pre-test baseline
non-xored. At post-test the equipment was tested, and no evidence of equipment
failure was found for the RNG or associated computer and connector. The
alternative explanation of equipment age or equipment failure being responsible
for the results does not appear to be supported.
   Electro-magnetic interference. To determine if electro-magnetic interference
was the source of the results, the RNG was run electrically and magnetically
shielded as well as unshielded. No evidence for electro-magnetic interference as
an alternative explanation of the results was found, especially since the exper-
imental data was taken with batteries as the power source. The alternative
explanation of electro-magnetic interference being responsible for the results
does not appear to be supported.
   Statistical bias. For RNG research in general, a Bayesian statistical analysis
as opposed to the null hypothesis with independent running means (not cumu-
lative deviations) (Scargle, 2002) has been suggested as a more stringent
approach to the results (Sturrock, 1997). Sturrock (1997) emphasized the lim-
itations of Z scores and analysis using p values. In this study, it was thought
prudent to use the statistical methods accepted to date (Radin, 2002). Future
research will have to clarify this line of Bayesian inquiry. All data windows
reflected the length of real-time events, not arbitrary times, and no data was

excluded from the analysis, therefore the results are not related to data manip-
ulation or "data fiddling" (Scargle, 2002).
   Related factors. Though the results are supportive of our exploratory
predictions, at this point, it cannot be definitively concluded that the results
are due to an influence of group meditation and/or the Vedic Observatory. Other
factors besides meditation or related auxiliary factors to meditation could be
involved. Further research could include ruling out the simple effect of large
numbers of people in silence, or numbers of people sitting non-actively. How-
ever, no significance has been found for relatively silent non-mobile audiences
at conferences (Nelson et al., 1998).
   Lag effect. The post-test results of this study could be interpreted as a
candidate for a carryover residual effect, lag effect or entrainment effect. Could
using the RNG, during the meditations or at the Vedic Observatory create a lag
effect or alter the results of RNG? A new RNG (Orion) developed possible
cumulative drifts after exposure to the group meditation, but not before. Inten-
tional time delay effects have been previously reported in the literature involving
single-subject studies (Dunne & Jahn, 1992). A time lag or carryover effect
that diminishes over months has also been measured in studies evaluating the
effect of group meditation on societal indexes (Dillbeck, 1990; Dillbeck et al.,
1987; Hagelin et al., 1999; Orme-Johnson et al., 1988). Extensive longitudinal
research would be needed to support or dismiss the lag effect or entrainment
effect as an alternative explanation of the results.
   Co-author Radin reports continued randomness after up to 5 years of use with
multiple Orion RNGs when run non-xored. However, two of these RNGs were
used for experiments involving meditation. Co-author Radin notes these two
RNGs then developed in the post-test a downward drift similar to that reported
in the present experiment. Preliminary reviews found no downward drift in
subsequent non-meditation-related experiments with these RNGs. Radin's
investigation and reanalysis of this previous data (Radin, 2006; Radin & Atwater,
2006) is underway in order to discover whether there is a meditation-RNG
interaction responsible for the drift in the mean/variance, a lag effect, or some
other possible mundane answer.
   It is not clear whether a cumulative drift is involved, what the source of the
drift is, or whether there is a lag effect. At this point, the difference in post-test
data from pre-test baseline "control" data is not clearly accounted for; therefore,
a statistical control for any cumulative drift, regardless of the source, was used.
The experimental data was still significant after reanalysis for a possible drift for
the meditation, yogic flying and Vedic Observatory for both the initial exper-
iment and its replication.

Conclusions
   Our predictions for the meditation data, yogic flying, and Vedic Observatory
data were significantly supported and were in the predicted direction. Our work
adds to the premise that certain activities that foster transcendental experiences

(Alexander & Langer, 1990; Mason et al., 1997; Orme-Johnson et al., 1988;
Travis et al., 2002) may reflect a more decreasing directional trend (increased
proportion of zeros) in RNG outputs. Alternative explanations do not clearly
account for the observed results. The results were still significant even after
controlling for a possible cumulative drift of the mean from an unknown source.
   To our knowledge this is the first experiment with specific predictions for the
direction of a mean shift, and it involves the largest number of synchronized
meditations recorded with a local RNG on site. Having a population doing
a standardized mental technique on a regular basis is advantageous in studying
various aspects of the phenomenon. Further research appears warranted to
explore group meditation as a venue for anomalous results with the RNG. Future
research could test the direction of the results, distance effects from the group,
possible lag or entrainment effects, experimenter effect, non-xoring data
techniques, group size effects, number of RNGs and possible auxiliary factors.
Theoretical questions could include a continued inquiry (Hagelin, 1987; Nader,
2000; Nelson, 2002d; Radin, 2002; Routt, 2005) as to whether or not con-
sciousness is a causal factor.
   What are the possible practical contributions and applications of this research? It
is conceivable that RNGs could be used to indicate directional changes in
a proposed global collective consciousness. Just as seismometers are used to
detect indications of impending earthquakes, RNG outputs could warn us of
changes in collective consciousness, while considering any
anticipatory effects. RNGs could also be employed to evaluate preventive and
ameliorative measures that utilize collective consciousness. For example, the
RNG could evaluate the efficacy of various technologies from many traditions,
including group meditations to reduce collective stress in global consciousness
in order to prevent and reduce local and global tragedies.

Acknowledgments
  The authors gratefully acknowledge the financial support of the Bakken
MIND Lab from Earl Bakken. The researchers would like to extend their
appreciation to Maharishi University of Management, the participants of the
group meditations, and T. Fitz-Randolph, Director of the Vedic Observatory. We
would like to thank D. Orme-Johnson, R. Nelson, A. Belalcazar, Y. Pu, and J.
Zhang for editorial assistance; L. Stradal and D. Watson for technical expertise;
and Charles N. Alexander and Otto Schmidt for continued inspiration.

References
Alexander, C. N., & Langer, E. J. (1990). Higher Stages of Human Development: Perspectives on
   Adult Growth. Oxford University Press.
Bierman, D. J. (1996). Exploring correlations between local emotional and global emotional events
   and the behavior of a random number generator. Journal of Scientific Exploration, 10(3), 363.
Bierman, D. J. (2002). Personal communication.
Bradish, G. J., Dobyns, Y., Dunne, B. J., Jahn, R. G., Nelson, R. D., Haaland, J. E., & Hamer, S. M.

    (1998). Apparatus and method for distinguishing events which collectively exceed chance
    expectations and thereby controlling an output. US Patent Office, Number 5830064.
Cavanaugh, K. L., Orme-Johnson, D. W., & Gelderloos, P. (1989). The effect of the Taste of Utopia
    Assembly on the World Index of international stock prices. In R.A. Chalmers, G. Clements,
    H. Schenkluhn, & M. Weinless (Eds.), Scientific Research on Maharishi's Transcendental
    Meditation and TM-Sidhi Programme: Collected Papers (Vol. 4, pp. 2715-2729). Vlodrop, The
    Netherlands: Maharishi Vedic University Press.
Csikszentmihalyi, M. (1990). Flow: The Psychology of Optimal Experience. Harper and Row.
Department of Health and Human Services, Public Health Service, Food and Drug Administration.
    (11/30/87). U.S. inspector's technical guide. Screening electronic components by burn-in.
    Washington, DC: US Government Printing Office.
Dillbeck, M. C. (1990). Test of a field theory of consciousness and social change: Time series analysis
    of participation in the TM-Sidhi program and reduction of violent death in the U.S. Social
    Indicators Research, 22, 399-418.
Dillbeck, M. C., Cavanaugh, K. L., Glenn, T., Orme-Johnson, D. W., & Mittlefehldt, V. (1987).
    Consciousness as a field: The Transcendental Meditation and TM-Sidhi program and changes in
    social indicators. The Journal of Mind and Behavior, 8(1), 67-104.
Dillbeck, M. C., Landrith III, G. S., & Orme-Johnson, D. W. (1981). The Transcendental Meditation
    program and crime rate change in a sample of forty-eight cities. Journal of Crime and Justice, 4,
Dunne, B. J. (1998). Gender differences in human/machine anomalies. Journal of Scientific
    Exploration, 12(1), 3-55.
Dunne, B. J., & Jahn, R. G. (1992). Experiments in remote human/machine interaction. Journal of
    Scientific Exploration, 6(4), 311-322.
Global Vedic Observatories Corporation. (1996). Maharishi Vedic Observatory. Available at: http://
    www.vedicobservatory.org. Accessed 17 December 2006.
Haaland, J. E. (2003). Personal communication.
Hagelin, J. S. (1987). Is consciousness the unified field? A field theorist's perspective. Modern Science
    and Vedic Science, 1, 29-88.
Hagelin, J. S., Rainforth, M. V., Orme-Johnson, D. W., Cavanaugh, K. L., Alexander, C. N., Shatkin,
    S. F., Davies, J. L., Hughes, A. O., & Ross, E. (1999). Effects of the group practice of the
    Transcendental Meditation program on preventing violent crime in Washington D.C.: Results
    of the National Demonstration Project, June-July, 1993. Social Indicators Research, 47(2),
Ibison, M. (1998). Evidence that anomalous statistical influence depends on the details of the random
    process. Journal of Scientific Exploration, 12(3), 407-423.
Jahn, R. G. (2002). Princeton Engineering Anomalies Research: Scientific study of consciousness-
    related physical phenomena. Available at: http://www.princeton.edu/~pear/. Accessed 17
    December 2006.
Jahn, R. G., Bradish, G., Dobyns, Y., Lettieri, A., Nelson, R., Mischo, J., Boller, E., Bosch, H., Vaitl,
    D., Houtkooper, J., & Walter, B. (2000). Mind/Machine Interaction Consortium: PortREG
    replication experiments. Journal of Scientific Exploration, 14(4), 499-555.
Jahn, R. G., & Dunne, B. J. (1987). Margins of Reality. Harcourt Brace.
Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of
    random binary sequences with pre-stated operator intention: A review of a 12 year program.
    Journal of Scientific Exploration, 11(3), 345.
Maslow, A. H. (1962). Toward a Psychology of Being. Princeton, NJ: Van Nostrand.
Mason, L. I., Alexander, C. N., Travis, F., Marsh, G., Orme-Johnson, D., Gackenbach, J., Mason,
    D. C., Rainforth, M., & Walton, K. G. (1997). The electrophysiological correlates of higher states
    of consciousness during sleep in long-term practitioners of the transcendental meditation program.
    Sleep, 20(2), 102-110.
May, E., & Spottiswoode, J. Z. (2001). Memorandum for the record, re: Analysis of the Global
    Consciousness Project's data near the 11 September 2001 events. Retrieved October 31, 2001
    from the World Wide Web: http://noosphere.princeton.edu
Mindsong, Inc., Mindsong MicroREG, Minnesota.
Nader, T. (2000). Human Physiology Expression of the Veda and Vedic Literature. Vlodrop, The
    Netherlands: Maharishi Vedic University.
Nelson, R. D. (1997). Multiple field REG/RNG recordings during global events. The Electronic
    Journal for Anomalous Phenomena (eJAP). Available at: http://www.psy.uva.nl/eJAP.
                                     RNG and Meditation                                          317

Nelson, R. D. (2001). Correlation of global events with REG data: An Internet-based, nonlocal
    anomalies experiment. Journal of Parapsychology, 65(3), 247-271.
Nelson, R. D. (2002a). Global Consciousness Project. Exploratory studies. Available at: http://
   www.noosphere.princeton.edu. Accessed 17 December 2006.
Nelson, R. D. (2002b). Global Consciousness Project. My mother's passing. Available at: http://
   www.noosphere.princeton.edu. Accessed 17 December 2006.
Nelson, R. D. (2002c). Global Consciousness Project. MUM peace meditations. Available at: http://
   www.noosphere.princeton.edu. Accessed 17 December 2006.
Nelson, R. D. (2002d). Coherent consciousness and reduced randomness: Correlations on September
    11, 2001. Journal of Scientific Exploration, 16(4), 549-570.
Nelson, R. D. (2006). TM resonance aggregation. Available at: http://www.noosphere.princeton.edu.
    Accessed 17 December 2006.
Nelson, R. D., Bradish, G. J., Dobyns, Y. H., Dunne, B. J., & Jahn, R. G. (1996). FieldREG anomalies
    in group situations. Journal of Scientific Exploration, 10(1), 111-141.
Nelson, R. D., Jahn, R. G., Dunne, B. J., Dobyns, Y. H., & Bradish, G. J. (1998). FieldREG II:
   Consciousness field effects: Replication and explorations. Journal of Scientific Exploration, 12(3),
Nelson, R. D., Radin, D. I., Shoup, R., & Bancel, P. (2002). Correlation of continuous random data
    with major world events. Foundations of Physics Letters, 15(6), 537-550.
Oates, R. (2002). Permanent Peace. Fairfield, IA: Maharishi University Press.
Orion (2006). Orion's Random Number Generator. http://www.randomnumbergenerator.nl/ Accessed
    14 December, 2006.
Orme-Johnson, D. W., Alexander, C. N., Davies, J. L., Chandler, H. W., & Larimore, W. E. (1988).
    International Peace Project in the Middle East. Journal of Conflict Resolution, 32(4), 777-812.
Patanjali (1978). (R. Prasada, Trans.). New Delhi, India: Oriental Books Reprint Corp. (Original work
    published 1912.)
Radin, D. I. (1997). The Conscious Universe. San Francisco: HarperCollins.
Radin D. I. (2001). Extended analysis: terrorist disaster: September 11, 2001. Global Consciousness
    Project. Available at: http://www.noosphere.princeton.edu/terror.html. Accessed 17 December 2006.
Radin, D. I. (2002). Exploring relationships between random physical events and mass human
    attention: Asking for whom the bell tolls. Journal of Scientific Exploration, 16(4), 533-548.
Radin, D. I. (2006). Entangled Minds. Simon & Schuster.
Radin, D. I., & Atwater, F. H. (2006). Entrained minds and the behavior of random physical systems.
    Paper presented at The Parapsychology Association Convention 2006, Stockholm, Sweden, August.
Radin, D. I., & Nelson, R. D. (1989). Evidence for consciousness-related anomalies in random
    physical systems. Foundations of Physics, 19(12), 1499.
Radin, D. I., Rebman, J. M., & Cross, M. P. (1996). Anomalous organization or random events by
    group consciousness: Two exploratory experiments. Journal of Scientific Exploration, 10(1), 143.
Routt, T. J. (2005). Quantum Computing. Fairfield, Iowa: First World Publishing.
Scargle, J. D. (2002). Commentary: Was there evidence of global consciousness on September 11,
    2001? Journal of Scientific Exploration, 16(4), 571-578.
Scargle, J. D. (2006) Personal communication.
Sturrock, P. A. (1997). Bayesian maximum-entropy approach to hypothesis testing, for application to
    RNG and similar experiments. Journal of Scientific Exploration, 11(2), 181-192.
Travis, F., Tecce, J., Arenander, A., & Wallace, R. K. (2002). Patterns of EEG coherence, power, and
    contingent negative variation characterize the integration of transcendental and waking states.
    Biological Psychology, 61, 293-319.
US government inspector's technical guide (MIL-STD-883). 1987. Philadelphia: Naval Supply Depot.
Wiseman, R., & Schlitz, M. (1997). Experimenter effects and the remote detection of staring. Journal
    of Parapsychology, 61, 197-207.
Journal of Scientific Exploration, Vol. 21, No. 2, pp. 318-324, 2007


                  Comments on Mason, Patterson & Radin

Experiments on physical random number generators are fascinating for a very
specific reason. In a general sense, we use the term "random" to refer to events
that happen over time and/or space, for which we have no causal explanation.
When we observe regularities between physical random number generators
(RNGs) and events in the world, especially if they happen in experimental
settings, we have to take notice. The reason is obvious; whatever influence the
world events have on the physical RNGs must operate along causal pathways
that we have not yet discovered. Surely, finding new causal pathways must be
a central issue in scientific exploration.
   Although I am enthusiastic about research on the effects of world events on
physical RNGs, I am not as impressed with the results in the RNG literature as
the authors of the various papers that make up this literature, many of which
have appeared in JSE, evidently are. The reason is based on my
feeling that there is much room for improvement in the experimental designs and
methods of analysis that RNG researchers use, and so I would like to use the
Editor's generous offer to comment on the Mason et al. article in order to make
points in general about RNG research, some of which are illustrated in the article.

                            Computations Related to RNG Data
One of my general criticisms of the RNG literature is that the computational
procedures are frequently described in ordinary language, which does not always
translate unambiguously into actual computation or statistical analysis. In my
opinion this tradition is continued in the Mason et al. article. For this reason, I
think it is worthwhile to make some of the computational issues more precise.
   The raw data in an RNG experiment consist of a binary sequence; that is,
a sequence xi for i = 1 . . . n, in which each component xi is either 0 or 1. It turns
out to be far easier to analyze binary data if we apply the "sign" transformation,
s(x) = 2x - 1. This leaves 1's alone, but transforms 0's to -1. Thus, s(xi) for i =
1 . . . n is a sequence of 1's and -1's. Note that summing s(xi) gives the excess of
1's over 0's in the underlying x-sequence (where a negative excess is interpreted
as an excess of 0's over l's), and that sums like this are routinely portrayed in
RNG articles.
   To reverse the 1's and 0's in the underlying x-sequence, we simply replace x
by 1 - x. Since s(1 - x) = -s(x), the reversal process for the signed sequence is
                                   Commentary                                   319

just accomplished by multiplying by -1. There are other interesting algebraic
properties of the sign function that are related to whether applying the exclusive-
or operation is a good idea or not, one of the issues raised by the Mason et al.
article. Define the eq operation on two binary numbers x and y so that x eq y is 1
when x and y are equal, and 0 when they are unequal. Then s(x eq y) = s(x)s(y),
as can be easily checked. It would have been nice if the RNG scientists had
combined sequences with eq, but instead they chose the xor operation
(exclusive-or), defined by x xor y = 1 if x and y are unequal, and 0 if they
are equal. Obviously x xor y = 1 - (x eq y), and so s(x xor y) = -s(x)s(y). The
take-away point from this is that it is easier to study the effects of xor-ing two
binary sequences using the sign transformation, although we do have to put up
with an annoying sign change (which has implications, as we will see).
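These identities are easy to confirm mechanically. A minimal sketch (my own, not from the commentary) checking s(x) = 2x - 1, s(x eq y) = s(x)s(y), and s(x xor y) = -s(x)s(y) over all four bit pairs:

```python
def s(x):
    """Sign transform: maps the bit 0 to -1 and the bit 1 to +1."""
    return 2 * x - 1

def eq(x, y):
    """1 when the bits agree, 0 when they differ."""
    return 1 if x == y else 0

def xor(x, y):
    """Exclusive-or: 1 when the bits differ, 0 when they agree."""
    return x ^ y

for x in (0, 1):
    for y in (0, 1):
        assert s(eq(x, y)) == s(x) * s(y)       # s(x eq y) = s(x)s(y)
        assert s(xor(x, y)) == -s(x) * s(y)     # s(x xor y) = -s(x)s(y)
    assert s(1 - x) == -s(x)                    # bit reversal = multiply by -1

print("all identities hold")
```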
   Combining binary sequences with xor happens in two places in RNG
research. Evidently all putative physical RNGs actually generate two binary
sequences internally, and then xor them for their output. This is done in
hardware, so there is nothing anyone can do about it. I believe that the reason
RNG manufacturers do this is because their goal is to offer a genuine random
number source, which is not influenced by world events. To see why this makes
sense, take expected values to show E[s(x)] = s(E[x]), and note that E[x] is the
probability of a 1 for the binary x-sequence. If x and y are two independent
binary sequences, then E[s(x)s(y)] = s(E[x])s(E[y]). Now E[x] = 1/2
corresponds to "pure randomness", and s(1/2) = 0. Therefore, the closer the
expected value of the sign-transformed sequence is to 0, the closer it is to pure
randomness. Since E[s(x xor y)] = -s(E[x])s(E[y]), it follows that the x xor y
sequence will always be closer to pure randomness than either x or y are (the
product of two small deviations being smaller than either). In fact, even if only one of the sequences is purely
random, then the xor-ed sequence will also be purely random. Therefore, the
RNG manufacturers can claim that by xor-ing they are delivering on their claim
to produce a purely random number sequence. There are two aspects of this we
need to keep in mind. (1) This is, of course, the opposite of the aims of RNG
scientists, who want to be able to detect departures from pure randomness, so it
is strange that they have chosen to use RNGs with hardware xor-ing, and this
substantiates Scargle's criticism, cited in the Mason et al. article. (2) All of the
above assertions depend on the assumption that the x and y sequences are
independent, which is perhaps somewhat less than obviously true.
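The argument can also be checked numerically. In this sketch (my own; the bias values are invented), two independent streams with p away from 1/2 are xor-ed, and the mean sign of the output lands near the product -s(px)s(py), far closer to zero than either input:

```python
import random

random.seed(1)
n = 200_000
px, py = 0.52, 0.55            # hypothetical per-bit probabilities of a 1

x = [1 if random.random() < px else 0 for _ in range(n)]
y = [1 if random.random() < py else 0 for _ in range(n)]

def s(b):
    return 2 * b - 1           # sign transform

mean_sx = sum(map(s, x)) / n                         # ~ s(0.52) = 0.04
mean_sy = sum(map(s, y)) / n                         # ~ s(0.55) = 0.10
mean_sxor = sum(s(a ^ b) for a, b in zip(x, y)) / n  # ~ -0.04 * 0.10 = -0.004

print(mean_sx, mean_sy, mean_sxor)
```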
   The second place that the xor operation appears is in software "masking" of
the sequence generated by the RNG. Evidently the most common scheme is to
xor the signal x from the RNG with an alternating sequence of 0's and 1's. If we
let ai for i = 1 . . . n denote this sequence, then s(ai) = (-1)^i. Thus, s(xi xor ai) =
-s(xi)s(ai) = (-1)^(i+1) s(xi).
   We now have all the machinery we need to analyze RNG signals. First, the
RNG internally generates binary sequences x and y, and puts out x xor y. The
RNG scientist can either use this signal or xor it with the alternating binary
sequence, to obtain x xor y xor a. The sign-transforms of these signals are

         s(xi xor yi) = -s(xi) s(yi)    and    s(xi xor yi xor ai) = (-1)^i s(xi) s(yi)
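A quick numerical sketch of the masking step (my own, with an invented bias): xor-ing any stream with the alternating sequence flips every other bit, which pulls the proportion of 1's to 1/2 no matter how biased the source is:

```python
import random

random.seed(2)
n = 100_000
p = 0.6                                  # exaggerated source bias, for visibility

x = [1 if random.random() < p else 0 for _ in range(n)]
a = [i % 2 for i in range(n)]            # alternating mask 0, 1, 0, 1, ...
masked = [xi ^ ai for xi, ai in zip(x, a)]

# Half the bits are flipped and half are kept, so the masked proportion of
# 1's is (p + (1 - p)) / 2 = 1/2 regardless of p.
print(sum(x) / n, sum(masked) / n)       # ~0.6 vs ~0.5
```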

   It has become quite conventional in RNG research to sum the sign-transformed
version of a binary sequence for use in assessing non-randomness. Plots of
cumulative sums have been much used in the RNG literature, and although Mason
et al. repeat this, they base their conclusions on the "terminal" values of the sums.
While this analysis seems to have served Mason et al. well, in general it is
simplistic. One of the important ways that an RNG can fail to produce a random
number sequence x is that the sequence pi = E[xi] of probabilities of 1 may depart
from 1/2. Mason et al. refer to the situation pi = p ≠ 1/2 as "drift". (The reason this
is a misnomer is that the sum of sign-transformed values can drift for other
reasons.) The expected value and variance of the sum of the sign-transformed
sequence are ns(p) and 4np(1 - p), assuming pi = p for all i and independence.
   Because it will turn out to be important below, let us just consider the case
pi = p for the moment. Large-sample theory says (assuming the components of
the binary sequences are independent) that approximately

                      S = ns(p) + 2√(np(1 - p)) Z

where S is the sum of the sign-transformed sequence, and Z represents a
Normal chance variable with mean 0 and variance 1. Rewriting,

                      S = ns(p) + 2√(p(1 - p)) √n Z

   The reason this is important is that RNG scientists regularly plot S vs. n. If
p = 1/2, then S = √n Z, which explains why the curved lines in the plots shown
by Mason et al. are proportional to √n. The curves in the plots represent
something about what we expect when p = 1/2. In order to see what would
happen when p ≠ 1/2, note that 2√(p(1 - p)) is actually very close to 1 for values
near p = 1/2. Thus, for small departures of p from 1/2 we have nearly

                      S ≈ ns(p) + √n Z
   To summarize, the sum of a sign-transformed binary sequence should behave
like the square root of the number of components times a standard Normal
chance variable, but if the probability (p) of a 1 in the underlying binary
sequence deviates from 1/2, then S should in addition have a component linear in
n and proportional to s(p).
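A simulation makes the summary concrete (my own sketch; the bias is chosen to match the scale discussed later in this commentary): with p slightly below 1/2, the cumulative sum S tracks the line n·s(p) to within √n-scale noise:

```python
import random

random.seed(3)
n = 1_000_000
p = 0.4986                         # assumed bias, slightly below 1/2

S = 0
for _ in range(n):                 # cumulative sum of the sign-transformed bits
    S += 1 if random.random() < p else -1

trend = n * (2 * p - 1)            # n * s(p), about -2800
# The noise term has standard deviation 2*sqrt(n*p*(1-p)), about 1000 here.
print(S, trend)
```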
   This sheds a bit of light on the issue of software xor-ing. No matter what the
value of p in the original sequence, xor-ing with the alternating sequence changes p
to 1/2 (without changing the variance). Consequently, any software-xor-ed signal
that gives a statistically significant result is rather hard to interpret, because (as
Scargle argues) exactly what we might want to see has been completely removed.
Needless to say, this makes previous positive research with software-xor-ed
sequences difficult to understand. I believe it is one of the major strengths of the
Mason et al. article to have departed from the previous, convention-driven practice.

                                         TABLE 1
                 Estimated slopes and probabilities from graphs in the paper

Figure                                  s(p)                                   p

  Before leaving this section, I want to point out that there is another very
important point that is not addressed by this analysis. It is the possibility that the
twin binary sequences x and y, generated internally by the RNG, are not
temporally independent. That is, pairs (xi, yi) generated at one time might be
correlated with pairs generated at other times, and of course each xi could be
correlated with its paired yi. This is, in fact, an entirely plausible way in which
RNGs might produce non-random numbers, but the conventional analysis, based
on partial sums of sign-transformed sequences, will never sort it out, because the
method is too simple.

                            Issues Raised by the Paper
    1. All of the experimental results reported by Mason et al. show plots of S
       vs. n with what appears to be very nearly a linear trend. In the light of the
       above analysis, this is consistent with the original binary sequence (from
        the RNG) having a departure from p = 1/2, in the direction of p < 1/2.
       The slope of each line is the value of s(p) for that experiment. I estimated
       these from the figures in the article (in the version I had), and came up
       with Table 1.
       If this is accurate, then a typical effect on the probability of a 1 in the
       underlying binary sequence is to shift it from 0.50 to something like
       0.4986, a deviation of -0.0014.
       In my opinion, one of the weaknesses of the RNG research program is that
        it focuses on being able to collect sufficiently large numbers of bits to
        show that such tiny effects are real. It cites the "odds against chance" for
        its findings, and shows impressive-looking figures like those in the Mason
        et al. article, while underplaying just how minuscule the findings really
       are. If we want to assert that there are causal pathways that are un-
       discovered by conventional science, but which can be detected by RNG
       experiments, and that these causal pathways might actually have effects
       worth paying attention to in the world we live in, then the current path of
       RNG research does not seem to be taking us where we would like to go.
    2. The authors say that a surfeit of 1's in a bit-stream indicates more
       randomness, while a surfeit of 0's indicates less randomness. No reason is

   offered for this assertion, and indeed it is hard to imagine that simply
   reversing the sense of the data (multiplying the sign transformation
   by -1) would interchange things on some "randomness" scale. Given the
   analysis I set out above, I would offer a different view of the dominant
   negative trend in the Mason et al. results. A negative trend in the
   hardware-xor-ed binary sequence output by the RNG implies that both of
   its internal binary sequences have shifted in the same direction (both of
    their p's above 1/2, or both below 1/2). If this is true, then the negative trend
   is more plausibly interpreted as a trend in the same direction by both
   internal sequences. What this might mean depends on how those internal
    sequences are physically generated, a fact not revealed in the Mason et al.
    article.
3. I find the authors' interpretation of the post-experiment results a bit
    strange. It seems disingenuous to say that there were no predictions about
   this phase of the experiment. An obvious reason for doing a post-
   experiment is to see that the RNGs returned to normal after the
   circumstances in which they showed an influence. When this fails, then
   introducing "entrainment" as a supportive explanation for the results in
   effect means that there is no way that the post-experimental results could
   ever falsify an RNG influence, raising a question about their scientific
   standing. Moreover, if "entrainment" were a serious explanation, we
   would have expected Mason et al. to report carefully on the prior history
   of the RNGs before their periods of data gathering, and we would have
   also expected to see cumulative "entrainment" accounted for somehow in
   the results during the meditation intervals. My conclusion is that while an
    "entrainment" hypothesis is appealing ab initio, it loses some of its luster
   post hoc.
4. In this paper (and virtually every other one I have seen on RNGs) the
   experimenters select a segment of bits from the underlying bit-stream, in
   some way that is not entirely clear. In other words, not all of the data
   generated by the physical random number generator are used. This
   amounts to applying a data "mask" that literally removes large amounts
   of data from the experiment. I am not suggesting that this was done in
   some sinister fashion, but I am claiming that this process is poorly
   described, and its effect on the statistical results has evidently never been
   tested. This is related to the next point.
5. I found the method of statistical analysis to be more obscure than I would
   have liked. At one point a "trial" is defined as 1000 bits collected over 10
   sec, and a "run" is 1000 trials. But then in the description of the statistical
   method, 200 bits are sampled in an undescribed way over an undefined
   time period. If Fig. 1A (in the version I have) is to represent about 30
   hours of sampling, then the sampling rate might be 0.1 Hz but with 200
   rather than 1000 bits, or maybe with 1000 bits, or maybe at some other
   sampling rate-we cannot tell. Given that there is no deficiency of critics
        of RNG research, it would be a good idea for RNG scientists to be more
        careful in describing what they have done.
   6.   A peculiarity of the pre-experimental data is this. If the original bit-
        stream is purely random, then after an alternating mask xor-ing it is still
        purely random. This is a result of probability theory, not something that
        needs to be tested empirically. It is not clear why one would test things
        known to be true, except as a negative control (which in this case would
        be a test that the software did the xor-ing correctly, something that could
        probably be more reliably checked directly).
   7.   It would seem to be useful in RNG research to have more controls than
        are usually employed. In this paper, for example, although there was
        suspicion that the continuous usage of the meditation hall for this purpose
        might have suggested an RNG influence from the site itself, no test of this
        hypothesis was made. Having simultaneous control RNGs in different
        locations would also, in general, seem to be a worthwhile enhancement.
   8.   Over the years I have been struck by how primitive and ritualized the
        analysis of RNG data has become. RNG experiments produce a very large
        amount of data, and many scientists (especially certain kinds of engineers)
        have an impressive armamentarium of tools for assessing and interpreting
        influences on signals. The use of terminal standardized statistics, as in the
        current paper, seems to waste a valuable opportunity for more informative
        analyses. I fully concede, however, that in the Mason et al. article the main
        results are so simple and compelling that probably nothing more elaborate
        is needed to make the points that they emphasize in their article.
        Nonetheless, the linear departures from randomness that they have found,
        while supportive of a constant pi model, never test that model. Thus there
        seems to remain, at this late stage, room for fundamental analyses of how pi
        values fluctuate over time.
   9.   Mason et al. very properly examine some potential influences on RNGs
        from known sources, such as electromagnetic fluctuations. This is
        a definite step forward in RNG research, and one that should be
        investigated more systematically.
  10.   As a tiny terminological quibble, I would suggest that RNG researchers
        stop referring to "unconscious" mental effects on RNGs. The subjects of
        these studies are always fully conscious, but the phenomenon referred to
        is supposed to be below their level of conscious perception; that is, it is

Finally, in the spirit of supporting research into world event influences on phys-
ical RNGs, I would offer some personal recommendations for future research.
   - Abandon the path of trying to mount ever more statistically significant
     results to prove that the RNG phenomenon exists. Although the "odds
     against chance" may be steadily rising over time, they are already high
     enough, and we are not learning anything more in the process.
   - Efforts need to be made to understand how physical random numbers are
     generated and to develop some hypotheses about what kinds of influences
     might cause them to shift. Up to now RNGs have mostly been black boxes,
     which doesn't push the science very far forward. Do temperature, pressure,
     ambient light, variation in the electromagnetic field, or the force of gravity,
     or any of a host of other physical characteristics produce an influence? Do
     RNGs "age" in some way (as Mason et al. suggest), and if so, how do we
     take that into account? The evidence so far seems fragmentary; we need it
     to become systematic.
   - The experimental designs need to be much stronger. Simultaneous controls,
     duplicate RNGs, and balancing the use of different RNGs over the
     experimental design (to take out the effects of an idiosyncratic RNG, for
     example) are all strongly indicated. There seems to be a rather large amount
     of ad-hoc-ness to many of the RNG experiments.
   - Search for something that does a better job of capturing whatever influence
     we are seeing. Ultimately one would like to measure it over shorter periods
     of time, to relate it to changing conditions, and to therefore study its
     properties. If 3 hours are necessary to even see whether an influence is
     present, it is going to continue to be difficult to do interesting experiments.
   - What kinds of human events influence RNGs? Do they have to be spiritual
     or "alternative" in some sense, or would one see a bigger effect, say, at
     a political rally, or a marriage counseling session? Some systematic
     research along this line might be useful.
   - Research physical RNGs should simply put out the physical signal, and not
     somehow pre-process it in hardware. Hardware xor-ing may have done
     considerable damage to RNG research, and one can only wish that RNG
     researchers had been more critical of it before the Mason et al. article.
   - It would seem to be useful to have more people involved in RNG research.
     By standards in other areas, RNG research is very inexpensive, and
     reasonably easy to do. Seeing results from more groups, in a variety of
     settings and with a variety of approaches, might be a very good thing.

                                                               MIKEL AICKIN
                                                        maickin@comcast.net
    Journal of Scientific Exploration, Vol. 21, No. 2, pp. 325-352, 2007

                      Statistical Consequences of Data Selection

                                       Y. H. DOBYNS

                   Princeton Engineering Anomalies Research, School of Engineering and
                      Applied Science, Princeton University, Princeton, NJ 08544-5263

          Abstract-Data selection can result from unconscious biases or preferences on
          the part of experimenters, or from deliberate efforts to skew the apparent
          character of an experimental database. In either case the same formalism can
          be applied to compute the statistical signature of the selection process. Since
          the result of a suitably chosen selection process can be arbitrarily close to any
          desired distribution of experimental outcomes, it also is necessary to take into
          account the fraction of data that would have had to be discarded. When the
          selection formalism is applied to the Princeton Engineering Anomalies
          Research (PEAR) benchmark random event generator (REG) database, it is
          found that no selection model examined is consistent with the data. An unusual
          subset of these data, produced by a single operator, which has in the past been
          the target of suspicion, is likewise inconsistent with any selection hypothesis,
          even under a worst-case scenario of deliberate fraud.
          Keywords: Statistical Methods-Meta-Analysis-REG-Human-Machine

                                                  Introduction
    "Data selection" is a common term for the biased selective reporting of data in
    scientific research. It is frequently invoked as a dismissive explanation for
    peculiar or anomalous results. While it usually implies a deliberate attempt to
    deceive, data selection also can result from unconscious biases of experimenters
    or inadequate controls in the recording and reporting of data (Gould, 1996). The
    work presented here explores the possibility that the anomalous effects seen in
    the benchmark random event generator (REG) database generated at the
    Princeton Engineering Anomalies Research (PEAR) laboratory (Jahn et al.,
    1997) might be due to such a selection process. None of the selection models
    examined are consistent with the data, and some general properties of selection
    models suggest that no possible model of this class can be constructed to be
    consistent with the data.

                                        Premises and Definitions
      The formalism developed here assumes that the data under consideration
    accrete in the form of samples from a standard normal distribution, each
326                               Y. H. Dobyns

individual sample being the final outcome of a single human action to generate
data (e.g., pressing a button to start the apparatus). While the only experimental
data considered here come from the PEAR program's REG experiments, the
normal distribution is ubiquitous enough that it may be hoped the formalism and
general arguments have a broader application. In the case of the PEAR REG
data, the human action is the initiation of data generation for a single "run," and
the resulting standard normal deviate is simply the mean score for that run, as
normalized by the expected mean and standard deviation. That is, x = (m - μ)/σ,
where x is the normalized outcome, m the observed mean, μ the theoretical
mean, and σ the theoretical standard deviation.
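As a minimal sketch of this normalization (my own; the bit count and observed sum are invented, though the text below notes a run involves at least 10,000 bits): for a run that sums N fair bits, the theoretical mean is N/2 and the theoretical standard deviation is √(N/4):

```python
from math import sqrt

N = 10_000                 # assumed bits in one run (the minimum cited below)
mu = N / 2                 # theoretical mean: 5000
sigma = sqrt(N / 4)        # theoretical standard deviation: 50

m = 5_060                  # hypothetical observed run sum
x = (m - mu) / sigma       # normalized outcome, a standard normal deviate
print(x)                   # 1.2
```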
   REG run scores are in fact binomially rather than normally distributed, but
since a single REG run involves a minimum of 10,000 bits (and may involve as
many as 200,000), the deviations from normality are inconsequential. For
analytical purposes, all REG data will be treated here at this level of normalized
run deviations. The observed anomalous effect in the PEAR REG experiments,
as reported elsewhere (Jahn et al., 1997), is a shift in these mean run scores,
correlated with the operator's pre-stated intention.
   These experiments involved a tripolar protocol, in which approximately equal
amounts of data were generated under three intentional conditions, high, low,
and baseline. In the high intention, the operator's goal was to increase the mean
value of the data; in the low intention, to decrease it. (The baseline was a passive
intentional condition in which the operator was not directed to make any effort.)
The only distinction between the two intentional conditions is the direction of
effort; any formal analysis whatsoever applies equally to both intentions, up to
a sign change. The following discussion, therefore, is written as applying only to
the high intention, with "positive" outcomes or shifts being in the direction of
intention, and "negative" outcomes or results contrary to the intention. The
reader should bear in mind that exactly the same formalism, up to a sign
reversal, applies to the low intention.
   Because of this theoretical symmetry, all subsequent comparisons between
theoretical predictions and data will pool the results from both intentional
conditions. A single distribution of intentional outcomes is computed by
inverting the sign of all low-intention deviations and combining these inverted
results with the high-intention deviations to construct a single population of
deviations in the direction of intention.
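The pooling step described above can be sketched directly (a minimal illustration; the function name is ours):

```python
def pool_intentions(high_runs, low_runs):
    """Pool high- and low-intention run deviations into a single population
    of deviations in the direction of intention: the sign of every
    low-intention deviation is inverted before combining."""
    return list(high_runs) + [-x for x in low_runs]

pooled = pool_intentions([0.4, -0.1], [-0.3, 0.2])
# pooled is [0.4, -0.1, 0.3, -0.2]
```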
   We will consider two genres of explanatory model for the observed
anomalies. The hypothesis that these data reflect an actual change in the
machine's operation is referred to as a "mean shift model." In contrast, a
"selection model" presumes that certain runs are selectively discarded,
depending on their values, so as to bias the distribution statistics of the retained
runs and create the spurious appearance of a mean shift. This altered distribution
of runs will be called the selected distribution. The undistorted distribution
existing before selection will be referred to as the source distribution. It is
assumed throughout that, since the selection model is an alternative to an
                   Statistical Consequences of Data Selection                     327

anomalous change of the distribution, the source distribution is the undisturbed,
standard-normal-distributed output of the apparatus. In general, the selection
process is probabilistic (reject a fraction of runs of a given value); it can be made
deterministic by setting the selection probability for a given value at 0 or 1.
   The symbol p(x) will be used to refer to the selected distribution, where x is
the run value. The functional notation f(x) will be used to denote the standard
normal probability distribution, i.e. f(x) = (1/√(2π)) e^(−x²/2). F(x) refers to the
antiderivative of f, the cumulative normal probability function: F(x) = ∫_{−∞}^{x} f(t) dt.
   By hypothesis, the selection process produces p(x) from a source distribution
f(x). The probability that a run of value x will be retained is given by a selection
function s(x). Since it is a probability, 0 ≤ s(x) ≤ 1 for any x. In general the total
integral S = ∫_{−∞}^{∞} s(x) f(x) dx will be less than 1 (the only exception being the
trivial "selection" function s(x) = 1). Since p(x) should be a properly normalized
probability distribution, its value must be p(x) = s(x) f(x)/S.
   The description of a selection process entails that a certain fraction of the
original data has been discarded. For purposes of treating this "filedrawer"
quantitatively, we will define the filedrawer quotient Q as the ratio of the
amount of discarded data to the amount of data retained. Since the quantity S,
defined above, gives the total integral of the selected distribution relative to the
original, Q = (1 - S)/S.
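The quantities S and Q can be computed for any candidate selection function by numerical integration; the following sketch (our own, with illustrative function names) uses a simple trapezoid rule:

```python
import math

def normal_pdf(x):
    """Standard normal density f(x)."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def retained_fraction(s, lo=-8.0, hi=8.0, n=4000):
    """S = integral of s(x) f(x) over the real line, approximated by the
    trapezoid rule on a range wide enough that the normal tails are
    negligible."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * s(x) * normal_pdf(x)
    return total * h

def filedrawer_quotient(s):
    """Q = (1 - S) / S: discarded data per retained datum."""
    S = retained_fraction(s)
    return (1.0 - S) / S

# Discarding every run below zero retains S = 0.5 of the data, so Q = 1:
# one run discarded for every run kept.
Q = filedrawer_quotient(lambda x: 0.0 if x < 0 else 1.0)
```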
   Accusations of data selection frequently invoke an intuitive but generally
incorrect rule of thumb: since the mean shift is a very small fraction of the
observed mean value, one supposes that it could be attained by discarding
a comparably small fraction of negative data, with the implication that bias or
carelessness easily could cause a few critical runs quietly to disappear. The
commonest form of this criticism asserts that, since PEAR's effect amounts to
1 part in 10⁴, discarding 1 part in 10⁴ of contrary data would be adequate to
produce it. Nevertheless, the explicit calculation of filedrawer values will show
that, despite the small absolute scale of the effect, the amount of discarded data
required to produce it usually would be quite substantial. In some extreme cases,
the discarded data would need to be comparable in quantity to the entire reported
database. Even in cases where the fraction of discarded data is modest, it is
orders of magnitude larger than the naive criticism would suggest, and entails
a remarkably large aggregate of actual instances of discarded data due to the
large sizes of the observed databases.

                         General and Limitative Results
   While an arbitrarily chosen selection function s(x) is likely to produce
a distorted distribution, it is natural to speculate whether this is necessarily so for
all such functions. The answer is no: the output of a selection process on a normal
source distribution can produce a shifted distribution that is also normal.
   Let f(x, σ) = (1/(√(2π)σ)) e^(−x²/(2σ²)) be the generalization of f(x) to non-unit
variance. f(x, σ) is a proper probability distribution with total integral 1, for any
328                                Y. H. Dobyns

nonzero σ. Suppose that a selection process applied to standard normal input
with distribution f(x) produces output distributed according to f(x − μ, σ) for
some positive μ (since the topic of interest is the production of spurious positive
mean shifts via selection) and some σ not necessarily equal to 1. This im-
plies that
                              S f(x − μ, σ) = s(x) f(x),
since the total integral of s(x) f(x) is S by definition and the integral of f(x − μ, σ)
is 1 for any μ, σ. It then follows that s(x) = S f(x − μ, σ)/f(x), which by
substitution of the explicit forms for f can be seen to be

                    s(x) = (S/σ) exp{[(σ² − 1)x² + 2μx − μ²]/(2σ²)}.
Three distinct cases can be distinguished in this formula according to the value
of σ.
  If σ² > 1, the leading dependence on x is as e^(cx²) with positive coefficient c.
  This grows without limit for large x. However, s(x) cannot exceed 1 for any x.
  Therefore this is not a possible case. Selection cannot produce a normal
  distribution with greater variance than its source; any selected distribution
  with increased variance must show some departures from normality.
  If σ² = 1, the leading dependence on x is e^(μx). This also grows without limit
  for increasing positive x, and is therefore again an impossible case. The
  selected distribution, if normality is preserved, cannot have the same variance
  as its source.
  If σ² < 1, the polynomial in the exponent, and hence the ratio itself, has
  a maximum at x = μ/(1 − σ²). This is a possible situation; thus, a selection
  process can produce an undistorted normal function as its output, provided the
  variance of the output normal distribution is less than that of its source.
This result can also be used to derive a relation among the variables S, μ, and σ,
on the assumption that s(x) = 1 at its maximum. This gives the largest possible
value of S, and hence the smallest possible value of Q = (1 − S)/S, the filedrawer
quotient. Solving the equation S f(x − μ, σ) = f(x) at x = μ/(1 − σ²) gives, after
some algebra, the relation

                          S = σ exp{−μ²/(2(1 − σ²))}.

It is obvious that S vanishes at σ = 1, as the argument of the exponential goes
to −∞. It is equally obvious that S vanishes at σ = 0. The derivatives of S in
the region between are

        ∂S/∂μ = −[μ/(1 − σ²)] S;    ∂S/∂σ = [1/σ − μ²σ/(1 − σ²)²] S.
   Since μ is positive by assumption, and S is nonnegative, it is obvious that
∂S/∂μ < 0 anywhere S is nonzero. This accords with intuition: the larger the shift
in the distribution mean, all else being equal, the more stringent the selection
must be. Setting ∂S/∂σ = 0 and applying the quadratic formula shows that S has
exactly one maximum in σ in the allowed range 0 < σ < 1, at

                          σ = (√(μ² + 4) − μ)/2.

This again accords with intuition: if σ is too small, much of the source
distribution must be discarded in order to make the selected distribution narrow
enough. If σ is too large, much of the source distribution must be discarded to
match the limit imposed by available data in the upper tail.
   For a general consideration of what is possible with selection, the important
points to note from these formulae are:
   1. For any given value of σ in the selected distribution, increasing μ requires
      a decrease in S.
   2. For any particular value of μ in the selected distribution, there is an
      optimal value of σ that maximizes S. As μ increases, the optimal value of
      σ decreases, as does S at that value of σ.
   3. Any attempt to increase σ beyond the value that maximizes S leads to
      a rapid decrease in S, since S = 0 is required at σ = 1.
Point 3 in particular means that there is an unavoidable tradeoff between the
degree of distortion from the original distribution, and the amount of data
discarded to achieve it; the more nearly the selected distribution approaches an
otherwise undistorted, mean-shifted version of the source, the smaller the
fraction of the source is being retained.
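As a numerical illustration of this tradeoff (our own sketch, using the closed forms S = σ exp{−μ²/(2(1 − σ²))} and σ_opt = (√(μ² + 4) − μ)/2 derived above, which reproduce the stated limits S = 0 at σ = 0 and σ = 1):

```python
import math

def max_retained_fraction(mu, sigma):
    """Largest possible S for a normality-preserving selection process whose
    output is normal with mean mu and standard deviation sigma < 1, given a
    standard normal source."""
    return sigma * math.exp(-mu**2 / (2.0 * (1.0 - sigma**2)))

def optimal_sigma(mu):
    """The output sigma that maximizes S for a given mean shift mu."""
    return (math.sqrt(mu**2 + 4.0) - mu) / 2.0

mu = 0.3
s_opt = optimal_sigma(mu)                      # about 0.861
S_best = max_retained_fraction(mu, s_opt)
Q_min = (1.0 - S_best) / S_best                # smallest possible filedrawer quotient
```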
   The discussion above proves that, while a selection process can produce
a shifted but still normal selected distribution from a normal source distribution,
the output distribution necessarily has smaller variance than the source.
Moreover, the larger the mean shift, and/or the more closely the selected
distribution approaches the source distribution's variance, the more data must be
discarded in the selection process.
   A more general selection process that is not constrained by producing normal
output need not obey this variance rule: for example, if all data smaller than
a certain absolute value are discarded, the selected distribution will have greatly
increased variance. It will also be bimodal and hence grossly non-normal. Given
the above result for normal output, in which all properties of the selected
distribution except the variance are fixed (since the target mean shift is chosen in
advance), it seems reasonable that more general selection processes must be
subject to a similar tradeoff between the intensity of selection and the degree of
departure from normality in the output. This conjecture will be revisited after
some development and examination of specific selection models in the
following sections.
                       Some Plausible Selection Models
   The selection function s(x) can vary arbitrarily at every x, subject only to the
constraint 0 ≤ s(x) ≤ 1 for all x. However, a real selection process is not likely to
employ an arbitrarily complicated selection function. The selection models
discussed below attempt to sample a reasonably broad range of the space of
practical and plausible models.
   Each heading below describes a one-parameter family of models, where the
mean and all other distribution statistics are determined by a single free
parameter. The reason for examining one-parameter families is that once the
observed mean is fit by choosing an appropriate value of the free parameter, the
remaining distribution statistics have been fixed; this places each model on
a footing comparable to the one-parameter mean shift model. We adopt the
convention of calling the selection parameter a in all cases. The functional forms
of the various models are illustrated graphically in Figure 1.

  Simple Cutoff. The simplest possible data selection model is simply to reject
  all runs with a value less than some fixed cutoff a; s(x) = 0 for x < a, 1
  otherwise. This model bears some theoretical importance in that it is provably
  the model with the smallest filedrawer quotient for any given mean shift. (The
  proof is almost trivial: it rejects all those and only those data with the greatest
  negative contribution to the mean. Changing the rule in any way will therefore
  reduce the mean shift generated per discarded data point, and thus require that
  more data be discarded to achieve the same mean shift.) The fact that the
  cutoff model also produces the greatest departures from normal distribution
  statistics of any model examined is therefore strong support for the general
  "tradeoff" thesis of the preceding section.
  Fractional Rejection. It seems overly simplistic to expect that the simple
  cutoff model ever would appear in actual data. Only the most naïve of frauds
  could imagine that the total disappearance of runs below some set value could
  be invisible; even the most biased of researchers would need phenomenal
  powers of self-delusion to convince themselves that all runs, and only those
  runs, lying below a particular value were methodologically invalid. The
  fractional rejection model supposes that a constant fraction a of all
  unsuccessful (x < 0) runs are discarded:

                      s(x) = 1 - a for x   < 0 , l otherwise.

  Short Left Tail. For a further increase in psychological plausibility, we can
  suppose that an increasing rate of exclusion is employed as the value of the run
  goes more negative. For a simple representation of this possibility, a short left
  tail selected distribution compresses the variance for all negative values of x. In
  other words, for some compression factor a < 1,

             p(x) ∝ f(x/a) for x < 0;    p(x) ∝ f(x) for x ≥ 0,

  where p(x) has been renormalized to a proper probability distribution.
  The selection function for negative x is

       s(x) = exp{(x²/2)(1 − 1/a²)};    as before, s(x) = 1 for x > 0.

  Increasing Bias. Rather than presuming that only negative runs will be
  rejected, we may suppose that an experimenter preference for positive results
  may manifest itself as a graduated probability of rejection. For this model we
  presume that runs that exceed the positive one-tailed p = 0.05 significance
  criterion x = 1.645 are always retained, while runs with x < −1.645 are
  rejected with probability a. In the region −1.645 < x < 1.645, the rejection
  probability drops linearly from a to 0 as x increases.

          s(x) = 1 − a,                   for x < −1.645;
          s(x) = 1 − a/2 + ax/3.29,       for −1.645 ≤ x ≤ 1.645;
          s(x) = 1,                       for x > 1.645.
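The four one-parameter families can be encoded directly; the sketch below is ours (the short-left-tail form in particular follows our reading of the compressed-tail definition, and should be treated as an assumption):

```python
import math

# Illustrative encodings of the four selection functions s(x); in each
# case a is the single free parameter of the model.

def simple_cutoff(x, a):
    """Reject every run below the fixed cutoff a."""
    return 0.0 if x < a else 1.0

def fractional_rejection(x, a):
    """Discard a constant fraction a of all unsuccessful (x < 0) runs."""
    return 1.0 - a if x < 0 else 1.0

def short_left_tail(x, a):
    """Retain negative runs with probability exp((x^2/2)(1 - 1/a^2)),
    a < 1, so exclusion intensifies as x goes more negative (assumed form)."""
    return math.exp(0.5 * x * x * (1.0 - 1.0 / a**2)) if x < 0 else 1.0

def increasing_bias(x, a, b=1.645):
    """Rejection probability a below -b, dropping linearly to 0 at +b."""
    if x < -b:
        return 1.0 - a
    if x > b:
        return 1.0
    return 1.0 - a / 2.0 + a * x / (2.0 * b)
```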

   This enumeration is not intended to imply that unconscious bias would work
by any such explicit or exact means as the example selection functions. Indeed, it
seems virtually certain that data selection due to unconscious bias would operate
in a messy and inexact fashion driven by a host of psychological considerations.
The intent of employing these several families of models, to the contrary, is to
provide functional forms that are relatively amenable to calculations, for models
that capture reasonable qualitative properties of the unconscious bias process.
Taking the last as an example, it seems reasonable to suppose that a biased
experimenter would be most eager to retain significant runs, and most willing
to discard significantly negative runs, with an intermediate level of preference
for intermediate cases. While it would be ridiculous to propose that such
an experimenter would unconsciously be calculating the three-part s(x) function
given above, this s(x) gives a computable quantification of the specified
qualitative features; we therefore reasonably may argue that any psychological
process fitting the given qualitative description should produce output statistics
similar to those of our exemplar, chosen for its computational convenience.
   Figure 1 displays examples of all of these functions. The uppermost plot shows
the standard normal distribution in the interval [−3, 3]. The next plot is a mean-
shifted normal, corresponding to the mean shift model which is our representation
for a genuine anomalous effect. The mean of the distribution has been set at 0.3,
a size appropriate to the scale of effect in some of the more successful PEAR
datasets. The third plot shows the cutoff distribution with the same mean. (All of
the selected distributions in Figure 1 have been normalized to have the same area
as the shifted normal with which they are compared.) The next three plots show

   [Figure 1: stacked plots labeled True Mean Shift, Simple Cutoff, Fractional
   Rejection, Short Left Tail, and Increasing Bias; horizontal axis: normalized
   run value, from −3 to 3.]
                            Fig. 1. Selection model distributions.


the fractional rejection model, the short tail model, and the increasing bias model,
all for the same mean of 0.3. Finally, the last plot shows all four selection models
superimposed as dotted lines on the shifted normal with the same mean.

                  Comparison Methods and Statistical Power
  Most selection processes produce non-normal selected distributions, and
even the normality-preserving selection process produces a selected distribution
with a smaller variance than its source.

                                             TABLE 1
                        Statistical Power Comparison: Required N

   Selection model      χ² Goodness-of-fit        σ      Skewness       Kurtosis

        Simple cutoff                  192       30            44     1.47 × 10⁵
 Fractional rejection                 1920      638           203            460
           Short tail                 1040       43           127           7551
      Increasing bias           2.28 × 10⁵      638          3510           5870

Given the illustrations of Figure 1,
it might seem natural to test the selection models against the mean shift model
by comparing their distribution densities directly, e.g. with a χ² test for
goodness of fit on some appropriate binning scheme. In fact, higher statistical
resolution can be achieved by instead computing some of the higher moments
of the respective distributions. (See Appendix for a more detailed discussion.)
The mean shift model predicts that, regardless of mean shift, the standard
deviation of the normalized data should be σ = 1; the normalized third central
moment (skewness) should be γ₃ = 0; and the normalized fourth central moment
(kurtosis) should be γ₄ = 3. Examining some or all of these parameters will
detect the difference between a mean shift distribution and a selected dis-
tribution for considerably smaller numbers of data points than a general χ² test.
   Table 1 shows the statistical power of tests on the higher moments, for the
same effect size 0.3 shown in Figure 1, by giving the number of data points
required for an α = 0.05, β = 0.50 test of each of the above selection models
against the corresponding mean shift model. That is, if the mean shift model for
μ = 0.3 is taken as the null hypothesis, and the data actually are being generated
by one of the selection models with a chosen to replicate the same value of μ,
Table 1 reports the N at which the expected value of the given test statistic is at
the p = 0.05 significance level (α = 0.05).* This corresponds to the point where,
if the null hypothesis is false, the test is equally likely to produce results which
are significant or nonsignificant under the aforementioned α criterion, leading to
β = 0.5 (probability of erroneously accepting a false null hypothesis). It should
be noted that β = 0.5 is an unsatisfactorily high probability of Type II error; it
is used here not prescriptively but for convenience of calculation, since Table 1
is intended solely to demonstrate the relative sensitivity of different tests for
detecting the various selection models.
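A back-of-envelope sketch of how such required-N figures can be estimated for the standard-deviation test (our own illustration: the normal approximation SD(σ̂) ≈ 1/√(2N) and the one-tailed z criterion are assumptions here, and the paper's exact computation may differ):

```python
import math

def required_n_sigma_test(sigma_model, z_crit=1.645):
    """Rough N at which the expected sample standard deviation of a
    selected distribution (sigma_model) sits exactly at a one-tailed
    criterion z_crit against the mean shift model's prediction sigma = 1.
    Under the null, the sampling SD of the sample sigma is roughly
    1/sqrt(2N), so we solve |sigma_model - 1| * sqrt(2N) = z_crit; at that
    N the test has beta = 0.5, the power level used in Table 1."""
    delta = abs(sigma_model - 1.0)
    return (z_crit / delta) ** 2 / 2.0

# A selected distribution with sigma = 0.9 needs roughly 135 runs to be
# flagged at this power level; sigma = 0.75 needs only about 22.
n_needed = required_n_sigma_test(0.9)
```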
   From Table 1 we see several features important to the identification of the
most sensitive test:

* When the test statistic is a moment parameter, its expected value is found by applying Equations
  1-5, given in the next section. The expected value of the χ² is found by direct computation of the
  difference between each of the selected distributions and a normal distribution with the same mean
  and unit variance.
  At least one of the three central-moment tests always outperforms the
  distribution-based χ² by a large margin, as would be expected from the
  analysis in the Appendix.
  In three of the four cases, the most sensitive moment test is on the standard
  deviation σ; in the remaining case it is on skewness. The kurtosis test is never
  the most sensitive.
  The most sensitive moment test, for each selection model, requires appreciably
  less data than the second-best test; the difference is sometimes considerable.
   A feature which does not appear in Table 1 is the fact that the identity of the
most sensitive moment parameter depends on the scale of the effect. For example,
in Table 1 the cutoff model can most readily be detected by its change in the
standard deviation; however, for a mean shift of 0.03, an order of magnitude
smaller than that used in Table 1 (and more typical of general PEAR databases),
the most sensitive test for the cutoff model becomes the skewness, γ₃. In light of
this variability, the best procedure for testing whether data are consistent with
a particular selection model would seem to be to compute the statistical power
of each test for the given effect size and use the most sensitive.

                     Selection Model Moment Parameters
   Both the statistical power calculations discussed in the previous sections and
actual comparisons with empirical data require that we calculate the higher
moments of a selected distribution from its mean shift. All of the selection rules
allow the mean and higher moments to be calculated from a given a; although
the mean shift in terms of a is seldom given as an invertible function, procedures
such as a numerical binary search readily can be used to find the a that
corresponds to a desired mean shift, and the higher moments then can be
calculated directly. Figure 2 illustrates the results of such a process for the cutoff
distribution, with the cutoff parameter, standard deviation, skewness, and
kurtosis presented as functions of the mean shift.
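The numerical binary search mentioned above can be sketched for the cutoff model, using its mean relation m = f(a)/F(−a), which is monotonically increasing in a (function names are illustrative):

```python
import math

def normal_pdf(x):
    """Standard normal density f(x)."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def normal_cdf(x):
    """Cumulative normal probability F(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cutoff_mean(a):
    """Mean of the cutoff-selected distribution: m = f(a) / F(-a)."""
    return normal_pdf(a) / normal_cdf(-a)

def cutoff_param_for_mean(target_m, lo=-8.0, hi=5.0, tol=1e-10):
    """Bisection for the cutoff parameter a producing a desired mean shift;
    valid because cutoff_mean is monotonically increasing in a."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cutoff_mean(mid) < target_m:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

a = cutoff_param_for_mean(0.3)
m = cutoff_mean(a)               # ~0.3 by construction
var = 1.0 + a * m - m * m        # variance of the selected distribution
```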
   The functional forms of the mean and higher moments of each selection
model, as functions of the free parameter a, can be calculated by straightforward
integrations. For example, for a given cutoff parameter a the cutoff distribution
has the following distribution moments and filedrawer parameters:

              Mean: m = f(a)/(1 − F(a)) = f(a)/F(−a)
          Variance: σ² = 1 + am − m²
          Skewness: γ₃ = [2m³ − 3am² + (a² − 1)m]/σ³                        (1)
          Kurtosis: γ₄ = [3 + m(a³ + 3a) − m²(4a² + 2) + 6am³ − 3m⁴]/σ⁴
        Filedrawer: S = 1 − F(a);    Q = F(a)/(1 − F(a))

   [Figure 2: four panels, each plotted against the distribution mean from 0.0 to 3.0.]

Fig. 2.   Evolution of cutoff distribution with mean shift: (a) cutoff parameter, (b) standard deviation,
          (c) skewness, (d) kurtosis.

It can be seen that, even for this relatively simple function, the functional form
of the higher central moments becomes somewhat involved. For the remaining
distributions it will be somewhat more straightforward to report their moments
⟨x²⟩, ⟨x³⟩, ⟨x⁴⟩ rather than the statistical parameters, which are determined by the
central moments. That is, for any distribution whatsoever, the variance,
skewness, and kurtosis are given by:

        σ² = ⟨(x − ⟨x⟩)²⟩ = ⟨x²⟩ − ⟨x⟩²
        γ₃ = ⟨(x − ⟨x⟩)³⟩/σ³ = [⟨x³⟩ − 3⟨x⟩⟨x²⟩ + 2⟨x⟩³]/σ³                 (2)
        γ₄ = ⟨(x − ⟨x⟩)⁴⟩/σ⁴ = [⟨x⁴⟩ − 4⟨x⟩⟨x³⟩ + 6⟨x⟩²⟨x²⟩ − 3⟨x⟩⁴]/σ⁴

The standard normal distribution has ⟨x⟩ = 0, ⟨x²⟩ = 1, ⟨x³⟩ = 0, and ⟨x⁴⟩ = 3;
hence its expected values of mean, standard deviation, skewness, and kurtosis
are 0, 1, 0, and 3, respectively.
   With Equations 2 in place, we may describe the parameters of the other
distributions somewhat more concisely. For the fractional rejection distribution,
described by the rejection rate a, the first four moments and filedrawer quotient
are given by:

     ⟨x⟩ = a/(S√(2π));   ⟨x²⟩ = 1;   ⟨x³⟩ = 2a/(S√(2π));   ⟨x⁴⟩ = 3;
                     S = 1 − a/2;    Q = a/(2 − a).                          (3)

For the short-left-tail distribution, described in terms of the contraction factor a,
the corresponding values are

     ⟨x⟩ = 2(1 − a)/√(2π);   ⟨x²⟩ = a² − a + 1;
     ⟨x³⟩ = 4(1 − a)(1 + a²)/√(2π);   ⟨x⁴⟩ = 3(a⁴ − a³ + a² − a + 1);       (4)
                  S = (1 + a)/2;    Q = (1 − a)/(1 + a).

And finally, for the increasing bias distribution, if for conciseness we define b =
1.645 for the transition points of s(x), the moments and filedrawer quotient are

     ⟨x⟩ = [a/(2bS)] (2F(b) − 1);             ⟨x²⟩ = 1;
     ⟨x³⟩ = [a/(2bS)] (6F(b) − 2bf(b) − 3);   ⟨x⁴⟩ = 3;                     (5)
                  S = 1 − a/2;    Q = a/(2 − a).
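As a cross-check (our own, not part of the paper), moments of this kind can be verified by direct numerical integration of s(x) f(x); here for the increasing bias model, whose closed-form mean is assumed to be a(2F(b) − 1)/(2bS) with S = 1 − a/2:

```python
import math

def f(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def F(x):
    """Cumulative normal probability function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def s_bias(x, a, b=1.645):
    """Increasing-bias selection function."""
    if x < -b:
        return 1.0 - a
    if x > b:
        return 1.0
    return 1.0 - a / 2.0 + a * x / (2.0 * b)

def moment(k, a, b=1.645, lo=-8.0, hi=8.0, n=50000):
    """k-th moment of p(x) = s(x) f(x) / S by direct numerical integration."""
    h = (hi - lo) / n
    xs = [lo + i * h for i in range(n)]
    S = sum(s_bias(x, a, b) * f(x) for x in xs) * h
    mk = sum(x ** k * s_bias(x, a, b) * f(x) for x in xs) * h
    return mk / S

a, b = 0.5, 1.645
S = 1.0 - a / 2.0
mean_closed = a * (2.0 * F(b) - 1.0) / (2.0 * b * S)
# moment(1, a) agrees with mean_closed, while moment(2, a) and moment(4, a)
# come out near 1 and 3, matching the claim that this model leaves the
# second and fourth moments unchanged.
```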

                          Relation to General Results
   Figure 3 applies Equations 1-5 to illustrate the ease of detection, and the
filedrawer parameter required, for the four models as the mean shift μ is
increased from 0 to 0.5. The "distortion index" plotted in the top figure is the
rate of growth with N (the number of observed data points) of a Z-score
describing the departure of the most sensitive parameter from its theoretical
value. (When N is large enough for the usual normal approximations to the
variation in σ, γ₃, and γ₄ to be useful, then the distortion index, multiplied by
√N, gives the expected value of |Z| for the most sensitive statistic.) The bottom
graph simply plots the filedrawer quotient for that model at that mean shift.
   It is conspicuous from the figure that the models, despite having grossly
different distributions and being best detected by different distribution
parameters, obey a generalized form of the results rigorously derived for
normality-preserving selection functions. As μ increases, both the distortion and
the filedrawer of any particular model increase monotonically. At every μ, the
order of models ranked by increasing distortion is exactly the reverse of their
order ranked by increasing filedrawer; changing to a model with less statistical
distortion always increases the filedrawer quotient.

   [Figure 3: top panel, distortion index vs. mean shift; bottom panel, filedrawer
   quotient vs. mean shift; curves for Simple Cutoff, Fractional Rejection, Short
   Left Tail, and Increasing Bias.]
                  Fig. 3.   Distortion and filedrawer for four selection models.

                              REG Data
  The PEAR REG experiment involved the collection of data from a device
with no known non-anomalous channels for the operator's influence. While full
details of the experimental protocols and controls are available elsewhere (Jahn
et al., 1987; Dunne & Jahn, 1995), a brief summary may be in order.
  Redundant Recording. The raw data were printed on a continuous paper tape,
  and concurrently entered into a computer file. Summary data were also
  recorded by the operator in a logbook. In any case where a discrepancy
  appeared among the three records, the paper tape record was given precedence.
  Advance Designation of Intention. The operator was required to declare an
  intention before data were generated. Data collection could not be initiated
  until the intention was entered and logged in the experimental computer.
  Continuity of Record. The paper tape record was required to be continuous,
  without gaps or breaks, as a safeguard against precisely the sort of selection
  discussed in this analysis. As an aside, it should be noted that this requirement
  of physical integrity of the paper tape and the primacy of the tape record over
  the other redundant records in case of disagreement also provided strong
  safeguards against the alteration of extant data or the introduction of spurious
  data. Such interventions would be considerably more difficult than the already
  challenging task of making data disappear from the records.

Data were collected in runs of 50, 100, or 1000 trials, where one trial is the sum
of successes in 200 binary p = 0.5 events. If spurious data selection were
attempted, the finest possible scale of intervention would have been the run
level; picking and choosing what data to retain at the level of individual trials
would require a massive invasion of all the data recording systems, both
hardcopy and electronic, and involve far more labor than fabrication of the entire
database from whole cloth.
   The three different run lengths mentioned above must be treated separately in
order to discriminate properly between selection and mean shift models. The
reason is that a mean shift model, in the absence of additional qualifying
hypotheses, predicts an effect that is constant at the trial level, and therefore
predicts that the average Z-scores of runs depend on how many trials comprise
them. Therefore, a mixture of Z-scores for runs of different lengths would be an
intrinsically heterogeneous database under one of the hypotheses being
compared, rendering all statistical comparisons suspect.
   Table 2 displays the statistics for the three run lengths present in the primary
REG database. The two active intentions have been combined, with the sign of
deviations in the low intention reversed to produce a uniform measure of deviation
in the direction of intention. The statistical uncertainty (1σ) for each parameter
also is given, along with the Z-score for its deviation from the expected value.
Note that all of the Z-scores for higher moments are nonsignificant, indicating that
the data are at least consistent with a mean shift model.
   In accordance with the discussions in "Comparison Methods and Statistical
Power", we may proceed now to calculate the statistical power of testing the
various moments of the selection models on each of these databases. Table 1
presented a minimum N for having a probability P = 0.5 of failing to distinguish
                   Statistical Consequences of Data Selection

                                     TABLE 2
                                REG Run Distributions

Run length     N runs     Mean               SD                Skewness            Kurtosis

a selection model from the undisturbed mean shift hypothesis. Since the amount
of formal data in hand is fixed and cannot be modified, Table 3 instead presents
β for each parameter test, on each database, where a standard α = 0.05 criterion
for rejecting the mean shift hypothesis is assumed.
   The listing of β values in Table 3 allows easy identification of the most
powerful test for detecting the presence of each selection model in a given
dataset: simply choose the parameter with the smallest β value for that model in
that dataset. Unfortunately it also is clear that, for these effect sizes and database
sizes, the statistical power is too low to distinguish some of the models from the
mean shift model. In particular, the lowest β value that appears for the increasing
bias model is 0.923; that is, even with the most sensitive test on the best database
for the purpose, there is a 92.3% likelihood that a database actually produced by
the increasing bias process would fail to produce a test statistic significantly
different from the expected value for a mean shift model.
   Table 4 lists the predictions for the various models on these datasets. In each
case the single free parameter of the model is used to fit the observed mean; the
standard deviation, skewness, and kurtosis then follow from the functional form
of the model. In addition to the model predictions for these parameters, Table 4
gives the Z-scores for the empirical value of the given parameter (as reported in

                                        TABLE 3
                                  Statistical Power β

                                                   Fractional                    Increasing
Dataset           Parameter        Cutoff           rejection     Short tail         bias

50-trial runs     Skewness
100-trial runs    Skewness
1000-trial runs   Skewness
                                      TABLE 4
                                   Model Predictions

                                SD                 Skewness                Kurtosis
        Model             σ        Z(σ)        γ₃       Z(γ₃)        γ₄       Z(γ₄)

50-trial runs
  Mean shift
  Fractional rejection
  Short tail
  Increasing bias
100-trial runs
  Mean shift
  Fractional rejection
  Short tail
  Increasing bias
1000-trial runs
  Mean shift
  Fractional rejection
  Short tail
  Increasing bias

Table 1) relative to the model prediction. It is divided into three sections, for the
three different datasets.
   It is clear that the cutoff model is rejected for all three datasets. The short tail
model is also strongly rejected for the largest dataset, that of 50-trial runs. Both
the fractional rejection model and the increasing bias model are consistent with
the statistics of the actual data, just as the mean shift model is. This ambiguous
result is only to be expected, given the β values listed in Table 3, where the most
sensitive tests have β = 0.751 for the fractional rejection model and β = 0.923
for the increasing bias model. We simply do not have enough data to distin-
guish these models reliably from the mean shift model on the strength of any
statistical parameter of the distribution. Therefore, the resolution of the possibility
of data selection must turn, not on the statistical parameters of the formal data
distributions, but on the filedrawer quotients describing the amount of absent data.

                              Missing Data: Void Runs
  Aside from the published data reported in Table 2, some data have of course
been discarded. The formal protocol, although it has changed over time, always
has mandated the invalidity of data collected under certain protocol-violating
conditions. To the greatest extent possible, these void criteria have been
designed with the intent of eliminating the human decision factor, and therefore
the possibility of biased preferences.
                   Statistical Consequences of Data Selection                 341

   The standard protocol requires that there be some record of the existence of
every occasion on which formal data were generated, or even when an attempt
was made to generate formal data. Violation of this protocol condition would
require a deliberate attempt to deceive; the consequences of such efforts will be
discussed further below, but for the moment we are concerned only with the
possible impact of bias on the decision to reject data.
   In the majority of cases, void runs generated data which were recorded, and
statistical summaries were computed. Generally these are cases where some
protocol violation mandated that the data be considered invalid; e.g., during
a period when the formal protocol required a minimum of 5 runs per session,
some sessions of fewer than 5 runs were generated due to operator misunder-
standings of the protocol. These perforce were declared void and excluded from
the formal database. During the same period, operators were permitted to com-
plete a series over the course of multiple laboratory visits (since a series might
include up to 300 runs, requiring over 5 hours of the operator's time). Some
operators never returned to complete a series, and in these cases also the
experimenters were obliged to mark the data as void.
   In some cases data were declared void due to an equipment malfunction of
such nature as not to preclude data generation or recording. For example, there
were several occasions on which runs were generated with internal (inter-trial)
standard deviations of 14-18 rather than the theoretical √50 = 7.071; this
grossly aberrant output was taken as sufficient demonstration that the noise
source had suffered a breakdown and was no longer emitting properly
conditioned random values. (Indeed, in these cases physical intervention was
required to restore proper operation of the device.) In other cases, individual
runs were declared void due to protocol violations such as the unexpected and
disruptive arrival of visitors.
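A check of this kind is easy to automate. The sketch below flags runs whose inter-trial standard deviation departs grossly from the theoretical √50 ≈ 7.071 (assuming the usual REG trial construction as a sum of 200 binary samples); the tolerance is an illustrative assumption, not the laboratory's actual voiding criterion.

```python
# Flag runs whose inter-trial standard deviation is grossly aberrant.
# Theoretical trial SD for a sum of 200 p=0.5 bits: sqrt(200*0.25) = sqrt(50).
import math
import statistics

THEORETICAL_SD = math.sqrt(50)  # ~7.071

def run_is_aberrant(trials, tolerance=2.0):
    """True if the run's inter-trial SD deviates from theory by more
    than `tolerance` (an assumed, illustrative cutoff)."""
    return abs(statistics.stdev(trials) - THEORETICAL_SD) > tolerance
```

A run with inter-trial SD in the 14-18 range, as in the breakdowns described above, is flagged immediately.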
   In some cases, no data values were recorded for the void runs. Much of the
time this was due to equipment failures that made it impossible to record data, as
for example during a period when the automated data collection was handled
not by a local computer but by a remote connection to a departmental server,
which was prone to unpredictable downtime and sometimes failed during an
experiment. Sometimes, however, these episodes were due to operator errors in
the conduct of the experiment, or were caused by a problem such as a disruptive
visit actually prevented the recording of data rather than merely interrupting the
   The formalism discussed above addresses the statistical features left behind in
a population of observed and recorded data, as a consequence of the construction
of that population, by discarding and concealing a selected component of the
total source distribution. It is clearly fatuous to apply this technique to the
population of void runs with recorded values: the impact that their removal has
had on the data can be calculated directly, simply by restoring them to the
experimental population. On the other hand, since at least some of the void data
with known values are products of a random source known to be malfunctioning

at the time, including them as part of the data under analysis violates one of the
assumptions of the formalism, namely that the output of the experimental
apparatus follows a standard normal distribution.
   The best resolution of the situation with the two classes of void runs would
seem to be as follows. The distribution of those voids with values can be
computed; it can be compared, both with the null hypothesis of zero effect and
with the observed effect size in the formal data, for any evidence of bias in its
removal from the database, and it can be recombined with the formal data to
establish its impact, if any, on the scale and significance of the anomalous effect.
The voids without values, in contrast, comprise a population of missing data
which properly should be compared with the filedrawer quotients predicted for
the various selection models. These predictions should, however, be based on
the mean shift and population size of the formal data alone, not the formal data
recombined with voids of known value, since these latter are not in all cases
drawn from the same distribution.
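The two comparisons proposed for the voids with values can be sketched as follows. The exact test forms are assumptions on our part (for instance, the paper does not specify pooled versus unpooled variance for the two-population T-score; a Welch-style form is used here).

```python
# Sketch of the void-run checks: a composite Z of the void population
# against a zero-mean null, and a two-population t for void vs. formal.
import math
import statistics

def z_vs_null(void_zscores):
    """Composite Z of run-level Z-scores against a zero-mean null."""
    return sum(void_zscores) / math.sqrt(len(void_zscores))

def t_two_sample(a, b):
    """Welch-style two-sample t for a difference in means (assumed form)."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se
```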
   Table 5 summarizes the void data present in each of the three databases. It
gives the numbers of each type of void run; for the voids with values, it
additionally gives the number of voids with results in and contrary to the
direction of intention and the Z-score of this count imbalance. (These totals do
not add to the total count of voids with values in the 50-trial runs, because six of
these runs had means of exactly 100.00, which is neither in nor contrary to the
direction of intention.) Also, the mean and standard deviation of the population
of voids with values, the Z-score of this population against the null hypothesis,
and a two-population T-score for the difference between the voids and the
formal data are given. Finally, the overall Z-score for the anomalous mean shift
is recomputed with the void data added to the formal data.
   The uniformly negative means of the void populations suggest that, despite all
efforts, some degree of bias was present in the rejection of these data. None of
the three void populations differs significantly from a null hypothesis, however.

                                      TABLE 5
                                Void Runs, by Database

                             50-trial runs         100-trial runs      1000-trial runs

N without values
N with values
. . . Matching intention
. . . Against intention
Z of count imbalance
Mean Z of voids
Z vs. null
T vs. formal
Z, formal only
Z, formal + void

The composite Z for a difference from the null, across all three sets, is -1.2197,
entirely consistent with chance variation.
   The population counts of void runs in and contrary to the direction of
intention provide a secondary check of the existence of simple forms of bias. It
may be noted that in the 100-trial run length the number of void runs in the
direction of intention actually exceeds the number of void runs contrary to
intention. Of the total population of void runs across all three categories, there
are 358 contrary to intention, 330 in the direction of intention, and 6 null,
producing a net Z = -1.0675 against a hypothesis that a void run is equally likely
to be in, or contrary to, the direction of intention.
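The count-imbalance Z is a simple sign test, and the pooled figure quoted above can be reproduced directly:

```python
# Sign test: under the null, a void run is equally likely to fall in or
# against the direction of intention (runs exactly at chance excluded).
import math

def count_imbalance_z(n_with, n_against):
    n = n_with + n_against
    return (n_with - n_against) / math.sqrt(n)

print(round(count_imbalance_z(330, 358), 4))  # -1.0675, as quoted above
```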
   The void 1000-trial runs do differ significantly (p = 0.036, two-tailed) from
the formal 1000-trial runs, and the meta-analytic combination of these three
T-scores produces a marginally significant composite Z = -1.9867 (p = 0.047,
two-tailed). While this significant difference could be taken as evidence for real
bias in the void selection process, it also should be noted that the combination of
no significant deviation from the null with a significant deviation from the
formal data is consistent with the existence of a genuine effect in the formal
data, provided that the circumstances which led to the rejection of a run as void
also were such as to impede any anomalous effect. Since the most important
criteria for voiding involved equipment breakdowns of various sorts, and
unavoidable external interruption or distraction of the operator, this last would
seem a reasonable expectation.
   The recalculated Z-scores that include the voids along with the formal data
are, for obvious reasons, slightly decreased. As a result, the composite Z-score
representing the overall evidence for an anomaly, which is 3.8087 for the formal
data, is reduced to 3.4439 when the voids are included, a reduction of
approximately 9.6%. We may conclude from this that there is marginal evidence
for a bias in the selection of void runs with values, but that it does not
substantially impact the evidence for the existence of an effect. While it might
produce a small distortion in the observed effect size, this distortion is smaller
than the statistical uncertainty in that observation.
   Having resolved the interpretation of those void runs which have recorded
values, the next stage is to consider the voids with no values by applying the
selection formalism. As discussed above, these voids without values are the
appropriate population of missing data to be compared with the filedrawer
quotients computed for each model. Table 6 compares the actual filedrawer
quotients Q, as computed from the formal data populations in Table 2 and the
void populations in Table 5, with the theoretical values required by the fractional
rejection, short tail, and increasing bias models. (The cutoff model has been
dropped from Table 6 since its statistical predictions have already been shown to
be completely incompatible with all datasets.)
   The 2-scores presented in Table 6 are computed from the mean and standard
deviation of the binomial distribution for the theoretical rejection rate mandated

                                       TABLE 6
                                 Filedrawer Populations

Item                             50-trial runs        100-trial runs      1000-trial runs

Number of formal runs
Void runs without values
Real filedrawer Q
Fractional rejection model
  Predicted Q                         0.0347                0.0504                0.0608
  Predicted void population              446                   168                    43
  Z, real vs. prediction             12.3182               12.8824                6.0869
Short tail model
  Predicted Q                         0.0176                0.0259                0.0313
  Predicted void population              226                    87                    22
  Z, real vs. prediction              2.7740                9.2188                4.0422
Increasing bias model
  Predicted Q                         0.0506                0.0735                0.0886
  Predicted void population              650                   245                    62
  Z, real vs. prediction             18.1485               15.5863                7.4770

by each model. This rate, in conjunction with the known population of retained data, gives us both an expected
population of voids, and a standard deviation for that expected population. The
observed population of void runs can then be compared with that theoretical mean
and standard deviation to obtain a Z-score, which is the source of the Z values
listed in Table 6. The sign of the difference was ignored in computing these Z-
scores, so if p-values are calculated for them, the two-tailed form must be used.
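The binomial computation is not spelled out in full here; one plausible reading (explicitly an assumption on our part) is that, given N retained runs and a model-implied per-run rejection probability r, so that Q = r/(1 − r), the number of voids needed to leave N retained runs is negative binomial, with mean N·Q and variance N·Q·(1 + Q):

```python
# Hypothetical reconstruction (an assumption, not the paper's stated
# formula): void count ~ negative binomial with mean N*Q and
# variance N*Q*(1+Q), where Q is the model's filedrawer quotient.
import math

def filedrawer_z(n_retained, q_predicted, voids_observed):
    mean = n_retained * q_predicted
    sd = math.sqrt(n_retained * q_predicted * (1.0 + q_predicted))
    return abs(voids_observed - mean) / sd  # sign ignored, as in Table 6
```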
   It is evident that all of the models predict rejection rates grossly in excess of the
actual number of void runs without recorded values. In many cases the mismatch
is so extreme that p-values cannot be calculated readily by conventional
techniques. The least significant result is the value of 2.774 for the short tail
model on the 50-trial runs, with a two-tailed p-value of 0.006. This particular
model, however, can already be rejected on the basis of its distribution statistics
as discussed above. No model examined here can plausibly accommodate both
the observed distribution statistics and the known rate of run rejection.
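The quoted p-value for the least extreme entry can be checked directly from the normal tail:

```python
# Two-tailed p-value for a Z-score, applied to the least significant
# entry of Table 6 (Z = 2.774, short tail model, 50-trial runs).
from statistics import NormalDist

def p_two_tailed(z):
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

print(round(p_two_tailed(2.774), 3))  # 0.006, as quoted above
```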

       Additional General Arguments: Models with More Parameters
   Except for the cutoff model, which demonstrably has the smallest possible
filedrawer for a given mean shift, the selection models examined above are not
obviously optimal or extremal in their properties. While they show the same
relationship between distortion and filedrawer expected from the analysis of
normality-preserving selection, it seems worthwhile to sample the space of
possible s(x) in somewhat more detail to test the generality of this property.
   An immediate generalization can be obtained by noting that the cutoff
model and the fractional rejection model are two extreme members of a single,
two-parameter family of selection processes. This family of "fractional cutoff"

models rejects a fraction r of all data falling below a minimal cutoff boundary b.
The simple cutoff model is then the fractional cutoff model with b = a and r = 1;
the fractional rejection model is the fractional cutoff model with b = 0 and r = a.
A choice of a specific mean shift does not, of course, identify a specific member
of this two-parameter family, but rather determines a one-parameter subset of it.
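Under the natural reading that the fractional cutoff model retains weight 1 − r below b and 1 above it, standard normal integrals give the retained mean as rφ(b)/(1 − rΦ(b)) and the filedrawer quotient as rΦ(b)/(1 − rΦ(b)). These closed forms are our own derivation, sketched below; fixing the mean shift μ then determines b as a function of r by a one-dimensional root search.

```python
# Fractional cutoff family: reject fraction r of all data below b.
# Closed forms follow from integrating the standard normal density
# (derived here, not quoted from the paper).
from statistics import NormalDist

ND = NormalDist()

def mean_shift(b, r):
    """Mean of the retained distribution: r*phi(b) / (1 - r*Phi(b))."""
    return r * ND.pdf(b) / (1.0 - r * ND.cdf(b))

def filedrawer_q(b, r):
    """Discarded-to-retained ratio: r*Phi(b) / (1 - r*Phi(b))."""
    return r * ND.cdf(b) / (1.0 - r * ND.cdf(b))

def solve_boundary(r, mu_target=0.0277, lo=-10.0, hi=0.0):
    """Bisect for the b giving the target mean shift; the bracket
    assumes the solution lies below zero, true for the r of interest."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_shift(mid, r) < mu_target:
            lo = mid  # mean shift increases with b on this bracket
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With r = 1 this reduces to the simple cutoff (truncated-normal) model; with b = 0 and r free it reproduces the fractional rejection model. Sweeping r from 1 toward 0 at fixed μ traces a curve along which Q increases, consistent with the family's behavior described for Figure 4.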
   For more general investigations, one may begin to approximate the freedom
of the full, arbitrary s(x) by adopting a sectional selection model. For some n,
choose n − 1 boundaries b_1, ..., b_(n−1) in the real line, along with the notional
"boundaries" b_0 = −∞ and b_n = +∞. For convenience, and without appreciable
loss of generality (since n is not fixed), these boundaries can be placed at
uniform quantiles of the inverse normal distribution, so that ∫ f(x)dx = 1/n over
each interval (b_(k−1), b_k), for each k ∈ 1, ..., n. The selection function s(x) is
then taken as the piecewise continuous function defined by s(x) = s_k for
b_(k−1) < x < b_k, for each k ∈ 1, ..., n.
   It is obvious that for this n-section selection function S = (1/n) Σ_(k=1..n) s_k.
The higher moments are given by ⟨x^m⟩ = (1/S) Σ_(k=1..n) s_k I(b_(k−1), b_k, m),
where the integral function I is defined as I(a, b, m) = ∫_a^b x^m f(x)dx; note
that the I terms depend only on the set of bs and are the same for any selection
function using the same set of sections.
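The sectional formalism can be checked numerically. The sketch below places the boundaries at uniform normal quantiles and evaluates S and the first two moments, using closed forms for I(a, b, m) with m ≤ 2 (skewness and kurtosis would use the analogous m = 3, 4 forms).

```python
# n-section selection model: boundaries at uniform quantiles of the
# standard normal; closed-form moment integrals I(a, b, m) for m <= 2.
import math
from statistics import NormalDist

ND = NormalDist()

def boundaries(n):
    """b_0 = -inf, b_n = +inf, interior b_k at the k/n normal quantiles."""
    return [-math.inf] + [ND.inv_cdf(k / n) for k in range(1, n)] + [math.inf]

def phi(x):
    return 0.0 if math.isinf(x) else math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def Phi(x):
    if math.isinf(x):
        return 1.0 if x > 0 else 0.0
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def I(a, b, m):
    """Integral of x^m f(x) over [a, b] for the standard normal f."""
    if m == 0:
        return Phi(b) - Phi(a)
    if m == 1:
        return phi(a) - phi(b)
    if m == 2:
        ap = 0.0 if math.isinf(a) else a * phi(a)
        bp = 0.0 if math.isinf(b) else b * phi(b)
        return Phi(b) - Phi(a) + ap - bp
    raise ValueError("only m <= 2 implemented in this sketch")

def selection_moments(s, bs):
    """Return (S, mean, second moment) for section weights s."""
    n = len(s)
    S = sum(s) / n
    mom = lambda m: sum(s[k] * I(bs[k], bs[k + 1], m) for k in range(n)) / S
    return S, mom(1), mom(2)
```

With all s_k = 1 the selection is trivial and the routine returns S = 1, mean 0, and second moment 1; zeroing the sections below the median gives S = 0.5 and mean 2φ(0) ≈ 0.798.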
   Figure 4 shows numerous selection models plotted against their distortion
index D and filedrawer quotient Q. All models in this figure are constructed
to have μ = 0.0277, identical to the 50-trial runs subset. The shaded area in
the lower left corner identifies the limits of Q and D imposed by a Z < 2 criterion
of consistency with the observed characteristics of the 50-trial runs; any model
outside that area will be rejected at p < 0.05 (two-tailed) or better for its
distribution shape, its filedrawer prediction, or both.
   The four filled markers show the four one-parameter models, as labeled on the
figure. The smooth solid line shows the behavior of the normality-preserving
selection model, where the distortion index is purely driven by the change in
variance. The dotted line shows the evolution of the two-parameter fractional
cutoff model as it is smoothly changed from the cutoff model to the fractional
rejection model while maintaining constant mean shift. It is noteworthy that this
curve passes very close to the point characterizing the short left tail model,
despite their very different functional forms. It is also intriguing that as the
cutoff boundary b migrates toward zero, the index of distortion passes through
a minimum and then begins to increase again, while the filedrawer quotient
increases monotonically throughout.
   The remaining elements of this figure are based on sectional selection models
with 100 sections. The cross icon is the sectional model that most nearly matches
the optimal (maximal S) normality-preserving selection; as one might expect it
lies on the curve for such models, at its minimum in Q (which is its maximum in
S given Q = (1 − S)/S). The open circle is the 100-section sectional model
defined by s_k = a_0 + a_1 k, where the linear parameters a_0 and a_1 are
determined by the joint constraints s_100 = 1 and μ = 0.0277.
   The dots are the result of an iterative optimization procedure applied to a 100-

[Figure 4: filedrawer quotient Q plotted against distortion index D. Legend:
cutoff model; fractional rejection model; short left tail model; increasing
bias model; 100-section smoothly increasing; 100-section closest to normal;
iterative improvement attempts; normality preserving; fractional cutoff family.]

                        Fig. 4.    Distortion vs. filedrawer tradeoff.

section model. At several starting points, including the smoothly increasing
model, the normal model, and three models close to the different members of the
fractional cutoff family, a gradient-following optimization algorithm was used
to try to reduce D, reduce Q, or both. For each starting point one algorithm
attempted to reduce D without regard to effects on Q, another minimized Q
without regard to D, and others attempted to minimize linear combinations of Q
and D with varying weights. These optimization algorithms halt when their
attempt to follow a downward gradient produces an increase rather than
a decrease in the target parameter, indicating that a local minimum lies within
the algorithm's finest resolution for adjustments to the individual sk.
   It is notable that the scatter of dots at the upper left represents one model
(the one minimizing D without regard to Q) from each of the five starting points.
Similarly, several of the larger dots appearing along the "fractional cutoff
family" dotted curve result from efforts starting at different points to minimize
Q without regard to D. One, obscured by the large block showing the simple

cutoff model, essentially rediscovered that model to within the resolution of the
100-section parameterization. Other dots represent other local minima for the
different linear combinations αD + βQ being minimized.
   The endpoints of these optimization attempts, along with the parametric curve
of the fractional cutoff model family, strongly indicate the existence of some
limiting curve below which the distortion index and the filedrawer parameter
cannot simultaneously be reduced. Moreover, these facts suggest that the
fractional cutoff family curve is either on or very near that limit at least up to its
inflection point, and that the limiting curve can be expected to have some
smooth continuation into the region populated by the D-minimizing endpoints at
the upper left. Finally, it is clear that the four one-parameter models examined in
detail are all quite near the joint lower limit of distortion and filedrawer, and
span much of the possible range of distortion values.
   Given the failure of any selection model to occupy the shaded region
statistically consistent with the observed REG data, even if the selection model
is allowed to optimize 99 free parameters in an effort to achieve such
consistency, it seems reasonable to conclude that the REG observations are
inconsistent with any form of selection hypothesis whatever. A graph similar to
Figure 4 could be drawn for the 100-trial runs; it would show even more
dramatic inconsistencies due to the larger effect size and smaller filedrawer. The
same cannot be said for the 1000-trial runs due to the poor statistical resolution
resulting from their relatively small population. However, even if this subset is
regarded as suspect due to the possibility of selection, the 50-trial and 100-trial
runs between them produce an aggregate composite Z = 3.9044 (using the
standard per-trial weighting), a result actually slightly stronger than that of the
REG database as a whole.

                    Conclusions from Model Comparisons
  On the basis of the statistical parameters of the data distribution, we are able
to reject the data-selection models producing the strongest distortion of the
source statistics as an explanation for the anomalous mean shift in the REG data
(Table 4). When we examine models with lower distortion indices, we find that
the distribution statistics become indistinguishable from a mean shift hypothesis,
but the required rejection rate for these models is so large as to be completely
incompatible with experimental records (Table 6).
  It has been proven above that for selection models preserving normality of the
output statistics, there is a tradeoff between the variance (the only free parameter
once the mean is fixed) and the filedrawer; the more closely the selection
process tries to preserve the original variance while imposing a nonzero mean
shift, the more data it must discard. Figures 3 and 4 illustrate that the tradeoff
between increasing filedrawer vs. increasing distortion appears to be a general
feature of selection models, even when the number of adjustable model
parameters is increased and when optimizing searches are made to try to

improve their performance in these features. If this generalization, supported by
all currently available evidence, is valid, then the fact that every selection model
considered is either strongly rejected by its distribution statistics, strongly
rejected by its predicted filedrawer population, or both, means that no form of
selection model can account for the data.
   As noted above, the experimental protocol involved logging and recording all
rejected or missing data along with the reasons for their rejection. Any
experimenter being misled by bias into making an invalid decision to discard
a dataset thus would leave a record of this act, even if the data values themselves
went unrecorded. The only possibility for selection rates large enough to induce
the effects therefore requires deliberate deception on the part of the experi-
menters, rather than simple bias. Experimenter fraud of this sort is frequently
invoked as a last-resort accusation for explaining away anomalous results. A
drawback of this "explanation" is that it is innately unfalsifiable: once it has
been decided that a given experimenter is fraudulent, there is no reason to
believe anything that experimenter says, nor any argument that the experimenter
can make to refute the accusation. Perhaps more to the point, experimenters who
set out to conduct a fraudulent experiment have far less labor-intensive ways to
do so than carefully hiding a selected subset of the experimental data after they
were generated.

                          A Subset of Special Interest
   Aside from the general question of data selection in the REG experiment,
there is a specific subset where the issue is of extra interest. As has been noted in
the past (Dunne & Jahn, 1995; Dobyns & Nelson, 1998), the operator assigned to
ID code 010 produced impressively large effect sizes in the early period of the
experiment. This early period is distinguished from this operator's later data, not
only by a temporal hiatus of over a year in which no data were generated, but
also by a change in device (a portion of the hiatus, for this operator and all
others, was caused by the delay of qualifying and calibrating the replacement
REG machine), and by a change in protocol (it was decided during this period
that all secondary parameters, such as volitional vs. instructed assignment of trial
intentions, must be held constant throughout a series, rather than being variable
on a session-by-session basis as had been the case previously). The early data for
Operator 010 have an effect size more than an order of magnitude larger than
any other database; they are statistically distinguishable not only from the vast
bulk of other operator performances, but from the contemporaneous early data of
other operators, and from the later performance of the same operator as well.
The reasons for the larger effect are not clearly understood.
   The distinctive character of this early 010 dataset, and its lack of explanation,
mandate that all reasonable hypotheses for its outcome must be carefully
scrutinized. Therefore, it seems appropriate to apply the formalism developed
herein to the possibility that this extraordinary database was produced by
deliberate deception on the part of the operator.

   The formal data in this set comprise 503 runs, with a mean Z-score of 0.2556
(composite Z = 5.732). All runs are in the 50-trial length. There are 38 void runs
with values in this database; these voids have a mean value of -0.2037
(composite Z =-1.256, nonsignificant). Combining the voids with known values
with the formal data would lower the effect size (mean Z-score) to 0.2233, with
an associated composite Z-score of 5.194. As in the general database exami-
nation, the voids with known values have an apparent negative bias that is
nevertheless well within the range of plausible chance variation; the change
between effect size with and without the void data is within the statistical
uncertainty of the measured effect size in the formal data, and the Z-score of the
recombined data remains highly significant. We may conclude that, as in the
general analysis, the voids with values do not appreciably impact the experi-
mental conclusions, and they need not be considered further.
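The recombination arithmetic quoted above can be verified directly, taking the composite Z as the mean run-level Z times √N (consistent with the figures in the text to rounding precision):

```python
# Operator 010 subset: recombining voids-with-values with formal data.
import math

def composite_z(mean_z, n):
    return mean_z * math.sqrt(n)

n_formal, mean_formal = 503, 0.2556
n_void, mean_void = 38, -0.2037

combined_mean = (n_formal * mean_formal + n_void * mean_void) / (n_formal + n_void)

print(composite_z(mean_formal, n_formal))              # ~5.73, quoted as 5.732
print(combined_mean)                                   # ~0.2233
print(composite_z(combined_mean, n_formal + n_void))   # ~5.19, quoted as 5.194
```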
   The database also contains 10 voids with no recorded values. Unfortunately,
since we are here considering the possibility of deliberate concealment of data
in addition to experimental bias, this does not give us a value for the actual
filedrawer. Indeed, the actual filedrawer in a case of deliberate data selection is
both unknown and unknowable. Our interpretation of theoretical filedrawer
predictions of model fits must, rather, be based on the credibility of the
operator's having managed to discard the requisite amount of data without
detection by the experimenters, assuming a selection process that is consistent
with the statistical parameters of the data.
   Table 7 presents the standard deviation, skewness, and kurtosis of the actual
database, as contrasted with the selection models developed previously. The
filedrawer quotient Q required for the selection model to fit the observed mean
shift is also reported.
   An analysis of statistical power indicates that the most sensitive test parameter
is the skewness, in the case of the fractional rejection model, and the standard
deviation for all other models. In contrast to the examination of the general

                                       TABLE 7
                            Model Comparisons for Special Subset

Actual data
Fractional rejection
Short tail
Increasing bias

database, we generate a p < 0.05 rejection of the selection model in every case.
Moreover, those models for which the rejection is weakest (p = 0.0013 for
fractional rejection, p = 0.020 for increasing bias, both two-tailed) predict large
filedrawer quotients; generating these data by the increasing bias selection
model would require discarding very nearly one run for every two which were
recorded. While experimenter vigilance might be less than perfect, the redundant
measures deployed to prevent operators from concealing the fact that they have
generated data (the continuous hardcopy of the data is perhaps the most relevant
to the current instance) make it difficult to credit that an operator could succeed
in concealing one experimental run out of every three.

                                 Final Summary
   The space of possible selection processes is, essentially, the space of all
functions of x bounded by [0, 1]. A set of four relatively simple one-parameter
families of selection processes nevertheless allows some conclusions to be
drawn. The cutoff model provably demonstrates the existence of a minimum
level of discarded data for any target level of mean shift. The performance of
numerous sectional selection models allowing multi-parameter optimization
indicates a lower limit to the filedrawer for any given level of statistical
distortion, and indicates further that the minimal filedrawer increases as the
distortion decreases, and that the four one-parameter models are close to if not
actually occupying this minimal limit.
    For the effects in the database as a whole, the effect size is small enough that
some selection models can produce statistics indistinguishable from those
observed in the data. The possibility that the apparent effect was constructed by
biased selection can be refuted, however, by comparing the actual population of
discarded runs with the population required by those selection models that
produce adequate fits to the data statistics. Surveying the space of optimized
multi-parameter selection models confirms that the conclusions drawn from the
single-parameter models can be generalized to all selection models with high
confidence. Since failure to record the existence of a discarded run would
require the deliberate circumvention of protocol rather than mere biases of
judgment, the thesis that the anomalous effect as a whole could be due to
unconscious selection of favorable data can be rejected.
    For a database that has been regarded as suspect due to its origin in the
exceptional performance of a single operator, all models examined predict
statistics significantly different from those actually present. Even the least ill-
fitting model requires a "filedrawer" of discarded data so large that its
successful concealment from the experimenters by a malevolent operator
becomes incredible. Since improving the statistical fit would require an even
larger filedrawer quotient, the hypothesis that Operator 010 produced a spurious
effect by concealing negative runs also can be regarded as refuted.

                Appendix: Relative Statistical Power of Tests
   It is common practice to compare distributions by using distribution tests such
as χ² goodness-of-fit tests or Kolmogorov-Smirnov tests, to the extent that the use
of a moment-based test may strike some readers as archaic. However, moment
tests offer better statistical power than such distribution tests when a specific
hypothesis regarding a moment value is available.
   Since in the worst case of a selection effect we may be confronted with
a normal distribution that merely has a smaller standard deviation than expected,
the detection of reduced variance is used as a test case. Table A.1 presents, for
a range of N, the probability of Type II error for an optimally sensitive χ² test,
given a change in σ such that a simple variance test has α = β = 0.05. In other
words, each line of the table was computed by calculating, for that N, the value of
σ that would produce a 5% chance of failing to be rejected by a p < 0.05 criterion
on a variance test. The χ² tests were based on uniform-population binning; the
number of bins was chosen by finding the bin number which minimized the Type
II error probability β. The β values given in the table assume α = 0.05, that is, that
the χ² test will reject the null hypothesis on a p < 0.05 criterion.
   Recalling that α = β = 0.05 for the direct moment test on the variance, it is
obvious that the χ² test has a much higher chance to overlook the same effect on
the same data. It is notable that as N increases, the loss of performance in the χ²
test becomes worse.
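   The binned χ² test just described can be sketched in a few lines of Python. This is a from-scratch illustration, not the code behind Table A.1: the choice of 10 bins and the hardcoded standard-normal deciles and χ² critical value are illustrative, not the appendix's optimized values.

```python
import random

random.seed(7)

# Equiprobable bin edges under N(0,1) for 10 bins (standard normal deciles).
EDGES = [-1.2816, -0.8416, -0.5244, -0.2533, 0.0,
          0.2533,  0.5244,  0.8416,  1.2816]
CHI2_9_95 = 16.92          # upper 5% point of chi-square with 9 df

def chi2_reject(xs):
    """Goodness-of-fit test with uniform-population binning:
    10 equiprobable cells, reject at the 5% level."""
    counts = [0] * 10
    for x in xs:
        b = sum(1 for e in EDGES if x > e)   # bin index 0..9
        counts[b] += 1
    exp = len(xs) / 10.0
    stat = sum((c - exp) ** 2 / exp for c in counts)
    return stat > CHI2_9_95

# Empirical power against the narrowed alternative sigma = 0.767, N = 100:
hits = sum(chi2_reject([random.gauss(0, 0.767) for _ in range(100)])
           for _ in range(2000))
print(f"chi-square power: {hits / 2000:.2f}")
```

The rejection rate comes out well below the 95% power of the direct variance test on the same alternative, which is the appendix's point.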
   Similar considerations apply to the Kolmogorov-Smirnov test for distribution
differences. For a simple demonstration, a Monte Carlo test was run by
generating 10,000 sample distributions, each comprising 100 normal deviates
with σ = 0.767. As noted above, a simple variance test will reject such samples at
the p < 0.05 level 95% of the time. The K-S test, in contrast, produced p < 0.05
rejection on only 2164 of the samples, indicating a Type II error probability of
approximately 78%. It was considered redundant to extend this investigation to
larger sample sizes.
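   The Monte Carlo comparison above is easy to reproduce in outline. The sketch below is my own, not the authors' code: it hardcodes the asymptotic 5% K-S critical value and the exact lower 5% chi-square quantile for the variance test (so the variance-test power comes out slightly above the 95% figure quoted in the text, which used a different sizing convention), and uses 2,000 rather than 10,000 trials.

```python
import math
import random

random.seed(42)

def PHI(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_reject(xs, crit):
    """One-sample K-S test against N(0,1); reject when the statistic D
    exceeds the asymptotic 5% critical value ~1.358/sqrt(n)."""
    xs = sorted(xs)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = PHI(x)
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d > crit

def variance_reject(xs, chi2_lower):
    """One-sided variance test: reject 'sigma = 1' when (n-1)s^2 falls
    below the lower 5% chi-square quantile (77.05 for 99 df)."""
    n = len(xs)
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)   # (n-1) * s^2
    return ss < chi2_lower

TRIALS, N, SIGMA = 2000, 100, 0.767
KS_CRIT = 1.358 / math.sqrt(N)   # asymptotic 5% point of the K-S statistic
CHI2_99_05 = 77.046              # lower 5% quantile of chi-square, 99 df

ks_hits = var_hits = 0
for _ in range(TRIALS):
    xs = [random.gauss(0.0, SIGMA) for _ in range(N)]
    ks_hits += ks_reject(xs, KS_CRIT)
    var_hits += variance_reject(xs, CHI2_99_05)

print(f"variance-test power: {var_hits / TRIALS:.2f}")
print(f"K-S power:           {ks_hits / TRIALS:.2f}")
```

The K-S rejection rate lands in the vicinity of the 21.6% reported in the text, far below the variance test's power on identical data.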
                                          TABLE A.1
                                Sensitivity of Optimal χ² Test

  N                           σ                                 Bins               β

   This work was supported by donations from the Fetzer Institute, the Institut
für Grenzgebiete der Psychologie und Psychohygiene, and numerous private
donors including Laurance Rockefeller, George Ohrstrom, and Donald Webster.
                                       Y. H. Dobyns

Dobyns, Y. H., & Nelson, R. D. (1998). Empirical evidence against decision augmentation theory.
   Journal of Scientific Exploration, 12, 231-257.
Dunne, B. J., & Jahn, R. G. (1995). Consciousness and Anomalous Physical Phenomena. Technical
   note PEAR 95004, May 1995.
Gould, S. J. (1996). The Mismeasure of Man. Norton.
Jahn, R. G., Dunne, B. J., & Nelson, R. D. (1987). Engineering anomalies research. Journal of
   Scientific Exploration, 1, 21-50.
Jahn, R. G., Dunne, B. J., Nelson, R. D., Dobyns, Y. H., & Bradish, G. J. (1997). Correlations of
   random binary sequences with pre-stated operator intention: A review of a 12-year program.
   Journal of Scientific Exploration, 11, 345-367.

                         Comments from Mikel Aickin
The Dobyns article gives the impression that all reasonable subconscious
("unconscious" in his terms) distortions of the PEAR REG data through data
selection have been ruled out. I believe this implication is untrue. I programmed
the simulation of a very simple automated strategy, which is oriented toward
moving the mean of the data, while arranging things so that a statistical test of
Normality would be passed. I used a better Normality test than Dobyns did, so
that my simulation provides stronger evidence than his. In my simulations, I
counted a success when I could produce the results cited by Dobyns for the data
that he retained, without non-Normality being detected. I then recorded the
percent of data that had to be deleted, among the successful cases. Here are the
results for the data presented by Dobyns.

  Type of             Reported %             Simulated chance of         Simulated % of data
  Experiment          Data Deleted           successfully distorting     deleted among
                                             the data                    successful distortions
   It seems clear from this that it is possible to produce the PEAR REG results
through data selection. This says almost nothing, of course, about whether any
such selective distortions occurred. I would rely on the professional reputation
and integrity of the PEAR investigators, which I regard as beyond question, for
the validity of the data. Further, since Dobyns reports that the overall
conclusions about the experiments are the same whether one includes or
excludes the "void" data, it is not clear to me why any of this is of any
importance.
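   Aickin does not publish his simulation code. The following greedy sketch is my own construction, intended only to illustrate the general shape of such an automated strategy: drop the lowest remaining values until the sample mean reaches a hypothetical target, then screen the survivors with a crude moment-based normality check. The target mean, the 0.5 skew/kurtosis cut, and the deletion cap are all arbitrary illustrative choices.

```python
import random

random.seed(1)

def selective_delete(data, target_mean, max_frac=0.5):
    """Greedy selection sketch: discard the lowest remaining value until
    the sample mean reaches the target (or give up past max_frac)."""
    kept = sorted(data)
    floor = int(len(kept) * (1 - max_frac))
    while sum(kept) / len(kept) < target_mean:
        kept.pop(0)                      # delete the most unfavorable value
        if len(kept) < floor:
            return None
    return kept

def moment_normality_screen(xs, cut=0.5):
    """Crude stand-in for a normality test: sample skewness and excess
    kurtosis must both lie within +/- cut (an arbitrary threshold)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    skew = sum((x - m) ** 3 for x in xs) / (n * s2 ** 1.5)
    kurt = sum((x - m) ** 4 for x in xs) / (n * s2 ** 2) - 3.0
    return abs(skew) < cut and abs(kurt) < cut

data = [random.gauss(0.0, 1.0) for _ in range(1000)]
kept = selective_delete(data, target_mean=0.05)   # hypothetical effect size
frac_deleted = 1.0 - len(kept) / len(data)
print(f"deleted {frac_deleted:.1%} of the data")
print(f"passes moment screen: {moment_normality_screen(kept)}")
```

For a target shift of this size, only a few percent of the data need to be discarded, which is the qualitative point of Aickin's table.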
Journal of Scientific Exploration, Vol. 21, No. 2, 353-356, 2007


                        The Wave Function Really Is a Wave

I would like to point out what I believe are certain misunderstandings and
inaccuracies in the recent article by M.G. Hocking, "Linking String and
Membrane Theory to Quantum Mechanics and Special Relativity Equations,
Avoiding Any Special Relativity Assumptions", Journal of Scientific
Exploration, Vol. 21, Number 1, 13-26 (Spring 2007).
   There are a number of relatively inconsequential errors:
   In the first sentence, of the 10 (11 in more recent theories, as noted)
dimensions in string/M theory, six (not seven as stated) are believed to be tightly
coiled up, leaving four (not the stated three) dimensions, including time, that we
perceive as uncoiled. In the subsequent paragraph, approximately 96%, not 90%
as stated, of the matter in the universe is "missing" (i.e., invisible).
   The term "normalising" on page 16 is not simply "squaring" (sic) of the wave
function, but comprises setting the probability for finding a particular extant
particle somewhere in space to be 1 (quite a reasonable proposition). Further,
this "squaring" does not necessarily create a "matter wave" (sic), as stated,
since it can result in a non-moving probability density distribution (and waves,
by definition, move).
   On page 17, the quote attributed to Huxley, "Nature is not only stranger than
we have thought, it is stranger than we can think!", may possibly have been said
by him, but it seems what was meant was the oft-referenced remark by Sir Arthur
Eddington, "Not only is the universe stranger than we imagine, it is stranger than
we can imagine."
   The term "quark string theory" is used often, but string theory is not limited
solely to quarks, and is never referred to in this way. Besides quarks, it also
includes leptons (such as electrons and neutrinos), bosons (such as photons,
gravitons, Higgs particles), and more.
   On page 24, force is said to be "an energy field or gradient". It is not an
energy field, but it is the gradient of the potential, quite a different thing. An
energy field has units such as erg/cm³. Force has units such as dynes.
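   As a quick numeric aside (my own illustration, not from either letter), the force-as-gradient relation can be checked by finite differences in CGS units, using a hypothetical quadratic potential:

```python
# Finite-difference check that force is the (negative) gradient of
# a potential energy: U in ergs, x in cm  ->  F = -dU/dx in dynes.
def force(U, x, h=1e-6):
    """Central-difference estimate of -dU/dx at x."""
    return -(U(x + h) - U(x - h)) / (2 * h)

k = 2.0                         # spring constant, dyne/cm (illustrative)
U = lambda x: 0.5 * k * x * x   # potential energy, ergs

print(force(U, 3.0))            # analytic value: -k*x = -6.0 dynes
```

The units work out exactly as the letter says: erg/cm is a dyne, while an energy density (erg/cm³) is a different quantity altogether.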
   More serious are the following:
   The hypothesis that the square of the absolute value of the wave function
is the particle probability density is denigrated without mentioning the
many experiments that appear to have clearly demonstrated not only that it is so,
but that this density has been shown to be typically continuous, and not point-
like. Indeed, scattering experiments, such as those done daily in particle
accelerators around the world, show this to be the case. How about some
references here?

   Fig. 1. Real (Pressure) vs Complex (Wave Function) Waves. [Figure: side-by-side
panels contrast pressure plane waves with wave function plane waves, each shown
as planes of constant phase propagating in 3D physical space, and as curves along
the direction of propagation in "Pressure vs x" and "Wave Function vs x" space.]
   On page 18, the article states "On the basis of the Big Bang theory with its
residual microwave radiation . . . there is an absolute reference point of origin . . .
in space. This negates the 1st Principle of Special Relativity, which denies an
absolute reference point in space." This is a common misunderstanding, and is
wrong for several reasons.
   For one, special relativity is an idealization that deals with space devoid of
matter and unaffected by gravity, and this obviously does not include the Big
Bang universe. For another, within general relativity (the part of the theory
including matter sources of gravity), one often prefers certain reference frames
because they are more convenient for calculation (as is the case for the Big Bang
universe), but they are definitely not preferred in the sense that Nature is making
them the sole fundamental reference frame (as in Newtonian physics). For yet
another, the first principle of special relativity does not address the issue of
reference points, but of reference frames. Indeed, Newtonian physics and
relativity hold the exact same position with respect to reference points, i.e., there
is no absolute such point. Finally, there is no "center" to our universe.
Fundamental symmetries of its expansion dictate that every point in the universe
is like every other point, in the sense that from any such point, all of the rest of

              Fig. 2.   Real and Imaginary Components of the Wave Function.

the universe appears to move away in precisely the same manner. That is, every
point in creation looks like its own center of the universe.¹
   But the most fundamental issue I have is with the interpretation of the wave
function as "imaginary" and thus not real, at all but discrete points in our
spatially three dimensional world. The truth is that the wave function is complex,
not simply imaginary, at virtually all of those other points not shown in
Hocking's Fig. 1. Complex numbers have both real and imaginary components.
   So as time evolves, the wave function has both an imaginary part that varies
sinusoidally, and a real part that does so, as well. Implying it only has a real part
at particular 3D points is not correct. And thus it does not follow that the particle
somehow "jumps" from each of these points to the next.
   Fig. 1 illustrates the differences between real waves (such as pressure waves,
with only a real-number magnitude such as pressure) and complex waves (such as
the wave function, with real plus imaginary components of its magnitude).
   What I refer to in the figure as "Pressure vs x" and "Wave Function vs x" spaces
are what Hocking refers to as "configuration space". Note that the corkscrew
function in the lower right hand side is the wave function. Since this intersects the
Re ψ vs x plane at specific points (of equal values separated by one wavelength),
one can see how some might be led to believe that ψ only has real values at those
points. This is not true, however, as can be seen with the aid of Fig. 2 herein.
   In Fig. 2, only a short section of the "corkscrew" curve of Fig. 1 is displayed,
and that particular section does not intersect the Re ψ vs x plane. However, as
shown, it has both real and imaginary components for every point along that
section. That is, it has a real value even though the curve of ψ does not intersect
the plane formed by the Real ψ axis and the real world spatial direction x.
   Thus, the plot of the real component of ψ vs x is a sinusoidal curve, which has
continuous, not discrete point, values. That is, the real values do not appear only
at spatially separated points, as claimed.
   The article (page 15) states that ψ "is not a wave." But it is. It is simply
a complex wave, rather than a real one. Not only theory, but an enormous
number of experiments, demonstrate the wave nature of ψ.
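   Klauber's point is easy to check numerically. The following minimal sketch (my own, with an arbitrary unit wavenumber and sampling grid, not taken from either paper) samples the plane wave ψ(x) = exp(ikx) and confirms that while ψ is purely real only at isolated points, its real component and its modulus exist at every x:

```python
import cmath
import math

# Sample psi(x) = exp(i*k*x), a free-particle plane wave with unit wavenumber.
k = 1.0
xs = [0.05 * j for j in range(200)]          # x from 0 to 9.95 in steps of 0.05
psi = [cmath.exp(1j * k * x) for x in xs]

# The wave is purely real only where sin(kx) = 0 -- isolated points
# (on this grid, only x = 0) ...
purely_real = sum(1 for z in psi if abs(z.imag) < 1e-12)

# ... but its real component varies sinusoidally at EVERY x, and the modulus
# is 1 everywhere: nothing "vanishes" between the real crossings.
assert all(abs(abs(z) - 1.0) < 1e-12 for z in psi)
print(f"grid points: {len(psi)}, purely real at: {purely_real}")
print(f"Re psi at x=0.5: {psi[10].real:.3f} (nonzero between crossings)")
```

The corkscrew of Fig. 1 is exactly this curve: its intersections with the Re ψ vs x plane are the isolated purely real points, while Re ψ itself is a continuous sinusoid.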
   As an aside, I have long wished that authors of books on quantum mechanics
would include figures such as those herein when introducing the concept of the
wave function. Perhaps then, we would have less confusion about what is really
going on with the solution to the Schroedinger equation as it evolves through time.
   And thus, though there may be some merit in the subsequent arguments made
by the author, those arguments may need to be reconsidered in light of the
aforementioned remarks.

                                                              Robert D. Klauber
                                                                 Fairfield, Iowa
                                                         rklauber@netscape.net

  ' Consider the commonly employed analogy for our expanding universe of
a balloon being blown up. Imagine that the balloon expands from being very
small, almost a point originally. If you live on the surface of the balloon, where
would you determine the "original center" of the balloon to be? It is actually no
one point on the surface. Rather, every point would seem to be the original
center to someone located at that point.
Journal of Scientific Exploration, Vol. 21, No. 2, pp. 357-359, 2007

                          Hocking's Response to Klauber

The Wave Function Ψ has an extensive imaginary component:
   so is it just our imagination that thinks it is really a wave?
   Dr. R. D. Klauber's Letter to the Editor is replied to, below. Some of the
comments are agreed with, and the letter does not invalidate the paper [1]
referred to.
   Ψ is not a wave in 3-D space, as clearly shown by Dr. Klauber's Figs. 1 & 2.
My thesis is that Ψ can be interpreted as particle excursions into a higher
dimension, instead of the equally odd idea that matter is wavelike (which is the
probable cause of the difficulty of reconciling quantum theory and relativity).
   In numbers of dimensions, I was counting spatial dimensions, excluding time.
Dr. Klauber mentions "four (not the stated three) dimensions, including time"
are perceived as uncoiled, but my "stated three" were the 3 spatial dimensions
excluding time (see first line of Abstract).
   Dr. Klauber comments 96% of the matter in the universe is "missing" or
invisible. I had stated "About 90%". I am happy to adopt his figure.
   I agree "normalising" is not simply "squaring" and I did write "in effect
to square" (page 16) for this reason, doing so to keep the paper concise.
Dr. Klauber also says it is a "reasonable proposition" to set the probability of
finding a particle somewhere in space to be 1, but this is just what the many
authors which I cited did before the notion of multidimensional space appeared.
My thesis is that this very assumption "forces" on Nature what our presumption
is of what Nature should be. If the probability is not unity for a series of short
times along a trajectory, this could mean the particle has moved to another
dimension for those short times, rather than become a "wave".
   Dr. Klauber comments that a non-moving probability density distribution can
occur but waves move. But such a distribution can be equated to a standing wave.
   In saying on page 24 [1] that a force is defined in basic physics as an energy field
or gradient, dE/dx, this was intended as a general physics statement, not meaning
an electric potential field which Dr. Klauber assumes. Electric potential is not
involved here. Another simple non-electrical example is that an atom diffuses in
the direction in which its Gibbs Free Energy is reduced, and the diffusing force on it
is d(ΔG)/dx, where d(ΔG)/dx is a field (an energy field, specifically a Gibbs Free
Energy field) with units of J/m or erg/cm. The units of an energy field (a force) are
not erg/cm³. Force can also be expressed in dynes, and a dyne is 1 erg/cm.
   On the points described as "more serious":
    (1) Taking Ψ² to be the particle probability density and that it is shown to be
        continuous by scattering experiments is a matter of interpretation of the

        meaning of the experimental results: as mentioned above, I took a simple
        model of one particle moving through free space. Taking Ψ to be "point-
        like" is only my basic model for a particle moving in free space and the
        results will obviously be modified in other situations. Once interactions
        occur with other objects, as in scattering experiments, then disruption of
        the simple trajectory will occur but scattering phenomena do not
        invalidate the existence of particles on the proposed model.
    (2) Comment made: "Special Relativity is an idealization . . . that obviously
        does not apply to the Big Bang universe". This is true, but even so,
        Special Relativity is widely used in practice in physics! I do not disagree
        of course that Relativity does not allow a "sole fundamental reference
        frame".
        Comment made: "Newtonian Physics and Relativity hold the exact same
        position with respect to reference points, i.e. there is no absolute such
        point."
        I would disagree that the Newtonian position excludes using reference
        points.
        Comment made: "There is no centre to our universe". I would disagree
        that there is enough evidence to suppose that every point in the universe
        is like every other point and each would look like its own centre of the
        universe. Space may have existed before the Big Bang, which would void
        the argument that there is no centre of the universe, which would then
        obviously be at the point of origin of the Big Bang.
        I do not accept the balloon surface inflation model of space in Dr.
        Klauber's footnote.
        These points in this heading (2) do not, in any case, invalidate the
        derivations of the Special Relativity equations given assuming absolute
        space.
    (3) Interpretation of wave function as imaginary:
        Again, I would say that the model given in my paper [1] is for a particle
        moving in a free-space trajectory. The ordinary graph of ψ given in [1] is
        certainly a series of points separated by gaps, for the equation
        ψ = exp[−2iπ{(xmv/2h) − tE/h}], which is a well-known solution of
        Schroedinger's Equation, dψ/dt = (hi/4πm)[d²ψ/dx²].
        All I am saying here is that the gaps between the points are where the
        value of ψ is imaginary, in the sense that can mean that the particle can
        exist in a higher dimension in between the real points. A distinction
        between "imaginary" and "complex" would not invalidate my thesis. I
        thus do not argue against the Figure 1 given by Dr. Klauber, which does
        not negate the interpretation that I have made. I am just trying to interpret
        the meaning of the imaginary/complex as being a periodic excursion
        into a higher dimension. That there is an imaginary component cannot be

        discounted and I also do not argue against Fig. 2 of Dr. Klauber. The plot
        of ψ requires an imaginary axis at right angles to any of the 3 directions
        in 3-D, which is suggestive of a 4-D involvement. The real component
        shown in Dr. Klauber's Fig. 2 corresponds to a fractional probability,
        which is also suggestive of a transit into 4-D for the curve segment in
        Fig. 2.
        There could be a semantic problem here, as Dr. Klauber says, "the plot of
        the real component of ψ vs x is a sinusoidal curve, which has continuous,
        not discrete point, values", whereas I am saying a plot of the total value
        (not the real component) of ψ (my Fig. 1) is a series of discrete points.
    (4) Comment on the point saying that ψ "is not a wave". Again, I am
        referring to the total value of ψ, not what Dr. Klauber refers to as the real
        component of ψ. I do not deny that ψ appears to be a wave in many
        experiments, but it is possible to give an alternative explanation of this
        apparent wave nature in terms of particle excursions into a higher
        dimension.

                                                                            M.G. Hocking

   Erratum: In reference [1], on page 16, line 7, "where and when x and t are
both integers" should read, of course, "where and when x/λ and tν are both
integers".

1. Hocking, M.G., Journal of Scientific Exploration, 21(1), 13-26 (2007).
Journal of Scientific Exploration, Vol. 21, No. 2, pp. 361-364, 2007

          IN MEMORIAM GEORGE SASSOON 1936-2006

Beyond compare is the intellectual stimulation and companionship I have
enjoyed through the Society for Scientific Exploration. Editing the Journal of
Scientific Exploration brought me into contact with even more fascinating
characters than I had been able to encounter at the Society's meetings. Among
those extraordinary individuals was George Sassoon, to whom the much-
misused term "unique" happens to be literally appropriate.
   Obituaries in the Daily Telegraph, The Independent, and elsewhere give a sense
of the man's diverse talents. He "attained distinction as a scientist, electronic
engineer, linguist, translator of scientific papers, player of the piano accordion and
investigator into extra-terrestrial phenomena. His book The Manna-Machine
(1978), and its companion volume The Kabbalah Decoded of the same year,
investigated the origins of the manna that sustained the Israelites in the desert;
Sassoon went back to Jewish texts, particularly the Zohar, a collection of 13th-
century writings which he translated from Aramaic". Sassoon commanded
a knowledge not only of Aramaic (and of course English, French, and German) but
also of Serbo-Croat, Hebrew, and Klingon. He was a radio ham as well as
a professional engineer; his book, The Radio Hacker's Codebook (1980), was not
welcomed by the authorities because of its insights into encryption.
    Various of the obituaries mention-or more than mention-the idiosyncratic and
difficult family circumstances in which George Sassoon grew up. He was the
only child of the poet Siegfried Sassoon, enduringly famous for his depiction of
the personally experienced horrors of World War I; a well-received biography of
Siegfried was published quite recently.¹ (It was rather a surprise to find
a "Siegfried" to be the scion of Sephardic Jews.)
    George Sassoon and I had corresponded by e-mail about the pendulum studies
of Allais, about cold fusion and sonoluminescence, about Tunguska, and about
much else. At times he would come up with the sort of puzzlers that are featured
in The Scientist Speculates, I. J. Good's collection of "partly baked ideas"; for example:
   "Space is curved", said Einstein. If this is so, the value of pi will be different
in practical measurements to that obtained by calculation, which assumes a flat
universe. Has anyone considered doing a practical measurement to the required
degree of precision? If so, how?
    George once told me about a German book about possible dinosaur survivals2
and sent me a copy of his translation of it. He had been unsure about the German
titles "Freiherr" and "Rittmeister", and I was able to check his guesses-which
were correct-in the old Muret-Sanders dictionary³ that I had inherited from my
father. In response, George mentioned that he had once taught himself to read the
old German handwriting script in order to translate a letter for his uncle.
   Sassoon was not just a passive observer; he drew on his knowledge of
engineering to make measurements of a variety of phenomena. Concerning David
Deming's review of the "Hum", Sassoon wrote that his wife experiences it, but
only at the shore; and he prepared to carry out measurements to check the
possibility that very-low-frequency electromagnetic waves might be the source. As
to "cold fusion"-which has come to be more commonly described as "low energy
nuclear reactions" (LENR) or "condensed matter nuclear science"-Sassoon
carried out experiments himself to check the claims that welding could give rise to
nuclear transformations, as revealed by resulting radioactivity. Also concerning
cold fusion, a neighbor of Sassoon's at his residence in southern England was the
electrochemist Martin Fleischmann, discoverer of cold fusion; they would
occasionally get together at a local pub, and via George I was able to recall with
Martin the year I had spent in his Department at Southampton University.
   George Sassoon had a residence on the Isle of Mull in Scotland as well. His
neighbors there included Lionel Leslie, who had explored for lake monsters in
Irish loughs, and Christopher James, son of David James who had led important
expeditions to Loch Ness in the 1960s and 1970s. Again via George, Christopher
and I exchanged interesting information about Loch Ness matters.
   I had hoped to meet George Sassoon in person during one of my trips to
Scotland, but it was not to be, to my great regret. Yet I learned much through our
correspondence and his exceptional range and depth of interests.
   Ron Bracewell knew George Sassoon personally, and offers the following
further recollections.

¹ Max Egremont, Siegfried Sassoon: A Life, Farrar, Straus and Giroux (2005).
² Hartwig Hausdorf, Die Rückkehr der Drachen (The Return of the Dragons),
  Herbig (Germany), 2003.
³ Muret-Sanders Encyclopaedic English-German and German-English Dictionary
  (Abridged Edition-for School and Home), 17th ed., Berlin-Schöneberg, 1908.

             Episodes from the Life of George Sassoon

My old friend George was well known in many contexts, first of all as the son
of Siegfried Sassoon, the World War I poet. George, however, had a technical
bent, and became mathematically proficient at Cambridge, where he attended

King's College. Publication, for him, did not exert the driving force that
animates the inhabitants of refereed journals. This is not to say that he did not
write. Indeed, to illustrate one of the outstanding features of his intellect, an
interest in languages, he published a translation of the Zohar, a component
of the Kabbalah so abstruse that his was the first English translation. Why had
this centuries-old Jewish text been so neglected? It appeared to consist in part
of a record of the dimensions of God, the distance from His knee to His elbow,
the number of hairs in His beard, and other data deemed to be without wide
interest. Thinking about this, George recalled that in the Sioux language the
parts of an automobile are named after parts of the human body or of some
animal or vegetable. A few parallels with this can be found in English; the
word "nut" (that which screws onto a bolt) is a mechanical word borrowed
from botany. But in Sioux the headlights are eyes, the wheels are legs, the
doors are wings, and even the exhaust pipe has an anatomical name. Possibly
then, the Zohar could be understood as a specification of a machine. To
explore this hypothesis George learnt mediaeval Aramaic, the language of the
Zohar. That is not something that you or I would undertake lightly, nor would
we have known about Sioux automobile mechanics. The companion of this
published translation was his book The Manna Machine in which he
hypothesises that the text specifies the construction of the Ark of the
Covenant. In support of this interpretation he noted that Uzzah was smitten
dead when he inadvertently put out his hand to steady the Ark, that the
Philistines, who captured it from the Israelites, were stricken with "emerods",
that the unfortunates who carted it back to the owners were smitten, and that
when Israelites of the Exodus pitched camp, access to the tent protecting the
Ark was restricted to Moses and Aaron and that the tent had no roof-the air
above it glowed red, indicating dangerous radioactivity. Possibly then, the Ark
was of extraterrestrial origin and dangerously radioactive. Later, in the
Temple of Solomon, Aaron wore a breastplate famous for its twelve glowing
jewels. Was this the computer keyboard with which Aaron communicated
with the supernatural entity? The present whereabouts of the Ark are
uncertain. Many copies exist because its precise dimensions are on record.
Possibly it is the one guarded today in a church in Axum, Ethiopia; George
reasoned that it may be on the bed of the Tiber. As a learned student of the
Torah and Talmud, George could discuss exotica such as whether rhinoceros
meat was kosher and whether the ban on killing insects on the Sabbath applied
to head lice.
   To my knowledge George spoke Serbo-Croatian fluently, was fond of his
ability to persuade his computer to print Armenian script, kindly provided me
with a disc containing a grammar and dictionary of Maltese (which I was happy
to pass on to a Maltese-American friend for the edification of his son). He gave
me a Latin-Sorbian dictionary in case I ever encountered Sorbs-who, as every
Ukrainian knows, survive only in Germany; and he provided me with a grammar
and dictionary of Klingon, a language hardly heard of in Central Europe.

Nobody knows much about the origin of Basque but George was of the opinion
that "bai eta ez" (yes and no) sounded a bit like Georgian.
   On the technical side George consulted for the oil-well drillers in the North
Sea. He gave much attention to encryption and, when the United States
government restricted the publication of codes that the CIA could not decrypt,
became unpopular by publishing "The Radio Hacker's Codebook". His program,
written in BASIC, for generating prime numbers, is a gem. It goes far beyond the
nominal 12-digit limit of your everyday computer. At home in Mull he studied
the radioactivity of pitchblende and looked for correlation with terrestrial and
solar weather, magnetograms, and seismograms. A web-camera on a hilltop
outside his house provided internet users a view of the current weather on Mull,
should they need to know. He lived not far from a tower built by the Knights
Templar in the days of the Crusades. When the Knights were banished from
France and Spain they travelled to Ireland and then on to Mull in the 14th
century where, George concluded, they seeded the practice of freemasonry in
Scotland.
   In the thirties, when the radio transmitter in Luxembourg was the most
powerful in Europe, listeners reported strange echoes. They would hear what the
announcer said, and then about eight seconds later would hear it faintly again.
Since it takes only one-seventh of a second for radio waves to travel all round
the Earth, some strange extraterrestrial phenomenon seemed to be at play. The
measured echo delay was consistent with an echo from something nearly as far
away as the Moon. One candidate for such reflections would be an accumulation
of interplanetary flotsam at the Lagrangian point L1, where the gravitational
fields of the Sun, Earth, and Moon cancel to zero. An object at such a point is in
unstable equilibrium; if it moves away a little, it will keep going. But particles
moving under the net gravitational field of three attractors will slow down as
they pass the equilibrium point; therefore an accumulation of matter might be
expected in that vicinity from time to time. Efforts to observe long-delay echoes
by direct experiment were conducted at the Cavendish Laboratory in the late
fifties but with no success. However, reports of long-delay echoes had come
from French naval ships in French Indochina. Having in mind the earlier
thinking, George calculated the elevation angle above the horizon at the times
and dates of the detection of echoes. This was a most ingenious idea; he found
that on all occasions the point L1 was above the local horizon. This investigation
was well worthy of publication in an international scientific journal, an exercise
that was not on George's list of priorities.
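The distances at issue follow from simple round-trip timing: an echo delayed by t seconds implies a reflector at c·t/2. A minimal sketch of that arithmetic (the constants are standard values, not taken from the text):

```python
# Round-trip arithmetic behind the long-delay-echo argument.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def reflector_distance_km(echo_delay_s):
    """Distance to a reflector implied by a radio echo delay (round trip)."""
    return C_KM_PER_S * echo_delay_s / 2.0

# A signal circling the Earth (circumference ~40,075 km) takes ~0.13 s,
# the "one-seventh of a second" mentioned above.
around_earth_s = 40_075 / C_KM_PER_S

# An 8-second echo implies a reflector roughly 1.2 million km away;
# for comparison, an echo from the Moon (~384,400 km) returns in ~2.6 s.
d_8s = reflector_distance_km(8.0)
moon_delay_s = 2 * 384_400 / C_KM_PER_S
```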
   He was a jovial, good-natured, and inspiring friend and is very much missed.

                                                        RONALD N. BRACEWELL
                                    Terman Professor of Electrical Engineering
                                                           Stanford University
Journal of Scientific Exploration, Vol. 21, No. 2, pp. 365-371, 2007            0892-3310/07


      Stagnant Science: Why Are There No AIDS Vaccines?

                                       HENRY H. BAUER
                           Professor Emeritus of Chemistry & Science Studies
                                   Dean Emeritus of Arts & Sciences
                            Virginia Polytechnic Institute & State University
                                        e-mail: hhbauer@vt.edu

Shots in the Dark: The Wayward Search for an AIDS Vaccine by Jon Cohen.
W. W. Norton, 2001. 464 pp. $15.95 (paper).

Big Shot: Passion, Politics, and the Struggle for an AIDS Vaccine by Patricia
Thomas. Public Affairs, 2001. 416 pp. $27.50 (hardcover).

AIDS Vaccine Research by Flossie Wong-Staal and Robert C. Gallo (eds.).
Marcel Dekker, 2002. 342 pp. $165.00 (hardcover).

In 1984, the Secretary for Health and Human Services (Margaret Heckler)
announced that Robert Gallo had discovered the virus that causes AIDS, later
designated HIV (the human immunodeficiency virus). On Gallo's advice (Cohen,
p. 8), she forecast that a vaccine would likely be available in a couple of years.
   More than two decades later, there is no vaccine. Nor has there been credible
progress toward a vaccine. Scores of attempts using a variety of approaches have
all failed to show promise; or rather, successive claims greeted initially as
promising have all failed to bear fruit. Two books published in 2001 set out to
describe for a general audience the search for a vaccine; an edited volume
published in 2002 addresses a specialist audience.
   The salient question is, why has so much effort failed so resoundingly? Cohen
sees the answer in institutional and organizational terms: there has been no
coordinated effort drawing on every idea and experience. By contrast, Thomas
finds the answer in contingent personal experiences and fluctuating commercial
demands that happened to sabotage various efforts at critical times; her book
ends with readers left hanging as to the impending results of a large trial, but it is
now known that the tested vaccine showed no sign of efficacy whatsoever. The
technical specialists collected by Wong-Staal and Gallo cannot agree on what
approach might work, but they do agree on the central difficulty: HIV mutates so
prodigiously that even "Within a single HIV-1 infected human host, HIV-1
population represents a complex mixture, or swarm, of mutant virus variants, in
which all viruses are genetically related yet virtually every virus is unique"¹.
   Cohen and Thomas describe the mutability in less drastic terms, mentioning
only that designers of potential vaccines need to make an initial decision as to
which of the clades of HIV, A through E, a vaccine is intended to counter, or
which mixture of two clades; B is the most common in the USA, E in South-East
Asia; though Thomas (p. 161) mentions a "Thai E" with "A-type innards . . .
encased in an E-type envelope". The fact remains that no vaccine has shown
efficacy against even a single clade. ("Clades" are much the same thing as
"species" or "sub-species". For a fascinating discussion, see David Hull's
magisterial, sadly neglected account of the origins of this concept².)
   In principle, the possible types of vaccine (used against polio, for example)
include:

      "Whole killed virus": HIV inactivated in some way.
      "Attenuated" virus: HIV reproduced through stages that progressively
      weaken it.

   Most researchers regard these as too dangerous: not all the virus might be
inactivated or attenuated, or it might re-activate in the body. So most efforts
have been directed toward stimulating production of antibodies that might
neutralize the virus, or stimulating production of immune-system killer cells that
could recognize and attack cells infected with the virus, or finding ways to
safeguard cells against entry of virus.
   Mainstream discussions make it appear that much is known about the
composition and structure of the virus, and Cohen and Thomas echo that
uncritically: "AIDS researchers have turned the virus inside-out and carefully
detailed how it destroys the immune system"³; "molecularly cloned . . . HIV, in
other words, pure virus" (Cohen, p. 125); "epitopes of the viral envelope are too
variable. Furthermore, the functionally important epitopes of the gp120 protein
are masked by glycosylation, trimerisation and receptor-induced conformational
changes making it difficult to block with neutralising antibodies"⁴. Yet the plain
fact of the matter is that all these specifics are mere inferences based on indirect
experiments on mixtures of substances under complex protocols: pure HIV has
never been isolated. All published electron micrographs of "isolations" of HIV
reveal a motley array of different-sized particles, only some of which have the
shape and size of a retrovirus⁵. "Cloned" virus is synthetic RNA assumed,
inferred, to be a "clone" of actual viral RNA, begging the question: of which
strain of which clade?
   In addition to the direct evidence of electron micrographs, clues have long
abounded, that "isolates" are actually mixtures; for example, that "the viral
surface" supposedly contains not only viral protein but also proteins from the
cells from which the virus was thought to have budded (Cohen, pp. 131-2). But
these books leave no room to doubt orthodox HIV/AIDS theory: "August 14,
1984 . . . scientists had conclusively proven three months earlier [that HIV] was
the cause of AIDS" (Cohen, p. 200). Not only is Cohen unreliable in this manner
as to scientific substance, he is also Panglossian, to put it most mildly, on the
ethics of research on humans. Referring to clinical trials that have been
universally condemned as unethical, Cohen writes: "Zagury's Zairian trials,
although no one would dare say it, had a positive impact on the field . . . :
without, apparently, hurting anyone, they offered a lesson to an inexperi-
enced world about how to identify and prevent unethical AIDS vaccine trials"
(Cohen, p. 339). The world did not need Josef Mengele to discover that
experimenting on live human beings is abhorrent.
   Contradictions of logic or fact in HIV/AIDS research go unremarked in these
books. "HIV tests" have been (until recently, exclusively) tests for antibodies:
the presence of antibodies was taken as a sign of active infection, not immunity;
yet the initial hoped-for sign that a vaccine might be effective in producing
immunity is the production of antibodies. Furthermore, Robert Gallo sneered
at Peter Duesberg's statement that antibodies in healthy people are generally
taken to show that an infection has been encountered and defeated. The
mainstream's left hand does not know what the right hand is doing, apparently,
as to antibodies and vaccines.
   Lacking scientific insight, these books nevertheless have points of historical
interest, verging at times on the prurient, as when remarking that the former
Secretary for Health and Human Services was not aware that gay sex may
involve anal intercourse (Cohen, pp. 6-7). It may be useful to have Robert Gallo
on record with the absurd claim that "Every single retrovirus in every single
species, from chicken to man, causes disease . . . 'Every. Single. One.'" (Cohen,
p. 346). One learns how Donald Francis became one of Gallo's arch-enemies
(p. 61); and about Daniel Zagury's irresponsible vaccine initiatives in Africa,
carried on in collaboration with Gallo (p. 66 ff.); about Fauci's bureaucrat-
typical behavior (Cohen, pp. 133 ff., 189; Thomas, pp. 300, 314); and much
about the statistically incompetent claims of Army researcher Robert Redfield,
MD (Cohen, p. 158 ff.; Thomas, p. 168 ff.). There are copious illustrations of
the intemperance of all-purpose gadfly John P. Moore (Cohen, pp. 170, 258-9,
274-5, 286; Thomas, pp. 285-6, 298, 301, 365, 378). Bureaucratic infighting
within and between the HIV researchers of the Army and of the National
Institutes of Health, and conflicts with commercial interests, are described at
some length by both Cohen and Thomas.
   A few points of substance are worth noting. Chapter 10 in Cohen's book
describes the many people known to have been in frequent sexual contact wi