Cheating the Millennium: The Mounting Explanatory Debts of Scientific Naturalism
Langan, C. M. (2003). Cheating the Millennium: The Mounting Explanatory Debts of Scientific Naturalism. In W. A. Dembski (Ed.), Uncommon Dissent: Intellectuals Who Find Darwinism Unconvincing. ISI Books.
Republished in Chris Langan’s Major Papers 1989–2020 (hardcover edition)
Introduction: Thesis + Antithesis = Synthesis
In agreeing to write this essay, I have promised to explain why I find Darwinism unconvincing. In order to keep this promise, I will be compelled to acknowledge the apparently paradoxical fact that I find it convincing as well. I find it convincing because it is in certain respects correct, and in fact tautologically so in the logical sense; I find it unconvincing because it is based on a weak and superficial understanding of causality and is therefore incomplete. Explaining why this is so will require a rather deep investigation of the nature of causality. It will also require not only that a direction of progress be indicated, but that a new synthesis embracing the seemingly antithetical notions of teleology and natural selection be outlined. But first, some essential background.
It would be hard to imagine philosophical issues bearing more strongly on the human condition than the nature of life and the meaning of human existence, and it would be hard to imagine a scientific issue bearing more strongly on the nature and meaning of life than biological origins. Our view of evolutionary biology, whatever it happens to be at any particular juncture, tells us much of what we believe about who and what we are and why we are here, unavoidably affecting how we view (and ultimately, treat) ourselves and each other. Unfortunately, the prevailing theory of biological origins seems to be telling us that at least one of these questions, why are we here?, is meaningless1 … or at least this is the message that many of us, whether or not we are directly aware of it, seem to have received. As a result, the brightest hope of the new millennium, that we would see the dawn of a New Enlightenment in which the Meaning of it All would at last be revealed, already seems to have gone the way of an extravagant campaign promise at an inauguration ceremony.
The field of evolutionary biology is currently dominated by neo-Darwinism, a troubled marriage of convenience between post-Mendelian genetics and natural selection, a concept propounded by the naturalist Charles Darwin (1999) in his influential treatise On the Origin of Species. It has often been noted that the field and the theory appear to be inseparable; in many respects, it seems that evolutionary biology and Darwinism originated and evolved together, leading some to conclude that the field properly contains nothing that is not already accommodated by the theory.
Those attempting to justify this view frequently assert that the limitations of the theory are just the general limitations imposed on all scientific theories by standard scientific methodology, and that to exceed the expressive limitations of the theory is thus to transgress the boundaries of science. Others have noted that this seems to assume a prior justification of scientific methodology that does not in fact exist – merely that it works for certain purposes does not imply that it is optimal, particularly when it is evidently useless for others – and that in any case, the putative falsifiability of neo-Darwinism distinguishes it from any definition of science according to which the truth or falsity of such theories can be scientifically determined.2
Nevertheless, neo-Darwinism continues to claim exclusive dominion over the “science” of evolutionary biology.
Until the latter part of the 18th century, the story was quite different. People tended to regard the matter of biological origins in a religious light. The universe was widely considered to have been freely and purposively designed and created by God as described in the Book of Genesis, and divine purpose was thought to be immanent in nature and open to observation and study. This doctrine, called teleology, drew rational support from traditional theological “arguments from design” holding that nature could only have been designed and created by a supreme intelligence. But teleology began to wane with the rise of British empiricism, and by the time Darwin published his theory in 1859, the winds of change were howling his anthem. Since then, the decline of teleology has accelerated to a point at which every supposedly universal law of nature is confidently presented as “irrefutable evidence” that natural events unfold independently of intent, and that purpose, divine or otherwise, is irrelevant to natural causation.
The concept of teleology remains alive nonetheless, having recently been granted a scientific reprieve in the form of Intelligent Design theory. “ID theory” holds that the complexity of biological systems implies the involvement of empirically detectable intelligent causes in nature. Although the roots of ID theory can be traced back to theological arguments from design, it is explicitly scientific rather than theological in character, and has thus been presented on the same basis as any other scientific hypothesis awaiting scientific confirmation.3
Rather than confining itself to theological or teleological causation, ID theory technically allows for any kind of intelligent designer – a human being, an artificial intelligence, even sentient aliens. This reflects the idea that intelligence is a generic quality which leaves a signature identifiable by techniques already heavily employed in such fields as cryptography, anthropology, forensics and computer science. It remains only to note that while explaining the inherent complexity of such a material designer would launch an explanatory regress that could end only with some sort of Prime Mover, thus coming down to something very much like teleology after all, ID theory has thus far committed itself only to design inference. That is, it currently proposes only to explain complex biological phenomena in terms of design, not to explain the designer itself.4
With regard to deeper levels of explanation, the field remains open.
Because neo-Darwinism is held forth as a “synthesis” of Darwinian natural selection and post-Mendelian genetics, it is sometimes referred to as the “Modern Synthesis”. However, it appears to fall somewhat short of this title, for not only is its basic approach to evolutionary biology no longer especially modern, but despite the fact that it is a minority viewpoint counterbalanced by cogent and far more popular alternatives including theistic evolution5 and ID theory (Robinson, 1995), it actively resists meaningful extension. Many of its most influential proponents have dismissed ID theory virtually on sight, declaring themselves needless of justification or remedial dialectic despite the many points raised against them, and this is not something that the proponents of a “modern synthesis” would ordinarily have the privilege of doing. A synthesis is ordinarily expected to accommodate both sides of a controversy regarding its subject matter, not just the side favored by the synthesist.6
Given the dissonance of the neo-Darwinist and teleological viewpoints, it is hardly surprising that many modern authors and scientists regard the neo-Darwinian and teleological theories of biological evolution as mutually irreconcilable, dwelling on their differences and ignoring their commonalities. Each side of the debate seems intent on pointing out the real or imagined deficiencies of the other while resting its case on its own real or imagined virtues. This paper will take a road less traveled, treating the opposition of these views as a problem of reconciliation and seeking a consistent, comprehensive framework in which to combine their strengths, decide their differences, and unite them in synergy. To the extent that both theories can be interpreted in such a framework, any apparent points of contradiction would be separated by context, and irreconcilable differences thereby avoided.
The ideal reconciliatory framework would be self-contained but comprehensive, meaning that both theories could be truthfully interpreted within it to the maximum possible extent, and consistent, meaning that irreconcilable differences between the theories could not survive the interpretation process. It would also reveal any biconditionality between the two theories; were they in any way to imply each other, this would be made explicit. For example, were a logical extension of neo-Darwinism to somehow yield ID-related concepts such as teleological agency and teleological causation, these would be seen to emerge from neo-Darwinist premises; conversely, were ID-theoretic concepts to yield ingredients of neo-Darwinism, this too would be explicated. In any case, the result would wear the title of “synthesis” far more credibly than neo-Darwinism alone.
Two Theories of Biological Causality
In order to talk about origins and evolution, one must talk about causality, and because causality is a function of the system called “nature”, one must talk about nature. Theories of biological origins and evolution like neo-Darwinism and ID theory are both theories of causality restricted to the context of biological origins and evolution, and because causality is a function of nature, each points toward an underlying theory of nature incorporating an appropriate treatment of causality. That is, biological origins and evolution, being for scientific purposes instances of causation or the outcomes of causal processes, require definitions, theories and models of nature and causality. But these definitions, theories and models involve deeper and more complex criteria than meet the casual eye, and even to experts in science and philosophy, it is not entirely obvious how to satisfy them. This is why causality remains a controversial subject.
A cause is something that brings about an effect or result, and causality is the quality or agency relating cause and effect. Because there are different requirements for bringing about an event or situation, there are different kinds of causation. In common usage, a “cause” may be an event which causes another event, the reason or rationale for an event, an agent or the motive thereof, the means by which an event transpires, supporting conditions for an event, or in fact anything satisfying any logical or physical requirement of a resultant effect. Because causal relationships would seem to exist in a causal medium providing some sort of basic connection between cause and effect, the study of causation has typically focused on the medium and its connectivity … i.e., on the “fabric of nature”.
The kinds of causation that are required in order to explain natural changes or events were enumerated by Aristotle in the 4th century BC. He posed four questions involving four types of causes:
What is changed to make the entity (of what is it composed)?
What makes the entity change, and how?
What is the shape or pattern assumed by the entity as it changes?
What is the goal toward which the change of the entity is directed?
He respectively defined the answers to these questions as the material cause, the efficient cause, the formal cause, and the final cause. With its explicit allowance for formal and final causation, Aristotle's classification ultimately implied the existence of a purposive, pattern-generating Prime Mover, and it thus laid the groundwork for a teleological explanation of nature that went all but unchallenged for well over a millennium.
But when the Age of Reason (circa 1650–1800) had finished taking its toll on traditional Scholastic doctrines largely based on Aristotelian insight, only material and efficient causes retained a place in scientific reasoning … and in the hands of philosophers like Hume and Kant, even these modes of causation were laid open to doubt. Hume (1975) claimed that causal relationships are nothing more than subjective expectations that certain sequences of events observed in the past will continue to be observed in the future, while Kant (1965) went on to assert that causality is a category of cognition and perception according to which the mind organizes its experience of basically unknowable objects. Nevertheless, contemporary science retains its concern for material and efficient causes while letting formal and final causes languish in a state of near-total neglect.7
Distilled to a single sentence, the prevailing scientific view of nature and causality is roughly this: “Nature is associated with a space, generalizable to a spacetime manifold, permeated by fields under the causal influence of which objects move and interact in space and time according to logico-arithmetical laws of nature.” Despite its simplicity, this is a versatile causal framework with the power to express much of our scientific knowledge. But the questions to which it leads are as obvious as they are unanswered. For example, where do these laws reside? Of what are they composed? How and why did they originate? What are their properties? How do they function, and how are they sustained?8
In addition to generating questions about natural laws in general, the prevailing oversimplification of causality contains further gaps which have done as much to impede our understanding of nature as to further it.9
The associated problems are numerous, and they lead to yet another set of questions. For example, is causality formally and dynamically contained, uncontained or self-contained? What is its source, on what does it function, and what additional structure does it predicate of that on which it functions? What is its substance – is it mental, physical or both? How does it break down, and if it is stratified, then what are its levels? These questions lead in turn to further questions, and until all of these questions are answered at least in principle, no theory of biological causality stands on terra firma.
But before attempting to answer these questions, let us have a look at the models of causality on which neo-Darwinism and ID theory are currently based.
Causality According to Intelligent Design Theory
Teleological causation is “top-down” causation in which the design and design imperative reside at the top, and the individual actualization events that realize the design reside at the bottom. The model universe required for teleological causality must therefore incorporate:
1. A source and means of design, i.e. a designer or designing agency.
2. A design stage in which designs are generated and/or selected.
3. An actualization stage in which designs become physically real from the viewpoints of physical observers.
4. A means or mechanism for passing from the design stage to the actualization stage.
If such a model universe permits these observers to empirically detect interesting instantiations of teleology, so much the better.
Particular teleological model universes that have been proposed include any number of celestial hierarchies and heavenly bureaucracies with God at the top giving the orders, angels of various ranks serving on intermediate levels as messengers and functionaries, and humans at or near the bottom; the Aristotelian universe, incorporating formal and final causation and embodying the telos of a Prime Mover; teleologically “front-loaded” mechanistic universes in which causation resembles clockwork that has been set in autonomous motion by a purposive, mechanically talented designer; and the panentheistic universe explicated by (among others) Alfred North Whitehead (1985), in which the teleological will of the designer is immanent in nature because in some sense, nature is properly contained within the designer. Although each has its strengths, these and other well-known teleological models are inadequate as formulated, failing to support various logical implications of requirements 1–4.
The model universe of ID theory, which can be regarded as a generalization of traditional teleological design theory with respect to causal agency, has essentially the same requirements. However, it also contains certain novel ingredients including a focus on intelligence, an emphasis on mathematical and information-theoretic concepts, and two novel ingredients called irreducible complexity and specified complexity.
Irreducible complexity, which is intended to describe biological systems and subsystems unlikely to have been produced by gradual (piece-by-piece) evolution, is by definition a property of any integrated functional system from which the removal of any one or more core components critically impairs its original function (Behe, 1998). Although proposed examples have drawn fire – such examples include the bacterial flagellum, the human eye, the blood clotting cascade, and even the conventional spring-loaded mousetrap – the concept has a valid basis with roots in logic, graph theory and other branches of mathematics and engineering.
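To make the definition concrete, here is a minimal sketch, of my own devising rather than Behe's, that treats irreducible complexity as a simple predicate: a system qualifies only if the intact assembly functions and the deletion of any single core component destroys that function. The component names and the function test are hypothetical placeholders.

```python
# Toy illustration (not from the source text): a system counts as "irreducibly
# complex" relative to a function test if the intact system works and removing
# any single core component makes the test fail.

def irreducibly_complex(core_components, functions_ok):
    """Return True if the whole works and every single-component deletion breaks it."""
    full = set(core_components)
    if not functions_ok(full):
        return False                      # the intact system must work to begin with
    return all(not functions_ok(full - {c}) for c in full)

# Hypothetical mousetrap-style example: the system works only when complete.
PARTS = {"platform", "spring", "hammer", "holding bar", "catch"}
works = lambda parts: parts == PARTS
print(irreducibly_complex(PARTS, works))  # True
```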
Specified complexity, which is intended as a more general description of the products of intelligent causation, is by definition a property of anything that exhibits a recognizable pattern with a very low probability of occurring by chance. Whereas irreducible complexity is based on the sheer improbability of complex functionally-coherent systems, specified complexity adds an intelligence (rational pattern generation and recognition) criterion that lets functional complexity be generalized to a pattern-based form of complexity better suited to probabilistic and information-theoretic analysis (Dembski, 1998).
Specified complexity amounts to a relationship between three attributes: contingency, complexity and specification. Contingency corresponds to freedom and variety (as when there are many distinct possibilities that may be selectively actualized), complexity corresponds to improbability, and specification corresponds to the existence of a meaningful pattern which, in conjunction with the other two attributes in sufficient measure, indicates an application of intelligence. Wherever all three of these attributes are coinstantiated, specified complexity is present.
Contingency is associated with specificational and replicational probabilistic resources. Specificational resources consist of a set or class of distinct pre-specified target events, while replicational resources consist of chances for at least one of the specified target events to occur. The chance of occurrence of an instance of specified complexity is the chance that these two kinds of resource will intersect in light of total contingency.
For example, the total contingency of a 4-digit lottery consists of the set of all possible drawings over unlimited trials and is associated with the numbers from 0000 to 9999, the specificational resources consist of a subset of distinct pre-specified 4-digit winning numbers to be replicated (matched or predicted), and the replicational resources consist of the tickets purchased. The chance that the lottery will have at least one winner equals the probability of intersection of the set of winning numbers and the set of tickets, given that there are ten thousand distinctly-numbered tickets that might have been purchased.
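The arithmetic behind this example can be sketched in a few lines, assuming one pre-specified winning number and five hundred distinct tickets; both figures are illustrative rather than taken from the text.

```python
# Minimal sketch of the lottery arithmetic described above. The specific
# counts (one winning number, 500 distinct tickets) are illustrative only.
from math import comb

N = 10_000          # total contingency: the numbers 0000-9999
winners = 1         # specificational resources: pre-specified winning numbers
tickets = 500       # replicational resources: distinct tickets purchased

# P(at least one ticket matches a winning number) when all tickets are distinct
p_no_winner = comb(N - winners, tickets) / comb(N, tickets)
p_at_least_one = 1 - p_no_winner
print(f"P(at least one winner) = {p_at_least_one:.3f}")   # 0.050 in this case
```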
More topically, the total contingency of a particular evolutionary context consists of all possible (productive or dead-end) lines of evolution that might occur therein, the specificational resources consist of instances of specified complexity or “intelligent design”, and the replicational resources consist of all possible lines of evolution which can occur within some set of practical constraints imposed on the context, for example time or space constraints tending to limit replication. The chance that an instance of specified complexity will evolve equals the probability of intersection of the set of instances and the set of constrained lines of evolution, given the multiplicity of all of the possible lines of evolution that could occur. Where this probability is extremely low, some form of intelligent design is indicated.
Specified complexity is a powerful idea that yields insight crucial to the meaning and satisfaction of requirements 1–4. First, probability estimates for instances of specified complexity are so low as to require that specificational and replicational resources be linked in such a way that such events can actually occur, in effect raising their probability. It must therefore be determined whether the satisfaction of this requirement is consistent with the premise that low probabilities can actually be calculated for instances of specified complexity, and if so, how and why this can be reliably accomplished. And next, it must be shown that the required relationship implies intelligence and design.
Up to its current level of detail and coherence, the model universe of ID theory does not necessarily conflict with that of neo-Darwinism with respect to causality, but rather contains it, requiring only that causality be interpreted in light of this containment.
Causality According to Neo-Darwinism
Neo-Darwinism is the application of Darwinian natural selection to modern (post-Mendelian) genetics, which indifferently assumes that genetic mutations occur due to “random” DNA copying errors. This short but revealing description contains a certain amount of useful information. First, it reveals that causality is being at least partially reduced to some (ontic or epistemic) form of randomness. Even more revealingly, the phrase natural selection explicitly implies that nature is selective. Indeed, the term natural alone is instructive, for it reflects a naturalistic viewpoint according to which existence is ascribed exclusively to the natural world, i.e. “nature”.
In practice, most scientists consider nature to consist of that which is physical, observable and amenable to empirical investigation as prescribed by the scientific method, in their adherence to which they see themselves as following a naturalistic agenda. This is in keeping with scientific naturalism, a worldview of which neo-Darwinism is considered representative. Scientific naturalism ascribes existence strictly to the physical or natural world consisting of space, time, matter and energy. Two strains of naturalism are sometimes distinguished, philosophical and methodological. While philosophical naturalism claims ontological force, methodological naturalism is epistemological in flavor, merely asserting that nature might as well equal the physical world for scientific purposes. But in either case, scientific naturalism effectively confines the scientific study of nature to the physical. So inasmuch as neo-Darwinism is exemplary of scientific naturalism, it is physical or materialistic in character (Pigliucci, 2000).
In the picture of causality embraced by scientific naturalism, processes are either random or deterministic. In deterministic processes, objects are affected by laws and forces external to them, while in random processes, determinacy is either absent or unknown. A process can be “random” due to ignorance, statistics or presumed acausality … that is, because epistemological or observational limitations prevent identification of its hidden causal factors, because its causal outcomes are symmetrically or “randomly” distributed in the large, or because it is presumed to be nondeterministic. The first two of these possibilities are basically deterministic, while the last is (unverifiably) nondeterministic. So a neo-Darwinist either takes a deterministic view of causality or sees it in terms of the dichotomy between determinism and nondeterminism, in either case relying heavily on the theory of probability.
In fact, given that natural selection is based on the essentially trivial observation that nature imposes constraints on survival and reproduction,10 neo-Darwinism boils down to little more than probability theory, genetics and a very simple abstract but nominally physical model of biological causality based on “survival and reproduction of the fittest” or some minor variant thereof. Thus, when its practitioners claim to have generated a prediction, it is generally not a deep secret of nature unearthed by means of advanced theoretical manipulation, but merely the result of applying what amounts to a principle of indifference11 to some question about mutation, adaptation, selection or reproduction, running the numbers, and tracking the implications through its simplistic model universe. If there were no such “theory” as neo-Darwinism, the same conclusion might have been reached with a straightforward combination of biology, genetics, chemistry, physics, a statistics calculator and a bit of common sense. This is why neo-Darwinism is so astonishingly able to absorb new effects and mechanisms the minute they come out of the core sciences.
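To illustrate the kind of "running the numbers" meant here, consider a hypothetical indifference calculation: if each of the four DNA bases is taken to be equally likely at a mutated site, the chance that a handful of specified sites all land on their target bases follows from elementary probability alone. The site count below is invented; the point is that nothing deeper than a principle of indifference is involved.

```python
# Illustrative only: a "principle of indifference" calculation of the kind the
# text describes. Each of the four DNA bases is assumed equally likely at a
# randomly mutated site; the number of specified sites is hypothetical.

p_per_site = 1 / 4            # indifference over {A, C, G, T}
k = 10                        # number of specified target sites (assumed)
p_all_match = p_per_site ** k
print(f"P(all {k} sites hit their targets) = {p_all_match:.2e}")  # ~9.5e-07
```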
Something else that neo-Darwinism seems to do with astonishing ease is absorb what appear on their faces to be contradictions. For example, many people, some might say a large majority, find it to some degree incredible that what amounts to a principle of indifference can be seriously offered as a causal explanation for the amazing complexity of the biological world, or for that matter any other part of the world. The fact that a principle of indifference is essentially devoid of information implies that neo-Darwinism yields not a causal explanation of biological complexity, but merely an open-ended simulation in which every bit of complexity delivered as output must have been present as input, any appearances to the contrary notwithstanding. This implies that neo-Darwinism per se, as distinguished from the core sciences from which it routinely borrows, adds precisely nothing to our knowledge of biological complexity or its source.
In order to deal with this seemingly inescapable problem, the proponents of neo-Darwinism have eagerly adopted the two hottest slogans in the theory of complex systems, self-organization and emergence. Self-organization is a spontaneous, extrinsically unguided process by which a system develops an organized structure, while emergence refers to those global properties (functions, processes) of composite hierarchical systems that cannot be reduced to the properties of their component subsystems … the properties in which they are more than the sums of their parts. But the fact that these terms have been superficially defined does not imply that they have been adequately explained. Actually, they remain as much of a mystery in complexity theory as they are in biology, and can do nothing for neo-Darwinism but spin the pointer toward another hapless and equally helpless field of inquiry.
Because scientific naturalism denies that existence of any kind is possessed by anything of a supernatural or metaphysical character, including a designing intelligence, the definitions, theories and models of nature and causality on which it implicitly relies must be “physical”, at least in name. However, as we have already noted and will shortly explain in detail, what currently passes for an understanding of causality in the physical sciences leaves much to be desired. In particular, since the kind of causality treated in the physical sciences is ontologically and functionally dependent on the origin and evolution of the cosmos, scientific naturalists trying to answer questions about causality are obliged to consider all stages of causation and generation all the way back to the cosmic origin, constantly testing their answers to see if they continue to make sense when reformulated in more fundamental terms.
Unfortunately, this obligation is not being met. One reason is the reluctance of those who most need an understanding of causality to admit the extent of their ignorance. Another is the seeming intractability of certain problems associated with the causality concept itself.
A Deeper Look at Causality: The Connectivity Problem
Because causal relationships would seem to exist in a causal medium providing some sort of basic connection between cause and effect, the study of causation has typically focused on the medium and its connectivity … i.e., on the “fabric of nature”. How does this fabric permit different objects to interact, given that to interact is to intersect in the same events governed by the same laws and thus to possess a degree of sameness? How can multiple objects each simultaneously exhibit two opposite properties, sameness and difference, with respect to each other?
Equivalently, on what underlying form of connectivity is causality defined? When one asserts that one event “causes” another, what more general connection does this imply between the events? If there is no more general connection than the causal connection itself, then causality is underivable from any logically prior condition; it is something that happens ex nihilo, the sudden synthesis of a connection out of nothing. As Hume maintained, causal relationships are mere accidental correlations of subjectively-associated events.
But it can't be quite that simple. In fact, Hume's characterization of causality as mere random correlation presupposes the existence of a correlating agent who recognizes and unifies causal correlations through experience, and the abstractive, experiential coherence or consciousness of this correlation-inducing agent constitutes a prior connective medium. So in this case, explaining causality requires that the subjective medium of experience, complete with its correlative “laws of causality”, be related to the objective world of real events.
Unfortunately, Hume's thesis includes a denial that any such objective world exists. In Hume's view, experience is all there is. And although Kant subsequently registered his qualified disagreement, asserting that there is indeed an objective outside world, he pronounced it unknowable, relegating causality to the status of a category of perception.12
This, of course, perpetuated the idea of causal subjectivity by continuing to presuppose the existence of an a priori subjective medium.
How can the nature of subjective causality be understood? As Kant observed, perception and cognition are mutually necessary; concepts without percepts are empty, and percepts without concepts are blind.13
It must therefore be asked to what extent perceptual reality might be an outward projection of cognitive processes, and natural processes the mirror images of mental processes.
This leads to another problem, that of mind-matter dualism.
The Dualism Problem
The Kantian distinction between phenomenal and noumenal reality, respectively defined as those parts of reality14 which are dependent on and independent of perception, mirrors a prior philosophical viewpoint known as Cartesian (mind-matter) dualism. Associated with René Descartes, the polymath mercenary who laid the groundwork for analytic geometry by helping to develop the concept of coordinate spaces, this is a form of substance dualism which asserts that reality consists of two immiscible “substances”, mind and matter. Cartesian dualism characterizes a certain influential approach to the problem of mental causation: how does the mind influence the physical body?
Cartesian dualism leads to a problem associated with the connectivity problem we have just discussed: if reality consists of two different “substances”, then what connects these substances in one unified “reality”? What is the medium which sustains their respective existences and the putative difference relationship between them? One possible (wrong) answer is that their relationship is merely abstract, and therefore irrelevant to material reality and devoid of material influence; another is that like the physical epiphenomenon of mind itself, it is essentially physical. But these positions, which are seen in association with a slew of related philosophical doctrines including physicalism, materialism, naturalism, objectivism, epiphenomenalism and eliminativism, merely beg the question that Cartesian dualism was intended to answer, namely the problem of mental causation.
Conveniently, modern logic affords a new level of analytical precision with respect to the Cartesian and Kantian dichotomies. Specifically, the branch of logic called model theory distinguishes theories from their universes, and considers the intervening semantic and interpretative mappings. Calling a theory an object language and its universe of discourse an object universe, it combines them in a metaobject domain consisting of the correspondences among their respective components and systems of components, and calls the theory or language in which this metaobject domain is analyzed a metalanguage. In like manner, the relationship between the metalanguage and the metaobject domain can be analyzed in a higher-level metalanguage, and so on. Because this situation can be recursively extended, level by level and metalanguage by metalanguage, in such a way that languages and their universes are conflated to an arbitrary degree, reality can with unlimited precision be characterized as a “metalinguistic metaobject”.
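The layering just described can be sketched informally as a recursive data structure; the class names below are my own illustrative labels rather than standard model-theoretic machinery.

```python
# Informal illustration of the layering described above: an object language,
# its universe of discourse, the metaobject domain of correspondences between
# them, and a metalanguage whose subject matter is that domain. The recursion
# can be extended level by level; the class names are illustrative only.
from dataclasses import dataclass

@dataclass
class Language:
    name: str
    expressions: list

@dataclass
class Universe:
    name: str
    objects: list

@dataclass
class MetaObjectDomain:
    language: Language
    universe: Universe
    correspondences: dict      # expression -> object (an interpretation map)

def ascend(domain: MetaObjectDomain, level: int) -> Language:
    """Build a metalanguage that talks about the given metaobject domain."""
    return Language(name=f"metalanguage-{level}",
                    expressions=[f"describes({domain.language.name}, {domain.universe.name})"])

L0 = Language("object-language", ["'apple is red'"])
U0 = Universe("object-universe", ["apple"])
D0 = MetaObjectDomain(L0, U0, {"'apple is red'": "apple"})
M1 = ascend(D0, 1)             # and so on, metalanguage by metalanguage
print(M1)
```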
In this setting, the philosophical dichotomies in question take on a distinctly mathematical hue. Because theories are abstract, subjectively-formed mental constructs,15 the mental, subjective side of reality can now be associated with the object language and metalanguage(s), while the physical, objective side of reality can be associated with the object universe and metauniverse(s), i.e. the metaobject domain(s). It takes very little effort to see that the mental/subjective and physical/objective sides of reality are now combined in the metaobjects, and that Cartesian and Kantian “substance dualism” have now been transformed to “property dualism”16 or dual-aspect monism. That is, we are now talking, in mathematically precise terms, about a “universal substance” of which mind and matter, the abstract and the concrete, the cognitive-perceptual and the physical, are mere properties or aspects.
Translating this into the scientific status quo is not difficult. Science regards causality as “objective”, taking its cues from observation while ignoring certain philosophical problems involving the nature of objectivity. But science also depends on theoretical reasoning, and this involves abstract analogues of causality to which science is equally indebted. To the extent that scientific theories accurately describe the universe, they are isomorphic to the universe; in order that nature be amenable to meaningful theorization, science must therefore assume that the basic cognitive ingredients of theories, and for that matter the perceptual ingredients of observations, mirror the corresponding ingredients of nature up to some minimal but assured level of isomorphism. Consistent theories of science thus require that physical and abstract causation be brought into basic correspondence as mandated by this necessity.
Abstract analogues of physical causation are already well-understood. Logically, causality is analogous to implication, an active or passive relationship between antecedents and consequents; theoretically, it is analogous to the application of rules of inference to expressions formulated within a theory; linguistically, it amounts to substitution or production according to the rules of a generative grammar; and mathematically, it amounts to the application of a rule, mapping, function, operation or transformation. In every case, the analogue is some form of recursive17 or iterative morphism to which a nonlogical interpretation may be attached.18
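A minimal sketch of the linguistic analogue, assuming a made-up one-rule generative grammar: iterated rule application plays the role of the recursive morphism, each rewriting step standing in for a "causal" transition.

```python
# Sketch of the analogy drawn above: "causation" as rule application. A tiny
# generative grammar rewrites an antecedent string into a consequent string;
# iterating the rewrite is the recursive/iterative morphism. The grammar is
# an invented example.

RULES = {"S": "aSb"}                         # a single production rule

def rewrite(expression: str) -> str:
    """Apply the production rules to every symbol of the expression once."""
    return "".join(RULES.get(symbol, symbol) for symbol in expression)

state = "S"
for step in range(3):                        # iterate the "causal" rule
    state = rewrite(state)
print(state)                                 # aaaSbbb
```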
The object is therefore to understand physical reality in terms of such operations defined on an appropriate form of dual-aspect monism.
This leads directly to the structure problem.
The Structure Problem
A description or explanation of causality can only be formulated with respect to a particular “model universe” in which space, time and matter are defined and related to each other in such a way as to support the description. This relationship must account for the laws of nature and their role in natural processes. A little reflection should reveal that both neo-Darwinism and ID theory, as well as all other scientific theories, are currently deficient in this regard. At best, scientists have a very limited idea where the laws of nature reside, how they came to be, and how they work, and due to the limitations of their empirical methodology,19 they have no means of clarification.
We have already encountered Aristotle's four modes of causation: material, efficient, formal and final. These follow no special prescription, but are merely generic answers to questions about certain features of Aristotle's mental representation of nature … his model universe. There are as many additional modes of causation as there are meaningful questions regarding the structure and dynamics of a given model universe. For example, in addition to Aristotle's questions of what (material), who and how (efficient), in what form (formal), and why (final), we could also ask where (positional causation), when (order or timing of causation), by virtue of what (facilitative causation), and so forth. Thus, we could say that something happened because it was positioned in a medium containing its material cause and supporting its efficient cause, because the time was right or certain prerequisites were in place, because certain conditions were present or certain tools were available, et cetera.
On what kinds of model universe can a causality function be defined? Among the mathematical structures which science has long favored are coordinate spaces and differentiable manifolds. In differentiable coordinate spaces, laws of physics formulated as algebraic or differential equations may conveniently define smooth geometric curves which faithfully represent (e.g.) the trajectories of physical objects in motion. A model universe based on these constructs supports certain causal relationships to an impressive level of accuracy. However, it fails with respect to others, particularly those involving discrete or nonlocal20 changes or requiring high levels of coherence. In particular, it is incapable of modeling certain generative processes, including any generative process that might have led to its own existence, and beyond a certain point, its “continuity” attribute has eluded a completely satisfactory explanation.21
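A toy instance of this continuum-style picture, assuming a simple projectile whose law of motion is integrated numerically; the particular numbers are illustrative and carry no physical claim beyond the textbook law they encode.

```python
# Sketch of the continuum-model picture described above: a law of motion given
# as a differential equation, integrated numerically into a smooth trajectory.
# The projectile parameters and step size are illustrative choices.

dt = 0.01                     # time step (s)
g = 9.81                      # gravitational acceleration (m/s^2)
x, y = 0.0, 0.0               # initial position (m)
vx, vy = 10.0, 10.0           # initial velocity (m/s)

trajectory = []
while y >= 0.0:
    trajectory.append((x, y))
    x, y = x + vx * dt, y + vy * dt      # dx/dt = vx, dy/dt = vy
    vy -= g * dt                         # dvy/dt = -g (the governing "law")

print(f"range ~ {trajectory[-1][0]:.1f} m after {len(trajectory)} steps")
```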
These and other difficulties have prompted some theorists to suggest model universes based on other kinds of mathematical structure. These include a new class of models to which the concepts of information and computation are essential. Called “discrete models”, they depict reality in terms of bits, quanta, quantum events, computational operations and other discrete, recursively-related units. Whereas continuum models are based on the notion of a continuum, a unified extensible whole that can be subdivided in such a way that any two distinct points are separated by an infinite number of intermediate points, discrete models reflect the fact that it is impossible to describe or define a change or separation in any way that does not involve a sudden finite jump in some parameter. Discrete models reflect the rising investment of the physical sciences in a quantum-theoretic view of reality, and the increasing dependence of science on computer simulation as an experimental tool.22
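A toy instance of a discrete model in this sense, using an elementary cellular automaton (Rule 90, an arbitrary choice): the "universe" is a row of bits, and each update is a finite, recursively defined jump rather than a continuous flow.

```python
# Sketch of a discrete model in the sense described above: reality pictured as
# recursively-related discrete units. Elementary cellular automaton Rule 90
# updates a row of bits in finite jumps; the rule choice is arbitrary.

def step(cells):
    """One synchronous update: each cell becomes the XOR of its two neighbours."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0] * 31
row[15] = 1                                  # a single "quantum" of structure
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```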
Discrete models have the advantage that they can more easily incorporate modern cybernetic concepts, including information, computation and feedback, which conduce to an understanding of reality as a control and communication system. In the context of such models, informational and computational reductionism is now pursued with a degree of enthusiasm formerly reserved for attempts to reduce the universe to matter and energy. However, certain difficulties persist. Discrete models remain dualistic, and they still cannot explain their own origins and existences. Nonlocality is still a problem for them, as are the general-relativistic spacetime deformations so easily formulated in continuum models. Because they allow the existence of discrete gaps between events, they tend to lack adequate connectivity. And in the shadow of these deficiencies, they can illuminate the interrelationship of space, time and object no more successfully than their continuum counterparts.
The unregenerate dualism of most discrete models demands particular attention. As we reasoned above, solving the problem of dualism requires that the mental and physical aspects of reality be brought into coincidence. Insofar as information and computation are essentially formal and abstract, reducing the material aspects of nature to information and computation should bring the concrete and the abstract, the material and the mental, into perfect coincidence. But because most discrete models treat information and computation as objective entities, tacitly incorporating the assumption that bits and computations are on the same ontological footing as particles and collisions, their mental dimension is overlooked. Because making no explicit provision for mind amounts to leaving it out of the mix, mind and matter remain separate, and dualism persists.
Is there another alternative? The model-theoretic perspective, which simultaneously juxtaposes and conflates subjective languages and their objective universes, suggests that reality embodies an ontic-nomothetic medium with abstract and physical aspects that are respectively related as syntax is related to language. For example, because scientific observation and theorization must be consistent, and logic is the backbone of consistency, the syntax of every scientific theory must incorporate logic. In the case of a geometric theory of physical reality like classical mechanics or relativity theory, this amounts (by model-theoretic implication) to the requirement that logic and geometry literally coincide. But where geometry is a property of “physical” spacetime, so then is logic, and if logic resides in spacetime, then so must logical grammar. This leads to the requirement that physical dynamics be objectively reconciled with the formal grammar of logic and logic-based theories, ultimately including abstract causality in its entirety.
Obviously, conventional continuum and discrete models of reality fail to meet this requirement. As far as they and those who embrace them are concerned, the physical world is simply not answerable to any theory whatsoever, even logic. According to the standard empirical doctrine of science, we may observe reality but never impose our conceptions upon it, and this means that theory – even a theory as necessary to cognition and perception as logic – is always the beggar and never the master at the scientific table. The reason for this situation is clear; scientists need a means of guarding against the human tendency to confuse their inner subjective worlds, replete with fantasy and prejudice, with the factual external world conventionally studied by science.
But there is a very clear difference between logic on one hand, and fantasy and prejudice on the other. While science never needs the latter, it always needs the former. By excluding logic from nature, mainstream science has nothing to gain and everything to lose; in not attributing its own most basic requirements to its subject matter, it is cheating itself in a crucial way. Whether or not a theory which fails to predicate on its universe the wherewithal of its own validity turns out to be valid, it can be neither more nor less so for its false and subtly pretentious humility. On the other hand, failing to attribute these requirements to its universe when its universe in fact exhibits them, and when its universe would in fact be unintelligible without them, can ultimately cost it every bit of truth that it might otherwise have had … particularly if its methodology is inadequate to identify the problem and mandate a remedy.
Because they fail to provide definitive answers for questions about causality, conventional continuum and discrete models of reality devolve to acausality or infinite causal regression. No matter what causal explanations they seem to offer, one of two things is implied: (1) a cause prior to that which is cited in the explanation, or (2) random, spontaneous, acausal emergence from the void, no explanation supposedly required. Given the seeming absence of alternatives to determinism or randomness, or extrinsic23 and null causation, how are meaningful causal explanations to be completed?
The Containment Problem
A certain philosophically controversial hypothesis about causality presently rules the scientific world by fiat. It asserts that physical reality is closed under causal regression: “no physical event has a cause outside the physical domain.”24
That is, if a physical event has a cause, then it has a physical cause. Obviously, the meaning of this principle is strongly dependent on the definition of physical, which is not as cut and dried as one might suppose.25
It also contradicts the obvious fact that causality is an abstraction which functions independently of any specific item of material content and is at best only indirectly observable through its effects on matter. How, then, does this principle manage to maintain its hold on science? The answer: false parsimony and explanatory debt. Concisely, false parsimony occurs when a theory achieves deceptive simplicity in its native context by sweeping its unpaid explanatory debts (explanatory deficiencies) into unkempt piles located in or between other areas of science.
It is an ill-kept secret that the scientific community, far from being one big happy family of smoothly-connected neighborhoods, consists of isolated, highly-specialized enclaves that often tend toward mutual ignorance and xenophobia. Under these circumstances, it is only natural to expect that when caught between an observational rock and a theoretical hard place, some of these enclaves will take advantage of the situation and “pass the explanatory buck”, neither knowing nor caring when or where it comes to rest as long as the maneuver takes some of the heat off them and frees them to conduct business as usual. While the explanatory buck-passing is almost never productive, this can be conveniently hidden in the deep, dark cracks and crevices between disciplines. As a result, many pressing explanatory obligations have been successfully exiled to interdisciplinary limbo, an intellectual dead zone from which they cannot threaten the dominance of the physical causal closure thesis.
However, this ploy does not always work. Due to the longstanding scientific trend toward physical reductionism, the buck often gets passed to physics, and because physics is widely considered more fundamental than any other scientific discipline, it has a hard time deferring explanatory debts mailed directly to its address. Some of the explanatory debts for which physics is holding the bag are labeled “causality”, and some of these bags were sent to the physics department from the evolutionary biology department. These debt-filled bags were sent because the evolutionary biology department lacked the explanatory resources to pay them for itself. Unfortunately, physics can't pay them either.
The reason that physics cannot pay explanatory debts generated by various causal hypotheses is that it does not itself possess an adequate understanding of causality. This is evident from the fact that in physics, events are assumed to be either deterministic or nondeterministic in origin. Given an object, event, set or process, it is usually assumed to have come about in one of just two possible ways: either it was brought about by something prior and external to it, or it sprang forth spontaneously as if by magic. The prevalence of this dichotomy, determinacy versus randomness, amounts to an unspoken scientific axiom asserting that everything in the universe is ultimately either a function of causes external to the determined entity (up to and including the universe itself), or no function of anything whatsoever. In the former case there is a known or unknown explanation, albeit external; in the latter case, there is no explanation at all. In neither case can the universe be regarded as causally self-contained.
To a person unused to questioning this dichotomy, there may seem to be no middle ground. It may indeed seem that where events are not actively and connectively produced according to laws of nature, there is nothing to connect them, and thus that their distribution can only be random, patternless and meaningless. But there is another possibility after all: self-determinacy. Self-determinacy involves a higher-order generative process that yields not only the physical states of entities, but the entities themselves, the abstract laws that govern them, and the entire system which contains and coherently relates them. Self-determinism is the causal dynamic of any system that generates its own components and properties independently of prior laws or external structures. Because self-determinacy involves nothing of a preexisting or external nature, it is the only type of causal relationship suitable for a causally self-contained system.
In a self-deterministic system, causal regression leads to a completely intrinsic self-generative process. In any system that is not ultimately self-deterministic, including any system that is either random or deterministic in the standard extrinsic sense, causal regression terminates at null causality or does not terminate. In either of the latter two cases, science can fully explain nothing; in the absence of a final cause, even material and efficient causes are subject to causal regression toward ever more basic (prior and embedding) substances and processes, or if random in origin, toward primitive acausality. So given that explanation is largely what science is all about, science would seem to have no choice but to treat the universe as a self-deterministic, causally self-contained system.26
And thus do questions about evolution become questions about the self-generation of causally self-contained, self-emergent systems. In particular, how and why does such a system self-generate?
The Utility (Selection) Problem
As we have just noted, deterministic causality transforms the states of preexisting objects according to preexisting laws associated with an external medium. Where this involves or produces feedback, the feedback is of the conventional cybernetic variety; it transports information through the medium from one location to another and then back again, with transformations at each end of the loop. But where objects, laws and media do not yet exist, this kind of feedback is not yet possible. Accordingly, causality must be reformulated so that it can not only transform the states of natural systems, but account for self-deterministic relationships between states and laws of nature. In short, causality must become metacausality.27
Self-determination involves a generalized atemporal28 kind of feedback between physical states and the abstract laws that govern them. Whereas ordinary cybernetic feedback consists of information passed back and forth among controllers and regulated entities through a preexisting conductive or transmissive medium according to ambient sensory and actuative protocols – one may think of the Internet, with its closed informational loops and preexisting material processing nodes and communication channels, as a ready example – self-generative feedback must be ontological and telic rather than strictly physical in character.29
That is, it must be defined in such a way as to “metatemporally” bring the formal structure of cybernetics and its physical content into joint existence from a primitive, undifferentiated ontological groundstate. To pursue our example, the Internet, beginning as a timeless self-potential, would have to self-actualize, in the process generating time and causality.
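For contrast, here is a sketch of the ordinary cybernetic feedback just described, in which controller, regulated entity, law and channel all exist in advance; the thermostat numbers are invented. The ontological feedback posited above differs precisely in that none of these ingredients would pre-exist the loop.

```python
# Toy contrast case: conventional cybernetic feedback, in which information
# circulates between a controller and a regulated entity through a preexisting
# medium under fixed protocols. All numbers are illustrative.

setpoint = 20.0               # the controller's goal temperature
temperature = 15.0            # state of the regulated entity
for minute in range(10):
    error = setpoint - temperature        # sensed through the medium
    heating = 0.5 * error                 # actuative protocol (proportional control)
    temperature += heating - 0.1          # plant dynamics plus ambient heat loss
    print(f"t={minute:2d}  T={temperature:5.2f}")
```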
But what is this ontological groundstate, and what is a “self-potential”? For that matter, what are the means and goal of cosmic self-actualization? The ontological groundstate may be somewhat simplistically characterized as a complete abeyance of binding ontological constraint, a sea of pure telic potential or “unbound telesis”. Self-potential can then be seen as a telic relationship of two lower kinds of potential: potential states, the possible sets of definitive properties possessed by an entity along with their possible values, and potential laws (nomological syntax) according to which states are defined, recognized and transformed.30
Thus, the ontological groundstate can for most purposes be equated with all possible state-syntax relationships or “self-potentials”, and the means of self-actualization is simply a telic, metacausal mode of recursion through which telic potentials are refined into specific state-syntax configurations. The particulars of this process depend on the specific model universe – and in light of dual-aspect monism, the real self-modeling universe – in which the telic potential is actualized.
And now we come to what might be seen as the pivotal question: what is the goal of self-actualization? Conveniently enough, this question contains its own answer: self-actualization, a generic analogue of Aristotelian final causation and thus of teleology, is its own inevitable outcome and thus its own goal.31
Whatever its specific details may be, they are actualized by the universe alone, and this means that they are mere special instances of cosmic self-actualization. Although the word “goal” has subjective connotations – for example, some definitions stipulate that a goal must be the object of an instinctual drive or other subjective impulse – we could easily adopt a reductive or functionalist approach to such terms, taking them to reduce or refer to objective features of reality. Similarly, if the term “goal” implies some measure of design or pre-formulation, then we could easily observe that natural selection does so as well, for nature has already largely determined what “designs” it will accept for survival and thereby render fit.
Given that the self-containment of nature implies causal closure implies self-determinism implies self-actualization, how is self-actualization to be achieved? Obviously, nature must select some possible form in which to self-actualize. Since a self-contained, causally closed universe does not have the luxury of external guidance, it needs to generate an intrinsic self-selection criterion in order to do this. Since utility is the name already given to the attribute which is maximized by any rational choice function, and since a totally self-actualizing system has the privilege of defining its own standard of rationality,32 we may as well speak of this self-selection criterion in terms of global or generic self-utility. That is, the self-actualizing universe must generate and retrieve information on the intrinsic utility content of various possible forms that it might take.
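In standard decision theory, "utility" is simply whatever a rational choice function maximizes, as in the following trivial sketch; the options and their values are made up and carry no cosmological content.

```python
# Trivial decision-theory illustration of "utility": a rational choice function
# selects the option of maximal utility. Options and values are invented.

options = {"form A": 0.2, "form B": 0.9, "form C": 0.5}   # candidate forms -> utility
chosen = max(options, key=options.get)
print(chosen)                                             # "form B"
```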
The utility concept bears more inspection than it ordinarily gets. Utility often entails a subject-object distinction; for example, the utility of an apple in a pantry is biologically and psychologically generated by a more or less conscious subject of whom its existence is ostensibly independent, and it thus makes little sense to speak of its “intrinsic utility”. While it might be asserted that an apple or some other relatively non-conscious material object is “good for its own sake” and thus in possession of intrinsic utility, attributing self-interest to something implies that it is a subject as well as an object, and thus that it is capable of subjective self-recognition.33
To the extent that the universe is at once an object of selection and a self-selective subject capable of some degree of self-recognition, it supports intrinsic utility (as does any coherent state-syntax relationship). An apple, on the other hand, does not seem at first glance to meet this criterion.
But a closer look again turns out to be warranted. Since an apple is a part of the universe and therefore embodies its intrinsic self-utility, and since the various causes of the apple (material, efficient and so on) can be traced back along their causal chains to the intrinsic causation and utility of the universe, the apple has a certain amount of intrinsic utility after all. This is confirmed when we consider that its taste and nutritional value, wherein reside its utility for the person who eats it, further its genetic utility by encouraging its widespread cultivation and dissemination. In fact, this line of reasoning can be extended beyond the biological realm to the world of inert objects, for in a sense, they too are naturally selected for existence. Potentials that obey the laws of nature are permitted to exist in nature and are thereby rendered “fit”, while potentials that do not are excluded.34
So it seems that in principle, natural selection determines the survival of not just actualities but potentials, and in either case it does so according to an intrinsic utility criterion ultimately based on global self-utility.
It is important to be clear on the relationship between utility and causality. Utility is simply a generic selection criterion essential to the only cosmologically acceptable form of causality, namely self-determinism. The subjective gratification associated with positive utility in the biological and psychological realms is ultimately beside the point. No longer need natural processes be explained under suspicion of anthropomorphism; causal explanations need no longer implicitly refer to instinctive drives and subjective motivations. Instead, they can refer directly to a generic objective “drive”, namely intrinsic causality … the “drive” of the universe to maximize an intrinsic self-selection criterion over various relational strata within the bounds of its internal constraints.35
Teleology and scientific naturalism are equally satisfied; the global self-selection imperative to which causality necessarily devolves is a generic property of nature to which subjective drives and motivations necessarily “reduce”, for it distributes by embedment over the intrinsic utility of every natural system.
Intrinsic utility and natural selection relate to each other as both reason and outcome. When an evolutionary biologist extols the elegance or effectiveness of a given biological “design” with respect to a given function, as in “the wings of a bird are beautifully designed for flight”, he is really talking about intrinsic utility, with which biological fitness is thus entirely synonymous. Survival and its requisites have intrinsic utility for that which survives, be it an organism or a species; that which survives derives utility from its environment in order to survive and as a result of its survival. It follows that neo-Darwinism, a theory of biological causation whose proponents have tried to restrict it to determinism and randomness, is properly a theory of intrinsic utility and thus of self-determinism. Although neo-Darwinists claim that the kind of utility driving natural selection is non-teleological and unique to the particular independent systems being naturally selected, this claim is logically insupportable. Causality ultimately boils down to the tautological fact that on all possible scales, nature is both that which selects and that which is selected, and this means that natural selection is ultimately based on the intrinsic utility of nature at large.
But in light of causal self-containment, so is teleology. Why, then, do so many supporters of teleology and neo-Darwinism seem to think them mutually exclusive?
The Stratification Problem
It is frequently taken for granted that neo-Darwinism and ID theory are mutually incompatible, and that if one is true, then the other must be false. But while this assessment may be accurate with regard to certain inessential propositions attached to the core theories like pork-barrel riders on congressional bills,36 it is not so obvious with regard to the core theories themselves. In fact, these theories are dealing with different levels of causality.
The scientific method says that experiments must be replicable, and this means that the same laws must govern the same kinds of events under the same conditions throughout nature. So where possible, the laws of nature are scientifically formulated in such a way that they distribute over space and time, the same laws applying under similar conditions at all times and places. Science also requires that the laws of nature be formulated in such a way that the next state of an object depends only on its present state, including all of the forces impinging on it at the present moment, with no memory of prior states required. Little wonder that science enforces these two conditions with extreme prejudice wherever possible, for in principle, they guarantee its ability to predict the future of any physical system from a mere knowledge of its current state and the distributed laws of nature.37
Science imposes yet further constraints on causality. One, the empirical discernability criterion of the scientific method,38 guarantees the recognizability of physical states by insisting that they be formulated in terms of first-order properties39 called observables that can be unambiguously measured in conjunction with physical objects. Another, which we have already encountered, is the locality principle, which says that there can be no “nonlocal” jumps from one point in a physical manifold to another non-adjacent point.40
This adds an adjacency or continuity constraint to the Laplacian ideal; the laws of nature must not only be formulated in such a way that the next state of an object depends only on its present state, but in such a way that successive states are “near” each other, i.e. so that smaller amounts of time and energy correspond to smaller distances. This proportionality of distance and effect permits the laws of causality to be consistently applied on the macroscopic and microscopic scales.
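The following toy sketch, offered only as an illustration and not drawn from the essay, shows both constraints at once: the next state of a one-dimensional array is computed from its present state alone, and each cell is influenced only by its immediate neighbors, so a disturbance can spread by at most one cell per time step.

```python
# Toy illustration (not the essay's formalism) of a memoryless, local law:
# the next state depends only on the present state, and each cell is
# influenced only by adjacent cells.

def step(state):
    """Return the next state from the present state alone (no memory),
    using only each cell's immediate neighbors (locality)."""
    n = len(state)
    return [
        (state[(i - 1) % n] + state[i] + state[(i + 1) % n]) / 3.0
        for i in range(n)
    ]

state = [0.0] * 11
state[5] = 1.0               # an initial, purely local disturbance
for _ in range(3):
    state = step(state)      # influence propagates one adjacent cell per step
print([round(x, 3) for x in state])
```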
Of all the preconceived restrictions and unnecessary demands imposed on causality by science, the least questioned is the requirement that the relationship between physical states and laws of nature be one-way, with states depending on laws but not vice versa. Science regards the laws of nature as immutable, states as existing and transforming at their beck and call, and the directional dependency relationship between laws and states as something that has existed for all time. When the laws dictate that an event should happen, it happens; on the other hand, any event chancing to occur without their guidance is uncaused and totally “random”. This leads to the determinacy-versus-randomness dichotomy already discussed in connection with the containment and utility problems.
Due to these criteria, what science calls a “law of nature” is typically an autonomous relationship of first-order properties of physical objects, as are the laws of state transformation that govern causation. There can be little doubt that science has succeeded in identifying a useful set of such laws. Whether or not they suffice for a full description of nature and causality (and they do not), they are an important part of the total picture, and wherever possible, they should indeed be tracked down and exploited to their full descriptive and prescriptive potential. But at least one caveat is in order: they should be regarded as explaining only that which they can be empirically and/or rationally shown to explain. As with any other scientific assertion, they must be kept pure of any metaphysical prejudice tending to artificially inflate their scope or explanatory weight.
It is thus a matter of no small concern that in pursuing its policy of causal simplification, the scientific mainstream seems to have smuggled into its baggage compartment a certain piece of contraband which appears, despite its extreme resistance to rational or empirical justification, to be masquerading as a tacit “meta-law” of nature. It states that every higher-order relationship of objects and events in nature, regardless of complexity or level of dynamic integration, must be strictly determined by distributed laws of nature acting independently on each of its individual components. Along with the other items on the neo-Laplacian wish-list of causal conveniences to which the scientific mainstream insists that nature be held, this criterion betrays a marked preference for a “bottom-up” approach to causation, suggesting that it be called the bottom-up thesis.41
The bottom-up thesis merely underscores something that we already know about the scientific mainstream: it wants with all of its might to believe that in principle, the whole destiny of the natural world and everything in it can be exhaustively predicted and explained on the basis of (1) a Laplacian snapshot of its current details, and (2) a few distributed laws of nature from which to exhaustively develop the implications. So irresistible is this desire that some of those caught in its grip are willing to make a pair of extraordinary claims. The first is that science has completely explained some of nature's most complex systems in terms of microscopic random events simply by generically classifying the microscopic events that might possibly have been involved in their realization. The second is that observed distributions of such events, which they again call “random”, prove that no system in nature, regardless of its complexity, has ever come into being from the top down.
The genotype-to-phenotype mapping is a case in point. Many neo-Darwinists seem to have inferred that what happens near the endpoints of this mapping – the seemingly random mutation of genotypes and the brutal, deterministic competition among phenotypes – offers more insight regarding nature and causality than does the delicate, exquisitely complex ontogenic symphony performed by the mapping itself. In response to the observation that the theoretical emphasis has been lopsided, one hears that of course neo-Darwinists acknowledge the involvement of intermediate processes in the emergence of biological complexity from strings of DNA. For are not genes converted to proteins, which fold into functional forms and interact with other molecules to alter the timing of gene expression, which can lead to cytodifferentiation, pattern formation, morphogenesis and so on, and is this whole self-organizational process not highly sensitive to developmental interactions with the environment?
Unfortunately, where the acknowledged processes and interactions are still assumed to be micro-causal and deterministic, the acknowledgement is meaningless. In fact, the higher-order structure and processing of complex biological systems has only been shoveled into an unkempt pile sexily labeled “emergent phenomena” and bulldozed across the interdisciplinary divide into complex systems theory. And thus begins a ramose paper trail supposedly leading to the final owners of the explanatory debt, but instead looping, dead-ending or petering out in interdisciplinary limbo. The explanatory buck is thereby passed into oblivion, and the bottom-up thesis rolls like righteous thunder over any voice daring to question it.
In fact, the top-down and bottom-up approaches to causality are not as antithetical as they might seem. In the bottom-up view of causality, states evolve according to laws of nature in a temporal direction preferred by the second law of thermodynamics, which holds under the assumption that physical states are governed by laws of nature independent of state. But this assumption can hold only up to a point, for while the prevailing model universe supports only bottom-up causation, the situation is dramatically reversed with respect to cosmology. Because cosmological causal regression terminates with an ancestral cosmic singularity representing the whole of nature while omitting all of its details, standard cosmology ultimately supports only a top-down approach. The natural affinity of the cosmos for top-down causation – the fact that it is itself an instance of top-down causation – effectively relegates bottom-up causation to secondary status, ruling out the bottom-up thesis and thus making room for a new model universe supporting and reconciling both approaches.
It turns out that in a certain kind of model universe, the top-down and bottom-up approaches are to some degree mutually transparent.42
Two necessary features of such a model universe are (1) sufficient causal freedom to yield probabilistic resources in useful amounts, and (2) structural support for metacausal access to those resources. As it happens, a well-known ingredient of nature, quantum uncertainty, provides the required sort of causal freedom. But while nature exhibits quantum uncertainty in abundance and can thus generate probabilistic resources at a certain respectable rate, the prevailing model universe supports neither metacausal relationships nor sufficient access to these resources. In fact, it fails to adequately support even quantum mechanics itself.
The new model universe must remedy these shortcomings … but how?
Synthesis: Some Essential Features of a Unifying Model of Nature and Causality
Classical mechanics, inarguably one of the most successful theories in history, is often cited as a model of theoretical progress in the sciences. When certain problems arose that could not be solved within its conceptual framework, it was extended to create a metatheory in which it exists as a “limiting case”. In fact, this was done thrice in fairly rapid succession. The first extension created the Special Theory of Relativity, in which classical mechanics holds as a low-to-medium velocity limit. The second created the General Theory of Relativity, in the curved spacetime manifold of which the flat Minkowskian manifold of Special Relativity holds as a local limit. And the third created quantum mechanics, in which classical mechanics holds as a “decoherence limit”.43
Indeed, whenever a theory is extended by adjoining to it one or more new concepts, this creates a metatheory expressing the relationship between the adjoint concept(s) and the original theory.
The model universe of neo-Darwinism is just a special-purpose refinement of the continuous coordinate spaces of classical mechanics, and its causal limitations are shared by neo-Darwinism and most other scientific theories. This is because most sciences, not including certain branches of physics and engineering, have been unable to absorb and utilize the relativistic and quantum extensions of the classical model, each of which suffers in any event from many of the same difficulties with causality. It follows that another extension is required, and since neo-Darwinism holds true within a limited causal domain, it must hold in this extension as a limiting case (minus its inessential philosophical baggage). In other words, causality must become the objective, distributive limit of metacausality.
Such an extension has already been described (Langan 2002b), and it embodies solutions for all of the problems discussed in this paper. Concisely, it embeds physical reality in an extended logico-algebraic structure, a Self-Configuring Self-Processing Language or SCSPL. SCSPL incorporates a pregeometric44 conspansive manifold in which the classical spacetime manifold is embedded as a limiting configuration. SCSPL brings formal and physical causality into seamless conjunction by generically equating the laws of nature with SCSPL syntax, and then contracting the semantic, model-theoretic correspondence between syntax and state (or laws and observables)45 so that they coincide in syntactic operators, physical quanta of self-transducing information. Through properties called hology (syntactic self-similarity) and triality (space-time-object conflation), total systemic self-containment is achieved. In particular, the system is self-deterministically closed under causation.
SCSPL evolves by telic recursion, a higher-order process46 of which causality is the physical limit (as required). In standard causality, physical states evolve according to laws of nature; in telic recursion, syntax-state relationships evolve by maximization of intrinsic utility. The temporal phase of telic recursion is conspansion, a dual-aspect process coordinating formal/telic and physical modes of evolution. By virtue of conspansive duality, SCSPL simultaneously evolves like a (metacausal, telic-recursive) generative grammar and a physical dynamical system, at once implementing top-down and bottom-up causation. Conspansion involves an alternation between self-replication and self-selection, thus constituting a generalization of Darwinian evolution in which specificational and replicational probabilistic resources are rationally linked. In this way, neo-Darwinist and design-theoretic (bottom-up and top-down) modes of causality become recognizable as complementary aspects of a single comprehensive evolutionary process.
From a formal standpoint, SCSPL has several unique and interesting features. Being based on logic,47 it identifies itself with the logical syntax of its perceptual universe on grounds of logical-perceptual isomorphism. This eliminates the conventional model-theoretic distinction among theory, universe and theory-universe correspondence, contracting the problematic mapping between abstract and concrete reality on the syntactic (nomological) level. This brings the physical world into coincidence with its logical counterpart, effecting dual-aspect monism and putting logical attributes on the same explanatory footing as physical attributes. SCSPL thus adjoins logic to nature, injecting48 nature with the abstract logical infrastructure of perception and theorization and endowing physical reality with the remedial conceptual apparatus demanded by the problems, paradoxes and explanatory deficiencies straining its classical descriptions. At the same time, it adjoins nature to logic in the form of perceptual categories and necessary high-level properties including closure, comprehensiveness, consistency and teleo-nomological coherence, thus opening logical routes to physical insight.
SCSPL offers yet further advantages. In defining nature to include logic and cognition, it relates physics and mathematics on a basic level, thus merging the rational foundations of mathematics with the perceptual foundations of physics and letting each provide crucial support for the other. By affording an integrated conceptual framework for prior conflicting extensions of classical reality, it sets the stage for their ultimate reconciliation. And its cross-interpretation of the cognitive and physical aspects of nature renders the universe self-explaining and self-modeling, thus effecting self-containment on the theoretic and model-theoretic levels. That is, SCSPL self-containment effects not just causal and generative closure, but closure under the inverse operations of explanation and interpretation, thus permitting nature to physically model and teleo-nomologically justify its own self-configurative determinations. In SCSPL, natural laws and physical states are seen as expressions of the intrinsic utility of nature by and for nature.
The reflexive self-processing and (telic) self-configuration functions of SCSPL imply that nature possesses generalized functional analogues of human self-awareness and volition, and thus a generalized capacity for utilitarian self-design. The self-design and self-modeling capacity of nature suggests that the universe is a kind of stratified “self-simulation” in which the physical and logico-telic aspects of reality can be regarded as respectively “simulated” and “simulative” in a generalized quantum-computational sense. This makes SCSPL relevant to self-organization, emergence and other complexity-theoretic phenomena increasingly attractive to the proponents of neo-Darwinism and other causally-challenged theories. At the same time, the fact that SCSPL evolution is both nomologically coherent and subject to a rational intrinsic utility criterion implies that the universe possesses properties equivalent to generalized intelligence, suggesting the possibility of an integrated SCSPL approach to the problems of consciousness and evolution.
The overall theory which logically extends the concepts of nature and causality to SCSPL and telic recursion, thereby merging the perceptual manifold with its cognitive and telic infrastructure, is known as the Cognitive-Theoretic Model of the Universe or CTMU, and its approach to biological origins and evolution is called Teleologic Evolution.49
Based on the concept of telic-recursive metacausation, Teleologic Evolution is a dynamic interplay of replication and selection through which the universe creates itself and the life it contains. Teleologic Evolution is a stratified process which occurs on levels respectively associated with the evolution of the cosmos and the evolution of life, thus permitting organic evolution to mirror that of the universe in which it occurs. It improves on traditional approaches to teleology by extending the concept of nature in a way eliminating any need for “supernatural” intervention, and it improves on neo-Darwinism by addressing the full extent of nature and its causal dynamics.
Due to their implicit reliance on different notions of causality, teleology and evolution were once considered mutually exclusory. While teleology appears to require a looping kind of causality consistent with the idea that ends are immanent in nature (even in beginnings), evolution seems to require that mutation and natural selection exhibit some combination of nondeterminacy and linear determinacy. In contrast, the phrase Teleologic Evolution reflects their complementarity within a coherent self-configurative ensemble identifying nature with its own utilitarian self-actualization imperative. In the associated metacausal extension of physical reality, the two central processes of evolution, replication and selection, are seen to occur on at least two mutually-facilitative levels respectively associated with the evolution of the universe and that of organic life.50
Meanwhile, the intrinsic utility criterion of self-selection implies that nature, as rationally defined in the CTMU, possesses a generalized form of intelligence by which all levels of evolution are driven and directed, equating selection with specification and metacausally relating it to replication.51
Reality is united with its generative principle by the rational linkage between the domain and codomain of the teleological, meta-Darwinian level of natural selection.
Because nature consists of all that is logically relevant to perception, and logic consists of the rules of thought and therefore comprises an essential theory of cognition, the CTMU couples mind and nature in a way suggestive of Ouroboros divided and reunited … two intimately entwined constrictors, estranged centuries ago by mind-body dualism but now locked in a renewed embrace, each swallowing the other's entailments. Perhaps this reunion will deter the militant torch-bearers of scientific naturalism from further reneging on their explanatory debts and fleecing mankind of its millennial hopes and dreams after all. And if so, then perhaps mankind can snuff the rapidly dwindling fuse of its insidious ontological identity crisis while these hopes and dreams still have a fighting chance of realization, and the intrinsic utility of mankind is still salvageable.
References
Bacon, F. (1997) Thoughts on the Nature of Things. Kila, MT: Kessinger Publishing. Reprinted excerpt. Originally published in 1824 as Miscellaneous Tracts. In B. Montagu (Ed.) The Works of Francis Bacon, Lord Chancellor of England, pp. 406–455. London: William Pickering.
Behe, M. J. (1998) Darwin’s Black Box: The Biochemical Challenge to Evolution. New York: Simon & Schuster.
Darwin, C. (1999) On the Origin of Species. New York: Bantam Classics. Original work published in 1859.
Dembski, W. A. (1998) The Design Inference: Eliminating Chance through Small Probabilities. Cambridge: Cambridge University Press.
Ho, M. W. & Saunders, P. T. (1979) Beyond neo-Darwinism—An Epigenetic Approach to Evolution. Journal of Theoretical Biology, Vol. 78, Issue 4, pp. 573–591.
Hume, D. (1975) Enquiries Concerning Human Understanding and Concerning the Principles of Morals. 3rd Edition. Edited by L. A. Selby-Bigge & P. H. Nidditch. Oxford: Oxford University Press. Reprinted from the posthumous edition of 1777. Original work published in 1748 and 1751.
Kant, I. (1965) The Critique of Pure Reason. Translated by N. K. Smith. New York: St. Martin’s Press. Original work published in 1781.
Kim, J. (2000) Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation. Cambridge, MA: MIT Press.
Langan, C. M. (2002) The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory. Princeton, MO: Mega Foundation Press. Originally published in Progress in Complexity, Information, and Design, Double Issue, Vols. 1.2-3.
Laplace, P. S. (1902) A Philosophical Essay on Probabilities. New York: John Wiley & Sons. Original work published in 1814.
Patton, C. M. & Wheeler, J. A. (1975) Is Physics Legislated by Cosmogony? In C. J. Isham, R. Penrose, D. W. Sciama (Eds.) Quantum Gravity: An Oxford Symposium, pp. 538–605. Oxford: Clarendon Press.
Pigliucci, M. (2000) Methodological vs. Philosophical Naturalism, or Why We Should Be Skeptical of Religion. In Tales of the Rational: Skeptical Essays About Nature and Science. Smyrna, GA: Freethought Press.
Robinson, B. A. (1995) Public Beliefs about Evolution and Creation. Ontario Consultants on Religious Tolerance.
Whitehead, A. N. (1975) Process and Reality. New York: The Free Press. Originally published in 1929.
Wolfram, S. (2002) A New Kind of Science. Champaign, IL: Wolfram Media.
“Meaning” entails recognition, referring specifically to a recognizable and therefore informational relationship among related entities. Since information is abstract, so is recognition, and so is meaning (whether or not the related entities are themselves physical and concrete). Naturalism, of which the theory of evolution is an example, is an essentially materialistic viewpoint which denies or disregards abstract modes of existence, thus limiting meaning to “material” drives and instincts. But where the abstract contains the physical, capturing its structure in the form of meaningful informational patterns called “laws of nature”, abstraction and meaning are plainly essential to both science and nature.
Science is a two-step, two-level process concerned with (1) formulating hypotheses about nature, and (2) testing these hypotheses to some degree of confirmation or disconfirmation. Relative to level 1, level 2 requires a higher level of discourse incorporating truth-functional criteria independent of any particular falsifiable hypothesis. Because maintaining this distinction helps to ensure that false hypotheses do not figure in their own “validation”, purportedly falsifiable (level 1) theories like neo-Darwinism should not be confused with the confirmational level of science.
Properly speaking, science includes both the empirical and mathematical sciences. Most of those who call themselves “scientists”, as well as many proponents of ID theory, assume that scientific confirmation can only be achieved by strict application of the scientific method and must thus be empirical. However, this is an oversimplification. The empirical sciences are not only mathematical in structure, but too heavily indebted to mathematical reasoning to exclude mathematical methods as possible means of confirming facts about nature. So with regard to the scientific status of ID theory, both empirical and mathematical methods of confirmation must be duly considered.
Properly speaking, science includes both the empirical and mathematical sciences. Most of those who call themselves “scientists”, as well as many proponents of ID theory, assume that scientific confirmation can only be achieved by strict application of the scientific method and must thus be empirical. However, this is an oversimplification. The empirical sciences are not only mathematical in structure, but too heavily indebted to mathematical reasoning to exclude mathematical methods as possible means of confirming facts about nature. So with regard to the scientific status of ID theory, both empirical and mathematical methods of confirmation must be duly considered.
Theistic evolution is a simple conjunction of theism and Darwinism which pays no real attention to their mutual consistency or the model-theoretic implications of combining them.
“… neo-Darwinism exhibits a great power of assimilation, incorporating any opposing viewpoint as yet another ‘mechanism’ in the grand ‘synthesis’. But a real synthesis should begin by identifying conflicting elements in the theory, rather than in accommodating contradictions as quickly as they arise.” (Ho & Saunders, 1979, p. 574)
While material and efficient causation are superficially physical and can be described in more or less materialistic terms, formal and final causation are more abstract. Francis Bacon (1997), who strongly influenced scientific methodology, classified these abstract modes of causation as metaphysics rather than physics.
These questions about laws of causality address the nature and origin of causality itself, and are thus metacausal analogues of Aristotle's questions about causality. The answers presented in this paper – roughly, that laws are elements of syntax of the language of nature, that they are composed of telesis and self-transducing metainformation, that they reside in syntactic (space-time-object) operators whose states they govern, that they arose through the metacausal self-configuration of the language of nature, that their properties include closure, comprehensiveness, consistency and coherence, and that their functionality and maintenance rely on intrinsic features of the language of nature – are thus metacausally analogous to Aristotle's modes of causation.
For example, there is the gap between mind and matter; the gap between abstract and concrete existence; the gap between causality and generative cosmogony; the gap between classical and quantum mechanics, and so on. Because these gaps are serious, there is no reason to think that causality can be adequately explained as long as they exist.
At the time that Charles Darwin made this observation and formulated his natural selection thesis, it was still obscured by centuries of teleological dominance.
The canonical principle of indifference (or insufficient reason) states that where there is no positive reason for assigning different probabilities to competing statistical or predictive assertions, e.g. different possible mutations weighted by relative frequency, equal probabilities must be assigned to all. Since this is essentially how neo-Darwinism calculates its random distributions of mutations and other events, it is just a biological variant of the principle of indifference.
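For concreteness, the principle in its standard form (stated here only as a reminder): given n mutually exclusive alternatives and no positive reason to weight any one of them differently, each receives the same probability.

```latex
P(x_i) = \frac{1}{n}, \qquad i = 1, \dots, n
```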
Kant (1965) p. 83: “Things which we see are not by themselves what we see … It remains completely unknown to us what the objects may be by themselves and apart from the receptivity of our senses. We know nothing but our manner of perceiving them.”; p. 147: “We ourselves introduce that order and regularity in the appearance which we entitle ‘nature’. We could never find them in appearances had we not ourselves, by the nature of our own mind, originally set them there.”
Ibid., p. 93: “Thoughts without content are empty, intuitions without concepts are blind.”
For cognitive (and thus for theoretical and scientific) purposes, reality consists of perception plus the abstract cognitive apparatus required to generate, support and sustain it.
It makes no difference that scientific theories are based on “objective” empirical observations; the key point is that scientific observation and theorization require subjectively conscious agents called “scientists”, and that there exists no possible means of ruling out subjectivity on the part of any other kind of observer-theorist. Whatever reality “might have been without us”, our presence immediately implies that it possesses a subjective dimension.
Property dualism asserts that the properties mental and physical, while essentially different, apply to the same objects. Dual aspect monism asserts that these two properties together characterize the fundamental “substance” of nature.
The Church–Turing Thesis asserts that the class of recursive functions and the class of effectively computable functions are the same. This is generally taken to imply an isomorphism between the formal, abstract realm of recursive functions and the physical, mechanical realm in which abstract Turing machines are instantiated. For theoretical purposes, this isomorphism must be taken for granted; without it, theoretical instances of recursion could not be model-theoretically interpreted in physical reality, and physical reality could not be scientifically explained.
… even if what gets iterated is a “continuous” function representing motion in a differentiable manifold.
Scientific methodology conforms to the scientific method, which prescribes that nature be treated as if it were everywhere both discernable and replicable, and the related doctrine of falsifiability, which asserts that science is concerned only with hypotheses that are conceivably false and susceptible to empirical disproof. However, nature cannot be meaningfully defined in such a way that these criteria always hold within it. For example, no full description of nature can exclude references to universal, unconditional and therefore unfalsifiable properties of nature, and such unfalsifiable properties need not be scientifically trivial.
In physics, spatial and spatiotemporal manifolds are usually constrained by the locality principle, according to which nothing travels faster than light. Locality can be more fundamentally defined as the condition that in relocating from one point to another in a metric space, an object must traverse the entire sequence of adjacent finite or infinitesimal intervals comprising some intervening path within the metric on which locality is being enforced. In other words, locality means “no sudden jumps from one point to another, through the space containing the points or any external space thereof.” The bearing on causality is obvious.
Continuity is understood in terms of infinitesimal displacements. Several approaches exist to the topic of infinitesimals, some more controversial than others. The most common is the Cauchy–Weierstrass epsilon-delta formalism; the most sophisticated is that of which A. Robinson's (1966) non-standard analysis is the earliest and most successful representative.
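For reference, the Cauchy–Weierstrass criterion alluded to here is the familiar one: a function f is continuous at a point a precisely when

```latex
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x : \;
|x - a| < \delta \;\Rightarrow\; |f(x) - f(a)| < \varepsilon
```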
Perhaps the most fashionable discrete model universe is explicitly based on a computational paradigm, the cellular automaton. An encyclopedic account of this paradigm can be found in Wolfram (2002).
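A minimal sketch of this paradigm, offered for concreteness rather than taken from Wolfram's text: each cell of a one-dimensional array is repeatedly updated by a fixed Boolean function of its own value and its two neighbors' values. The particular rule used below (Wolfram's “Rule 30”) is an arbitrary illustrative choice.

```python
# Minimal one-dimensional cellular automaton; Rule 30 is chosen purely for
# concreteness. Each cell's next value is a fixed Boolean function of the
# (left, center, right) neighborhood, so global structure unfolds from
# strictly local rules.

RULE = 30  # the automaton's 8-entry lookup table, encoded as an integer

def step(cells):
    """Update every cell from its immediate neighborhood only."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1                                      # a single "on" cell
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```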
This means “extrinsic to the object affected by causality”. For example, consider the problem of the origin of the real universe. Where the real universe is defined to contain all that is perceptible and/or of relevance to that which is perceptible, anything sufficiently real to have originated, caused or influenced it is contained within it by definition. Thus, extrinsic causality (standard determinacy) cannot be invoked to explain the origin of the real universe. Because every instance of causation within the real universe ultimately leads back to the origin of reality by causal regression, standard determinacy fails as a causal paradigm.
This particular formulation of the “physical causal closure thesis” is due to the contemporary philosopher Jaegwon Kim (2000). By the mathematical definition of closure, causal closure implies reflexive self-determinism. Because the physical causal closure thesis instead relies on standard determinism, it is conceptually deficient and powerless to effect causal closure.
Physical is a rather ambiguous term that currently means “of or relating to matter and energy or the sciences dealing with them, especially physics”. It thus refers to a relationship of unspecified extent, namely the extended relational plexus generated by the concepts of matter and energy. While causality does indeed relate to matter and energy, it can be neither held in the hand nor converted to heat, and because it thus bears description as neither matter nor energy, it resides elsewhere in this extended relationship. It follows that causality is more than physical. Where physical is further defined as “belonging to the class of phenomena accessible to the scientific method”, only those levels of causality which are both discernable and replicable may be called “physical”.
In any case, the self-containment of the real universe is implied by the following contradiction: if there were any external entity or influence that were sufficiently real to affect the real universe, then by virtue of its reality, it would by definition be internal to the real universe.
Metacausality is the causal principle or agency responsible for the origin or “causation” of causality itself (in conjunction with state). This makes it responsible for its own origin as well, ultimately demanding that it self-actualize from an ontological groundstate consisting of unbound ontic potential.
Where time is defined on physical change, metacausal processes that affect potentials without causing actual physical changes are by definition atemporal.
Telesis is a convergent metacausal generalization of law and state, where law relates to state roughly as the syntax of a language relates to its expressions through generative grammar … but with the additional stipulation that as a part of syntax, generative grammar must in this case generate itself along with state. Feedback between syntax and state may thus be called telic feedback.
Beyond a certain level of specificity, no detailed knowledge of state or law is required in order to undertake a generic logical analysis of telesis.
To achieve causal closure with respect to final causation, a metacausal agency must self-configure in such a way that it relates to itself as the ultimate utility, making it the agency, act and product of its own self-configuration. This 3-way coincidence, called triality, follows from self-containment and implies that self-configuration is intrinsically utile, thus explaining its occurrence in terms of intrinsic utility.
It might be objected that the term “rationality” has no place in the discussion … that there is no reason to assume that the universe has sufficient self-recognitional coherence or “consciousness” to be “rational”. However, since the universe does indeed manage to consistently self-recognize and self-actualize in a certain objective sense, and these processes are to some extent functionally analogous to human self-recognition and self-actualization, we can in this sense and to this extent justify the use of terms like “consciousness” and “rationality” to describe them. This is very much in the spirit of such doctrines as physical reductionism, functionalism and eliminativism, which assert that such terms devolve or refer to objective physical or functional relationships. Much the same reasoning applies to the term utility.
In computation theory, recognition denotes the acceptance of a language by a transducer according to its programming or “transductive syntax”. Because the universe is a self-accepting transducer, this concept has physical bearing and implications.
The concept of potential is an essential ingredient of physical reasoning. Where a potential is a set of possibilities from which something is actualized, potential is necessary to explain the existence of anything in particular (as opposed to some other partially equivalent possibility).
Possible constraints include locality, uncertainty, blockage, noise, interference, undecidability and other intrinsic features of the natural world.
Examples include the atheism and materialism riders often attached to neo-Darwinism, and the Biblical Creationism rider often mistakenly attached to ID theory.
This view was captured by the French astronomer and mathematician Pierre Simon Laplace (1749–1827) in his 1814 Philosophical Essay on Probabilities (1902, p. 4): “Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it—an intelligence sufficiently vast to submit these data to analysis—it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes.” This view, called Laplacian determinism, went virtually unchallenged until the first half of the 20th century, when it was undermined by such new concepts as quantum uncertainty and theoretic undecidability. But even though such problems seem to rule out an explicit calculation of the sort that Laplace envisioned, his ideal is still very much a driving force in science.
The scientific method mandates a constructive relationship between empirical observation and rational theorization that is designed for the investigation of phenomena possessing two criteria, discernability and replicability. That is, it confines scientific attention to that which can be exclusively and repeatedly observed under similar conditions anywhere in time or space; it does not cover any indiscernible or localized natural influence that is not conditionally (and thus falsifiably) distributed over space and time. Yet, indiscernables and unconditional universals must exist in order for nature to be stable – e.g., the universal, unconditional and intangible logical syntax which enforces consistency throughout the universe – and the exclusion of localized causal influences from nature is rationally insupportable.
In addition to other criteria, relations and properties are distinguished by arity and order. The arity (adicity, cardinality) of a relation is just the number of relands or things related, while its order depends on whether its relands are individual elements, relations of elements, relations of relations of elements, or so on. Similarly, a property (attribute, predicate) is distinguished by whether it is attributed to individual elements, properties of elements, properties of properties of elements, or et cetera.
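Compactly, and only by way of illustration: in the notation below, R has arity 2 because it relates two relands, P is first-order because it is attributed to an individual element, and Q is second-order because it is attributed to a property rather than to an element.

```latex
R(x, y), \qquad P(x), \qquad Q(P)
```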
Some manifolds come with special provisions for motion and causality, e.g. metrics defining the notion of distance, derivatives defining the notion of movement, and affine connections permitting the parallel transport of vectors through space and thereby supporting the concept of fields.
The bottom-up thesis is insidious in the way it carries the apparent randomness of experimental distributions of mutation events upward from low-order to high-order relationships, all the way to the phenotypic and social realms. This is what encourages many neo-Darwinists (and those whom they influence) to view mankind, and life in general, as “random” and “purposeless”.
Within a given set of constraints, many possible future states of a physical system may be causally compatible with a single present state, and many alternative present states may be causally compatible with a single future state. Thus, higher-order and lower-order causal relationships describing the same system need not uniquely determine each other by top-down and bottom-up causation respectively. The physical situation is suggestive of formal model-theoretic ambiguity as captured by (e.g.) the Duhem–Quine thesis, according to which a given set of observations may be consistent with multiple theories of causation, and a single Laplacian snapshot can result in many possible predictions or retrodictions depending on the causal influences that are physically active or theoretically presumed to be active. Dual-aspect monism ultimately transforms model-theoretic ambiguity into causal freedom, revealing nature as its own creative theoretician and physical modeler and thereby effecting causal closure.
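A toy example of the many-to-one compatibility described here (my illustration, not the author's): under the non-invertible rule below, two distinct present states evolve to the same future state, so the future state alone cannot determine which present state produced it.

```latex
x_{t+1} = x_t^{\,2}: \qquad 2 \mapsto 4, \qquad -2 \mapsto 4
```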
These extensions are to some extent mutually incompatible. In order to reconcile the outstanding conflicts and conceptual dissonances between General Relativity and quantum mechanics, yet another metatheoretic extension is now required.
Patton & Wheeler (1975) use the term pregeometry in reference to “… something deeper than geometry, that underlies both geometry and particles … no perspective seems more promising than the view that it must provide the Universe with a way to come into being.” The SCSPL extension of physical reality fits this description.
Where laws of nature incorporate not only observables but the abstractions relating them, bringing physical states and natural laws into coincidence reduces the set of physical (observable) properties to a subset of the set of abstract properties. Thus, the abstract is recognized as a natural generalization of the concrete.
Telic recursion is a quantum metaprocess based on a generalized form of recursion maximizing intrinsic utility over entire (pregeometric) regions of spacetime through telic feedback under the guidance of coherent metacausal invariants called telons.
SCSPL is developed by adjoining to (propositional and predicate) logic a limiting form of model theory from which it acquires certain necessary high-level properties of any possible valid theory of reality at large. Thus, its syntactic and semantic validity can be logically established. By its method of construction, SCSPL is classified as a metaphysical tautology or supertautology.
Inflationary cosmology, membrane theory and various other theories have been assumed to require extensions external to physical reality. In contrast, SCSPL conspansive duality permits the extension mandated by SCSPL, as well as all other valid extensions of physical reality, to be physically internalized in a certain specific sense relating to conspansive duality.
See TELEOLOGIC EVOLUTION for a description of Teleologic Evolution.
Human psycho-intellectual, sociopolitical and technological modes of evolution may also be distinguished on various levels of aggregation.
In the CTMU, instances of irreducible and specified complexity are metacausally generalized to dynamic syntax-state relationships called telons which self-actualize by telic recursion.