God and Science

III. Is the World Necessary?

The Fine-tuned Universe and the Emergence of Life

Table of Contents

I. Introductory Notes

II. Is God Necessary?

III. Is the World Necessary?

IV. Theodicy: The Problem of Evil

V. Miracles, Wonders, Signs: God's Interactions with the World

VI. Appendix: Scientific Theories and Science's Life Cycles

By: Dr. Sam Vaknin


"The more I examine the universe, and the details of its architecture, the more evidence I find that the Universe in some sense must have known we were coming." — Freeman Dyson

"A bottom-up approach to cosmology either requires one to postulate an initial state of the Universe that is carefully fine-tuned — as if prescribed by an outside agency — or it requires one to invoke the notion of eternal inflation, a mighty speculative notion to the generation of many different Universes, which prevents one from predicting what a typical observer would see." — Stephen Hawking

"A commonsense interpretation of the facts suggests that a super-intellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question." - Fred Hoyle

(Taken from the BioLogos Website)

I. The Fine-tuned Universe and the Anthropic Principle

The Universe we live in (possibly one of many that make up the Multiverse) is "fine-tuned" to allow for our existence. Its initial conditions and constants are such that their values are calibrated to yield Life as we know it (by aiding and abetting the appearance, structure, and diversity of matter). Had these initial conditions and/or constants deviated from their current levels, even infinitesimally, we would not have been here. Any theory of the Universe has to account for the existence of sapient and sentient observers. This is known as the "Anthropic Principle".

These incredible facts immediately raise two questions:

(i) Is such outstanding compatibility a coincidence? Are we here to observe it by mere chance?

(ii) If not a coincidence, is this intricate calibration an indication of (if not an outright proof for) the existence of a Creator or a Designer, aka God?

It is useful to disentangle two seemingly inextricable issues: the fact that the Universe allows for Life (a highly improbable event) and the fact that we are here to notice it (trivial, given the first fact). Once the parameters of the Universe had been "decided" and "set", Life became inevitable.

But who, or what, set the parameters of the Universe?

If our Universe is one of many, random chance could account for its initial conditions and constants. In such a cosmos, our particular Universe, with its unique parameters, encourages life while an infinity of other worlds, with other initial states and other constants of nature, do not. Modern physics - from certain interpretations of quantum mechanics to string theories - now seriously entertains the notion of a Multiverse (if not yet its exact contours and nature): a plurality of minimally-interacting universes being spawned repeatedly.

Yet, it is important to understand that even in a Multiverse with an infinite number of worlds, there is no "guarantee" or necessity that a world such as ours will have arisen. There can exist an infinite set of worlds in which there is no equivalent to our type of world and in which Life will not appear.

As the philosopher of science Jesús Mosterín put it:

“The suggestion that an infinity of objects characterized by certain numbers or properties implies the existence among them of objects with any combination of those numbers or characteristics [...] is mistaken. An infinity does not imply at all that any arrangement is present or repeated. [...] The assumption that all possible worlds are realized in an infinite universe is equivalent to the assertion that any infinite set of numbers contains all numbers (or at least all Gödel numbers of the [defining] sequences), which is obviously false.”

But, rather than weaken the Anthropic Principle, as Mosterín claims, this criticism strengthens it. If even the existence of a Multiverse cannot lead inexorably to the emergence of a world such as ours, its formation appears to be even more miraculous and "unnatural" (in short: designed).
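Mosterín's logical point is easy to verify with a trivial counterexample (my illustration, not his): the even numbers form an infinite set, yet no odd number ever appears among them. An infinite ensemble of objects does not guarantee every kind of object, just as an infinite ensemble of universes does not guarantee one with our parameters.

```python
# An infinite supply of objects does not imply every kind of object.
# The even numbers go on forever, yet no odd number appears among them.
from itertools import islice

def evens():
    """Generate the even numbers 0, 2, 4, ... indefinitely."""
    n = 0
    while True:
        yield n
        n += 2

# Inspect as many members as we like: every one is even, and 3 never shows up.
first_million = list(islice(evens(), 1_000_000))
assert all(n % 2 == 0 for n in first_million)
assert 3 not in first_million
```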

Still, the classic - and prevailing - view allows for only one, all-encompassing Universe. How did it turn out to be so accommodating? Is it the outcome of random action? Is Life a happy accident involving the confluence of hundreds of just-right quantities, constants, and conditions?

As a matter of principle, can we derive all these numbers from a Theory of Everything? In other words: are these values the inevitable outcomes of the inherent nature of the world? But, if so, why does the world possess an inherent nature that gives rise inevitably to this specific initial state and these constants, and not to others more inimical to Life?

To say that we (as Life-forms) can observe only a universe that is compatible with and yielding Life is begging the question (or a truism). Such a flippant and content-free response is best avoided. Paul Davies calls this approach ("the Universe is the way it is and that's it"): "The Absurd Universe" (in his book "The Goldilocks Enigma", 2006).

In all these deliberations, there are four implicit assumptions we had better make explicit:

(i) That Life - and, more specifically: Intelligent Life, or Observers - is somehow not an integral part of the Universe. Yielded by natural processes, it then stands aside and observes its surroundings;

(ii) That Life is the culmination of Nature, simply because it is the last to have appeared (an example of the logical fallacy known as "post hoc, ergo propter hoc"). This temporal asymmetry also implies an Intelligent Designer or Creator in the throes of implementing a master plan;

(iii) That the Universe would not have existed had it not been for the existence of Life (or of observers). This is known as the Participatory Anthropic Principle and is consistent with some interpretations of Quantum Mechanics;

(iv) That Life will materialize and spring forth in each and every Universe that is compatible with Life. The strong version of this assumption is that "there is an underlying principle that constrains the universe to evolve towards life and mind." The Universe is partial to life, not indifferent to it.

All four are forms of teleological reasoning (that nature has a purpose) masquerading as eutaxiological reasoning (that order has a cause). To say that the Universe was made the way it is in order to accommodate Life is teleological. Science is opposed to teleological arguments. Therefore, to say that the Universe was made the way it is in order to accommodate Life is not a scientific statement.

But, could it be a valid and factual statement? To answer this question, we need to delve further into the nature of teleology.

II. System-wide Teleological Arguments

A teleological explanation is one that explains things and features by relating to their contribution to optimal situations, or to a normal mode of functioning, or to the attainment of goals by a whole or by a system to which the said things or features belong. It often involves the confusion or reversal of causes and effects and the existence of some "intelligence" at work (either self-aware or not).

Socrates tried to understand things in terms of what good they do or bring about. Yet, there are many cases when the contribution of a thing towards a desired result does not account for its occurrence. Snow does not fall IN ORDER to allow people to ski, for instance.

But it is different when we invoke an intelligent creator. It can be convincingly shown that intelligent creators (human beings, for instance) design and maintain the features of an object in order to allow it to achieve an aim. In such a case, the very occurrence, the very existence of the object is explained by grasping its contribution to the attainment of its function.

An intelligent agent (creator) need not necessarily be a single, sharply bounded, entity. A more fuzzy collective may qualify as long as its behaviour patterns are cohesive and identifiably goal oriented. Thus, teleological explanations could well be applied to organisms (collections of cells), communities, nations and other ensembles.

To justify a teleological explanation, one needs to analyze the function of the item to be thus explained, on the one hand and to provide an etiological account, on the other hand. The functional account must strive to elucidate what the item contributes to the main activity of the system, the object, or the organism, a part of which it constitutes, or to their proper functioning, well-being, preservation, propagation, integration (within larger systems), explanation, justification, or prediction.

The reverse should also be possible. Given information regarding the functioning, integration, etc. of the whole, the function of any element within it should be derivable from its contribution to the functioning whole. Though the practical ascription of goals (and functions) is problematic, it is, in principle, doable.

But it is not sufficient. That something is both functional and necessarily so does not yet explain HOW it happened to have so suitably and conveniently materialized. This is where the etiological account comes in. A good etiological account explains both the mechanisms through which the article (to be explained) has transpired and what aspects of the structure of the world it was able to take advantage of in its preservation, propagation, or functioning.

The most famous and obvious example is evolution. The etiological account of natural selection deals both with the mechanisms of genetic transfer and with the mechanisms of selection. The latter bestow upon the organism whose features we seek to explain a better chance at reproducing (a higher chance than the one possessed by specimens without the feature).

Hitherto, we have confined ourselves to items, parts, elements, and objects within a system. The system provides the context within which goals make sense and etiological accounts are possible. What happens when we try to apply the same teleological reasoning to the system as a whole, to the Universe itself? In the absence of a context, will such cerebrations not break down?

Theists will avoid this conundrum by positing God as the context in which the Universe operates. But this is unprecedented and logically weak: the designer-creator can hardly also serve as the context within which his creation operates. Creators create and designers design because they need to achieve something; because they miss something; and because they want something. Their creation is intended (its goal is) to satisfy said need and remedy said want. Yet, if one is one's own context, if one contains oneself, one surely cannot miss, need, or want anything whatsoever!

III. The Issue of Context

If the Universe does have an intelligent Creator-Designer, He must have used language to formulate His design. His language must have consisted of the Laws of Nature, the Initial State of the Universe, and its Constants. To have used language, the Creator-Designer must have been possessed of a mind. The combination of His mind and His language has served as the context within which He operated.

The debate between science and religion boils down to this question: Did the Laws of Nature (the language of God) precede Nature or were they created with it, in the Big Bang? In other words, did they provide Nature with the context in which it unfolded?

Some, like Max Tegmark, an MIT cosmologist, go as far as to say that mathematics is not merely the language which we use to describe the Universe - it is the Universe itself. The world is an amalgam of mathematical structures, according to him. The context is the meaning is the context ad infinitum.

By now, it is a trite observation that meaning is context-dependent and, therefore, not invariant or immutable. Contextualists in aesthetics study a work of art's historical and cultural background in order to appreciate it. Philosophers of science have convincingly demonstrated that theoretical constructs (such as the electron or dark matter) derive their meaning from their place in complex deductive systems of empirically-testable theorems. Ethicists repeat that values are rendered instrumental and moral problems solvable by their relationships with a-priori moral principles. In all these cases, context precedes meaning and gives interactive birth to it.

However, the reverse is also true: context emerges from meaning and is preceded by it. This is evident in a surprising array of fields: from language to social norms, from semiotics to computer programming, and from logic to animal behavior.

In 1700, the English empiricist philosopher John Locke was the first to describe how meaning is derived from context, in a chapter titled "Of the Association of Ideas" added to the fourth edition of his seminal "Essay Concerning Human Understanding". Almost a century later, the philosopher James Mill and his son, John Stuart Mill, came up with a calculus of contexts: mental elements that are habitually proximate, either spatially or temporally, become associated (contiguity law), as do ideas that co-occur frequently (frequency law), or that are similar (similarity law).

But the Mills failed to realize that their laws relied heavily on and derived from two organizing principles: time and space. These meta-principles lend meaning to ideas by rendering their associations comprehensible. Thus, the contiguity and frequency laws leverage meaningful spatial and temporal relations to form the context within which ideas associate. Context effects and the Gestalt and other vision-grouping laws, promulgated in the 20th century by the likes of Max Wertheimer, Irvin Rock, and Stephen Palmer, also rely on the pre-existence of space for their operation.

Contexts can have empirical or exegetic properties. In other words: they can act as webs or matrices and merely associate discrete elements; or they can provide an interpretation to these recurrent associations, they can render them meaningful. The principle of causation is an example of such interpretative faculties in action: A is invariably followed by B and a mechanism or process C can be demonstrated that links them both. Thereafter, it is safe to say that A causes B. Space-time provides the backdrop of meaning to the context (the recurrent association of A and B) which, in turn, gives rise to more meaning (causation).

But are space and time "real", objective entities - or are they instruments of the mind, mere conventions, tools it uses to order the world? Surely the latter. It is possible to construct theories to describe the world and yield falsifiable predictions without using space or time or by using counterintuitive and even "counterfactual" variants of space and time.

Another Scottish philosopher, Alexander Bain, observed, in the 19th century, that ideas also form close associations with behaviors and actions. This insight is the basis for most modern learning and conditioning (behaviorist) theories and for connectionism (the design of neural networks where knowledge items are represented by patterns of activated ensembles of units).

Similarly, memory has been proven to be state-dependent: information learnt in specific mental, physical, or emotional states is most easily recalled in similar states. Conversely, in a process known as redintegration, mental and emotional states are completely invoked and restored when only a single element is encountered and experienced (a smell, a taste, a sight).

It seems that the occult organizing mega-principle is the mind (or "self"). Ideas, concepts, behaviors, actions, memories, and patterns presuppose the existence of minds that render them meaningful. Again, meaning (the mind or the self) breeds context, not the other way around. This does not negate the views expounded by externalist theories: that thoughts and utterances depend on factors external to the mind of the thinker or speaker (factors such as the way language is used by experts or by society). Even avowed externalists, such as Kripke, Burge, and Davidson admit that the perception of objects and events (by an observing mind) is a prerequisite for thinking about or discussing them. Again, the mind takes precedence.

But what is meaning and why is it thought to be determined by or dependent on context?

Many theories of meaning are contextualist and proffer rules that connect sentence type and context of use to referents of singular terms (such as egocentric particulars), truth-values of sentences and the force of utterances and other linguistic acts. Meaning, in other words, is regarded by most theorists as inextricably intertwined with language. Language is always context-determined: words depend on other words and on the world to which they refer and relate. Inevitably, meaning came to be described as context-dependent, too. The study of meaning was reduced to an exercise in semantics. Few noticed that the context in which words operate depends on the individual meanings of these words.

Gottlob Frege coined the term Bedeutung (reference) to describe the mapping of words, predicates, and sentences onto real-world objects, concepts (or functions, in the mathematical sense), and truth-values, respectively. The truthfulness or falsehood of a sentence is determined by the interactions and relationships between the references of the various components of the sentence. Meaning relies on the overall values of the references involved and on something that Frege called Sinn (sense): the way or "mode" an object or concept is referred to by an expression. The senses of the parts of the sentence combine to form the "thoughts" (senses of whole sentences).
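Frege's scheme can be sketched with his famous example (a toy illustration, not Frege's own formalism): "the morning star" and "the evening star" have different senses, yet the same reference, the planet Venus. That is why the identity statement between them is both true and informative.

```python
# Toy sketch of Frege's Sinn (sense) vs. Bedeutung (reference):
# two expressions with distinct senses can pick out the same object.

reference = {
    "the morning star": "Venus",   # mode of presentation: brightest at dawn
    "the evening star": "Venus",   # mode of presentation: brightest at dusk
}

# Sameness of reference makes "the morning star is the evening star" true;
# the difference in sense is what makes the statement informative.
sentence_true = reference["the morning star"] == reference["the evening star"]
assert sentence_true
assert "the morning star" != "the evening star"  # the senses remain distinct
```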

Yet, this is an incomplete and mechanical picture that fails to capture the essence of human communication. It is meaning (the mind of the person composing the sentence) that breeds context and not the other way around. Even J. S. Mill postulated that a term's connotation (its meaning and attributes) determines its denotation (the objects or concepts it applies to, the term's universe of applicability).

As the Oxford Companion to Philosophy puts it (p. 411):

"A context of a form of words is intensional if its truth is dependent on the meaning, and not just the reference, of its component words, or on the meanings, and not just the truth-value, of any of its sub-clauses."

It is the thinker, or the speaker (the user of the expression) that does the referring, not the expression itself!

Moreover, as Kaplan and Kripke have noted, in many cases, Frege's contraption of "sense" is, well, senseless and utterly unnecessary: demonstratives, proper names, and natural-kind terms, for example, refer directly, through the agency of the speaker. Frege intentionally avoided the vexing question of why and how words refer to objects and concepts because he was wary of the intuitive answer, later alluded to by H. P. Grice, that users (minds) determine these linkages and their corresponding truth-values. Speakers use language to manipulate their listeners into believing in the manifest intentions behind their utterances. Cognitive, emotive, and descriptive meanings all emanate from speakers and their minds.

Initially, W. V. Quine put context before meaning: he not only linked meaning to experience, but also to empirically-vetted (non-introspective) world-theories. It is the context of the observed behaviors of speakers and listeners that determines what words mean, he said. Thus, Quine and others attacked Carnap's meaning postulates (logical connections as postulates governing predicates) by demonstrating that they are not necessary unless one possesses a separate account of the status of logic (i.e., the context).

Yet, this context-driven approach led to so many problems that soon Quine abandoned it and relented: translation - he conceded in his seminal tome, "Word and Object" - is indeterminate and reference is inscrutable. There are no facts when it comes to what words and sentences mean. What subjects say has no single meaning or determinately correct interpretation (when the various interpretations on offer are not equivalent and do not share the same truth value).

As the Oxford Dictionary of Philosophy summarily puts it (p. 194):

"Inscrutability (Quine later called it indeterminacy - SV) of reference (is) (t)he doctrine ... that no empirical evidence relevant to interpreting a speaker's utterances can decide among alternative and incompatible ways of assigning referents to the words used; hence there is no fact that the words have one reference or another" - even if all the interpretations are equivalent (have the same truth value).

Meaning comes before context and is not determined by it. Wittgenstein, in his later work, concurred.

Inevitably, such a solipsistic view of meaning led to an attempt to introduce a more rigorous calculus, based on the concept of truth rather than on the more nebulous construct of "meaning". Both Donald Davidson and Alfred Tarski suggested that truth exists where sequences of objects satisfy parts of sentences. The meanings of sentences are their truth-conditions: the conditions under which they are true.

But, this reversion to a meaning (truth) determined by context results in bizarre outcomes, bordering on tautologies: (1) every sentence has to be paired with another sentence (or even with itself!) which endows it with meaning, and (2) every part of every sentence has to make a systematic semantic contribution to the sentences in which it occurs.

Thus, to determine if a sentence is truthful (i.e., meaningful) one has to find another sentence that gives it meaning. Yet, how do we know that the sentence that gives it meaning is, in itself, truthful? This kind of ratiocination leads to infinite regression. And how do we measure the contribution of each part of the sentence to the sentence if we don't know the a-priori meaning of the sentence itself?! Finally, what is this "contribution" if not another name for... meaning?!

Moreover, in generating a truth-theory based on the specific utterances of a particular speaker, one must assume that the speaker is telling the truth ("the principle of charity"). Thus, belief, language, and meaning appear to be facets of a single phenomenon. One cannot have any of these three without the others. It, indeed, is all in the mind.

We are back to the minds of the interlocutors as the source of both context and meaning. The mind as a field of potential meanings gives rise to the various contexts in which sentences can and are proven true (i.e., meaningful). Again, meaning precedes context and, in turn, fosters it. Proponents of Epistemic or Attributor Contextualism link the propositions expressed even in knowledge sentences (X knows or doesn't know that Y) to the attributor's psychology (in this case, as the context that endows them with meaning and truth value).

On the one hand, to derive meaning in our lives, we frequently resort to social or cosmological contexts: to entities larger than ourselves and in which we can safely feel subsumed, such as God, the state, or our Earth. Religious people believe that God has a plan into which they fit and in which they are destined to play a role; nationalists believe in the permanence that nations and states afford their own transient projects and ideas (they equate permanence with worth, truth, and meaning); environmentalists implicitly regard survival as the fount of meaning that is explicitly dependent on the preservation of a diversified and functioning ecosystem (the context).

Robert Nozick posited that finite beings ("conditions") derive meaning from "larger" meaningful beings (conditions) and so ad infinitum. The buck stops with an infinite and all-encompassing being who is the source of all meaning (God).

On the other hand, Sidgwick and other philosophers pointed out that only conscious beings can appreciate life and its rewards and that, therefore, the mind (consciousness) is the ultimate fount of all values and meaning: minds make value judgments and then proceed to regard certain situations and achievements as desirable, valuable, and meaningful. Of course, this presupposes that happiness is somehow intimately connected with rendering one's life meaningful.

So, which is the ultimate contextual fount of meaning: the subject's mind or his/her (mainly social) environment?

This apparent dichotomy is false. As Richard Rorty and David Annis noted, one can't safely divorce epistemic processes, such as justification, from the social contexts in which they take place. As Sosa, Harman, and, later, John Pollock and Michael Williams remarked, social expectations determine not only the standards of what constitutes knowledge but also what it is that we know (the contents). The mind is a social construct as much as a neurological or psychological one.

To derive meaning from utterances, we need to have asymptotically perfect information about both the subject discussed and the knowledge attributor's psychology and social milieu. This is because the attributor's choice of language and ensuing justification are rooted in and responsive to both his psychology and his environment (including his personal history).

Thomas Nagel suggested that we perceive the world from a series of concentric expanding perspectives (which he divides into internal and external). The ultimate point of view is that of the Universe itself (as Sidgwick put it). Some people find it intimidating - others, exhilarating. Here, too, context, mediated by the mind, determines meaning.

To revert to our original and main theme:

Based on the discussion above, it would seem that a Creator-Designer (God) needs to have had a mind and needs to have used language in order to generate the context within which he had created. In the absence of a mind and a language, His creation would have been meaningless and, among other things, it would have lacked a clear aim or goal.

IV. Goals and Goal-orientation as Proof of Design

Throughout this discourse, it would seem that postulating the existence of a goal necessarily implies the prior forming of an intention (to realize it). A lack of intent leaves only one plausible course of action: automatism. Any action taken in the absence of a manifest intention to act is, by definition, an automatic action.

The converse is also true: automatism prescribes the existence of a sole possible mode of action, a sole possible Nature. With an automatic action, no choice is available, there are no degrees of freedom, or freedom of action. Automatic actions are, ipso facto, deterministic.

But both statements may be false. The distinction between volitional and automatic actions is not clear-cut. Surely we can conceive of a goal-oriented act behind which there is no intent of the first or second order. An intent of the second order is, for example, the intentions of the programmer as enshrined and expressed in a software application. An intent of the first order would be the intentions of the same programmer which directly lead to the composition of said software.

Consider, for instance, house pets. They engage in a variety of acts. They are goal-oriented (seek food, drink, etc.). Are they possessed of a conscious, directional volition (intent)? Many philosophers argued against such a supposition. Moreover, sometimes end-results and by-products are mistaken for goals. Is the goal of objects to fall down? Gravity is a function of the structure of space-time. When we roll a ball down a slope (which is really what gravitation is all about, according to the General Theory of Relativity) is its "goal" to come to a rest at the bottom? Evidently not.

Still, some natural processes are much less clear-cut. Natural processes are considered to be witless reactions. No intent can be attributed to them because no intelligence can be ascribed to them. This is true, but only at times.

Intelligence is hard to define. The most comprehensive approach would be to describe it as the synergetic sum of a host of processes (some conscious or mental, some not). These processes are concerned with information: its gathering, its accumulation, classification, inter-relation, association, analysis, synthesis, integration, and all other modes of processing and manipulation.

But isn't the manipulation of information what natural processes are all about? And if Nature is the sum total of all natural processes, aren't we forced to admit that Nature is (intrinsically, inherently, of itself) intelligent? The intuitive reaction to these suggestions is bound to be negative.

When we use the term "intelligence", we seem not to be concerned with just any kind of intelligence, but with intelligence that is separate from and external to what is being observed and has to be explained. If both the intelligence and the item that needs explaining are members of the same set, we tend to disregard the intelligence involved and label it as "natural" and, therefore, irrelevant.

Moreover, not everything that is created by an intelligence (however "relevant", or external) is intelligent in itself. Some products of intelligent beings are automatic and non-intelligent. On the other hand, as any Artificial Intelligence buff would confirm, automata can become intelligent, having crossed a certain quantitative or qualitative level of complexity. The weaker form of this statement is that, beyond a certain quantitative or qualitative level of complexity, it is impossible to tell the automatic from the intelligent. Is Nature automatic, is it intelligent, or on the seam between automata and intelligence?

Nature contains everything and, therefore, contains multiple intelligences. That which contains intelligence is not necessarily intelligent, unless the intelligences contained are functional determinants of the container. Quantum mechanics (rather, its Copenhagen interpretation) implies that this, precisely, is the case. Intelligent, conscious, observers determine the very existence of subatomic particles, the constituents of all matter-energy. Human (intelligent) activity determines the shape, contents and functioning of the habitat Earth. If other intelligent races populate the universe, this could be the rule, rather than the exception. Nature may, indeed, be intelligent.

Jewish mysticism believes that humans have a major role to play: to fix the results of a cosmic catastrophe, the shattering of the divine vessels through which the infinite divine light poured forth to create our finite world. If Nature is determined to a predominant extent by its contained intelligences, then it may well be teleological.

Indeed, goal-orientated behaviour (or behavior that could be explained as goal-orientated) is Nature's hallmark. The question whether automatic or intelligent mechanisms are at work really deals with an underlying issue, that of consciousness. Are these mechanisms self-aware, introspective? Is intelligence possible without such self-awareness, without the internalized understanding of what it is doing?

Kant's third and fourth dynamic antinomies deal with this apparent duality: automatism versus intelligent acts.

The third thesis relates to causation which is the result of free will as opposed to causation which is the result of the laws of nature (nomic causation). The antithesis is that freedom is an illusion and everything is pre-determined. So, the third antinomy is really about intelligence that is intrinsic to Nature (deterministic) versus intelligence that is extrinsic to it (free will).

The fourth thesis deals with a related subject: God, the ultimate intelligent creator. It states that there must exist, either as part of the world or as its cause, a Necessary Being. There are compelling arguments to support both the theses and the antitheses of the antinomies.

The opposition in the antinomies is not analytic (no contradiction is involved) - it is dialectic. A method is chosen for answering a certain type of question. That method generates another question of the same type. "The unconditioned", the final answer that logic demands, is thus never found, and this endows the antinomy with its disturbing power. Both thesis and antithesis seem true.

Perhaps it is the fact that we are constrained by experience that entangles us in these intractable questions. The fact that the causation involved in free action is beyond possible experience does not mean that the idea of such a causality is meaningless.

Experience is not the best guide in other respects, as well. One effect can be produced by many causes, and one cause can lead to many effects. Analytic tools - rather than experiential ones - are called for to expose the "true" causal relations (one cause-one effect).

Experience also involves mnemic causation rather than the conventional kind. In the former, the proximate cause is composed not only of a current event but also of a past event. Richard Semon said that mnemic phenomena (such as memory) entail the postulation of engrams or intervening traces. The past cannot have a direct effect without such mediation.

Russell rejected this and did not refrain from proposing what effectively amounted to action at a distance involving backward causation. A confession, for instance, is perceived by many to annul past sins. This is Aristotelian teleological causation: a goal generates a behaviour, and a product of Nature develops as the cause of the very process which ends in it (the tulip and its bulb).

Finally, the distinction between reasons and causes is not sufficiently developed to really tell apart teleological from scientific explanations. Both are relations between phenomena ordered in such a way that other parts of the world are affected by them. If those affected parts of the world are conscious beings (not necessarily rational or free), then we have "reasons" rather than "causes".

But are reasons causal? At least, are they concerned with the causes of what is being explained? There is a myriad of answers to these questions. Even the phrase: "Are reasons causes?" may be considered to be a misleading choice of words. Mental causation is a foggy subject, to put it mildly.

Perhaps the only safe thing to say would be that causes and goals need not be confused. One is objective (and, in most cases, material), the other mental. A person can act in order to achieve some future thing but it is not a future cause that generates his actions as an effect. The immediate causes absolutely precede them. It is the past that he is influenced by, a past in which he formed a VISION of the future.

The contents of mental imagery are not subject to the laws of physics and to the asymmetry of time. The physical world and its temporal causal order are. The argument between teleologists and scientists may, all said and done, be merely semantic. Where one claims an ontological, REAL status for mental states (reasons) - one is a teleologist. Where one denies this and regards the mental as UNREAL, one is a scientist.

But, regardless of what type of arguments we adopt, physical (scientific) or metaphysical (e.g. teleological), do we need a Creator-Designer to explain the existence of the Universe? Is it parsimonious to introduce such a Supreme and Necessary Being into the calculus of the world?

V. Parsimonious Considerations regarding the Existence of God

Occasionalism is a variation upon Cartesian metaphysics. The latter is the most notorious case of dualism (mind and body, for instance). The mind is a "mental substance". The body – a "material substance". What permits the complex interactions which happen between these two disparate "substances"? The "unextended mind" and the "extended body" surely cannot interact without a mediating agency, God. The appearance is that of direct interaction but this is an illusion maintained by Him. He moves the body when the mind is willing and places ideas in the mind when the body comes across other bodies.

Descartes postulated that the mind is an active, unextended, thought while the body is a passive, unthinking extension. The First Substance and the Second Substance combine to form the Third Substance, Man. God – the Fourth, uncreated Substance – facilitates the direct interaction between the two within the third.

Foucher raised the question: how can God – a mental substance – interact with a material substance, the body? The answer offered was that God created the body (probably so that He would be able to interact with it). Leibniz carried this further: his Monads, the units of reality, do not really react and interact. They just seem to be doing so because God created them with a pre-established harmony. The constant divine mediation was, thus, reduced to a one-time act of creation. This was considered to be both a logical result of occasionalism and its refutation by a reductio ad absurdum argument.

But was the fourth substance necessary at all? Could not an explanation of all the known facts be provided without it? The ratio between the number of known facts (the outcomes of observations) and the number of theory elements and entities employed in order to explain them is the parsimony ratio. Every newly discovered fact either reinforces the existing worldview or forces the introduction of a new one, through a "crisis" or a "revolution" (a "paradigm shift", in a phrase Kuhn himself later abandoned).

The new worldview need not necessarily be more parsimonious. It could be that a single new fact precipitates the introduction of a dozen new theoretical entities, axioms and functions (curves between data points). The very delineation of the field of study serves to limit the number of facts which could exercise such an influence upon the existing worldview and still be considered pertinent. Parsimony is achieved, therefore, also by fixing the boundaries of the intellectual arena and/or by declaring quantitative or qualitative limits of relevance and negligibility. The world is thus simplified through idealization. Yet, if this is carried too far, the whole edifice collapses. A fine balance must be maintained between the relevant and the irrelevant, what matters and what can be neglected, the comprehensiveness of the explanation and the partiality of the pre-defined limitations on the field of research.

This does not address the more basic issue of why we prefer simplicity to complexity. This preference runs through history: Aristotle, William of Ockham, Newton, Pascal – all praised parsimony and embraced it as a guiding principle of scientific work. Biologically and spiritually, we are inclined to prefer things needed to things not needed. Moreover, we prefer things needed to admixtures of things needed and not needed. This is so because things needed encourage survival and enhance its chances. Survival is also assisted by the construction of economical theories. We all engage in theory building as a mundane routine. "A tiger beheld means danger" is one such theory. Theories which incorporate fewer assumptions are quicker to process and enhance the chances of survival. In the aforementioned feline example, the virtue of the theory and its efficacy lie in its simplicity (one observation, one prediction). Had the theory been less parsimonious, it would have taken longer to process, rendering the prediction useless. The tiger would have prevailed.

Thus, humans are Parsimony Machines (Ockham Machines): they select the shortest (and, thereby, most efficient) path to the production of true theorems, given a set of facts (observations) and a set of theories. Another way to describe the activity of Ockham Machines: they produce the maximal number of true theorems in any given period of time, given a set of facts and a set of theories.

Poincaré, the French mathematician and philosopher, thought that Nature itself, this metaphysical entity which encompasses all, is parsimonious. He believed that mathematical simplicity must be a sign of truth. A simple Nature would, indeed, appear this way (mathematically simple) despite the filters of theory and language. The "sufficient reason" (why the world exists rather than not) should then be transformed to read: "because it is the simplest of all possible worlds". That is to say: the world exists, and THIS world exists (rather than another), because it is the most parsimonious – not the best, as Leibniz put it – of all possible worlds.

Parsimony is a necessary (though not sufficient) condition for a theory to be labeled "scientific". But a scientific theory is neither a necessary nor a sufficient condition to parsimony. In other words: parsimony is possible within and can be applied to a non-scientific framework and parsimony cannot be guaranteed by the fact that a theory is scientific (it could be scientific and not parsimonious). Parsimony is an extra-theoretical tool. Theories are under-determined by data. An infinite number of theories fits any finite number of data. This happens because of the gap between the infinite number of cases dealt with by the theory (the application set) and the finiteness of the data set, which is a subset of the application set. Parsimony is a rule of thumb. It allows us to concentrate our efforts on those theories most likely to succeed. Ultimately, it allows us to select THE theory that will constitute the prevailing worldview, until it is upset by new data.

Another question arises which was not hitherto addressed: how do we know that we are implementing some mode of parsimony? In other words, which are the FORMAL requirements of parsimony?

The following conditions must be satisfied by any law or method of selection before it can be labeled "parsimonious":

  (a) Exploration of a higher level of causality: the law must lead to a level of causality which will include the previous one and other, hitherto apparently unrelated, phenomena. It must lead to a cause, a reason, which will account for the set of data previously accounted for by another cause or reason AND for additional data. William of Ockham was, after all, a Franciscan monk, constantly in search of a Prima Causa.
  (b) The law should either lead to, or be part of, an integrative process. This means that as previous theories or models are rigorously and correctly combined, certain entities or theory elements should be made redundant. Only those which we cannot dispense with should be left incorporated in the new worldview.
  (c) The outcomes of any law of parsimony should be successfully subjected to scientific tests. These results should correspond with observations and with predictions yielded by the worldviews fostered by the law of parsimony under scrutiny.
  (d) Laws of parsimony should be semantically correct. Their continuous application should bring about an evolution (or a punctuated evolution) of the very language used to convey the worldview, or at least of important language elements. The phrasing of the questions to be answered by the worldview should be influenced as well. In extreme cases, a whole new language has to emerge, elaborated and formulated in accordance with the law of parsimony. But, in most cases, there is just a replacement of a weaker language with a more powerful meta-language. Einstein's Special Theory of Relativity and Newtonian dynamics are a prime example of such an orderly lingual transition, which was the direct result of the courageous application of a law of parsimony.
  (e) Laws of parsimony should be totally subject to (actually, subsumed under) the laws of Logic and the laws of Nature. They must not lead to, or entail, a contradiction, for instance, or a tautology. In physics, they must adhere to laws of causality or correlation and refrain from teleology.
  (f) Laws of parsimony must accommodate paradoxes. Paradox Accommodation means that theories, theory elements, the language, a whole worldview will have to be adapted to avoid paradoxes. The goals of a theory or its domain, for instance, could be minimized to avoid paradoxes. But the mechanism of adaptation is complemented by a mechanism of adoption. A law of parsimony could lead to the inevitable adoption of a paradox. Both horns of a dilemma are, then, adopted. This, inevitably, leads to a crisis whose resolution is obtained through the introduction of a new worldview. New assumptions are parsimoniously adopted and the paradox disappears.
  (g) Paradox accommodation is an important hallmark of a true law of parsimony in operation. Paradox Intolerance is another. Laws of parsimony give theories and worldviews a "licence" to ignore paradoxes which lie outside the domain covered by the parsimonious set of data and rules. It is normal to have a conflict between the non-parsimonious sets and the parsimonious one. Paradoxes are the results of these conflicts and the most potent weapons of the non-parsimonious sets. But the law of parsimony, to deserve its name, should tell us clearly and unequivocally when to adopt a paradox and when to exclude it. To be able to achieve this formidable task, every law of parsimony comes equipped with a metaphysical interpretation whose aim it is to plausibly keep nagging paradoxes and questions at a distance. The interpretation puts the results of the formalism in the context of a meaningful universe and provides a sense of direction, causality, order and even "intent". The Copenhagen interpretation of Quantum Mechanics is an important member of this species.
  (h) The law of parsimony must apply both to the theory entities AND to observable results, both part of a coherent, internally and externally consistent, logical (in short: scientific) theory. It is divergent-convergent: it diverges from strict correspondence to reality while theorizing, only to converge with it when testing the predictions yielded by the theory. Quarks may or may not exist – but their effects do, and these effects are observable.
  (i) A law of parsimony has to be invariant under all transformations and permutations of the theory entities. It is almost tempting to say that it should demand symmetry – had this not been merely an aesthetic requirement and often violated.
  (j) The law of parsimony should aspire to a minimization of the number of postulates, axioms, curves between data points, theory entities, etc. This is the principle of the maximization of uncertainty. The more uncertainty introduced by NOT postulating explicitly – the more powerful and rigorous the theory / worldview. A theory with one assumption and one theoretical entity – renders a lot of the world an uncertain place. The uncertainty is expelled by using the theory and its rules and applying them to observational data or to other theoretical constructs and entities. The Grand Unified Theories of physics want to get rid of four disparate forces and to gain one instead.
  (k) A sense of beauty, of aesthetic superiority, of acceptability and of simplicity should be the by-products of the application of a law of parsimony. These sensations have often been cited, by practitioners of science, as influential factors in weighing in favor of a particular theory.
  (l) Laws of parsimony entail the arbitrary selection of facts, observations and experimental results to be related to and included in the parsimonious set. This is the parsimonious selection process and it is closely tied with the concepts of negligibility and with the methodology of idealization and reduction. The process of parsimonious selection is very much like a strategy in a game in which both the number of players and the rules of the game are finite. The entry of a new player (an observation, the result of an experiment) sometimes transforms the game and, at other times, creates a whole new game. All the players are then moved into the new game, positioned there and subjected to its new rules. This, of course, can lead to an infinite regression. To effect a parsimonious selection, a theory must be available whose rules will dictate the selection. But such a theory must also be subordinated to a law of parsimony (which means that it has to parsimoniously select its own facts, etc.). A meta-theory must, therefore, exist, which will inform the lower-level theory how to implement its own parsimonious selection, and so on and so forth, ad infinitum.
  (m) A law of parsimony falsifies everything that does not adhere to its tenets. Superfluous entities are not only unnecessary – they are, in all likelihood, false. Theories which were not subjected to the tests of parsimony are, probably, not only non-rigorous but also positively false.
  (n) A law of parsimony must apply the principle of redundant identity. Two facets, two aspects, two dimensions of the same thing – must be construed as one and devoid of an autonomous standing, not as separate and independent.
  (o) The laws of parsimony are "back determined" and, consequently, enforce "back determination" on all the theories and worldviews to which they apply. For any given data set and set of rules, a number of parsimony sets can be postulated. To decide between them, additional facts are needed. These will be discovered in the future and, thus, the future "back determines" the right parsimony set. Either there is a finite parsimony group from which all the temporary groups are derived – or no such group exists and an infinity of parsimony sets is possible, the results of an infinity of data sets. This, of course, is thinly veiled pluralism. In the former alternative, the number of facts / observations / experiments required in order to determine the right parsimony set is finite. But there is a third possibility: that there is an eternal, single parsimony set and all our current parsimony sets are its asymptotic approximations. This is monism in disguise. Also, there seems to be an inherent (though solely intuitive) conflict between parsimony and infinity.
  (p) A law of parsimony must be seen to be in conflict with the principle of multiplicity of substitutes. This is the result of an empirical and pragmatic observation: the removal of one theory element or entity from a theory precipitates its substitution by two or more theory elements or entities (if the preservation of the theory is sought). It is this principle that is the driving force behind scientific crises and revolutions. Entities do multiply and Ockham's Razor is rarely used until it is too late and the theory has to be replaced in its entirety. This is a psychological and social phenomenon, not an inevitable feature of scientific progress. Worldviews collapse under the mere weight of their substituting, multiplying elements. Ptolemy's cosmology fell prey to the Copernican model not because the latter was more efficient, but because it contained fewer theory elements, axioms, equations. A law of parsimony must warn against such behaviour and restrain it or, finally, provide the ailing theory with a coup de grace.
  (q) A law of parsimony must allow for full convertibility of the phenomenal to the noumenal and of the universal to the particular. Put more simply: no law of parsimony can allow a distinction between our data and the "real" world to be upheld. Nor can it tolerate the postulation of Platonic "Forms" and "Ideas" which are not entirely reflected in the particular.
  (r) A law of parsimony implies necessity. To assume that the world is contingent is to postulate the existence of yet another entity upon which the world is dependent for its existence. It is to theorize on yet another principle of action. Contingency is the source of entity multiplication and goes against the grain of parsimony. Of course, causality should not be confused with contingency. The former is deterministic – the latter the result of some kind of free will.
  (s) The explicit, stated, parsimony, the one formulated, formalized and analyzed, is connected to an implicit, less evident sort and to latent parsimony. Implicit parsimony is the set of rules and assumptions about the world that are known as formal logic. Latent parsimony is the set of rules that allows for a (relatively) smooth transition to be effected between theories and worldviews in times of crisis. Those are the rules of parsimony which govern scientific revolutions. The rule stated in article (a) above is a latent one: that in order for the transition between old theories and new to be valid, it must also be a transition between a lower level of causality – and a higher one.

Efficient, workable, parsimony is either obstructed, or merely not achieved through the following venues of action:

  (a) Association – the formation of networks of ideas, which are linked by way of verbal, intuitive, or structural association, does not lead to more parsimonious results. Naturally, a syntactic, grammatical, structural, or other theoretical rule can be made evident by the results of this technique. But to discern such a rule, the scientist must distance himself from the associative chains, to acquire a bird's eye view, or, on the contrary, to isolate, arbitrarily or not, a part of the chain for closer inspection. Association often leads to profusion and to an embarrassment of riches. The same observations apply to other forms of chaining, flowing and networking.
  (b) Incorporation without integration (that is, without elimination of redundancies) leads to the formation of hybrid theories. These cannot survive long. Incorporation is motivated by conflict between entities, postulates or theory elements. It is through incorporation that the protectors of the "old truth" hope to prevail. It is an interim stage between old and new. The conflict blows up in the perpetrators' face and a new theory is invented. Incorporation is the sworn enemy of parsimony because it is politically motivated. It keeps everyone happy by not giving up anything and accumulating entities. This entity hoarding is poisonous and undoes the whole hyper-structure.
  (c) Contingency – see (r) above.
  (d) Strict monism or pluralism – see (o) above.
  (e) Comprehensiveness prevents parsimony. To obtain a description of the world which complies with a law of parsimony, one has to ignore and neglect many elements, facts and observations. Gödel demonstrated the paradoxes inherent in a comprehensive formal logical system. To fully describe the world, however, one would need an infinite amount of assumptions, axioms, theoretical entities, elements, functions and variables. This is anathema to parsimony.
  (f) The previous point excludes the reconcilement of parsimony and monovalent correspondence. An isomorphic mapping of the world to the worldview, a realistic rendering of the universe using theoretical entities and other language elements, would hardly be expected to be parsimonious. Sticking to facts (without the employ of theory elements) would generate a pluralistic multiplication of entities. Realism is like using a machine language to run a supercomputer. The path of convergence (with the world) – convergence (with predictions yielded by the theory) leads to a proliferation of categories, each one populated by sparse specimens. Species and genera abound. The worldview is marred by too many details, crowded by too many apparently unrelated observations.
  (g) Finally, if the field of research is wrongly – too narrowly – defined, this could be detrimental to the positing of meaningful questions and to the expectation of receiving meaningful replies to them (experimental outcomes). This lands us where we started: the psychophysical problem is, perhaps, too narrowly defined. Dominated by Physics, questions are biased or excluded altogether. Perhaps a Fourth Substance IS the parsimonious answer, after all.

It would seem, therefore, that parsimony should rule out the existence of a Necessary and Supreme Being or Intelligence (God). But is Nature really parsimonious, as Poincare believed? Our World is so complex and includes so many redundancies that it seems to abhor parsimony. Doesn't this ubiquitous complexity indicate the existence of a Mind-in-Chief, a Designer-Creator?

VI. Complexity as Proof of Design

"Everything is simpler than you think and at the same time more complex than you imagine."
(Johann Wolfgang von Goethe)

Complexity arises spontaneously in nature through processes such as self-organization. Emergent phenomena are common, as are emergent traits, not reducible to basic components, interactions, or properties.

Complexity does not, therefore, imply the existence of a designer or a design. Complexity does not imply the existence of intelligence and sentient beings. On the contrary, complexity usually points towards a natural source and a random origin. Complexity and artificiality are often incompatible.

Artificial designs and objects are found only in unexpected ("unnatural") contexts and environments. Natural objects are totally predictable and expected. Artificial creations are efficient and, therefore, simple and parsimonious. Natural objects and processes are not.

As Seth Shostak notes in his excellent essay, titled "SETI and Intelligent Design", evolution experiments with numerous dead ends before it yields a single adapted biological entity. DNA is far from optimized: it contains inordinate amounts of junk. Our bodies come replete with dysfunctional appendages and redundant organs. Lightning bolts emit energy all over the electromagnetic spectrum. Pulsars and interstellar gas clouds spew radiation over the entire radio spectrum. The energy of the Sun is ubiquitous over the entire optical and thermal range. No intelligent engineer - human or not - would be so wasteful.

Confusing artificiality with complexity is not the only terminological conundrum.

Complexity and simplicity are often, and intuitively, regarded as two extremes of the same continuum, or spectrum. Yet, this may be a simplistic view, indeed.

Simple procedures (codes, programs), in nature as well as in computing, often yield the most complex results. Where does the complexity reside, if not in the simple program that created it? A minimal number of primitive interactions occur in a primordial soup and, presto, life. Was life somehow embedded in the primordial soup all along? Or in the interactions? Or in the combination of substrate and interactions?
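The claim that trivially simple rules can generate intricate, hard-to-predict structure can be made concrete with a short sketch (my illustration, not the author's): Wolfram's elementary cellular automaton Rule 30, whose entire update rule is a one-line Boolean expression, yet whose output is famously chaotic.

```python
# Rule 30: each new cell depends only on its three-cell neighbourhood,
# via the rule  new = left XOR (centre OR right).

def rule30_step(cells):
    """Apply one Rule 30 update to a row of 0/1 cells (wrapping edges)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(width=31, steps=15):
    """Evolve a single seed cell and return the full history of rows."""
    row = [0] * width
    row[width // 2] = 1          # one "on" cell in the middle
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Run as a script, this prints a triangle of cells whose internal texture is disordered and pseudo-random, even though the generating procedure fits on one line: a minimal instance of complexity emerging from simplicity.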

Complex processes yield simple products (think about products of thinking such as a newspaper article, or a poem, or manufactured goods such as a sewing thread). What happened to the complexity? Was it somehow reduced, "absorbed, digested, or assimilated"? Is it a general rule that, given sufficient time and resources, the simple can become complex and the complex reduced to the simple? Is it only a matter of computation?

We can resolve these apparent contradictions by closely examining the categories we use.

Perhaps simplicity and complexity are categorical illusions, the outcomes of limitations inherent in our system of symbols (in our language).

We label something "complex" when we use a great number of symbols to describe it. But, surely, the choices we make (regarding the number of symbols we use) teach us nothing about complexity, a real phenomenon!

A straight line can be described with three symbols (A, B, and the distance between them) - or with three billion symbols (a subset of the discrete points which make up the line and their inter-relatedness, their function). But whatever the number of symbols we choose to employ, however complex our level of description, it has nothing to do with the straight line or with its "real world" traits. The straight line is not rendered more (or less) complex or orderly by our choice of level of (meta) description and language elements.
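The point can be illustrated with a toy sketch (mine, with an integer grid as a simplifying assumption): a segment described tersely by its two endpoints, and the same segment described verbosely as an enumeration of points. Both descriptions pick out exactly the same object; the length of the description tells us nothing about the line itself.

```python
# Terse description: two endpoints plus the implicit rule "straight line".
A, B = (0, 0), (10, 10)

# Verbose description: an explicit list of every grid point on the segment.
points = [(t, t) for t in range(11)]

def on_segment(p):
    """True if grid point p lies on the segment A-B."""
    (ax, ay), (bx, by) = A, B
    # collinear with A-B, and inside the bounding interval
    return ((bx - ax) * (p[1] - ay) == (by - ay) * (p[0] - ax)
            and min(ax, bx) <= p[0] <= max(ax, bx))

# The long description is fully recovered from the short one:
assert all(on_segment(p) for p in points)
```

The object is invariant; only our chosen level of description varies in length.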

The simple (and ordered) can be regarded as the tip of the complexity iceberg, or as part of a complex, interconnected whole, or, holographically, as encompassing the complex (the same way all particles are contained in all other particles). Still, these models merely reflect choices of descriptive language, with no bearing on reality.

Perhaps complexity and simplicity are not related at all, either quantitatively, or qualitatively. Perhaps complexity is not simply more simplicity. Perhaps there is no organizational principle tying them to one another. Complexity is often an emergent phenomenon, not reducible to simplicity.

The third possibility is that somehow, perhaps through human intervention, complexity yields simplicity and simplicity yields complexity (via pattern identification, the application of rules, classification, and other human pursuits). This dependence on human input would explain the convergence of the behaviors of all complex systems onto a tiny sliver of the state (or phase) space (a sort of mega attractor basin). According to this view, Man is the creator of simplicity and complexity alike, but they do have a real and independent existence thereafter (the Copenhagen interpretation of Quantum Mechanics).

Still, these twin notions of simplicity and complexity give rise to numerous theoretical and philosophical complications.

Consider life.

In human (artificial and intelligent) technology, everything and every action has a function within a "scheme of things". Goals are set, plans made, designs help to implement the plans.

Not so with life. Living things seem to be prone to disorientated thoughts, or the absorption and processing of absolutely irrelevant and inconsequential data. Moreover, these laboriously accumulated databases vanish instantaneously with death. The organism is akin to a computer which processes data using elaborate software and then turns itself off after 15-80 years, erasing all its work.

Most of us believe that what appears to be meaningless and functionless supports the meaningful and functional and leads to them. The complex and the meaningless (or at least the incomprehensible) always seem to resolve to the simple and the meaningful. Thus, if the complex is meaningless and disordered then order must somehow be connected to meaning and to simplicity (through the principles of organization and interaction).

Moreover, complex systems are inseparable from their environment whose feedback induces their self-organization. Our discrete, observer-observed, approach to the Universe is, thus, deeply inadequate when applied to complex systems. These systems cannot be defined, described, or understood in isolation from their environment. They are one with their surroundings.

Many complex systems display emergent properties. These cannot be predicted even with perfect knowledge about said systems. We can say that the complex systems are creative and intuitive, even when not sentient, or intelligent. Must intuition and creativity be predicated on intelligence, consciousness, or sentience?

Thus, ultimately, complexity touches upon very essential questions of who we are, what we are for, how we create, and how we evolve. It is not a simple matter, that...

VII. Summary

The fact that the Universe is "fine-tuned" to allow for Life to emerge and evolve does not necessarily imply the existence of a Designer-Creator (although this cannot be ruled out conclusively). All forms and manner of Anthropic Principles are teleological and therefore non-scientific. This, though, does not ipso facto render them invalid or counterfactual.

Still, teleological explanations operate only within a context within which they acquire meaning. God cannot serve as His own context because He cannot be contained in anything and cannot be imperfect or incomplete. But, to have designed the Universe, He must have had a mind and must have used a language. His mind and His language combined can serve as the context within which He labored to create the cosmos.

The rule of parsimony applies to theories about the World, but not to the World itself. Nature is not parsimonious. On the contrary: it is redundant. Parsimony, therefore, does not rule out the existence of an intelligent Designer-Creator (though it does rule out His incorporation as an element in a scientific theory of the world or in a Theory of Everything).

Finally, complexity is merely a semantic (language) element that does not denote anything in reality. It is therefore meaningless (or at the very least doubtful) to claim that the complexity of the Universe implies (let alone proves) the existence of an intelligent (or even non-intelligent) Creator-Designer.



Read Note on Teleology: Legitimizing Final Causes

Read Note on Context, Background, Meaning

Read Note on Parsimony – The Fourth Substance

Read Note on Complexity and Simplicity

Read Note on Scientific Theories and the Life Cycles of Science

Also Read

Atheism in a Post-Religious World

The Science of Superstitions

Copyright Notice

This material is copyrighted. Free, unrestricted use is allowed on a non commercial basis.
The author's name and a link to this Website must be incorporated in any reproduction of the material for any use and by any means.
