ISSN: 2332-0915
Research Article - (2017) Volume 5, Issue 4
The proposed hypothesis for biological evolution considers modern evolution theories and established facts. It addresses some puzzling issues that numerous researchers have pointed out. The new approach is based on an analogy between a supercomputer and the Biosphere. The Biosphere stores an enormous amount of digital data and may act as an engine partially directing evolutionary change. The main direction of change and speciation in the living world is driven not only by survival of the fittest but also by the growing computational complexity of living creatures in the course of evolution.
Keywords: Naturally directed evolution; Biological evolution
Evolution theories
Modern evolution theories stand on two pillars: random mutations and natural selection. Factors in the environment may influence the rate of mutation but are not generally thought to influence the direction of mutation. Natural selection is the process that enhances survival and reproduction of organisms with a random uncontrolled mutation. It has become commonplace in all biology to rely upon these two assumptions. “Darwinism is not a theory of random chance. It is a theory of random mutation plus non-random cumulative natural selection. . . Natural selection . . . is a non-random force, pushing towards improvement” [1].
That is, random mutations, accumulated over time and pushed by natural selection, may cause major changes in the genotype and phenotype of living creatures. These changes result in the emergence of new species, genera, families, and phyla.
This common belief is the subject of intense debate within and outside the biological community. The critique of random mutations’ capability to create new species has gained fresh prominence in recent decades. Some researchers claim that multiple coordinated mutations are needed for the appearance of a new species; a single random mutation is rarely beneficial and has no reason to be passed on to the next generation. Others argue that natural selection is a “very weak force”.
Survivability as the dominant selective factor is not always supported by historical evidence. The most resilient species on Earth are single-celled organisms, some of which have persisted unchanged for billions of years. The global bacterial mass is estimated at about 450 billion tons of carbon, surpassing the total mass of humans by three orders of magnitude. Small crustaceans such as krill have a total mass twice that of all humans. These and other primitive creatures can survive conditions that we, human beings, cannot stand at all: extreme temperature ranges, restricted food diversity, and so on.
In fact, the genus Homo is arguably one of the least accommodated to life on this planet. Neither Homo habilis, Homo erectus, nor the later hominins passed the survivability test; they all went extinct. Homo sapiens, the only survivor, was also on the brink of extinction at least three times. Twice we barely escaped extinction, having slipped through the bottlenecks of “mitochondrial Eve” [2] and “Y-chromosome Adam” [3]. After evading these Scylla and Charybdis, the powerful Toba super-eruption occurred about 74 thousand years ago. According to some theories, that event brought the human population down to a mere 3,000-10,000 individuals. There is a growing body of literature casting doubt on the very pillar of Darwinian theory: natural selection by survival of the fittest [4-8].
Had natural selection been the single cause of evolution, fragile humans would gradually evolve into some fitter races. Natural evolution would change hairless, bipedal, feeble creatures into more robust primates, then into better-fitted mammals, and eventually into krill and single-celled bacteria. Some prominent scientists are not fond of, and warn about, “the obsession with natural selection” [9].
Evolution vector
The irrefutable fact is that new species evolve, and the evolution goes steadily in a different dominant direction. As Richard Bird [8] put it: “Life increases in complexity in one specific sense; computational complexity.” Such a statement is so obvious that it can hardly be argued against. If we track the historical evolution of only one “computational complexity” parameter, such as the brain weight of animals relative to their body weight, the following chart (Figure 1) may be produced.
A limited selection of species is represented here and just a single factor is considered. Notwithstanding these limitations, the chart suggests that computational complexity steadily increases in the process of evolution. Secondly, it shows accelerated growth of the most complicated and advanced living matter, which is the biosphere’s brain mass.
How does that fit into Darwin’s dogma of survival of the fittest? In particular, human brains were not of much help in the struggle for existence.
Another non-obvious observation arises from keeping track of the time scale of evolutionary history. The Earth has been inhabited by prokaryotes from approximately 4 billion years ago. No obvious changes in morphology or cellular organization occurred in these organisms over the next few billion years. No new or enhanced brain power came about, and none, apparently, was needed during such an immense period. Eukaryotic cells emerged after almost two billion years of nature’s hesitation. They carry roughly a thousand times more digital genetic information than the nucleus-lacking prokaryotic cells. That incremental step quenched nature’s thirst for computational complexity for another 1.5 billion years.
There is a growing body of literature recognizing that cells function like miniature digital computers [4,8,10-13] (Figure 2).
Then another revolutionary development took place about 600 million years ago, when multicellular organisms began to appear. At that time the event dubbed the Cambrian explosion occurred. Before that global event, most organisms were simple, composed of individual cells sometimes merged into colonies. Over the next 70-80 million years life rapidly diversified, bringing forth almost all of the phyla that exist today.
During this period, as some scientists infer, the first brain structure emerged in worms. The evolutionary process kept accelerating. Amphibians first appeared around 360 million years ago, followed by the first mammals, early amniotes, and birds around 150 million years ago. Hominidae came into existence 10 million years ago and modern humans 200,000 years ago. Global computational complexity was growing at a much faster pace than ever before. Despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth. Most of the biomass on the planet is still held by prokaryotes.
Notwithstanding the growing intricacy of the newer living organisms, the time between the appearances of ever more complex species shrank rapidly. The gap in computational complexity separating primitive animals from the higher primates is immensely larger than the gap separating prokaryotic from eukaryotic cells. Some scientists even hold that the more advanced cells evolved through a mere symbiosis of simpler ones [14]. In any case, the level of intricacy separating the nucleus-free and the nucleated cell is many orders of magnitude lower than the barrier between primitive mammals and Hominidae. Surprisingly, the time it took for the latter to emerge from the former is significantly shorter than the time that passed between the appearance of prokaryotes and that of eukaryotes.
No plausible mechanism of multiple accelerated and coordinated mutations in higher organisms has been proposed. Why has evolution accelerated at this pace as organisms become more and more advanced?
Despite the appearance, on the surface, of agreement among biologists on the inviolability of Darwinian theory, some scientists raise growing concern that some of the major statements of evolution theory are overestimated and/or held dogmatically. Even one of the most prominent and prolific Russian proponents of Darwinism, Dr. Alexander Markov (sometimes called “the Russian Dawkins”), claimed in his recent book that “today classical Darwinism and the classical synthetic theory of evolution more resemble museum exhibits than living and working theories. Many think that the development of biology is on hold, given the absence of an adequate theoretical base, a comprehensive new theory” [15]. The data supporting the need for a revision of Darwinian theory get stronger every day [6,16,17]. Nonetheless, obviously haunted by the reincarnation of Intelligent Design (ID), evolutionary biologists wish to show a united front to those “hostile” to science [17].
The enticement of purposeful design is nevertheless a powerful one. Some biologists cannot resist the temptation of using teleological terms to describe speciation [17]. Others prefer to brush away the very hint of any intelligence in the evolutionary process as “a delusion”: “the living results of Natural Selection overwhelmingly impress us with the appearance of design as if by a master watchmaker, impress us with the illusion of design and planning” [18].
Our quest, however, is the search for a natural and plausible cause of bio-evolution.
Different authors have put forward a concept of self-evolution [4,10-12,19-21]. The late Israeli scientist Ben-Jacob, in particular, wrote: “The power of the Darwinian picture lies not only in its achievements but also in the dismay evoked by what seems to be the only alternative - Vitalism. But is Vitalism the only alternative? Or could there be another picture, neither Darwinian nor Vitalistic? My basic assumption is that the observed creativity in nature is not an illusion but part of an objective reality, and as such should be included in our scientific description of reality. However, if we understand science as the ability to predict the future state and behavior of a system based on the present knowledge about the system, then a creative process contradicts the tenets of scientific description. After all, creation means emergence of something new and unpredictable, something not directly derivable from the present. My proposed solution to the above paradox leads to a new evolutionary picture, where progress is not a result of successful accumulation of mistakes in replication of the genetic code, but is rather the outcome of designed creative processes. Progress happens when organisms are exposed to paradoxical environmental conditions - conflicting external constraints that force the organism to respond in contradicting manners. Clearly, an organism cannot do it within its current framework. The new picture of creative cooperative evolution is based on the cybernetic capacity of the genome and the emergence of creativity as the solution cooperative complex systems apply to an existential paradox.” [10].
This initial hypothesis was followed by Horgan [6], Ruchlenko [22], Sir Paul Bateson, and a number of others. It should be noted that Ben-Jacob’s “cybernetic capacity of the genome” may be sufficient for some limited tasks like making spores out of bacteria. Real speciation, i.e., creating new species that are genetically and morphologically undoubtedly different from their parents, would require a different level of cybernetic power. As Bird suggests: “The cell functions like a miniature digital computer. If these processes are carried out in each cell, then the whole body is capable of acting as a massive parallel computer. An important consequence of this mode of evolution is that, since speciation takes place in a single step from one generation to the next, there is no intermediate stage between species in the chain of evolution and hence no “missing link” between an existing species and a new one which evolves from it” [8]. The suggestion that a body can produce a more advanced living body contradicts, as Ben-Jacob suggests, “a lemma extended from Gödel's theorem [which] sets limitations on self-improvements. Simply put, it would state that a system cannot self-design another system which is more advanced than itself.
Note that a system can be improved by successful accumulation of random changes but not in a self-designed manner” [10].
A note of caution should be taken here when comparing the biological to the “mechanized” (computer) world. Biological matter is self-replicating and proliferates by itself; the mechanical world, at the current stage of industrial progress, is not. During replication, living cells transfer information with possible natural or intended deviations (mutations). Those are the triggers of evolution. In the mechanical world such “deviations” are mostly unintended and actively prevented.
Biosphere as a Single Organism
Does Gödel's theorem signify that a supernatural power is needed for the creation of a new species? The answer lies in the search for a more sophisticated living body than a single, albeit complex, organism. I suggest that such a body is the Biosphere itself. The idea of a Single (and only) Living Organism inhabiting our planet is by no means new. Some of the advocates of such an idea express flamboyant views and theories [23], whilst some of the thinkers who share this view, including Isaac Newton, enjoy prominent recognition in the scientific community [24]. Such a view was to a certain degree supported by Vernadsky, who coined the term “Noosphere”, and Timiryazev, two of the biggest figures in Russian geochemistry and biology. A similar view was expressed, from different standpoints, by such outstanding thinkers as James Lovelock, Jeon [25], and Scott Turner [26]. Such insights derive from different fields but fit together with surprising coherence [27].
The reference to the Biosphere as an intelligent entity is a very strong statement with far-reaching implications. The picture of a creative living Nature as a natural being is very appealing, and it may explain a lot of conundrums in evolution. Yet nagging conceptual difficulties remain. One of them is the origin of life.
Origin of Life
Progress on the origins of life has been considerable, although the nut is still hard to crack.
It is a widely held view that RNAs were the precursors to all life on Earth [28]. One of the most serious problems with this concept was the possibility of RNA origination and self-replication. This issue was to a certain extent addressed by Eigen [19] and Kaufman [20]. The most important finding to emerge from this research is that even complex, information-enriched, self-replicating molecules might originate naturally in the pre-biotic world. Another serious issue was the need to reconcile two apparently incompatible requirements: separation of the biochemical reactions from the environment (by a membrane) and exchange between the environment and the cell. A solution to this problem was provided by Chetverin [29], who discovered and patented molecular colonies (also called polonies, or nanocolonies): clusters of molecules that form around RNA or DNA templates when those are replicated in a porous solid medium with nanometer-sized pores, such as agarose or polyacrylamide. Chetverin concluded that such molecular colonies might have served as a pre-cellular form of life in the RNA World.
Truly remarkable is that pre-cellular RNA colonies possess the same properties as their compartmentalized counterparts. Pre-cellular RNAs may replicate and change their structure. They recombine and exchange parts between molecules as well as between colonies. The most striking property of RNAs is their ability to pass genetic information to their descendants by replicating RNA out of fragments. (This is also supported by the work of Duke et al., who worked with cells [30].) As a colony grows in size, the information it contains also increases, and the information volume outruns the colony's growth rate.
Computer Analogy
The analogy may be made clearer using simple digital computer memory, which is built of switches (transistors). A number of researchers rightfully point out that complex molecules like RNAs or proteins have no chance to develop by random selection of amino acids. While this is undeniably true, one should realize that computers, including supercomputers, all consist of elementary bits that may take only two positions: “yes” or “no”. This is implemented by a transistor having one of two states: “on” or “off”. In the “off” state the transistor does not conduct electrical current and its drain terminal holds a high voltage level. In the “on” state the transistor is open, current flows through it, and the drain is at a low voltage level. An elementary switch’s binary information capacity is limited to 1 bit. With a growing number of switches, the information volume grows as a power of 2. Thus 8 transistors may store not 16 (2 × 8) but 256 = 2⁸ different values, representing 1 byte.
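As a hedged illustration of this scaling, the following minimal Python sketch (purely arithmetic, not tied to any biological data) shows how the number of distinct storable values grows with the number of two-state switches:

```python
# A minimal arithmetic sketch (no biological data): how the number of
# distinct storable values grows with the number of two-state switches.

def distinct_values(num_switches: int, states_per_switch: int = 2) -> int:
    """Configurations of num_switches elements, each with states_per_switch states."""
    return states_per_switch ** num_switches

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} switches -> {distinct_values(n):,} distinct values")

# 8 switches -> 256 distinct values (one byte), as noted in the text.
```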
More is Different
There is a universal natural law of transformation of quantitative into qualitative changes [25,26]. A single switch cannot perform any calculations; a big number of switches makes a computer, and a supercomputer may surpass human beings in many intelligent tasks. One can assume that a truly enormous number of simple self-replicating pre-biotic molecules makes life. The roughly estimated total memory capacity of all data centers on Earth to date is about 10²⁴ bytes (1 yottabyte). This enormous number pales in comparison to the digital information stored in living organisms. A very crude estimation of the whole prokaryote population in the pre-eukaryotic world (about 2 billion years ago) gives an impressive number of 10³⁰ cells. Each prokaryotic cell contains several thousand nucleotide base pairs. Each member of a pair may be one of four bases: thymine, adenine, cytosine, or guanine. Unlike a transistor switch, the nucleotide is more advanced, since it may possess one of four states. Therefore, one codon, consisting of three neighboring nucleotides, may represent 64 = 4³ values, while three transistors represent only 8 = 2³. One can assess that the total digital memory stored in the prehistoric Biosphere exceeded the capacity of all data centers built to date by a factor on the order of billions.
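The order-of-magnitude claim can be written out as a back-of-the-envelope calculation. The sketch below uses only the estimates quoted in the text (10³⁰ cells, several thousand base pairs per cell, 2 bits per base, 10²⁴ bytes across all data centers); the specific constants are assumptions drawn from the text, not measured values.

```python
import math

# Back-of-the-envelope comparison of biosphere vs. data-center storage.
# All constants are the estimates quoted in the text, not measured values.
PROKARYOTIC_CELLS   = 1e30    # cells in the pre-eukaryotic world (text estimate)
BASE_PAIRS_PER_CELL = 5_000   # "several thousand" base pairs per cell (text)
BITS_PER_BASE       = 2       # one of 4 bases = log2(4) = 2 bits
DATACENTER_BYTES    = 1e24    # ~1 yottabyte across all data centers (text)

biosphere_bytes = PROKARYOTIC_CELLS * BASE_PAIRS_PER_CELL * BITS_PER_BASE / 8
ratio = biosphere_bytes / DATACENTER_BYTES

print(f"Biosphere estimate : {biosphere_bytes:.1e} bytes")
print(f"All data centers   : {DATACENTER_BYTES:.1e} bytes")
print(f"Ratio              : ~10^{round(math.log10(ratio))}")   # roughly 10^9
```

Under these assumptions the ratio comes out at roughly 10⁹, consistent with the “order of billions” quoted above.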
Biological information stored in both pre-biotic and biotic molecules is subject to constant change, alteration, and natural selection [19,20,29].
As such it is analogous to a digital computer of enormous capacity. The idea that a large group of elementary living entities may possess computer-like properties has been expressed by a number of researchers [4,10-12,31]. Ben-Jacob suggested that the genomic web is, in fact, a “super-mind” relative to the individual genome, capable of thinking collectively and even of being involved in speciation. As an example of the latter, Ben-Jacob described sporulation as a “vertical genomic leap”. The question arises: if a single bacterial colony consisting of billions of bacteria is capable of limited speciation like sporulation, would far more numerous living elementary entities be capable of speciation on a different level? Would a gigantic, enormous colony, like the Biosphere as a whole, be capable of producing new species if necessary?
Critical Mass
Let’s take the computer analogy again. Suppose you need to build a machine that plays a perfect game of tic-tac-toe and never loses. To do so, such a computer needs to memorize all possible positions of noughts and crosses that may ever occur. The number of positions is rather modest and is equal to 3⁹, i.e., less than 20 thousand. In order to store such information, one needs only 15 transistors (bits). One more transistor would be needed to manipulate the main 15 bits, making a 16-bit computer in total.
To play a chess game, 15 bits is not nearly enough. Some estimate the total number of positions on a chess board at about 13⁶⁴, and the number of unique games of chess at 10¹²⁰. However, the PDP-8 computer built in the 1960s was capable of playing a chess game; it contained just 519 bits (transistors). Deep Blue II, which defeated Garry Kasparov in 1997, had 720 million bits. For each task, a certain minimum computational complexity is required. Let’s call the minimum computational complexity that is needed for a certain task its “critical mass”. The objective of creating new species needs a much greater “critical mass” than playing any human-invented game. If a computer has fewer than the minimum number of bits, it is not capable of playing chess no matter how much time it takes to make a single move. The critical mass for tic-tac-toe is 16 bits; for a chess game, around 500 bits. 10³⁰ cells contain a number of “bits” that exceeds the total capacity of the computers on Earth by orders of billions and trillions. Would that enormous number exceed “the critical mass” required for producing a new living organism? This is an open question, but it would be safe to presume a positive answer considering the enormous length of time during which new organisms have been emerging on the planet. Playing the “life creation game” does not require time control. No chess clock is on the table, and “the game” may continue for thousands, millions, and billions of years until it is won.
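The “critical mass” arithmetic for the two games can be reproduced directly. The sketch below simply counts the minimum number of bits needed to index the state spaces quoted in the text (3⁹ board configurations for tic-tac-toe and roughly 13⁶⁴ positions for chess); it illustrates the counting argument, not any playing algorithm.

```python
import math

# Minimum number of bits needed to index a state space of a given size.
def bits_needed(num_states: int) -> int:
    return math.ceil(math.log2(num_states))

tic_tac_toe_states = 3 ** 9    # each of 9 squares: empty, X, or O
chess_states = 13 ** 64        # each of 64 squares: empty or one of 12 piece types

print(f"tic-tac-toe: {tic_tac_toe_states:,} states -> {bits_needed(tic_tac_toe_states)} bits")
print(f"chess      : ~10^{round(math.log10(chess_states))} states -> "
      f"{bits_needed(chess_states)} bits to index a single position")
```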
If Ben-Jacob’s colony produced new species in the form of spores, why not assume that a much bigger “colony” named the Biosphere is capable of “inventing” something more complex? We know that each living cell possesses quite impressive intelligence [13,28,31]. A cell colony’s intelligence, like that of any other group of living creatures, grows exponentially with the number of cells (creatures) [32,33]. Each level of quantity generally requires a new degree of hierarchical organization, and at a certain level it obtains a new quality. If we assume that the Biosphere as a whole is a gigantic supercomputer with enormous intelligence, then the task of generating living organisms is within reasonable reach.
Such a supercomputer idea carries obvious doubts that, in the minds of some researchers, would prevent it from being a true thinking machine. First, it lacks a programmer who would develop and run software. Second, it is not clear how molecules located at a distance from each other would communicate (transistors are electrically or optically connected to each other). Third, and the most puzzling issue, is the common goal or common criterion forcing this bio-computer to work and invent new species at all. (Darwinian evolution, in the view of the majority of researchers, has no foresight.)
Who is a Programmer?
It is a common belief that a computer needs software. This is true for digital computers. There is, however, another kind of computing system, the so-called analog computer. An analog computer uses the continuously changeable aspects of natural physical phenomena such as electrical, mechanical, gravitational, or hydraulic quantities to model the problem being solved. Both digital and analog computers may solve the same task, albeit by different procedures. Let’s consider a 3D surface with several maxima and minima (Figure 3).
Both kinds of computers can find and memorize the coordinates of local and global minima. The digital computer needs software implementing an appropriate method of nonlinear programming. The analog computer may find the same minimum by simply flowing water or rolling a ball over the surface in question. There are also hybrid computers that exhibit features of both analog and digital machines.
Once the coordinates of the minimum (or the best solution) are found, they may be stored in the computer’s digital memory. That resolves the first puzzle of the bio-computer’s usability for solving a complex task: no software is needed for an analog or hybrid computer. It may initially run by itself through analog action and store the optimum result in digital memory.
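A hedged sketch of the digital counterpart of this “rolling ball” process: the gradient-descent loop below walks downhill on an arbitrary illustrative surface (invented for this example, not the surface of Figure 3) and then reports the coordinates of the minimum it settles in.

```python
import math

# Digital counterpart of a ball rolling downhill: plain gradient descent
# on an illustrative surface with several local minima (not Figure 3).
def f(x, y):
    return math.sin(3 * x) * math.cos(3 * y) + 0.1 * (x * x + y * y)

def grad(x, y, h=1e-6):
    # Numerical gradient via central differences.
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

x, y = 1.0, -0.5                 # where the "ball" is dropped
step = 0.01
for _ in range(5000):            # each iteration rolls a little way downhill
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy

print(f"settled in a local minimum near ({x:.3f}, {y:.3f}), f = {f(x, y):.3f}")
```

Like the rolling ball, this procedure settles into the local minimum nearest its starting point; finding the global minimum requires dropping many balls from many starting points.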
Biological World Communication
For the second issue, communication between the cells and molecules, the answer also lies in the realm of natural science. Different parts of our hypothetical computer, such as cells and organisms, may use a number of efficient and well-known means of mutual communication. There is no need to refer to enigmatic “biofields”, whose existence has never been proven experimentally. The communication may be conducted by the means listed below. For the sake of brevity, I just list them here (a detailed description will be given in an upcoming publication):
• Direct physical interactions, cell-to-cell and organism-to-organism [34]
• Chemical (pheromones) [35]
• Electrostatics (ions transfers) [36]
• Electrostatic field [37]
• Electromagnetic (wave generations, light, UV light) [38]
• Magnetic field [39]
• Microwave transmission [40]
• By universal patterns [41]
• By relay, i.e., transferring signal from one body to another using intermediate body [42]
• Hierarchically [43]
• Transferring information by viruses and bacteria [24,44,45]
• Using phased antenna array principle [46]
• By signal amplification including multi-stage cascades [47]
• By using a resonance [48]
• By the means of quantum communication [49-51]
The above spectrum of available communication means allows the biosphere’s organisms to transmit information at both close and long range. Signal transfers are protected from distortion by certain code patterns that are presumably universal for organisms of all evolutionary levels [41].
Evolution Dynamics
Finally, the third puzzling issue mentioned above should be resolved. Darwin suggested that the main factor forcing living creatures to transform into new species is natural selection, or survival of the fittest. While this claim is plausible within a species, the transition from one species to another requires two factors: 1) a simultaneous, coordinated change in the genetic code, and 2) a stimulus for speciation. If natural selection is indeed a “weak force”, what may cause a stable species to lunge into the complex and risky transition into a different one, with an unpredictable outcome? If our presumption that the Biosphere is capable of computing is sound, how might it work? Given the enormous complexity of such a supercomputer, the mere task of understanding its logic seems insurmountable. Let’s try to explain it using the closest analogy to the Biosphere. What is the best-known intelligent community on Earth? It is human society, and one may search for a hint of the Biosphere’s operation by examining humankind’s modus operandi. Throughout the history of the human race, one common feature singles our civilization out from other living matter: our zeal for memory storage. Keeping various records of past events and experience goes back several millennia. The amount of information, as well as the existence of adequate means for storing it safely and efficiently, is one of the main distinctions of humankind. Starting with petroglyphs at the dawn of civilization, we have come to massive libraries and digital data centers. The total volume of globally stored information grows steadily because each year new information is added to the existing body. The global volume of digitally stored information is measured in bytes: in 1986 the total capacity of all data centers was estimated at 2.6 × 10¹⁸ bytes; in 1993, 15.8 × 10¹⁸ bytes; in 2004, 54.5 × 10¹⁸ bytes; and in 2007, 295 × 10¹⁸ bytes (Figure 4).
Generally, the function above may be described by the following formula:
F(n) = F(n-1) + F(n-2)
Such a formula is defined as a recursive one. (The most prominent recursive function is the Fibonacci sequence, which follows exactly this rule.) This curve astoundingly resembles the chart in Figure 1 depicting the relative brain weight of living creatures over the course of biological evolution. The analogy of human society as a smaller-scale model of the Biosphere gives us a key to the latter’s historical development. The main driver of evolution toward speciation, along with survival of the fittest, is the inescapable necessity of increasing biological intelligence and growing memory storage. It should be clearly stated that biological evolution depends on non-biological factors, but not solely on them; the non-biological world appears to be only a secondary factor in evolutionary progress. In mechanical “evolution” a single flaw may turn catastrophic, whereas the biological world is capable of dynamically correcting information-transmission errors.
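As a hedged illustration, the recurrence F(n) = F(n-1) + F(n-2) can be iterated directly. The sketch below is seeded with arbitrary starting values; the point is only the shape of the accelerating, Fibonacci-like growth, not a fit to the data-center figures quoted above.

```python
# Iterating the recurrence F(n) = F(n-1) + F(n-2).
# Arbitrary seed values; only the shape of the growth matters here:
# successive terms soon grow by a near-constant factor of ~1.618.

def recursive_growth(f0: float, f1: float, steps: int) -> list:
    values = [f0, f1]
    for _ in range(steps):
        values.append(values[-1] + values[-2])
    return values

series = recursive_growth(1.0, 1.0, 15)
for n, value in enumerate(series):
    print(f"F({n:2d}) = {value:8.1f}")
```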
Now our Hypothesis has arrived at completion. The following postulates summarize it:
• Biological evolution, as the natural origin and development of life, is a reality.
• Evolution is not only random change and natural selection but, to some extent, a coordinated and controlled process.
• One of evolution’s main development vectors is the growing computational complexity of the Biosphere and of living organisms’ intelligence.
• The intelligent matter that conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. The information acts like software that is stored and controlled by the Biosphere.
• This software is initiated, powered, and stimulated by random mutations, as stipulated by Darwinian evolution theories, and also by the growing demand for memory storage and computational complexity.
• Greater memory volume requires a greater number of more intellectually advanced organisms to store and handle it. More intricate organisms require greater computational complexity of the Biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics.
• New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the Biosphere’s computational complexity reaches the critical mass capable of producing more advanced creatures.
The Hypothesis presented here does not contradict the naturalistic concept of the origin of life and evolution. It is not meant as a denial of Darwinian concepts; it simply shows the natural processes in a different light. The proposed concept cannot be proven yet: I do not have good evidence for most of the claims and must rely on intuition.
However, as Karl Popper suggested, a good theory is one with greater explanatory power. The Hypothesis logically resolves many puzzling problems with the current state of evolution theory. Some of them are listed below (I will address these issues at length in an upcoming publication):
• Speciation, as the result of a partially controlled process.
• The evolution development vector, as a need for greater Biosphere complexity.
• Punctuated equilibrium, occurring when the two conditions a) and b) above are met.
• The Cambrian explosion, as the most pronounced case of punctuated equilibrium.
• Why long-term laboratory mutation experiments do not result in new speciation: in these experiments “the critical mass” was not reached.