Metamorphic Star

Epilogue
It does not end. Discoveries keep coming in even as we continue to reconstruct Earth’s cosmic past, including its geological history, the evolution and extinction of its past forms of life, and, needless to say, our own place in all of it. What we thought we knew even a few decades ago is hardly the same as what we think we now know. It is not simply that what we now know is more refined. In some cases, what we now know is entirely different from, and sometimes contrary to, what we had been sure we knew.
The Solar System in which we all live has been supplying astrophysicists with so many conundrums that it is leaving this particular branch of science in a morass of clashing theories. Just as an example, and without getting into technicalities, the contents of some meteorites have led some astronomers to believe that our System “may have been formed very quickly from the ashes of other stars.”1 The ashes of “other” stars connote more than one destroyed star. Others, however, now maintain that the Solar System “may have been born inside the remains of a single star that ran away from its family, rather than from a tight-knit clan of stars.”2 The single star scenario, however, has been said to run into trouble when attempting to explain “how hot gas from the star could mix with surrounding material efficiently enough to form the solar system quickly.”3
It is not that we personally accept any of the above propositions, but it is quite true, as the report in question declares, that our galactic home is “more unusual than previously thought.”4 The chances of ever really coming to grips with its origin are, however, slim. Not at this late date. We are on much better ground when it comes to the System’s evolution.
Even so, a runaway star like the one theorized above has actually been spotted by the Hubble Space Telescope. It is said to be “racing away from its home stellar nursery after being kicked out by some of its much heftier stellar siblings.”5 Having first been spotted by the Anglo-Australian Telescope at the Siding Spring Observatory in Australia, the rogue star has been tracked since 2006. It may not even be the only runaway on the outer fringes of the Tarantula Nebula in our Galaxy’s neighbor, the Large Magellanic Cloud.6
Stars, however, are not the only vagrants traveling alone through space. On the basis of what is actually seen, in addition to what is merely detected, it has become apparent that there are an “untold” number of worlds that have either “fallen into their suns” or have been “flung out of their systems to become ‘floaters’ that wander in eternal darkness.”7 These free-floaters are now believed to be twice as numerous as stars.1 As we have now noted more than once, there is therefore nothing strange in our own theoretical construct that calls for a nascent Earth traveling through space outside the realm of our present Sun.

1 R. Courtland, “Runaway Star May Have Spawned the Solar System,” newscientist.com (March 31, 2010).
DWARF STAR SYSTEMS
Unlike the planets mentioned above, however, Earth was not alone. As we have been demonstrating, our world had been traveling in the company of a brown dwarf star that acted as its primordial source of heat. Such a theorized system might also have been thought highly unlikely a few decades ago. Look at the evidence now.
In August 2005, an entire planetary system was discovered orbiting a red dwarf star, the one known as Gliese 581. The system this star harbors consists of four individual planets. Located 20.3 light years from us, it was not the first such system to come to light. What made it especially interesting is that, by 2009, one of its planets was revealed to be the lightest ever detected up to that time. Because it contains only 1.9 Earth masses, this particular planet, very much like our own world, is probably of a rocky composition. But because it orbits much too close to its host star, its temperature is said to be too high to support a substantial atmosphere.2
The outer planet in the same system, however, orbits within a region that “possesses temperatures and conditions favorable for liquid water and life as we know it on Earth.” On the other hand, unlike its inner orbiting sibling, it is considered too massive to be rocky.3
Another planet described as a super-Earth was also discovered orbiting another red dwarf star, 40 light-years from Earth, in December of 2009. Much more massive than the inner one orbiting Gliese 581, it is also thought to be too hot to sustain life. Nevertheless, its density indicates that it is probably part rock, but mostly water, while it might also possess an atmosphere.4
Critics may point out that these systems are centered on red dwarf stars and not brown dwarfs as posited in our scenario. But apart from the fact that brown dwarf stars only differ from their so-called red siblings in being slightly less massive, it is slowly being discovered that, in the main, they behave no differently. As the Bulgarian physicist Vladimir Damgov pointed out, brown dwarf stars “generate enough heat to shine,” even if mostly, but not entirely, in the infra-red. They have, moreover, “been found in so many bizarre configurations that researchers are scrambling to figure out whether they are dealing with one class of objects or several.”5
“Lone brown dwarfs have been spotted wandering through space fairly close to Earth. Others have been detected at vast distances from other stars, forming nests. Brown dwarfs might even spawn their own planetary systems.”6
1 “Free-Floating Planets May be More Common Than Stars,” sciencedaily.com (May 18, 2011).
And although there was a time when such dwarfs were believed to be entirely different from gaseous planets, astrophysicists are now struggling and arguing “over the specific differences between brown dwarfs and planets, especially about how and where they are born.”1
CATASTROPHIC SIGNS WITHIN THE KUIPER BELT
The chaotic disarray of the bodies within the outer region of the Solar System known as the Kuiper Belt continues to give astrophysicists quite a headache. Among the thousands of bodies littering this belt, some are quite small, while others are “hundreds and even thousands of miles across.”2
“The discovery of the Kuiper Belt redrew our map of the solar system,” wrote the planetary scientist S. Alan Stern. “After all, the Kuiper Belt is the largest structure in the entire planetary system,” he went on. “It dwarfs the asteroid belt in scale, in mass, and in sheer numbers, and stretches more than twice the breadth of the giant planet region.”3
As already stated, proto-Saturn would have had to cross this region with Earth in tow on its way toward the Sun. As we have also noted, the probability is that it would have disrupted some of the bodies that lie within this belt. Signs of this disruption, as we have pointed out, remain quite evident to this very day. And this continues to astound those who have chosen to look deeper into this matter. As Stern continues to disclose, even if for reasons different than ours, “the orbital distribution of these bodies indicates that something has greatly disturbed the dynamics of the Kuiper Belt—and thus the orbits of the bodies in it.”4 This explains why Kuiper Belt orbits “are mixed up and jumbled over one another.”5
One conclusion astronomers arrived at was that this disarray came about because the giant planets of the Solar System “had moved far from their birth locations” within the belt.6 Or, as the theorist Hal Levison put it, the orbits within the belt “all but scream that the region had a close and violent encounter with at least one of the outer planets.”7
Having learned what we have, to us it seems more likely that the giant planets shifted in order to allow the intrusion of proto-Saturn. But not just the giant planets. Chiron (not to be confused with Charon) is only slightly over 100 miles wide. It now resides within the confines of the Solar System outside the demarcation of the Kuiper Belt. But it, too, is now believed to have originated within the belt from where it has been dislodged into a tighter orbit that wends between the planets Uranus and the very Saturn of our main interest.8
The shifting of bodies within the belt, to say nothing of the evicting of others out of it, is more likely to have required the invasion of a foreign body into the system. Such an invading body has actually been considered. “The elongated, highly inclined orbits of many of the denizens in the doughnut-shaped Kuiper belt,” Levison concluded, “suggest that a massive intruder barged into the belt early in the history of the solar system, ejecting bodies and jumbling orbits.”1 Such changes are “a smoking gun” that such an intruder “must have plowed into the Kuiper belt.”2 Mike Brown is of the opinion that the “obvious suspect” is the planet Neptune.3 Needless to say, we have reasons to think otherwise.

1 Ibid., p. 455.
There are other theories floating around. In an earlier chapter of this very work we have seen how, on the one hand, some think of Chiron as being an escaped satellite of the planet Neptune,4 while others consider it to be a comet in disguise.5 If the latter is correct, Chiron’s ejection from the Kuiper Belt would have been in the form of an errant comet. As it turns out, there are many comets beyond Pluto’s orbit that have highly skewed paths of their own. What this has led to is the supposition that they were “stirred up by a star passing close by.”6
What the above implies is that stars, too, can be prone to erratic orbits. And why not, especially when it is realized that quite a few of the stars within our galaxy are actually interlopers. But never mind stars. How about entire stellar families?
“Many of our galaxy’s globular star clusters are actually foreigners—having been born elsewhere and then migrated to our Milky Way,” according to Duncan Forbes. “It turns out that many of the stars and globular star clusters we see when we look into the night sky are not natives, but aliens from other galaxies.”7
That the Milky Way had gobbled up other galaxies had already been known before Forbes made scientific headline news with the above disclosure while this book was being written. What was newly discovered was that “the Milky Way may have swallowed up more dwarf galaxies” than had been “previously thought.”8
If stars and even galaxies are now known to have invaded the Milky Way, what can be so surprising about an interloping planet? Is not the extra-solar planet known as HIP 13044b, together with its host star, now known to have been captured by the Milky Way from yet another galaxy?1 Have we not ourselves posited that proto-Saturn, with Earth in tow, actually came from the Sagittarius dwarf galaxy that was likewise captured by the Milky Way?2

1 R. Cowen, op. cit., p. 3.
The very concept of such an event might also have been considered quite bizarre only a few decades ago, but not at present. Proto-Saturn aside, it continues to be announced that “the Sun, the Moon, our planet and its siblings, were not born into the familiar band of stars known as the Milky Way,” but actually came from the invading Sagittarius dwarf galaxy.3 And while, for some strange reason, there are many within the scientific community who have chosen to contest this disclosure, it is not going to go away, as Mel Acheson has indeed indicated. Drawing on certain factors, which we do not really need to go into, Acheson has not only shown that such capture is evidenced by additional recent discoveries, but also that these discoveries support the hypothesis that “the Sun captured a previously independent Saturnian system” that included Earth.4
That none of this transpired millions of years ago as maintained by most astronomers is indicated by what our ancient forebears had to say about the planets. In fact, planetary orbits within the Solar System continued to be altered, even if only slightly, well into historical times. Despite his difference of opinion when it comes to Earth’s cosmic past as outlined in these very pages, Marinus van der Sluijs could still write that: “Indeed, the limited set of information conveyed by ancient scientists includes some very credible indications that planetary orbits did shift within the space of human history.”5
EARTH’S NORTHERN HERITAGE
Recent discoveries that continue to lend credibility to our unfolding series of hypotheses have not only come from the realms of astronomy. Earth itself continues to bear witness since signs of what it underwent throughout its catastrophic history have not all yet disappeared.
Further evidence of early life in Earth’s north polar regions came to light through additional discoveries of microscopic fossils at Mount Slipper, north of Dawson City, close to the Yukon-Alaska border. These fossils contain what has been claimed to be “the earliest traces of animal life.” The site is now considered to contain “an important new record of ‘eukaryotic evolution’—the branch of life that eventually gave rise to humans and all other animals.” This places the area at “a crucial time” in Earth’s history, when primitive, unicellular forms of life were beginning to evolve into more complex structures.6
A team of British and Canadian paleontologists has also reported the discovery of “the oldest evidence of animal locomotion” in a fossilized track of an unidentified marine creature.
1 R. Cowen, “It Came From Another Galaxy,” sciencenews.org (December 18,
This also came from one of Earth’s northern extremities in Newfoundland.1
Additional reports from a team of scientists from the United States and Canada tell of chemical traces which point to the one-time existence of a “sponge-like organism—possibly the oldest evidence of an animal ancestor ever found on earth.” And this, too, comes from Earth’s north polar regions in the Mackenzie Mountains close to the border between Yukon and the Northwest Territories.2
After everything we have documented about the dating methods at our disposal, there would be no point in being concerned with the millions of years that have been allotted to the above fossils or to any of those which follow. One matter I will definitely stress, however, is that such early life could not have thrived, let alone originated, in the regions these fossils have been found had these regions been as frigid as they are now.
Yes, there have been voices raised against all this. Thus, a 400-meter-long core of sediment that was recovered close to the north pole in June 2006 indicated that the region had definitely basked in a subtropical climate some 55 million years ago. The same area was, however, claimed to have subsequently been driven into freezing temperatures five million years later.3 But, as Andrew Revkin noted, the “centerpiece” of this argument happens to be “a single pebble, about the size of a chickpea, found in a layer” that was supposedly laid down 45 million years ago.4 The pebble, according to Kathryn Moran, who then hailed from the University of Rhode Island, “could have been deposited…only if it had been carried overhead in ice”—in other words, in an iceberg. This was such slim, not to say lame, evidence from which to draw such a wide-ranging conclusion that there were those who felt they had to raise their voices against it. Among them was Julie Brigham-Grette, from the University of Massachusetts, an expert in paleo-Arctic climates, “who cautioned against giving too much significance to the single sample, and particularly the single stone [read ‘pebble’] from 45 million years ago.”5 As she also pointed out, other evidence clearly indicates that Arctic coasts were still basking in a warm climate as recently as 2.4 million years ago.6
Others may claim, as some already have, that, due to continental drift, what is now Earth’s Arctic region had earlier been located farther south, thus accounting for the warmth-loving species whose signs are now being discovered way up north. But that is really a misconception. Geological fieldwork, as Ian Johnson noted at the turn of the century, has confirmed that the present northern lands of the globe “have been located in polar latitudes for at least the last 100 million years, despite ongoing continental drift.”7 More than that, these polar latitudes seem always to have been much warmer than they are at present. Even at the dawn of the Mesozoic era, which has been dated to 250 million years ago, Earth’s poles were free of ice,1 as they also were earlier still during the Devonian period, dated at close to 400 million years ago.2 During that time, the Arctic regions were not all that much different than at present.
In fact, give or take a little, the lands surrounding what is now the Arctic Circle have not moved much since about 200 million years ago.3 And if one wishes to split hairs, I can do just as well by pointing out that, even around the above-mentioned 400-million-year mark, the same area was still located in Earth’s north polar region.4 In fact, let’s face it, as most glaciologists of worth have noted, the north polar regions, together with the rest of our world, have “enjoyed uniformly warm, equable climate” for most of Earth’s history.5
It is not that continents have not shifted, but, as far as the north polar regions are concerned, the lands around what is now the Arctic mainly moved through a relatively slight rotation about a fulcrum that was centered close to what are now the New Siberian Islands.6
Although the accuracy of the supplied dates may be questioned, we keep them in mind as an approximate value when we look at the discoveries of further fossils on Ellesmere Island, in the same north polar region, which have been classed as belonging to the Eocene epoch, dated to some 50 million years ago. “These unique, world-renowned sites near Strathcona Fiord,” the Society of Vertebrate Paleontology reported, “include fossil plants and animals that lived during one of the warmest times in all of Earth history, when Ellesmere Island was blanketed in forests inhabited by alligators, turtles, primates and hippo-like animals.”7
Some of these beasts were represented by sets of fossilized teeth, all of which were retrieved from what has been described as “the island’s ancient tropical environment.”8 These mammals, it has been ascertained, did not migrate or hibernate, but lived in the High Arctic “all year long.”9 They lived in a region that has been compared to the “swampy cypress forests in the southeast United States today,” a region that continues to “contain fossil tree stumps as large as washing machines.”10
Between 2008 and 2010, the above site led to what has been described as a “battle between fossil fuel and fossil science.” This came about because of a proposal by a certain Vancouver-based corporation that was seeking coal-mining rights in the area. While paleontologists registered their deep concern over the possible loss of these valuable fossils, the president of the mining corporation insisted that mining would actually facilitate the recovery of fossils, as in fact transpired in the Klondike, where the search for gold nuggets, during a period of 113 years, resulted in a wealth of rare fossil specimens.1

1 J. J. Flynn & A. R. Wyss, “Madagascar’s Mesozoic Secrets,” Scientific American (February 2002), pp. 56-57.
Coal-mining aside, it was then reasoned that, because the oldest-known tapir fossils come from this area, “there is the possibility that some prehistoric mammals could have evolved in the circumpolar Arctic and then dispersed through Asia, Europe and North America” rather than the other way around.2 Unfortunately this led to the conclusion, at least by some, that these very same Arctic regions had always been burdened with recurring winters even if such winters had been mild. The mammals whose fossilized remains have been found in these areas were thus believed to have “endured six months of darkness each year.”3 It is not told how “swampy cypress forests” could have survived a series of recurring six months of darkness, which could hardly have been warm, for a period that would have lasted thousands of years.
John Tarduno, from the University of Rochester, does not believe in these recurring six months of Arctic darkness. As far as he is concerned, “the Arctic Ocean was warm and ice-free” all year round.4 In 2006, while trudging through the island of Axel Heiberg, just west of Greenland, a member of his Arctic expedition discovered the “amazingly well preserved shell” of a 90-million-year-old turtle that “strongly resembles a freshwater Mongolian species.” Now named Aurorachelys—i.e., Aurora Turtle—the newly discovered reptile could only have thrived in what Tarduno has described as “extremely warm, ice-free conditions in the Arctic region.”5 As reported by the University of Rochester:
“Tarduno’s paleomagnetic expertise, which allows him to ascertain when points on Earth’s crust were at specific locations, allows him to rule out the possibility that millions of years of tectonic activity had brought the fossil from southern climes. The turtle was clearly a native of the area.”6
Tarduno was thus driven to suggest that “the warming may have been caused by volcanoes pumping tremendous amounts of carbon dioxide” into Earth’s atmosphere.7 But, as he also noted, there is “evidence that this volcanic activity happened all around the planet—not just the Arctic.”8 That, however, was as far as he could go. Not knowing where else to turn, he remained strangely silent when it came to what could have caused such volcanic activity all over the world.
Ellesmere Island, which is one of Earth’s northernmost landmasses, seems to have no end of surprises. Forests, apparently, continued to grow between ten and two million years ago in what is now a dry, frigid, treeless site that is surrounded by glaciers all year round. The mummified remains of this forest include well-preserved logs, leaves, and seed-pods from such trees as pine, birch, and spruce. The trees, it has been suggested, “were likely preserved” because “they were buried quickly by landslides and thus protected from air and water, which hastens decomposition.” The logs in Quttinirpaaq National Park, many of which are several feet in length, are so numerous that one can hardly cross some areas without tripping over them. Joel Barker, from the Ohio State University, was offering nothing new when he compared the mummified forest to ones “growing hundreds of miles to the south,” which suggested that the ancient forest “must have grown during a time when the arctic was much warmer.”1 And on it goes. But never mind animals and the forests in which they lived. How about human beings?

1 R. Boswell, loc. cit.
The presence of early man in Arctic regions, especially in Siberia, is a subject that has been debated since the middle of the nineteenth century. Such presence has been accepted by some and contested by many others. Some had even gone so far as to claim that “Siberia, or the Far North in general,” had been the actual “cradle of mankind.”2 As time went by, however, lack of solid evidence resulted in an ever-diminishing number of believers. Those who held on kept on looking and digging, but what they found did not always convince their detractors.3 In that respect, not much has changed. And yet, worked tools continue to be discovered in areas where, at present, it is definitely too cold for tribal populations to exist in comfort. But even then, the objects in question are so crude that many refuse to see them as the handiwork of human beings. Nevertheless, there are exceptional cases.
One of the most important of these Siberian sites is that of Diring-Ur’akh on the Lena River north of Yakutsk in Yakutia. Excavations on the terrace of this river have uncovered human burials in stone coffins that have been dated to 1500 B.C. It is evident, however, that these burials had been dug through much older Paleolithic deposits and, below that, further sites have been uncovered, some of which have been dated to 2 million years ago. Because pieces of worked stone from these deeper layers could still be fitted together, it was clear that the crudely fashioned tools were actually made at the site and not brought from elsewhere. What this additionally indicated is that those responsible for the manufacture of these artefacts had probably belonged to a sedentary, rather than a roving, way of life. It was this, more than anything else, that converted the archaeologist Yuri Mochanov, who had originally been a skeptic, “to the notion of a very early occupation of the Far North.”4
Why would such early human beings have settled in Siberia had it been as freezingly cold as it is at present? This problem did not much bother Mochanov, or his associates, since it was obvious that, for whatever reason, such a settlement did take place. There seemed to be no point in arguing why. What did bother them was how such early human beings managed to survive in such extremes. As it has been argued, it is clear that “this part of the world could not have been populated by early humans unless they had the ability to make (or at least preserve) fire and were sufficiently advanced to have made themselves fur clothing.”1 But while the domestication of fire by early man has been traced to over one million years ago, there is no evidence that it had been used in Siberia during the much later Paleolithic. Siberian sites that have been dated to even later times had already run headlong into that quandary. “Most archaeologists,” it has been reported, “simply do not believe that humans—even 200,000 years ago—were capable of colonizing the harsh natural environment of the Far North because they were not advanced enough to control fire and make themselves clothes.”2

1 M. Inman, “Mummified Forest Found on Treeless Arctic Island,” news.nationalgeographic.com (December 17, 2010), pp. 1-2.
While the above remains highly controversial, in our own scheme the problem is not even raised. With the proto-Saturnian sun having shone above these regions from its immobile location in Earth’s north celestial pole, early humans in Siberia would not have needed fire or fur clothes to keep them warm—which is why neither the remains of clothes nor ashes have ever been discovered in these regions from such an early age.
Cosmic dust continues to be implicated in the inception of the Younger Dryas. Elevated levels of helium-3 in sediments associated with this event have been attributed by Paul LaViolette “to a sudden influx of a large amount of cosmic dust.”3 As he also indicates, Earth is still “surrounded by a dust cloud” that extends “radially outward” for a few thousand miles.4
LaViolette is also of the opinion that the mass extinctions of the Pleistocene had to have had a solar cause.5 According to him, several studies “indicate that toward the end of the ice age the Sun was far more active than it is today.”6 Citing the works of H. A. Zook and others, he presents evidence from the “tracks” that solar flares are said to have etched in lunar rocks, indicating that, around that time, “the average solar cosmic ray intensity was 50 times higher than at present,” which intensity then started to decline until it reached the current level.7
This was not exactly new. As already noted, analyses of the lunar rocks recovered by the Apollo astronauts had already led to the assumption that Pleistocene fauna had succumbed to intense cosmic ray bombardment. Rather than LaViolette’s more energetic Sun, however, the source of the bombardment was theorized to have been a supernova.8
LaViolette is not alone in this. To an extent, van der Sluijs also agrees. According to him, the events in question “can be explained on the hypothesis of a solar storm of unprecedented proportions, provoking intense geomagnetic disturbances and near-lethal synchrotron radiation emitted by magnetospheric plasma, possibly in combination with a cometary interloper.”1
The telltale signs of this bombardment are indisputable. What remains debatable concerns its source—a supernova or an overactive solar orb. In our scheme, the flare-up from the brown dwarf star that had been acting as Earth’s primordial sun just as readily accounts for the accumulated evidence. As we have also indicated, even though we still maintain that our lunar neighbor had not yet been captured in terrestrial orbit, it would still have been bombarded by intensive cosmic rays, provided it was not too far away from Earth.
MAN ON THE MOVE
The Clovis people’s reputation of having been the first wave of infiltrators to settle in the Americas is under severe attack. Fossilized human feces dated to 14,000 years ago, which thus predate the Clovis era, have been discovered in an Oregon cave.2 In addition to that, thousands of stone artifacts have been found littering an ancient settlement, dated to 15,000 years ago, that has come to light in central Texas.3 There are those who believe that seafaring people from Asia “must have used skin boats” to navigate between ice-free waters along the Pacific Coast of Alaska and British Columbia “at least 16,000 years ago.”4 Anthropologists from the University of Utah have shoved this date even farther back in time by proposing that “the original peopling of the Americas might have begun more than 20,000 years ago from Central Siberia, across the Arctic Ocean, via Canada’s High Arctic Islands.”5 All of which is in keeping with genetic evidence that definitely points to “a much earlier arrival of humans in the Americas than previously believed.”6
For reasons we have already supplied in earlier pages of this work, the above dates have been rounded off from the ones supplied by those involved in these theories and discoveries. And even then these rounded figures are at best a reasonable average. What is of greater importance is that the evidence does not only indicate an earlier migratory trend into North America, but the greater feasibility of such migrations via different means than had earlier been believed. How many survivors from these different societies managed to merge and blend with others at the termination of the Younger Dryas is now difficult to ascertain. But the changes that affected them following the end of that period keep receiving further validation.
Ancient man had been mining iron oxide for the production of the coloring medium known as ochre in various localities around the world. Signs of such activity, dated to between 350,000 and 400,000 years ago, have come to light at Wonderwerk Cave in South Africa.7 The site at Terra Amata in France, where ochre has also been found together with Acheulian tools, goes back to 300,000 years ago.1 The oldest mine in the Americas, which has recently been discovered near the coastal town of Taltal in northern Chile, is much younger, but it bears special significance to our developing work. According to the archaeologists involved in this discovery, an estimated 700 cubic meters, containing 2,000 tons of rock, have been extracted from this mine. More than 500 hammering stones that were used for this extraction from the earliest use of the mine have also been unearthed.2

1 M. (Rens) van der Sluijs, “Bad to the Bones,” thunderbolts.info (August 30, 2010).
What seems clearly indicated is that the mining operation at this site constituted a labor-intensive activity that demanded technical skills and a fair amount of social co-operation that must have lasted for generations. This was implied by the carbon-derived dates of the charcoal and shells that were found in association with the mine, which dates range from 12,000 to 10,000 years ago.3 These, too, are rounded averages that should also be used with caution, especially since radiocarbon testing of comparable samples conducted by separate laboratories has resulted in different dates, not to mention other discrepancies.4
We do, however, note that mining at this site came to an end around that last given date, which coincides with our benchmark figure for the end of the Pleistocene Ice Age. In view of the catastrophic events that accompanied this event, the cessation of mining activity is understandable. And while, as we keep harping, we hate to jump ahead, the same site seems to have been rediscovered and mined again around 4,000 years ago,5 which comes rather close to coinciding with the end of the proto-Saturnian era during which Earth commenced on its long period of stabilization.
In actual fact, Earth has not yet settled down from its past series of catastrophic disturbances. I will not here mention those ancient cities that are now under the sea due to the sinking and/or tilting of land areas.6 Yes, lands continue to rise and fall. But they also split apart. Take the spreading of the continental plates in the Afar Triangle in northeastern Ethiopia. The pulling apart of these plates is slowly opening up a breach that will eventually be flooded by the waters of the Red Sea. The widening of the fissure that is being created has been calculated to be taking place “at up to 12 centimeters per year.”7 This might not seem to be much, and, in a way, it really isn’t. There are times, however, when sudden jolts accelerate the widening, amplifying the catastrophic disruption of the surrounding strata. One such jolt took place between September and October of 2005 when a 60-kilometer-long stretch of rock was torn
apart, widening the breach “by as much as eight meters.”1 Magma from nearby volcanoes then flowed into the newly created rift, thrusting up “a dyke of roughly 2.5 cubic kilometers—twice as much material as erupted from Mount St. Helens—more than two kilometers below the surface.”2
The splitting apart of the above area has been the one that has generated the most attention. But there are others. A 3-kilometer-long crack, 100 meters wide, appeared quite suddenly in Puno, southern Peru, without an accompanying earthquake to give it birth.3 In Iceland, an entire lake started draining into a newly formed crack in the earth.4 And that's to name only a few.
THE EMERGENCE OF RELIGION
The changes that most affected our ancestors at the end of the Ice Age and the events that followed the Younger Dryas, however, were those that took place within their own intellect. Prime among these was the embryonic emergence of religion. While, with others, Richard Rudgley places the inception of the Neolithic period at our benchmark date of 10,000 years before the present, he traces man’s “artistic and religious awareness” to an earlier period, “40,000 years or so ago.”5 What we accept, however, is that man’s artistic nature developed long before he formulated any concepts that could be called religious. What we have been able to extract from the mytho-historical record does not allow us to push religious concepts beyond our benchmark figure.
There will be those who will point to the production of ochre, said to have been utilized in religious practices, well before 10,000 years ago. Yes, as we have already seen, ochre had been in use earlier than that. It should, however, be kept in mind that this coloring agent was not always, and/or strictly, used for religious purposes. Besides its utilization as a painting medium, as Rudgley himself pointed out, "ochre may have been used in the treating of animal skins and hides in order to make some rudimentary form of clothing or bedding."6 The Australian Gugadja continue to use it as a medicine, which is not surprising since ochre possesses antiseptic qualities and can be used to staunch bleeding.7 Even its use as body coloring need not necessarily be seen as having been symbolic. We know from existing tribal people, such as the African Himba, that the rubbing of ochre on one's body can serve "to repel insects and provide protection from the sun."8
Rudgley is also of the opinion that man's acquisition of religion preceded his development of agriculture.1 This is in keeping with the newest shift in anthropology, especially as brought to bear by the archaeological discoveries at Çatal Hüyük and, more recently, at Göbekli Tepe, both in present-day Turkey.2 This last-mentioned site contains a megalithic structure that is presently believed to be the oldest religious temple that has so far come to light. It has been dated by Klaus Schmidt, its excavator, to 11,600 years ago.3 To us, this seems a little too early, but let it be for now.
Religion is additionally seen by some as having been the very basis of civilization.4 As always, however, opinions differ. Some have even taken a middle course. “In one place agriculture may have been the foundation,” Charles Mann found reason to report, “in another, art and religion.”5
To Rudgley, civilization is much older than most authorities would be willing to admit,6 but this depends on what one means by “civilization.” If civilization is meant to incorporate any form of institutional or civil law, no matter how primitive in origin, civilization could hardly be said to have developed before settlement, which, in turn, is usually believed to have preceded farming.7
If we have been following the right track, other than what dreams might have instigated in human minds, man would originally have found nothing in nature that would have led him to believe in gods, whatever his first concepts of such entities might have been. Never mind the anthropomorphization, or deification, of thunder and lightning, hurricanes and such. As far as we can tell, even such occurrences would have been kept at a minimum beneath the static atmospheric regime that existed within proto-Saturn's encasing plasmasphere. Judging by what we find encoded in mankind's mytho-history, the belief in an omnipotent being was the direct result of the catastrophic changes that were instigated by proto-Saturn's flare-up. It was due to that event that mankind commenced to endow heavenly bodies with intention and long-lasting lives. Nor is this merely our contention. As Mann recently pointed out, organized religion came about in response to "a common vision of celestial order."8
What Mann could not have known is that this "celestial order" was what followed proto-Saturn's chaotic outburst. It was, in fact, what mankind was to remember as the Creation. It is unfortunate that the manner in which this event was reported by those who witnessed it was to be misunderstood by those who came much later. In simpler words, it was eventually forgotten what it was that God had actually created.
As we pointed out at the very end of our previous volume, God gave birth to a daughter.9 As some have said, she was actually born at the very shedding of God’s light. But God kept
her hidden for a while until a proper place for her had been created. And that was more or less what God’s Creation amounted to, a simple, even if radiantly glorious, celestial enclosure. This is so contrary to most existing religious beliefs that it is bound to rile even some of my most ardent readers. It cannot, however, be stated all that simply. There is an abundance of evidence that needs to be surveyed and critically assessed. So let me end this treatise the same way I ended its three prequels by asking my readers to stay on board with me. I still have much to offer.