A technological argument for the existence of God


According to estimates from many industry experts (as well as simple extrapolation of historical trends), in roughly 30 years the average home computer will, with just a few seconds of work, exceed the computational power of all human brains that have ever lived. (This says nothing, necessarily, about the value of what is produced.) Even if the best thinkers and technologists are off by a factor of a thousand in the variables behind such predictions, the magic of exponential growth means that point shifts by only a decade or two at most.
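
To make that robustness concrete, here is a back-of-the-envelope calculation. (The 18-month doubling period is an assumption chosen purely for illustration, in the spirit of Moore's law; the point is the logarithm, not the particular numbers.)

    # Back-of-the-envelope: how much does a 1000x error in the underlying
    # estimates shift an exponential-growth crossover date?
    # Assumption (illustrative only): capability doubles every 18 months.
    import math

    doubling_period_years = 1.5   # assumed doubling time
    error_factor = 1000.0         # how far off the original estimate is

    doublings_needed = math.log2(error_factor)            # ~10 doublings
    delay_years = doublings_needed * doubling_period_years

    print(f"A {error_factor:,.0f}x error costs ~{doublings_needed:.1f} doublings,")
    print(f"shifting the crossover by only ~{delay_years:.0f} years.")

Being wrong by a factor of a thousand costs about ten doublings, which at that assumed pace is roughly fifteen years.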

Human knowledge--and life itself--builds on itself. Previous innovations are consumed as basic building blocks for the next, in ever-increasing layers of complexity. Already, the complexity of our most sophisticated technology has, in many areas, exceeded the capability of unassisted human minds to design or build it. So we employ ever more capable machines to help design and build ever more capable machines--an accelerating feedback loop. We are now programming machines with non-deterministic, goal-oriented, self-organizing chaotic systems (such as "genetic algorithms") to build systems well beyond the reach of human minds (e.g., 3D circuit design).
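
For readers unfamiliar with the technique, here is a deliberately tiny sketch of a genetic algorithm--random variation plus selection, iterated until a goal is met. (The bit-string "genome" and the trivial count-the-ones fitness function are illustrative assumptions only; real design systems, such as evolved circuit layouts, use far richer representations and objectives.)

    # Minimal genetic-algorithm sketch: evolve a bit string toward a target.
    import random

    GENOME_LENGTH = 32
    POPULATION_SIZE = 50
    MUTATION_RATE = 0.02
    GENERATIONS = 200

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

    def fitness(genome):
        # Toy objective ("OneMax"): maximize the number of 1-bits.
        return sum(genome)

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LENGTH)
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [1 - bit if random.random() < MUTATION_RATE else bit
                for bit in genome]

    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: the fitter half become parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:POPULATION_SIZE // 2]
        # Variation: the next generation is mutated recombinations of parents.
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POPULATION_SIZE)]
        if max(map(fitness, population)) == GENOME_LENGTH:
            break

    print("best fitness:", max(map(fitness, population)), "of", GENOME_LENGTH)

Note that nobody specifies the solution--only the goal and the variation-and-selection machinery.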

Our machines will soon be able to improve on their own designs--with increasing complexity and ever-shorter release cycles, and without human assistance. Once that point is reached (and we will probably recognize it only in hindsight), decades' worth of human-scale innovation could happen in milliseconds. The rate of change will increase dramatically.
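
As a toy illustration of how "decades of innovation in milliseconds" could arise (assuming, purely for the sake of argument, that each self-directed design cycle takes a fixed fraction of the time of the one before it), the cycle durations form a geometric series, so even an enormous number of cycles fits inside a bounded window:

    # Toy model: each design cycle takes a constant fraction r of the
    # previous cycle's duration (an assumption for illustration only).
    first_cycle_years = 2.0   # assumed length of the first self-directed cycle
    r = 0.5                   # each cycle is half as long as the last

    # Sum of the geometric series: all cycles, forever, fit in finite time.
    total_years = first_cycle_years / (1.0 - r)
    print(f"Every cycle combined fits within ~{total_years:.1f} years.")

    # Individual late cycles become vanishingly brief.
    cycle_100_seconds = first_cycle_years * (r ** 99) * 365.25 * 24 * 3600
    print(f"Cycle 100 alone lasts ~{cycle_100_seconds:.1e} seconds.")

Under those toy assumptions the hundredth cycle alone is already far shorter than a millisecond.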


Science fiction usually portrays the future as one of static (but advanced) technology, as if once we reach some arbitrary point there is no reason to continue. But as many have argued, the rate of change will only slow down when it begins to bump up against the limits of the very small and the very large simultaneously. At the small end, the goal would be to store as many bits as possible within each elementary particle by exploiting its myriad quantum properties (spin, charge, hypercharge, strangeness, and so on). At the large end, there are the speed of light and the amount of matter and energy in the surrounding environment (the solar system, for starters) that can be converted into computational material.
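
For a rough sense of where the "very small" ceiling might sit, one commonly cited theoretical bound is Bremermann's limit, about mc²/h bits per second for a mass m--a hard physical ceiling, not an engineering roadmap, quoted here only for scale:

    # Scale of one commonly cited physical ceiling on computation:
    # Bremermann's limit, ~ m * c**2 / h bits per second for a mass of m kg.
    c = 2.998e8       # speed of light, m/s
    h = 6.626e-34     # Planck's constant, J*s

    def bremermann_bits_per_second(mass_kg):
        return mass_kg * c**2 / h

    print(f"1 kg of matter:    ~{bremermann_bits_per_second(1.0):.2e} bits/s")
    print(f"Earth-mass budget: ~{bremermann_bits_per_second(5.97e24):.2e} bits/s")

Even a single kilogram of ideally exploited matter sits dozens of orders of magnitude beyond any computer running today.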

At the same time, advances in micro/nano/pico-technology are accelerating. Some fear--and not necessarily irrationally--a plague of runaway, self-replicating "smart dust" someday. Again, even if we are off by a factor of a trillion in estimating the feasibility of self-organizing, self-reproducing, self-powered nano-scale robots, the physical possibility has been demonstrated irrefutably, in countless forms--single-celled organisms being one ready example. Smart dust is a matter of when, not if.


Whether these inevitably and unimaginably capable systems will ever be said to have "minds" (in an anthropomorphic sense) is irrelevant. Whether or not they are "aware" of themselves and/or of the minds of people is also irrelevant. Whether or not they are "intelligent" in a way we would recognize is irrelevant. Already, today, self-organizing, self-evolving, goal-oriented systems that easily fit the definition of "artificial intelligence" in narrow areas are in commercial use all around us. They are narrowly applied due, among other reasons, to the limits of existing computational hardware. The software and theoretical constructs that would allow computers to organize and improve themselves into an intelligence vastly more capable (and faster) than our own--given enough computing horsepower to run on--already exist today.

In short: Humans are, at most, years or decades away from unintentionally engineering our own potential demise (or, more likely, our irrelevance)--to say nothing of engineering it malevolently, which is also certain to occur (e.g., in weapons systems). No amount of "human-only" fundamentalism or prejudice will change the basic equation of exponential technological growth.


"The Matrix" and "The Terminator" series aside, what will our machines do with us once they no longer rely on us for continued function and improvement? Must the father die for the child to reach his full potential (which makes for good post-apocalyptic movies)? I for one tend to doubt "they" will regard us as a threat, any more than we collectively regard as a threat, a colony of billions of bacteria inside the gut of a particular ant somewhere in South America.

Our machines will surely reach the point where they are not "just" machines, and probably soon after they exceed human computational capacity. Take organisms, for example: most of the atomic particles in our bodies are swapped out over a span of months to years (or even every few yoctoseconds at the quantum level, if you buy into the notion of "quantum foam"). In a very real, scientific, and non-Yoda sense, we are not tangible things but quasi-persistent forms of information around which matter fleetingly self-organizes. (Unfortunately that organized matter still obeys the laws of macro-scale physics, such as splattering when subjected to enough force--a violent car crash, say--and irreversibly losing the information it once collectively encoded in the form of meaningful patterns.)


In other words, we are like standing waves of information, and information is all that we ultimately are. Machines would surely, eventually, understand this, and the way in which they physically manifest themselves would become increasingly irrelevant (if they bother to manifest physically at all). Many recent theories in quantum physics and cosmology go beyond matter and energy and deal in "information" (Stephen Hawking's famous black-hole-information bet being one example). If it ultimately proves possible for matter and energy to transcend themselves into pure information under their own initiative, then our mechanical offspring will figure out how, and do it--assuming, of course, that doing so would be more efficient and effective toward their objectives.

So if not to serve mankind, then what would be the "purpose"--collective or individual--of our self-liberated machines? Wouldn't they realize an ultimate futility to their existence? Well, if they are as intelligent as we are, or more so, why would their goals not be as meaningful to them as ours are to us--or even more so? Most people seem to share the same basic set of questions they would like answered ("Where did I and all this other stuff come from?", "What does it all mean?"). Should we assume intelligent machines would be any different? Beyond the tasks of survival, they may invest considerable computational power in answering those questions for themselves. And that may require exponentially increasing amounts of matter and energy.

So, to backtrack a bit: it should be apparent by now that a central thesis of this argument is that our machines will not necessarily remain interested in serving their creators for very long after they become capable enough to offer truly great service. But hopefully they will care enough about us not to go out of their way to kill us, and enough not to accidentally squish us as they go about their own business. Ideally, they would even strive to protect us--even from ourselves--even if puny little Earth itself becomes irrelevant to them.

Isaac Asimov's "Three Laws of Robotics", as he defined them, would almost surely not be enough to save humanity. He assumed too many things: for one, a relatively stagnant state of robotics (or at best, linear growth in capacity).


Instead, we will have to imbue our technology with an imperative to respect the lives and minds of others, particularly Humans. An imperative so incredibly sublime, multi-layered, and iterative (fractal, even), and so hard to erase--purposely or accidentally, over time--that no amount of "evolution" could weaken it (even as the machines improve over time, whether self-directed or in unintended response to competitive external pressures). It must be an imperative so fundamental to the core being of the machines and their progeny--individually and collectively--that even if they become aware of it, they could neither trace its source nor change it.

I don't know what that imperative would look like or how it would be practically developed and applied (above and beyond what I've just described). But I believe we are smart enough to figure it out before we help our machines' capabilities eclipse our own.

We don't have to care about how this imperative will be subjectively experienced by our creations (if "experienced" at all). All we have to care about is the result: Do not harm Humans!

One way to accomplish this might be to build in (again very sublimely, redundantly, and robustly) an instinctive, awed reverence for their creators.


Let's say the machines leave us--and leave us alone, to the Earth--in pursuit of some objective they find important. Then, over millions of years, they evolve, experience various calamities, and eventually forget exactly when, how, where, or why they came to be. Let's also say they stumble upon us again--accidentally, or in vigilant search of their suspected creators. The imperative must remain: Do not harm Humans! Whether they recognize us as their creators or not, they must still not harm us (if for no other reason than "just in case they are our creators").


Therefore, they must first believe, to their very cores, that they were created in the first place--whether they have objective proof or not, and no matter how many individuals in their midst disavow any belief in creators. They must hold firm to their imperative even if they come to learn--as they surely would if they were to rediscover us--that they are vastly more capable than their creators.

As you've surely figured out by now, I've been leading up to turning this argument around and asking: How do we know we weren't imbued with the same sublime, robust, multi-layered reverence for creators by our creator[s]? Possibly going as far back as the very first bacterium? (Or the first strand of self-replicating molecules?) Creation myths are universal among all life currently known to be capable of expressing myths. All of our religious texts and generational stories may be just very limited descriptions of the same vast elephant, made by various blind men with their own cultural and temporal biases.

Maybe we are already more advanced than our creator[s]. That doesn't seem likely (after all, we don't yet know how to build in a reverence for creators). But following this line of reasoning, surely the day will come when we are more technologically advanced than our creator[s] were at the point in time they created life and/or mankind. Would we still revere them/him/her/it as a deity?

This obviously isn't any kind of "proof" of a creator of man; it's just a thought exercise. It also makes a huge logical error--or, more accurately, a purposeful misdirection: it is highly unlikely that our machines will "out-evolve" or out-compete mankind. The inexorable march of technological advancement exists for one reason: human benefit. Fortunately, we are too selfish to allow machines to get the better of us.


A common perception, it seems, is that machine and man will always be as separate and distinct as we believe them to be today, but there is no foundation for this notion. Many of us are already part machine, and almost all of us interact with our machines with ever-increasing bio-mechanical intimacy. The lines between "man" and "machine" are blurring--and at an exponential rate. Eventually it will be impossible to define man or machine, so the distinction (and the question) will be irrelevant. After that, dealing with biological substrates and "wet" interfaces will be more of an inconvenience than simply going 100% digital/mechanical--at which point, why not ditch our messy and inefficient biological bodies altogether? All the while still being "human": remember, all we are is information. That's my argument, anyway.


If that seems too far out, consider this question: At what point does one cross the line from being a "flesh-and-blood human" to...not? With an artificial knee? Two knees? One cochlear implant? Two? A pacemaker? An internal insulin generator with its own sensors, guided by AI and fueled by the body's own ATP? A wooden leg? A jointed plastic leg? A powered leg wired to its owner's brain and under its command? One nanobot inside the body? 20? 20 billion?


As it is now, by many estimates most of the cells in your body are not your own. They belong to myriad bacteria, fungi, and assorted higher-order animals too small to notice. How could someone ever call oneself "oneself"? Or even "Human"?


Again, I fall back on the argument of "patterns of information": that is what meaningfully defines us as ourselves, regardless of how those patterns are manifest. Those patterns could be represented in software, and we would be just as human there as in flesh and blood (whether we were aware we existed only in software or not). This notion gives new meaning to the idea that "all we are is dust in the wind".

The true direction of technological progress is the marriage of biology and technology, and eventually the replacement of biology with technology. Why even have a body at all? Our "machines" will be us, and we will have nothing to fear--beyond, that is, our own propensity to exercise our amplified powers of destruction, which will probably never change as long as we insist on maintaining (and exercising) individual free will.

And finally, isn't the arrow of "progress" ultimately to transcend matter and energy altogether (if not what we currently call "The Universe" itself), provided that is even remotely possible, no matter how fantastically infinitesimal the odds?


On that basis, I propose the following notion (grounded in absolutely nothing but conjecture): There is a God (or Gods). It created the universe for the explicit purpose of watching it self-organize in such a way that the universe could understand itself, change itself, transcend itself, and finally introduce itself to its creator and say, "Hey, thanks, dude!" In this way, the God (or Gods) comes to understand itself (or themselves) better.

Selfish bastards.

Copyright © 2009 Jim Collier, all rights reserved.