Protons, electrons, neutrinos — these are not fundamental components of the universe. Forget quarks and photons — they aren’t fundamental either. These “fundamental particles” are really just manifestations of the true fundamental component of the universe: information.

For many years I have relied on a simple definition of reality whose implications I did not appreciate. For me, reality is everything that we could plausibly know about. Unicorns are not real because there is no plausible reason to believe they exist. An unseen galaxy is real because there is plausible reason to believe that it exists. Yes, yes, you can cavil at this definition with lots of counterexamples, but of all the definitions I have seen, this one strikes me as the most useful, the most reasonable.

But I had never noticed a crucial element of this definition: the role of information. Reality is not defined by mass or velocity or energy or space or time, or any of the other “fundamental” dimensions; it is defined by knowledge, that is, information. If we have no information about something, and can have no information about it, then it doesn’t exist. If we do have information about something, then we know that it exists. Information is truly the fundamental component of the universe.

It seems obvious that the fundamental unit of reality is the bit, but how do bits manifest themselves in reality? It’s not as if astronomers looking deep into the universe see ones and zeros. How do those ones and zeros manifest themselves physically?

Here I turn to the Uncertainty Principle. It’s always seemed odd to me that the Uncertainty Principle has been shoved aside by quantum mechanics. Yes, QM is the way we apply the Uncertainty Principle to calculate the behavior of physical objects. But the essence of the idea lies in the Uncertainty Principle; QM is just the mechanical application of the Uncertainty Principle. The very essence of the Uncertainty Principle is the notion that the amount of information about the universe is limited. We cannot know anything with infinite precision.

This means that the total amount of information in the universe is finite. That should be obvious. If the spatial, temporal, and energy extent of the universe is finite, then why would the information extent be infinite?

This leads to an interesting problem illustrating the difference between mathematical truth and physical truth. The value of π is impossible to express with a finite number of digits. Yet it is physically impossible to assemble an infinite sequence of digits. To put it another way, even the universe cannot know the full value of π.
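To make the point concrete, here is a small Python sketch using Machin’s classic formula for π. It will produce as many digits as you ask for, but each additional digit costs more series terms and more memory; any physically realizable computation of π must stop at some finite digit.

```python
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    """arctan(1/x) from its Taylor series, to roughly `digits` digits."""
    getcontext().prec = digits + 10
    total = term = Decimal(1) / x
    n, sign = 1, 1
    x2 = x * x
    threshold = Decimal(10) ** -(digits + 5)
    while abs(term) > threshold:
        term /= x2          # next power of 1/x
        n += 2
        sign = -sign
        total += sign * term / n
    return total

def pi_digits(digits):
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = digits + 10
    return +(16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits))

print(pi_digits(50))  # 3.14159265358979323846...
```

The loop terminates for any finite `digits`, but there is no finite run that yields all of π: the mathematical object outruns any physical process that tries to hold it.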

It would seem, then, that the basic unit of information in the universe is the Planck Constant, 6.6 x 10^{-34} Joule-seconds. This comports with the notion of the photon as a carrier of information. A photon has a fixed amount of energy (measured in Joules), and it can deliver one bit of information, but it cannot deliver that bit instantaneously; the receiver must wait for at least half a wave-period to ascertain the existence of the photon, which means that the information content of the photon is measured in seconds as well. Therefore, its information content is properly measured in Joule-seconds — the fundamental unit of information in the universe.
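The arithmetic behind this claim fits in a few lines of Python (the two frequencies below are arbitrary illustrative choices): a photon’s energy times its minimum detection time comes out to h/2 for every photon, whatever its frequency.

```python
# Energy-time content of a photon: E = h*f, and the receiver needs at
# least half a wave period, t = 1/(2*f), to register it.  The product
# E*t = h/2 is the same for every photon -- a fixed quantum of
# Joule-seconds, independent of frequency.

H = 6.626e-34  # Planck's constant, Joule-seconds

def info_content(freq_hz):
    energy = H * freq_hz             # Joules
    min_time = 1.0 / (2 * freq_hz)   # seconds: half a wave period
    return energy * min_time         # Joule-seconds

radio = info_content(1e6)    # a 1 MHz radio photon
gamma = info_content(1e20)   # a hard gamma-ray photon
print(radio, gamma)          # both h/2, about 3.3e-34 J*s
```

A low-frequency photon carries little energy but takes a long time to detect; a high-frequency photon is the reverse. The Joule-seconds product is what stays fixed.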

Now I’m going to shift gears by talking about how we actually measure the “fundamental” dimensions of the universe. I was always deeply struck by Einstein’s brilliant observation that space is defined by the behavior of light. A straight line is defined as the path taken by a photon. That’s why we check the straightness of a stick by sighting along its length: the path of the light defines straightness. Einstein was able to use this observation to show that, since a gravitational field perturbs the path of light, it necessarily curves space itself.

But it doesn’t stop there. Light defines the straightness of space, but it also defines its extent. From 1960 to 1983, the length of a meter was defined to be 1,650,763.73 wavelengths in a vacuum of the radiation corresponding to the transition between the 2p_{10} and 5d_{5} energy levels of the Krypton 86 atom. (Today it is defined in terms of the speed of light — still light.) A more prosaic way of thinking about it is to imagine a universe with nothing in it. How big is that universe? It’s impossible to know; you need a yardstick to measure a distance, and if there’s no yardstick, there’s nothing to measure distance with. You need at least two points to specify a distance, and you need something to mark each of those points: an electron, a baseball, or a planet.
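As a quick sanity check on that number, inverting the definition recovers the wavelength of the krypton-86 line, the well-known orange line near 605.78 nanometers:

```python
# The 1960 definition: 1 meter = 1,650,763.73 wavelengths of the
# krypton-86 2p10 -> 5d5 transition.  Inverting gives the wavelength.
N_WAVELENGTHS = 1_650_763.73
wavelength_m = 1.0 / N_WAVELENGTHS
print(wavelength_m * 1e9)  # about 605.78 nanometers
```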

*[post scriptum: I recently realized that this line of reasoning was first used by Aristotle in a slightly different way. He argued that space cannot be measured except by reference to matter — just as I do. But he went further: he then concluded that this makes a vacuum impossible, because without any matter, there cannot be any space. I have no problem with a vacuum; we could have a vacuum between two mass points. But if there were absolutely no matter in the universe, then we would not be able to measure any space (aside from the fact that we wouldn’t exist in the first place…). ]*

The same thing applies to time. We measure time by events. Without events, you can’t measure time. With a clock, an event is either a “tick” or a “tock”. Physicists define the second in terms of light, using the frequency of radiation produced by an atomic energy transition, in this case a cesium atom. Each of those oscillations is an event. Time is demarcated by events.

The Big Bang

All this makes the Big Bang easier to understand. Let’s start at the instant of creation, T = 0, when all the information of the universe came into existence. It’s nonsensical to ask, “What happened before that?” because there was nothing to measure time with; ergo, there never was any such thing as T = -0.000000….1. It couldn’t be measured, so it didn’t exist. If the information for it cannot exist, it simply isn’t a part of reality.

So, at T = 0.00000…. we have a stupendous but finite amount of information. Don’t ask where it came from — that question erroneously assumes that there was a time before T = 0. Think of it as a movie that has all that information on the first frame. There are no frames before the first frame.

Here’s another way to get past the confusion arising from the notion of time starting at T = 0. Think of a computer endowed with lots of artificial intelligence. This computer is so intelligent that it is self-aware. It is conscious, and is a kind of cybernetic person. It perceives the environment in which it exists. Now we pull the plug, and the computer instantly loses power. It dies. What happens after that? From its point of view, nothing happens after that, because it is not aware of the passage of time after it loses power. It makes no sense to think about how the computer perceives its death, because it is unaware of its death. Now reverse the thinking to consider a symmetric “birth”. Suppose that the computer could turn on instantly in the same way that it turns off instantly. What would it think before its birth? Again, the question is nonsensical. It has no perception of time prior to turning on.

The universe is the same way. It’s a gigantic computer, an information processing machine. It turned on at T = 0. That’s when the movie started. There’s nothing before that.

So all that information manifested itself as some sort of “energy-time”, measured in Joule-seconds. Because there was something there, it was now possible to measure both space and time, although at T = 0, it was all concentrated at a single point. This situation could not endure for even the tiniest fraction of a second, because if it did so, the total amount of information would increase.

Here’s one way to understand why the information content of the universe would increase if it remained static. Think of a single particle moving through space. It has a position *x* and a momentum *p*. The Uncertainty Principle tells us that there is an uncertainty in our knowledge of its position and its momentum: ∆*x*∆*p* ≥ h, where h is Planck’s Constant.

Now suppose that you could measure its position x_{0} at time t_{0} with accuracy ∆*x*, without affecting it in any way. Then you wait a while and measure it again at position x_{1} at time t_{1} with the same accuracy ∆*x*. You can determine its velocity (and hence its momentum) with greater accuracy, and in fact, the longer you wait, the more accuracy you can get. This would violate the Uncertainty Principle; you could end up knowing its position and momentum with much better accuracy than the Uncertainty Principle permits.
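The thought experiment above can be put in numbers. This is a minimal sketch with illustrative values for an electron (the position accuracy and wait times are arbitrary choices): waiting longer between the two position fixes drives the inferred ∆x∆p below h, which is the contradiction.

```python
# Two position measurements of an electron, each with accuracy dx,
# separated by a wait dt.  The momentum uncertainty inferred from the
# two fixes is roughly dp = m * (2*dx) / dt, so the product dx*dp
# shrinks as dt grows -- eventually dropping below Planck's constant.

H = 6.626e-34     # Planck's constant, J*s
M_E = 9.109e-31   # electron mass, kg
dx = 1e-10        # position accuracy: about one atomic diameter, m

for dt in (1e-17, 1e-14, 1e-11):   # wait times, seconds
    dp = M_E * (2 * dx) / dt       # inferred momentum uncertainty
    print(f"dt = {dt:.0e} s  ->  dx*dp = {dx * dp:.2e} J*s  (h = {H:.2e})")
```

At the shortest wait the product still exceeds h, but by the longer waits it is orders of magnitude below it: an unperturbed pair of measurements really would beat the Uncertainty Principle.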

The “catch” that prevents this is that the act of measurement necessarily perturbs the particle. You have to look at the particle to measure its position, and that requires bouncing photons off it. Those photons knock it off its previous course in an unknowable way, so you can’t pull off the trick I described above.

This principle applies to the universe as a whole at T = 0. If the universe had remained at a single point for any length of time, we would be able to localize it with greater and greater precision with each passing billionth of a second, which would violate the Uncertainty Principle. Therefore, the universe had to expand; that was the only way that it could keep its total information content stable. This expansion was very rapid, much faster than the speed of light. It was not caused by any force or pressure; it was not an explosion in the sense we think of an explosion. It was instead the natural and necessary action of the Uncertainty Principle.