Astrology these days is a superstition. But in times past it was something entirely different, and it played a crucial role in the development of Western science. 

Science is a modern concept
People wrongly assume that science has always been with us, so we see books about “Babylonian science” or “science in ancient Greece”. This is entirely wrong, because there was no science anywhere in the world until the seventeenth century. Science is itself a modern concept, part and parcel of modern civilization and alien to any civilization before the seventeenth century. It’s like other modern concepts, such as ecological awareness or consumerism or egalitarianism, that simply don’t work when applied to ancient civilizations. 

There were, of course, precursors to science in ancient times. Every civilization had extensive lore attempting to explain natural phenomena. Initially that lore was based on social reasoning: natural phenomena were controlled by powerful persons whose favor must be curried: gods. See Mental Modules Interact for an explanation of this process.

The sun and the seasons
The first inkling that there were secular explanations for natural phenomena came from the seasons of the year. As the sun moved along its annual path around the ecliptic, the weather changed. When the sun was south of the celestial equator, the weather (for the Northern Hemisphere civilizations in question) was cold. When the sun was north of the celestial equator, the weather was warm. 

Inasmuch as early civilizations skated along the edge of famine, it was crucial to know exactly when to plant and harvest crops. You couldn’t rely on today’s weather to make that decision — perhaps today’s weather is a late freeze or an early thaw. The weather itself was highly variable, but the sun was a regular and reliable predictor of the overall weather. All early civilizations learned to plan their farming based on the movements of the sun. 

Thus was born the concept of the calendar, the regular observations of the sky, and the celebrations of critical turning points in the annual cycle. But you can’t observe the sun’s position in the sky, at least not directly. After all, when the sun is visible, the stars aren’t, and when the stars are visible, the sun isn’t. Thus, indirect observational methods were required. 

The simplest and most commonly used technique was to observe the sunrise or sunset against some fixed landmark. If you were to observe the location of the sunrise on the horizon every morning of the year, you would see that the sun reaches its most southerly position on the horizon around December 21st, and its most northerly position around June 21st. Halfway between those two extremes, around March 20th, the sun rises exactly due east. Thus, all you need to do is set up some markers that permit you to see when this happens. Inasmuch as this observation is crucial to the well-being of your community, it’s probably a good idea to make those markers clear and prominent, like so:

Stonehenge heel stone

Marking systems like these were used all over the world. Few were as monumental or long-lasting as Stonehenge, but archaeologists have found their traces in many locations. Many, like Stonehenge, were set up to mark a number of critical locations for rising and setting of both the sun and the moon:

Stonehenge Station Stones
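The swing of the sunrise point along the horizon, which markers like these capture, can be estimated with a little spherical trigonometry. The sketch below is a rough illustration, not a reconstruction of any ancient method: it ignores atmospheric refraction, horizon elevation, and the sun’s angular size, and the Stonehenge latitude used is an assumed round value.

```python
import math

def sunrise_azimuth(latitude_deg, declination_deg):
    """Azimuth of sunrise measured from due north, from the standard
    flat-horizon approximation cos(A) = sin(dec) / cos(lat)."""
    a = math.sin(math.radians(declination_deg)) / math.cos(math.radians(latitude_deg))
    return math.degrees(math.acos(a))

LAT_STONEHENGE = 51.18   # degrees north (assumed value, for illustration)
OBLIQUITY = 23.44        # the sun's declination at the solstices

summer  = sunrise_azimuth(LAT_STONEHENGE, +OBLIQUITY)   # ~51 deg: northeast
winter  = sunrise_azimuth(LAT_STONEHENGE, -OBLIQUITY)   # ~129 deg: southeast
equinox = sunrise_azimuth(LAT_STONEHENGE, 0.0)          # 90 deg: due east
```

At this latitude the sunrise point swings through nearly 80 degrees of horizon over the year, which is why solstice markers are easy to set up and hard to mistake.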

Another system, less reliable but requiring less effort, utilized heliacal risings and settings of bright stars. We know that the Egyptians, for example, used the heliacal rising of the bright star Sirius to estimate the timing of the annual Nile flood that inundated the fields along its banks. Starting a few weeks before they anticipated the flood, they would get up well before sunrise and watch for Sirius to rise. Each day it rose four minutes earlier than the previous day, relative to the sun. For a stretch of weeks it was lost in the glare of twilight, but eventually it would climb far enough ahead of the sun to be glimpsed in the dawn sky; that first reappearance was the heliacal rising of Sirius, and the flooding would begin around that date. This system allowed them to issue warnings in advance so that the farmers could clear out their equipment in due time before the flooding started.
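The Egyptian countdown can be sketched numerically. The four-minutes-per-day drift follows from the difference between the solar and sidereal day; the visibility threshold below is an assumed illustrative value, since the real one depends on the star’s brightness and local atmospheric conditions.

```python
DRIFT_MIN_PER_DAY = 4.0      # a star gains about 4 minutes on the sun each day
VISIBILITY_LEAD_MIN = 36.0   # assumed lead needed to escape the dawn glare

def days_until_heliacal_rising(initial_lead_min):
    """Days until the star first rises VISIBILITY_LEAD_MIN minutes
    ahead of the sun, starting from a given (possibly negative) lead."""
    day = 0
    lead = initial_lead_min
    while lead < VISIBILITY_LEAD_MIN:
        day += 1
        lead += DRIFT_MIN_PER_DAY
    return day

# If Sirius currently rises together with the sun (lead 0), the first
# dawn sighting comes nine days later: 9 * 4 = 36 minutes of lead.
print(days_until_heliacal_rising(0.0))   # 9
```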

Both of these systems worked well, but they suffered from a fatal flaw: what if it was cloudy on the critical day? This problem could indeed be fatal if the cloudiness extended over several weeks. The solution was easy enough: keep a record of the days when the sun rose at critical points on the horizon. This record is what we now call the calendar. 

It takes only a few years of such observations to see that the sun completes its cycle in about 360 days. Observational errors made it difficult to nail down the correct days with a precision of better than a few days. The longer your records extended, the more accurately you could calculate the length of the year. 

Every civilization developed its own estimate of the length of the year, and most civilizations quickly zeroed in on 365 days as the correct figure. Inasmuch as a lunar month seemed to span about 30 days, most civilizations settled on a simple system in which a year was divided into twelve months of 30 days each. They added five “special days” at the end of the year to keep the calendar in step with the sun. Those special days were usually times of celebration and feasting. Some societies marked the end of the year at the winter solstice; this was the case with the Roman calendar, and when Christianity became the dominant religion in the Roman Empire, they rechristened <ahem> the winter solstice holiday as “Christmas”. 

But some societies marked the beginning of the year at the beginning of spring, and some marked it at the summer solstice.

In any case, the critical trick was to keep count of the days that passed and add whatever observations the weather permitted to the overall record. Because the Egyptian civilization had a continuous record of observations stretching back many centuries, they were the first to realize that the year is a little longer than 365 days; by their estimate, it was 365.25 days long. So they added an extra day every four years to keep the calendar in step with the sun. This is the basis for our leap year.
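The arithmetic of that drift, and the every-fourth-year remedy, can be sketched in a few lines. (Our modern Gregorian calendar refines the rule further with exceptions for century years, but that refinement came much later.)

```python
def calendar_drift(years, calendar_year_days=365.0, true_year_days=365.25):
    """Days by which a fixed-length calendar falls behind the sun."""
    return (true_year_days - calendar_year_days) * years

# A 365-day calendar slips a full day every four years...
print(calendar_drift(4))     # 1.0
# ...and a whole month in about 120 years.
print(calendar_drift(120))   # 30.0

# The remedy, as in our leap year: one extra day every fourth year.
def days_in_year(year):
    return 366 if year % 4 == 0 else 365
```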

From the calendar to astrology
A simple and obvious lesson emerged from all this effort: the sun exerts enormous influence on the world. It controls the seasons, the weather, and the crops. It was also observed that the moon seemed to exert some mysterious influence on women, because their menstrual cycles seemed to be regulated by the motion of the moon. Moreover, there was some sort of relationship between the moon and the tides as well, but they could never quite nail it down. 

There were five other celestial objects that moved through the sky: Mercury, Venus, Mars, Jupiter, and Saturn. They moved along the same path that the sun and the moon followed; it was obvious that they were somehow related to the sun and the moon. It really doesn’t take much of a leap to conclude that, if the sun dominates our lives, and the moon seems to have lesser influence on the world, then clearly the five planets must also exert some lesser, more subtle influence on our lives. This was the foundation of astrology.

It’s important to recognize that astrology did not start off as superstitious mumbo-jumbo; it was a rational extension of the state of knowledge at the time. Moreover, the importance of the calendar to society meant that the people who maintained the calendar enjoyed high status in society. We call them priests, but again, the modern concept doesn’t quite apply to ancient times. Their role was religious, but in those days religion was closely tied to the secular observations of the sky. In other words, science and religion were one and the same back then. Most civilizations had nailed down the motions of the sun by about 5,000 years ago; with that knowledge in hand, they turned to the moon and by about 3,000 years ago, the best of them could predict eclipses. 

But it was the Greeks who took the primitive astronomical knowledge of the Babylonians and Egyptians and ran wild with it. They created Western astrology by adding detailed observations of the planets to the overall system. The crucial turning point was the development of detailed tables of the positions of all the celestial bodies for any time in the past or the future. These tables permitted them to assign a unique astrological stamp to any moment in time. With seven celestial bodies, each of which could occupy any of the 12 signs of the zodiac (constellations), there were 84 different body-sign pairings (although the propinquity of Mercury and Venus to the sun reduced this number to about 80). Throw in the time of day and each of the seven bodies could further be marked as rising or setting at that moment. The complexity of the subject exploded. 
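The arithmetic behind this explosion is easy to check. The sketch below counts the simple body-sign pairings, and also the number of complete whole-sky configurations when every body is assigned a sign independently (ignoring the Mercury and Venus restriction noted above).

```python
BODIES = 7    # sun, moon, Mercury, Venus, Mars, Jupiter, Saturn
SIGNS = 12

# Counting body-sign pairings: each body paired with each sign.
pairings = BODIES * SIGNS
print(pairings)          # 84

# A full chart assigns every body a sign independently, so the number
# of distinct whole-sky configurations is vastly larger:
configurations = SIGNS ** BODIES
print(configurations)    # 35831808
```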

Remember that I mentioned earlier that science and religion were one and the same? At this point, religion entered into astrology by providing a basis for associating various astrological patterns with appropriate social consequences. Mars was not only a planet, but also the god of war; his position was important to questions of war and peace. Likewise, Venus influenced love and marriage. By associating the personal characteristics of gods, as derived from religious lore, with the planets and their positions, astrologers could make predictions about events on earth.

Astrology and astronomy
To get good tables of planetary positions, careful observations of the planets over long periods of time were necessary. It was no longer good enough to merely note the constellation in which a planet resided on any given day; more precise observations were required. Since the planets moved against the background of the stars, it was best to measure their position relative to the stars, which in turn required detailed observations of the positions of stars. This was the genesis of astronomy: the rigorous measurement of objects in the sky. The Greeks worked hard at it, and two names have endured through the centuries as major contributors to this effort: Hipparchus and Ptolemy. 

Hipparchus assembled the first good catalog of the stars in the sky, using measurements from others as well as his own observations. He also created the system we still use for measuring the brightness of stars. He divided the stars into six levels of brightness, and called the brightest “stars of the first magnitude”; the next category was “stars of the second magnitude”, and so on. Nowadays we have refined the basic system so that a star can be of magnitude 1.73. The full moon is about magnitude -13; the Hubble telescope can detect objects of magnitude +31. 
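Hipparchus’s six brightness classes were formalized much later (by Norman Pogson, in the nineteenth century) so that a difference of five magnitudes corresponds to a brightness factor of exactly 100. A small sketch of that modern convention:

```python
def brightness_ratio(mag_a, mag_b):
    """How many times brighter object A is than object B, under the
    modern convention that 5 magnitudes = a factor of exactly 100."""
    return 100.0 ** ((mag_b - mag_a) / 5.0)

# A magnitude-1 star outshines a magnitude-6 star a hundredfold.
print(brightness_ratio(1.0, 6.0))     # 100.0

# From the full moon (about -13) to Hubble's faintest detections
# (about +31) spans 44 magnitudes: a brightness ratio of roughly 4e17.
print(brightness_ratio(-13.0, 31.0))
```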

Ptolemy made the crucial step of replacing a mindless table of positions — raw data — with an algorithm for computing their positions: the Ptolemaic model of the solar system. By knowing the position of any planet at any given time and the speed of its motion, one could calculate its position at any past or future time. Unfortunately, the Ptolemaic system in its simplest form didn’t work, so Ptolemy added kluges called epicycles to the system that permitted him to adjust it to better match observations. We call his work “astronomy”, but it was all part and parcel of a larger system that combined observation (astronomy) with application (astrology). 
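Ptolemy’s actual models included further refinements such as eccentrics and the equant; the toy sketch below, with made-up parameters, captures only the basic deferent-plus-epicycle idea, to show how an algorithm can replace a raw table of positions.

```python
import math

def epicycle_position(t, R, omega, r, omega_e):
    """Toy deferent-and-epicycle model: the planet rides a small circle
    (radius r, angular speed omega_e) whose center moves on a large
    circle (radius R, angular speed omega) around the earth.
    Returns (x, y) in the plane of the ecliptic."""
    cx = R * math.cos(omega * t)
    cy = R * math.sin(omega * t)
    return cx + r * math.cos(omega_e * t), cy + r * math.sin(omega_e * t)

# With the epicycle turning faster than the deferent, the planet's
# apparent motion periodically reverses: the famous retrograde loops.
x0, y0 = epicycle_position(0.0, R=10.0, omega=1.0, r=3.0, omega_e=8.0)
print(x0, y0)   # 13.0 0.0
```

Once a model like this is fitted to observations, the position at any past or future time is a function call rather than a table lookup, which is exactly the step the paragraph above describes.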

The maturation of astrology
Over the centuries, better and better observations forced the addition of more and more epicycles to the Ptolemaic system, but they produced ever more accurate tables for use by astrologers. Meanwhile, the relationship between planetary positions and their earthly consequences became completely detached from any underlying system of objective reasoning, and instead became purely conventional: a matter of shared agreement rather than objective truth. Astrological truth was much like the meanings of words in a language. A very few words are onomatopoeic (they sound like what they mean). The vast majority of words are conventional in meaning: their meaning is merely a matter of what everybody says they mean. The meanings of words drift over time. For example, the Latin word follis meant ‘bag’. The French version of the word evolved to mean a ‘bag-head’: somebody with an empty head. It came into English as ‘fool’. There is nothing about the word to suggest its “true” meaning; it means whatever people use it to mean. And that meaning changes with time. In much the same fashion, the meanings of various planetary configurations drifted with time. 

Note, however, that Western astrology was computationally intensive: astrologers had to do a lot of calculating to figure out the precise positions of planets at any given moment in time. By the late Middle Ages, the adoption of Arabic numerals and the development of efficient arithmetic techniques enabled astrologers to carry out extensive calculations. This added to their credibility, because arithmetic was an impressive skill back then, and automatically commanded respect. It was just like the common belief in the 1960s and 1970s that “If a computer says it, it must be right”. If an astrologer carried out complicated calculations to derive his predictions, then they must be right. 

Astrology carried astronomy along with it. Astrologers could make a good living, but they needed accurate planetary tables to get the best results, and so many intellectuals in the late Middle Ages devoted much time to figuring out how to produce better tables. One of these intellectuals was Copernicus, who dumped the geocentric Ptolemaic system for a heliocentric system. At heart, Copernicus wasn’t striving for objective truth as much as he was striving for better astrology. See these two book reviews for more on Copernicus: The Copernican Revolution and The Sleepwalkers.

The demise of astrology
Galileo struck the first blow with his revelation that the heavenly bodies were not perfect. The moon was pockmarked with craters and mountains. Astrology commanded credence because people saw the heavens as pure, clean, and unsullied, unlike the messy earth. The sky above was a completely different world than the earth below, and therefore it somehow controlled events on the “lesser” earth. But when people learned that Jupiter had tiny moons that circled it, the earth seemed less special, just one more member of a group of similar planets. Tycho Brahe proved that comets were not atmospheric phenomena, but instead occupied the same space that held the planets. Then Kepler showed that the planets orbit the sun not in geometrically perfect circles, but in geometrically ‘impure’ ellipses.

All of these developments served to undermine the popular conception of the universe as a set of pure nested spheres, with the earth occupying the center. Instead of the heavens constituting a perfect and permanent realm, they were similar in many ways to the earth: inconstant, messy, and definitely not perfect in any manner. With the planets reduced to mere objects rather than gods, astrology had the rug pulled out from underneath it. Newton’s development of physics was the final blow; it demolished any credibility that astrology had among educated people. The general public continued to cling to astrology, but as standards of public education rose, the star of astrology fell.

Astrology played a crucial role in the rise of science. It arose as a reasonable extension of the calendar, and was joined at the hip with astronomy. But it carried the seeds of its own destruction; the astronomy that it harbored grew up into a real science, and by so doing, destroyed its parent.