1947: The Transistor

The ability to transmit and acquire information may be our species' greatest quality. The first step on this journey was the evolutionary development of spoken language; for that, we thank nature. The second influential step was the development of writing. While speaking and writing were effective, they could not mass-produce information. The invention of the printing press, possibly the most important achievement in the history of science, changed the landscape and ushered in a scientific revolution that transformed the world. However, printing still worked at a speed humans could comprehend. The next frontier was electronics, which operates at nearly the speed of light. The invention of the transistor made electronics a practical reality and reaffirmed that good things do come in small packages.

A Transistor
(Credit: Amazon.com)

Electronics Before the Transistor

Electronic devices were in use well before the invention of the transistor. The problem with these devices was that they were powered by large structures called vacuum tubes. A vacuum tube looks like a medium-sized light bulb and consists of two electrodes, a cathode and an anode, placed at opposite ends of a sealed glass tube. The cathode is heated, causing it to release electrons that flow through the vacuum to the anode, creating a current. Eventually a third electrode, called the control grid, was added to these tubes to control the flow of electrons from the cathode to the anode.

The earliest computers were actually made from vacuum tubes. These computers were enormous, sometimes requiring a full room to house them. They generated a tremendous amount of heat and consumed a huge amount of energy. The first large-scale, general-purpose vacuum tube computer was the ENIAC (Electronic Numerical Integrator and Computer), designed and constructed in 1945 for the United States Army. By the end of its service in 1956 it consisted of over 20,000 tubes, weighed 27 tons, occupied 1,800 square feet, and consumed 150 kilowatts of electricity. Instead of 20,000 tubes, today's far more powerful electronic devices such as smartphones contain billions of transistors. Imagine trying to pack a billion vacuum tubes inside your smartphone.

A Panel of Vacuum Tubes from the ENIAC Computer
(Credit: Wikimedia Commons)

The Invention of the Transistor

For electronics to really take off, a new device was needed that was substantially smaller and consumed far less power. The transistor was invented by John Bardeen, Walter Brattain, and William Shockley of Bell Labs in 1947 as a replacement for the inefficient vacuum tube. The concept is based on the field-effect transistor (FET) principle, which uses electric fields to control the flow of electricity. The idea was first patented by the physicist Julius Edgar Lilienfeld in 1925. Shockley was familiar with the FET principle and had been working on the idea for over a decade. When Shockley proved unable to produce a functioning transistor on his own, Bell Labs teamed him with Brattain and Bardeen. Within two years they had constructed a working transistor, which they first demonstrated to the world on December 23, 1947.

The design of the transistor has been improved over time. The first transistors were constructed from germanium, a chemical element with properties similar to silicon. In 1965 Gordon Moore, who later co-founded Intel, published a paper describing the doubling of transistor counts, an observation he later refined into what is now known as Moore's Law: roughly every two years, the number of transistors that fit on an integrated circuit doubles. The observation has held approximately true for more than four decades.
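To make the compounding concrete, here is a minimal Python sketch of how a fixed two-year doubling period plays out. The starting point used below (the 1971 Intel 4004 with roughly 2,300 transistors) is a commonly cited reference value chosen purely for illustration, not a figure from this article.

```python
# Illustrative sketch of Moore's Law: transistor counts doubling every two years.
# The starting values (Intel 4004, ~2,300 transistors, 1971) are assumptions
# used only to show how the doubling compounds.

def projected_transistors(start_count: int, start_year: int, year: int,
                          doubling_period: float = 2.0) -> float:
    """Project a transistor count assuming a fixed doubling period (in years)."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{projected_transistors(2_300, 1971, year):,.0f}")
```

After fifty years of doubling every two years, the projection is roughly 2,300 multiplied by 2 to the 25th power, on the order of tens of billions, which is why a modern smartphone chip can hold billions of transistors.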

The Transistor's Impact on Technology

Transistors and vacuum tubes perform the same function: they amplify and control the flow of electrical signals in a circuit. The difference lies in their size, heat generation, and power requirements. A transistor generates very little heat and requires very little power. Importantly, it can be manufactured at a miniature scale. Transistor technology can be shrunk to the nanometer level, with today's transistors measuring on the order of ten to twenty nanometers. That tiny size allows millions of them to be deployed in microchip technology. A microchip, also called an integrated circuit, is produced from a silicon wafer and can contain millions of transistors in an area the size of a grain of rice. The compactness of these microchips makes it possible to pack immense processing power into a tiny space, such as in today's smartphones. These chips are deployed in all of our modern electronic devices: televisions, radios, personal computers, and countless household and industrial appliances all incorporate microchips whose function depends on the transistor. The transistor has changed the world beyond recognition in its roughly 80-year history, and it staggers the imagination to think what the next 80 years hold.

Continue reading more about the exciting history of science!

1903: First Successful Airplane

The entire flight lasted only twelve seconds. The consequences will last for all of human civilization. On December 17, 1903, just outside Kitty Hawk, North Carolina, Orville and Wilbur Wright completed the first successful sustained and controlled powered flight of a heavier-than-air craft.

The Wright Brothers' First Successful Flight

Flight Before the Airplane

First Hot Air Balloon Demonstration at Annonay, France, June 1783

Flight had been attempted long before the Wright brothers' success. People have always dreamed of soaring with the birds. Kites were flown in China hundreds of years BCE. The Renaissance artist and inventor Leonardo da Vinci famously sketched flying machines in his notebooks but never constructed them. The earliest successful attempts at flight were made with lighter-than-air balloons and winged gliders.

The first hot air balloon flight was demonstrated in June 1783 by the French brothers Jacques-Etienne and Joseph-Michel Montgolfier. Three months later another of their balloons carried the first living beings into the air: a sheep, a duck, and a rooster. Then, on November 21, 1783, the first manned hot air balloon flight took place, carrying a doctor and an army officer a distance of six miles over Paris, France.

Gliders were the earliest heavier-than-air, unpowered flyers. They required the pilot to launch himself into the wind from an elevated location, while the special shape of the wing provided the lift to keep the glider in flight. The first successful gliding flight occurred in 1849 and was based on a design by Sir George Cayley. The most influential figure in gliding was Otto Lilienthal. From 1891 to 1896 he performed over 2,000 successful flights testing different glider designs. His fame inspired many others to experiment with flight, including the Wright brothers.

Enter the Wright Flyer

In order to achieve flight, a device must produce more lift than its weight. Balloons achieve this by being less dense than the air around them. Gliders use specially shaped wings to glide through the air; their drawback is that they lack the source of thrust needed for sustained flight.

The Wright brothers initially experimented with gliders so that they could master balance and control while continually improving their design. From 1900 to 1902 they constructed and tested various glider designs at Kitty Hawk. They initially struggled to produce enough lift: their gliders generated only about one-third of the lift predicted by the established lift equation. This led the brothers to question the value of the Smeaton coefficient, the factor in the equation representing air pressure. They collected their own aerodynamic data and determined the value of Smeaton's coefficient to be close to 0.0033, well below the accepted value of 0.0054. With more accurate data they were better able to construct reliable designs.
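For reference, the lift equation of that era, as the Wrights worked with it in English units, can be written as follows; the notation here is the conventional modern rendering rather than anything quoted from this article:

$$ L = k \, S \, V^2 \, C_L $$

where $L$ is lift in pounds, $k$ is Smeaton's coefficient of air pressure, $S$ is wing area in square feet, $V$ is airspeed in miles per hour, and $C_L$ is the lift coefficient of the wing. With the traditional value $k = 0.0054$ the equation predicted far more lift than the 1900-1901 gliders actually produced; substituting the brothers' measured value of roughly $0.0033$ brought prediction and observation back into agreement.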

In 1902, armed with more accurate data, they decided to build one more glider before attempting a powered airplane. Improvements were made to the rudders and control system. Between September and October they made over 500 successful flights, some covering distances of more than 600 feet. They were now ready to construct their airplane.

The airplane had to account for the added weight of the 200-pound propulsion system. This required increasing the wing area to over 500 square feet, along with other improvements in wing design to increase lift. They constructed their own crude engine, made largely of lightweight aluminum and capable of producing twelve horsepower. A chain-and-sprocket transmission system, similar to a bicycle's, transmitted power from the engine to the propellers.

In late 1903 the brothers returned to Kitty Hawk to test their new airplane, which they named the Wright Flyer. They conducted four successful flights on December 17, 1903. The longest covered 852 feet and lasted 59 seconds.

The World Takes Flight

Apollo 11 Space Launch to the Moon
(Credit: Wikimedia Commons)

The success of the Wright brothers was initially met with much skepticism, but they soon showed the way for many others to follow. In 1909 Louis Bleriot flew across the English Channel, a distance of 25 miles. Charles Lindbergh traversed 3,000 miles across the Atlantic Ocean in 1927.

The jet engine, first flown in 1939 in Germany, was another significant milestone in flight. The jet engine allowed aircraft to fly higher, where the air is thinner and drag is reduced. Soon flight had revolutionized the world in both military and commercial contexts. Air superiority played a key role in the outcome of WWII. It was only a few more decades before man was walking on the Moon, and the two Voyager probes are now traveling in interstellar space. Flight technology will play a pivotal role in humanity's next frontier: space exploration.

Continue reading more about the exciting history of science!

1820: Discovery of Electromagnetism

Electromagnetism was discovered at the turn of the second decade of the 19th century. It is the branch of physical science in which electricity and magnetism come together. Like many discoveries in the history of science, it was not made in a single stroke of genius but rather through the additive work of many great thinkers over a vast stretch of time. The effects of this monumental discovery cannot be overstated, especially in today's technological world. The principles of electromagnetism form the basis for nearly all electronic devices in use today: radar, radio, television, the internet, and the personal computer, to name a few. We take these devices, and the fact that they work, for granted, but understanding the events that led to the discovery of the principles they are built on can prove illuminating.

Electromagnetic Wave

Initial Observations of Electricity and Magnetism

Compass Rose

When the electric and magnetic forces were first identified they were considered to be separate forces acting independently of one another. The effects of these two forces were observed as far back as 800 BCE in Greece with the mining of lodestone, a naturally magnetic mineral. Lodestone was later used in the production of the magnetic compass. The Chinese Han Dynasty first developed the compass in the second century BCE, and this invention proved to have a profound impact on human civilization. Modern, systematic scientific experiments only began in the late 16th century with the work of William Gilbert. Over the following centuries many more advances in the knowledge of electric and magnetic forces were made by scientists such as Otto von Guericke, Pieter van Musschenbroek, Benjamin Franklin, Joseph Priestley, Alessandro Volta, Luigi Galvani, and many others.

Two Key Discoveries Begin the Process of Unification

The first major hint that electricity and magnetism were aspects of a single force came during a lecture on April 21, 1820, when the Danish physicist Hans Christian Oersted noticed that his compass needle moved in the presence of an electric current. Specifically, he found that if a wire carrying an electric current is placed parallel to the needle, the needle turns at a right angle to the direction of the current. This observation prompted him to continue investigating the relationship. Several months after his lecture Oersted published a paper demonstrating that an electric current produces a circular magnetic field as it flows through a wire. Oersted's paper showed that electricity could produce magnetic effects, but it raised another question. Could the opposite happen? Could magnetism induce an electric current?

In 1831 Michael Faraday provided the answer. In a series of experiments he demonstrated that a changing magnetic field can produce an electric current, a process known as electromagnetic induction. The American physicist Joseph Henry independently discovered the same effect around the same time; however, Faraday published his results first. Faraday's and Oersted's work showed that each force can act on the other, that the relationship runs in both directions. The discovery of electromagnetism was now coming into focus. To fully synthesize these two forces into one, a mathematical model was needed.

A Mathematical Synthesis of Electricity and Magnetism

By 1862 James Clerk Maxwell had provided the necessary mathematical framework to unite the two forces in his theory of electromagnetism. Oersted's and Faraday's discoveries provided the basis for his theory. Indeed, Faraday's law of induction became one of Maxwell's four equations.
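For reference, the four equations in their modern vector form (the compact notation later introduced by Oliver Heaviside, not Maxwell's original formulation) can be written in SI units as:

$$ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} $$

The third equation is Faraday's law of induction. Combining the equations in empty space yields a wave equation whose propagation speed is $c = 1/\sqrt{\mu_0 \varepsilon_0}$, a value that matched the measured speed of light.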

His theory also made some radical predictions for the time that were difficult for most to digest. It implied that electromagnetic waves travel at the speed of light, which Maxwell thought far too unlikely to be a coincidence: light itself must be an electromagnetic wave. His equations also predicted the existence of other, unseen waves traveling at the same speed. These ideas gained little traction in the scientific community at large until 1887, when the German physicist Heinrich Hertz produced and detected radio waves. The once radical predictions derived from Maxwell's theory had been verified.

The Electromagnetic Spectrum
(Credit: Creative Commons)

The discovery of electromagnetism changed the course of human civilization. Today electromagnetism is understood as one of the four fundamental forces in nature, along with gravity, the strong nuclear force, and the weak nuclear force. By understanding and applying its principles, humanity has sparked a revolution in technology and electronics. The story of its discovery highlights the power of the scientific way of thinking. Our modern world as we know it would not exist without insight into this incredible force of nature.

Continue reading more about the exciting history of science!

1860s: Germ Theory of Disease

The Germ Theory of Disease seems like common knowledge to just about everybody today, but this has not always been the case. Throughout most of history the causes of disease, the breakdown of the normal functioning of the human body, were a mystery. That is not to say there was any lack of ideas.

A Brief History of Disease Theory

Early medical practices were awash in trial-and-error methods and a mix of science and superstition. One popular belief, developed by the ancient Greeks and held throughout the Middle Ages, was known as humorism. Humorism held that the body contained four humors, linked to the four major bodily fluids and the four classical elements. An imbalance in the humors was believed to cause disease, and cures revolved around restoring the balance.

Precursor to Germ Theory: The Four Humors
Humorism
(Credit: Tom Lemmens)

Religious and supernatural forces were also strongly believed to be tied to disease: divine retribution might cause individual sickness, or a demonic force might cause an epidemic. By the time of the Industrial Revolution, however, the leading theory held that bad air, or "miasma," spread contagious diseases. This miasma was thought to emanate from rotting, dead organic matter and to cause plague, cholera, and various other diseases. People were instructed to avoid things such as decaying vegetation, corpses, and manure.

The invention of the microscope altered the landscape of medical science. Bacteria and other microorganisms were discovered, which led to centuries of speculation about a germ theory of disease. In the mid-19th century the Hungarian physician Ignaz Semmelweis noticed that puerperal fever could be drastically reduced simply by having doctors wash their hands. Published reports showed that the mortality rate could be reduced from the mid-20-percent range to around 2%. Curiously, these findings were almost completely ignored by the medical community. From this episode we get the term "Semmelweis reflex": the tendency to reject new knowledge because it contradicts established norms.

The Triumph of the Germ Theory of Disease

Beginning in the 1860s, the work of Louis Pasteur finally proved the Germ Theory of Disease. Pasteur first began to study the validity of spontaneous generation, the popular idea that living things spontaneously emerge from nonliving matter. He conducted several clever experiments showing that microorganisms cause fermentation. In one experiment he showed that fermentation would not take place in a solution sterilized by heat. He correctly attributed this result to the absence of living microorganisms in the sterilized solution. Through a variety of experiments in fermentation he was able to prove that specific microbes can bring about specific chemical changes.

Louis Pasteur carrying out an experiment

Having gained some renown for his work on fermentation, Pasteur was asked by the French government in 1865 to study two diseases of silkworms that were devastating the French silk industry. He accepted the task, discovered once again that microorganisms were the culprits, and saved the French silk industry. These experiments, along with his other work, reinforced his conclusion that all living things must have parents.

Anthrax Cells
(Credit: U.S. Army Medical Research Institute of Infectious Diseases)

A decade later Pasteur began to study anthrax. His work with anthrax and anthrax immunization proved once and for all the validity of the Germ Theory of Disease. In 1881, at a farm southeast of Paris, Pasteur conducted a public experiment to demonstrate his anthrax vaccine. He inoculated twenty-five sheep with weakened anthrax microbes. Two weeks later he injected those sheep, along with an unvaccinated group, with active anthrax. All twenty-five inoculated sheep survived; the unvaccinated sheep all perished.

Along with Pasteur, Robert Koch is recognized for helping to place the Germ Theory on sound scientific footing. He developed four basic criteria, known as Koch's postulates, for establishing a cause-and-effect relationship between a microbe and a disease. It should be noted that viruses, identified later, cannot be grown in pure culture, so his postulates cannot be applied to them without modification. Koch's postulates are:

  1. The microbe must be found in the diseased animal, but not in a healthy animal
  2. The microbe must be isolated from the diseased animal and grown in a pure culture
  3. When the cultured microbe is introduced into a healthy animal, the animal develops the disease
  4. The microbe must be re-isolated from the experimentally infected animal

An Immediate Impact on Health

The Germ Theory of Disease allowed us to discover exactly what causes certain diseases. Once the method was accepted and understood, it quickly led to the identification of many specific disease-causing microorganisms, which in turn led to vaccines and cures for many diseases. Steps could also be taken to prevent disease: personal and hospital hygiene improved, with the aim of reducing the transfer of microbes from one source to another. As a result, life expectancy began to rise at a rate unseen in human history. This is the power of science.

Continue reading more about the exciting history of science!

Carl Sagan

Carl Sagan (1934 – 1996) was an American astronomer, influential popularizer of science, and author. His scientific achievements and contributions are vast. He published over 600 scientific papers, authored over 20 books, and co-wrote the popular 1980 television series Cosmos.

Early Life

Carl Sagan
(Credit: NASA)

Carl Sagan was born in 1934 in Brooklyn, New York. As he recalled in his books, he acquired an interest in science at a young age. One influential experience was a visit to the 1939 New York World's Fair at the age of four. After that experience he frequently read astronomy books from his local library. Sagan attended the University of Chicago, where he was associated with or taught by many notable scientists and Nobel laureates. He completed his B.S. in 1955, his M.S. in physics in 1956, and in 1960 he was awarded his doctorate in astronomy and astrophysics.

Scientific Career

Carl Sagan had a distinguished career. He first worked as a fellow at the University of California, Berkeley for two years before being offered a lecturer position at Harvard University. After working at Harvard for a few years he was surprisingly denied tenure. He quickly moved on to Cornell University, where he became director of the Laboratory for Planetary Studies and was later promoted to associate director of the Center for Radiophysics and Space Research. In 1977 he was named the David Duncan Professor of Astronomy and Space Science, a position he held for the rest of his life.

He worked with NASA on several of its space missions. He helped determine possible landing sites on Mars for the Viking Program, and he was instrumental in creating the first physical messages sent into space, the Pioneer plaque and the more famous Voyager Golden Record.

Carl Sagan Standing Next to a Model of a Viking Lander
(Credit: Wikimedia Commons)

While he was always a respected scientist among his colleagues, it was the highly successful Mariner missions to Mars that launched Sagan's public career. People wanted to know more about Mars, and Sagan became a spokesperson of sorts on the topic. He was selected to appear in several educational films, and his engaging style and clear delivery quickly made him well known. His expertise was in high demand and he soon began appearing on television talk shows. Carl Sagan had become a celebrity scientist.

Bringing Science to the People

Carl Sagan had a burning passion to share his love of science with everyone. He was a prolific author who made reading about science enjoyable. In 1978 he won a Pulitzer Prize for The Dragons of Eden, and he went on to co-write the popular PBS television series Cosmos, which aired in 1980. The series was a smashing success; more than 600 million people had viewed it before the century was over. Sagan died of pneumonia in 1996 at the age of 62.

Read about more impactful scientists.

1911: Atomic Nucleus

The atomic nucleus is the tiny, dense center of the atom. Its surprising discovery was announced to the world by the physicist Ernest Rutherford at a meeting of the Manchester Literary and Philosophical Society in March 1911. Two months later he published a scientific paper reporting his findings.

The Rutherford Experiment

Nuclear Structure of the Atom

Prior to Rutherford's discovery of the atomic nucleus, the prevailing atomic model was the "plum pudding" model devised by J. J. Thomson, who had discovered the electron in 1897. Thomson proposed an atom consisting of a diffuse cloud of positive charge with the negatively charged electrons scattered within it. However, the model lacked experimental support and faced other difficulties, such as the problem of atomic stability and conflicts with observations of atomic behavior.

Rutherford, along with his colleagues at the University of Manchester, set out to further investigate the structure of the atom using alpha particles, which are positively charged particles emitted during one form of radioactive decay. Rutherford's team fired a beam of alpha particles at a thin sheet of gold foil. To create the beam, Rutherford placed radium inside a lead box with a small pinhole directed at the foil; the lead absorbed most of the alpha radiation from the radium except for the narrow beam that escaped through the pinhole. The foil was surrounded by a detector that could record where the alpha particles ended up after passing through. Rutherford chose gold specifically for its malleability: gold can be hammered into incredibly thin layers, which was necessary for the alpha beam to pass through.

Most of the alpha particles passed through the foil as if they were traveling through empty space. Occasionally, however, an alpha particle (about 1 in 20,000) was reflected straight back towards the source. This was a highly unexpected result that could not be explained by the plum pudding model. Rutherford concluded that most of the mass of an atom must be concentrated in a tiny, dense region, which he called the nucleus. He proposed that the atom had a central nucleus where all of the positive charge and most of the mass were concentrated, with the negatively charged electrons orbiting this nucleus. His model of the atom resembled the Solar System, with electrons circling the nucleus the way planets orbit the Sun.

Rutherford's Gold Foil Experiment
(Credit: Wikimedia Commons)

The Atomic Nucleus and Structure of the Atom

Rutherford's model of the atom contained the concept of the nucleus, a significant departure from the plum pudding model. The nucleus consists of positively charged protons and slightly heavier, electrically neutral neutrons. Although Rutherford did not discover the neutron himself, he predicted its existence, and his former student James Chadwick discovered it in 1932. Orbiting the nucleus are the negatively charged electrons.

One of the most striking implications of this model is that atoms are mostly empty space. The nucleus has a diameter roughly 1/100,000 that of the atom, with the electrons occupying the vast region around it. This raises the question: what fills the region between the nucleus and the orbiting electrons? The answer is empty space. This representation challenges our perception of solid matter and underscores the weird, mysterious, and counter-intuitive nature of the atomic realm.

Although Rutherford's proposed model of the atom was a significant improvement over the plum pudding model, it still had limitations of its own. Critically, it could not explain the stability of the electrons' orbits around the nucleus. According to classical physics, electrons moving in an orbit would radiate energy and, within a tiny fraction of a second, spiral into the nucleus. Obviously, atoms are stable and do not collapse in this way.

The solution to this problem came from Niels Bohr and the quantum mechanical model of the atom. Bohr introduced the concept of quantized orbits, in which electrons can exist only at certain discrete energy levels without radiating energy. According to Bohr's theory, an electron moving between orbits effectively disappears from one orbit and reappears in the other without traveling through the space between. This resolved much of the stability problem while reinforcing the confusing and counter-intuitive ways that matter behaves at the subatomic scale.

The discovery of the atomic nucleus had profound implications for atomic physics and led to the development of an entirely new field of research, nuclear physics. It paved the way for the harnessing of nuclear energy, a technology that has the potential to alter the course of human civilization.

Continue reading more about the exciting history of science!

3400 BCE: Writing

Language is one of the special characteristics that distinguishes humans from other animals. It allows us to communicate complex concepts and ideas to other people, undoubtedly providing us with a remarkable evolutionary advantage over other species. Writing is a set of markings used to represent a language. It augments the benefits of language by making it permanent, allowing a message to travel farther and persist through time. This is why the invention of writing is often what distinguishes history from prehistory.

The History of Writing

Cuneiform Tablet

The history of writing systems traces a complicated journey. Written language emerged around 3400 BCE in Sumer, in southern Mesopotamia; these same industrious people also invented number systems and the wheel. Their script is known as cuneiform (from cuneus, Latin for "wedge") and consisted of pressing wedge-shaped marks into clay tablets. It derived from their proto-writing system of using clay tokens of various shapes as counters for various goods. There were dozens of different tokens, each a distinct geometric shape such as a cone or a sphere, and each representing, in one-to-one correspondence, a certain type of good. Two cones meant two baskets of grain. No matter your language, if you understood that a cone token meant a basket of grain you could account for the transaction. These tokens most likely recorded debts and were stored inside clay envelopes. This system was in use for over 4,000 years.

This brings us to the original purpose of the Sumerian writing system: accounting, the recording of amounts of grain, numbers of livestock, and various other goods. As the civilization grew in population, the number of debts increased. Since the tokens were sealed inside envelopes, their contents could not be known without opening the envelope and counting the tokens. Some accountants solved this problem by pressing marks into the outside of the envelope to represent the tokens inside. The transition from token to script had begun, and the world's first writing system emerged. Eventually clay tablets with markings representing the tokens completely replaced the token system, since the impression of a cone on a tablet conveyed the same information as the cone token itself.

Egyptian Hieroglyphs, Temple of Kom Ombo
(Credit: Wikimedia Commons)

It took around another 400 years for the Sumerian writing system to make the shift to phonetic signs representing speech. This was a move from a clear one-to-one representation to a more abstract way of representing sounds, and it created a big problem for a society inventing a writing system: it had to agree upon a set of symbols or markings to represent spoken sounds. This agreement would take some time. Pictorial notations, such as a picture of a bird or a tree, were easiest to agree upon. Eventually consensus was built and writing gradually became more formalized, arranged into standardized rows and columns. The full development of the Sumerian writing system took at least 1,000 years.

The Chronological Development of Writing
(Credit: Wikimedia Commons)

It is not certain whether writing originated in a single geographic area (Sumer) and spread throughout the world by cultural diffusion, or whether it was invented in a few areas independently. The discovery of scripts in ancient Mesoamerica certainly seems to indicate that it was invented more than once. In the Old World it is very likely that only the Sumerians, and a few centuries later the Egyptians, independently invented their own writing systems; it is also possible that the Egyptians borrowed the idea from the Sumerians, and nobody knows for sure. The Egyptian writing system is called hieroglyphics (meaning "sacred engravings" in Greek) and is pictorial in form, with about 1,000 distinct characters. It is the most famous and well-known ancient form of writing.

Good Ideas Like Writing Spread; Now This Good Idea Spreads Other Good Ideas

Because inventing a writing system is so difficult, it is likely that all writing systems, with the exception of the Egyptian, Chinese, and Mesoamerican scripts, were borrowed and adapted from early Mesopotamian writing. Writing systems also require a long time to fully develop, probably at least a thousand years. Other rudimentary writing systems may have been invented, but they were absorbed, abandoned, or replaced owing to the rapid diffusion of the established scripts.

A Brief Video on the Spread of Writing Systems Across the Globe

In the 16th century BCE the Canaanites simplified the Sumerian and Egyptian pictographic scripts by creating an alphabet of 22 consonants. All of our modern alphabets are derived from this script. Eventually the Greeks introduced characters for vowels, establishing the alphabet that Western civilization would use. Once writing spread across the globe, it became the means for spreading other good ideas, a fitting destiny for one of humanity's most impactful inventions.

Continue reading more about the exciting history of science!

12000 BCE – 4000 BCE: Domestication of Plants

The human domestication of plants was the single most influential event in modern human history. It marks the transition from the nomadic lifestyle to the settled, urban lifestyle. Its impact is summed up by the agricultural revolution, which produced a tremendous spike in food production. That spike led to larger and larger populations and to the birth of city-states, marking the dawn of civilization. The domestication of plants, coinciding with the domestication of animals, profoundly changed the course of humanity.

When and Where did the Domestication of Plants Happen?

Domestication of Plants
Cereal Crops

Plant domestication is the alteration of wild plant species into crop plants by humans, through what can be called artificial selection. The original techniques were likely stumbled upon by accident, and the process leading to agriculture was certainly a very slow and gradual one. The earliest domestication of plants, followed by the transition to agriculture, is best thought of as an evolutionary process rather than an intentional discovery.

We have only fragmentary evidence of its beginnings, since it began thousands of years before writing was invented. The earliest evidence suggests that plant domestication began around 12000 BCE with cereal crops, in the area between the Tigris and Euphrates rivers in the Middle East. Other areas of the globe soon independently domesticated other crops: rice was domesticated in China and maize (corn) in the Americas by around 10000 BCE. Herbs such as coriander, vegetables such as sweet potatoes, legumes such as lentils, and fruits such as figs and plums were also domesticated around the same time.

How Did it all Happen?

The process of plant domestication was a complex, slow, and gradual process. In a few places it happened independently but this was a fairly rare occurrence. The most recent evidence suggests agriculture began in no more than ten places independently. The exact number is still debated due to incomplete and inconclusive evidence. Mostly it spread to other areas of the globe through cultural diffusion. 

The road to the domestication of plants was long and winding, full of cliffs and dead ends. It involved centuries of trial and error and was subject to local climate, geography, and the available plant species. However, a few notable factors seem to have been important in this evolutionary process.

  1. The decline of wild animal populations – By around 13000 BCE humans had become extremely proficient hunters and large game was beginning to thin out. This made hunting increasingly less rewarding and alternative food strategies increasingly more rewarding.
  2. An increasing abundance of wild edible plants due to a change in climate – Around 13000 BCE the Earth began warming, resulting in increased plant life. This made gathering plants increasingly more rewarding and provided more opportunities for trial-and-error learning among peoples in locations with the highest proportions of these plants.
  3. The cumulative development of food production technologies – In some areas edible plants were so abundant that people could abandon a nomadic way of life and establish permanent settlements. This provided the opportunity to develop food storage, tools, and production methods.
  4. Population growth led to new food production strategies – The abundance of wild plants led to a surge in populations, which demanded new ways to feed people. This creates what is known as an autocatalytic process.

Completing the Process and Establishing a New Way of Life

Egyptian Agricultural Calendar

When humans began moving around less, they were better able to notice changes in plant life. Seeds and plants were inevitably dropped on the trip back to camp, and it wasn't long before people noticed new plants growing along these well-worn trails. The garbage dumps where food scraps were thrown likewise became breeding grounds for plant growth in the following seasons, as did the spots where seeds were spat out.

Soon the connection was made between planting seeds and the growth of crops. Over time this process was refined and improved, and new species of plants were tested. Some species were more easily domesticated than others, and they were not distributed evenly across the world, which is part of why some societies invented agriculture and others did not. This new process of controlling nature to grow food soon allowed societies to grow populations so large that only agriculture could support them. Life started revolving around agriculture. These larger societies were able to conquer or assimilate their neighbors, spreading the domestication of plants through cultural diffusion. The age of the hunter-gatherer was ending and the rise of civilization had begun!

Continue reading more about the exciting history of science!

14000 BCE – 4000 BCE: Domestication of Animals

Domestication is the process of selective breeding for human use. The domestication of animals began with the now-lovable dog by at least around 14000 BCE, and possibly thousands of years earlier. As so often happens with prehistory, the archeological record is simply unclear as to the exact time and location of the dog's domestication. It may have happened as early as 40000 BCE, and it may have happened several times independently. What is clear is that the domestication of the dog did happen, followed by the goat, pig, sheep, cattle, cat, chicken, horse, and a few other important and well-known animals.

The domestication of animals, along with the domestication of plants, played a key role in the agricultural revolution and in the beginnings of civilization. Aside from the dog, animal domestication happened slightly later than plant domestication, since enormous quantities of plant food were needed to feed the animals. Domesticated animals provided humans with several benefits of enormous value and were essential on humanity's path toward urban civilization.

Domestication of Animals
Painting Depicting Beasts of Burden
(Credit: Winnifred Neeler, Royal Ontario Museum)

An Increase in Food Production

Grazing Sheep and Cattle
(Credit: www.agupdate.com)

Prior to the domestication of animals, all food provided by animals had to be obtained from hunting. This changed after domestication. Each of the domesticated animals could be used for meat in times of food scarcity or after an unsuccessful hunt. However, providing meat was not their only contribution to food production. In addition to meat, cattle, sheep, and goats provided a steady supply of milk and other dairy products. Once farming became widespread, draft animals such as cattle, oxen, and water buffalo provided an unprecedented addition of muscle power.

The increase in food production, first from domesticated plants and then from animals, resulted in radical changes to the human condition. A sustainable and predictable source of food led to a rapid increase in population density. People were able to abandon their nomadic hunter-gatherer lifestyle and establish permanent settlements. The dawn of civilization was underway.

Additional Uses of Domesticated Animals

Egyptian Bronze Statue of a Cat, University of Pennsylvania Museum of Archaeology
(Credit: Mary Harrsch, Wikimedia Commons)

In addition to the increase in food production, domesticated animals provided a variety of other benefits to humans. Around 4000 BCE the horse was domesticated, allowing for significant improvements in transportation. A person on horseback could cover double or more the distance in a day of a person traveling on foot, and at far greater speed. The horse was eventually applied to combat, giving a decisive advantage in warfare to the civilizations that employed it successfully.

Many other animals also provided significant benefits to human civilizations. Today most people adore cats as cute and cuddly house companions, but in ancient Egypt cats were revered for their pest control qualities and their ability to hunt venomous snakes, scorpions, and rodents; an unusually high number of statues and paintings were dedicated to cats in that culture. Hides of a variety of animals were used for clothing, storage, or shelter. Sheep were prized for wool that could be spun into clothing, rugs, and a variety of luxury goods.

A Rare Combination of Traits

Not all animals can be domesticated.  Of the world’s roughly 150 large, wild, terrestrial, herbivorous mammals – the ideal candidates for animal domestication – only 14 have been domesticated.  This indicates that there is a specific mix of traits an animal must possess in order to be successfully domesticated.  These traits are:

  1. An efficient diet – Herbivores are much more efficient than carnivores. The conversion of food biomass into the consumer's biomass is typically around 10 percent, meaning that to raise a 1,000 lb cow you have to grow 10,000 lbs of corn. Large carnivores would be extremely difficult and costly to domesticate because it would take 100,000 lbs of corn to produce the 10,000 lbs of herbivore needed to feed a 1,000 lb carnivore. The herbivore must also not be a finicky eater.
  2. A quick growth rate – Some herbivores, such as elephants, take decades to reach their full adult size. Cattle, on the other hand, can reach 1,000 to 2,000 lbs by age three.
  3. A willingness to breed in captivity – Animals such as the cheetah refuse to breed in captivity. In the case of the cheetah this is due to a lengthy and elaborate courtship ritual that cannot take place in a cage.
  4. A friendly disposition towards humans – Large, vicious animals like the grizzly bear will instinctively maul humans, making it suicidal to try to domesticate them.
  5. A tendency to stay calm and not panic – Nervous species with a tendency to fight or flee when they perceive danger are difficult to domesticate.
  6. A manageable social and herding structure – Living in herds with a well-developed dominance hierarchy and overlapping home ranges is the ideal structure. This rules out solitary animals, which are not instinctively submissive.

Continue reading more about the exciting history of science!

1763: Bayes’ Theorem

Bayes' theorem is a fundamental concept in probability theory. It was formulated by the English statistician and Presbyterian minister Reverend Thomas Bayes and first presented publicly in 1763, after his death.

History of Bayes' Theorem

Thomas Bayes was born in London in 1702 and studied at the University of Edinburgh. During his time at Edinburgh he was exposed to some of the leading mathematicians and philosophers of the day. He was elected a Fellow of the Royal Society, where he may have served as a mediator of intellectual debates. He later returned to London to become a minister, but he continued to pursue an interest in mathematics, specifically in probability theory.

Heading for Bayes' Doctrine of Chances (1764)

Bayes wrote his theorem in order to address the question of how to revise beliefs in light of new evidence. More interestingly, however, it appears he likely wrote it as a mathematical means to defend Christianity and to combat an argument made by David Hume in his 1748 essay Of Miracles, from his book An Enquiry Concerning Human Understanding. In this essay Hume made the case for dismissing miracles, such as the resurrection of Christ, on the grounds of probability. In effect he argued that a miracle (a violation of the laws of nature) was always far more improbable than the possibility that the report of the miracle was inaccurate. While there is no absolute or direct evidence that Bayes' sole motivation for composing his work was to refute Hume's essay, there is good circumstantial evidence that it was at least part of his motivation, given the details surrounding the events of his later life and the eventual publication of his work.

Whatever the real motivation for his work may have been, Bayes' essay was published two years after his death, after his friend Richard Price brought it to the attention of the Royal Society, where it was read on December 23, 1763. It was published the following year in the Philosophical Transactions, the journal of the Royal Society, and also as a separate offprint. The now famous essay was titled An Essay towards solving a Problem in the Doctrine of Chances. It should be noted that in 1767 Price published a book titled Four Dissertations, in which he explicitly took on the work of Hume and challenged his probabilistic arguments in Of Miracles. He used Bayes' results in an attempt to show that Hume failed to recognize the significance of multiple independent witnesses to a miracle, and that the accumulation of even imperfect evidence could overcome the statistical improbability of an event.

As sometimes happens in the history of science, the theorem was largely forgotten at first, until it was independently rediscovered by the brilliant French mathematician Pierre-Simon Laplace in 1774. The theorem describes the conditional probability of an event: conditional probability tells us the probability of a hypothesis given that some event has happened.

The Goal of Getting Closer and Closer to the Truth

Bayes' Theorem involves beginning with an educated guess, called a prior probability, and then revising your prediction as new evidence comes in. As the new evidence is considered, the probability is updated, giving you the posterior probability. Bayes' Theorem thus provides a useful way of thinking by approximation, getting closer and closer to the truth as we accumulate new and relevant evidence. This is an important point, because in nearly all situations we are working with incomplete information.
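In standard modern notation (the symbols below are the conventional ones, not taken from this article's figure), the theorem reads:

$$ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} $$

where $P(H)$ is the prior probability of the hypothesis, $P(E \mid H)$ is the likelihood of observing the evidence if the hypothesis is true, $P(E)$ is the overall probability of the evidence, and $P(H \mid E)$ is the posterior probability of the hypothesis after the evidence is taken into account.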

Formula of Bayes' Theorem

A Bayesian way of thinking requires us to constantly update our probabilities as new evidence becomes available. This revision does not happen just once; it can happen continually. We may never know the truth with 100% certainty, for example we can never be 100% certain the sun will rise tomorrow. But with Bayesian thinking we can be 99.999999% sure, which tells us we are getting very close to the truth and gives us a high degree of confidence in the proposition. Bayes' theorem helped revolutionize probability theory by introducing the idea of conditional probability, probability conditioned on evidence. If you have an extraordinary hypothesis, it should require extraordinary evidence to convince you that it is true.
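To make the updating process concrete, here is a minimal Python sketch of Bayes' theorem applied to a diagnostic test. The prevalence and test-accuracy figures below are invented for the illustration and are not taken from this article.

```python
# Minimal sketch of Bayesian updating for a diagnostic test.
# The prevalence and accuracy numbers below are illustrative assumptions.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Apply Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

prior = 0.01           # assume 1% of the population has the condition
sensitivity = 0.95     # P(positive test | condition)
false_positive = 0.05  # P(positive test | no condition)

# One positive test updates the 1% prior to about 16%...
p = posterior(prior, sensitivity, false_positive)
print(f"after one positive test:  {p:.3f}")   # ~0.161

# ...and a second independent positive test raises it to about 78%.
p = posterior(p, sensitivity, false_positive)
print(f"after two positive tests: {p:.3f}")   # ~0.785
```

Each new test result becomes the evidence, and each posterior becomes the prior for the next update, which is exactly the continual revision described above.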

Practical Uses of Bayes' Theorem

Bayes' Theorem has relevance in almost any avenue of life because it is a form of probabilistic thinking. If you think about it, nearly everything you do, and everything that happens to you, is probabilistic in nature. The theorem's flexibility and versatility make it possible to reach both life and business decisions under conditions of uncertainty. Here are a few examples of Bayesian methods used in the real world. In biology they are used in medical diagnosis, genetics, and modeling the spread of infectious diseases. In computer science they are used in speech recognition, search engine algorithms, spam filtering, and weather forecasting. The practical examples are almost limitless. Ultimately, it is a learning process, with more observations and evidence leading to greater certainty. Let's take a look at one interesting application of Bayes' theorem in a real-world setting.

The theorem was used to help crack the Nazi Enigma code during WWII. Enigma was an encryption machine that the Germans used to send secure messages, and its effectiveness came from the fact that its cipher settings were changed daily. Alan Turing, the brilliant British mathematician, used Bayesian reasoning to narrow an almost infinite number of possible decryptions down to the messages that were most likely to have been transmitted. For example, messages from German U-boats were most likely to contain information about the weather or Allied shipping. These strong priors greatly reduced the possible decryptions to be tested and sped up the time needed to crack the code. Eventually he and his staff built a machine known as the Bombe, which ultimately cracked the German Enigma code. The use of Bayes' theorem in cracking Enigma was a monumental breakthrough for the Allies, as it provided them access to critical information about German military operations. It provided a significant strategic advantage and played a key role in their eventual victory.

Bayes' theorem continues to influence statistics and society to this day. In recognition of Bayes' contribution to the development of probability theory, the journal Bayesian Analysis was established in 2006 as a peer-reviewed academic journal dedicated to Bayesian statistics. Additionally, the Thomas Bayes Award is awarded every two years by the Royal Statistical Society to recognize outstanding contributions to the field of Bayesian statistics. The continuing relevance of Bayes' theorem is a testament to the enduring legacy of Thomas Bayes and his contribution to the field of probability theory.

Continue reading more about the exciting history of science!