1900 – 1920s: Quantum Mechanics


Quantum mechanics is a paradigm-shifting theory of physics that describes nature at the smallest scales. Understanding it requires a journey into the realm of the infinitesimally tiny, where the rules that govern our everyday reality no longer apply. The theory was developed gradually over the early decades of the 20th century.

The Birth of Quantum Mechanics

The field of quantum mechanics emerged early in the 20th century as a revolutionary framework for understanding the behavior of particles at the atomic and subatomic levels.  The theory took shape from the observations and experiments of a handful of scientists of the period.  As the 19th century drew to a close, classical physics was reaching its limits: new phenomena were being observed that it could not explain.  Quantum mechanics first entered mainstream scientific thought in 1900, when Max Planck used quantized properties in his attempt to solve the black-body radiation problem.  Planck introduced the concept of quantization, proposing that energy is emitted or absorbed in discrete units called quanta. The idea was initially regarded as a mathematical trick but later proved to be a fundamental aspect of nature.

Five years later Albert Einstein offered a quantum-based theory to describe the photoelectric effect, work that earned him the Nobel Prize in Physics in 1921.  The next major leap came from Niels Bohr in 1913.  One problem that puzzled physicists of the day was that, according to the electrodynamic theory of the time, orbiting electrons should radiate away their energy almost immediately and crash into the nucleus.  Bohr’s solution was a model of the atom in which electrons orbit the nucleus in definite energy levels, or ‘shells’. In this new theory, electrons jump instantaneously from one orbit to another without traveling through the space in between, an idea that became known as a quantum leap.  Bohr published his work in a paper titled On the Constitution of Atoms and Molecules, and for this unique insight he won the Nobel Prize in Physics in 1922.

With these discoveries and others, quantum mechanics became a revolutionary field in physics.  It also became one of the strangest fields in science to study and attempt to understand: things on the subatomic level simply do not behave like anything in our everyday experience.  Because of this strangeness, some physicists, including Albert Einstein, were never comfortable with quantum mechanics.  Despite its strangeness, however, quantum mechanics is renowned for the extraordinary accuracy of its predictions.  In the decades that followed, quantum mechanics was combined with special relativity to form quantum field theory, quantum electrodynamics, and the Standard Model.

Foundational Principles of Quantum Mechanics

The main principle of quantum mechanics is that energy is emitted and absorbed in discrete packets called quanta.  This differs from classical physics, where all values were thought possible and energy was assumed to flow continuously.  Other key attributes of quantum mechanics include:

  • Uncertainty Principle:  this states that certain pairs of physical properties, such as position and momentum, cannot be simultaneously known with absolute precision.  It was first formulated by Werner Heisenberg in 1927 and arises from the wave-like nature of particles at the quantum level.
  • Wave-Particle Duality:  this describes the dual wave-like and particle-like behavior exhibited by objects at the quantum level.  Electrons, for instance, can exhibit both particle like behavior (localized position and momentum) and wave-like behavior (interference and diffraction) under different experimental conditions.
  • Quantum Entanglement:  this is the curious phenomenon that occurs when a pair or group of particles becomes correlated in such a way that the quantum state of one particle is directly tied to that of the others.  Measurements on one particle are instantly correlated with the other, regardless of the distance between them.  This instantaneous correlation has been confirmed in numerous experiments with photons, electrons, and even individual molecules.
Double slit experiment showing both the wave and particle behavior of light

Together, these four principles provide solutions to physical problems that classical physics cannot account for.  Quantum mechanics therefore offers a far more comprehensive framework than classical physics for understanding the behavior of matter and energy at the smallest scales.  But its impact goes further than that: it has transformed our understanding of matter and energy and upended our notions of predictability and determinism, with fascinating philosophical implications.  It is one of those areas of science that reminds us of the power of human curiosity, ingenuity, and perseverance in unraveling the mysteries of the universe.
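Planck's relation E = hf makes the idea of quanta concrete: light of frequency f is exchanged in lumps of energy hf. Here is a minimal sketch in Python; the wavelength and lamp power are illustrative values chosen for the example, not figures from the text.

```python
# Planck's relation E = h*f: energy comes in discrete quanta of size h*f.
h = 6.626e-34  # Planck's constant, in joule-seconds
c = 2.998e8    # speed of light, in meters per second

wavelength = 550e-9   # green light, ~550 nm (illustrative choice)
f = c / wavelength    # frequency of that light, in hertz
E = h * f             # energy of a single quantum, in joules

print(f"One quantum of green light carries about {E:.2e} J")

# A 1-watt source emits roughly 1/E quanta every second -- so many
# that the graininess of light is invisible at everyday scales.
print(f"Quanta per second from a 1 W source: {1/E:.2e}")
```

The enormous number in the second line is why quantization went unnoticed until physicists probed the atomic scale.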

Continue reading more about the exciting history of science!

1938: Nuclear Fission

The discovery of nuclear fission, a process that releases an enormous amount of energy by splitting the nucleus of an atom, was an explosive moment in the history of science and technology. This incredible discovery led directly to the development of nuclear weapons and nuclear energy production, both of which would change the world.

The Birth of Atomic Physics

1979 German postage stamp depicting a diagram of splitting uranium atoms

The discovery of nuclear fission began with the birth of atomic physics and its research into the components of the atom.  In 1897, J. J. Thomson discovered the first subatomic particle, the electron, while experimenting with cathode ray tubes.  This discovery prompted further research into the structure of the atom.  Fourteen years later, Ernest Rutherford discovered the nucleus when, to his surprise, alpha particles directed at a thin sheet of gold foil were occasionally reflected straight back toward the source.  At this point the nucleus was thought to contain only positively charged protons; however, in the 1920s Rutherford hypothesized the existence of the neutron, a neutral subatomic particle in the nucleus with no electric charge, to account for certain observed patterns of radioactive decay.

The neutron was discovered soon after by James Chadwick, a colleague and mentee of Ernest Rutherford, in 1932.  The discovery of the neutron proved to be a critical step in the development of nuclear fission technology, as scientists soon realized that neutrons could be used to split heavier atomic nuclei.  Because neutrons carry no electrical charge, they are not repelled by the positively charged nucleus the way alpha particles are, so they can penetrate and be absorbed by the nucleus.  This makes the nucleus unstable and causes it to split into two or more smaller nuclei, releasing a tremendous amount of energy in the process.

The Discovery of Nuclear Fission

Shortly after the discovery of the neutron, scientists began using it to probe the structure of the atom further.  In 1934 Enrico Fermi began bombarding uranium atoms with neutrons.  He thought he was producing elements heavier than uranium, as was the conventional wisdom of the time.

The first experimental evidence for nuclear fission came in 1938, when the German scientists Otto Hahn and Fritz Strassmann, together with Lise Meitner and their colleagues, also began bombarding uranium atoms with neutrons.  As was so often the case in the early days of atomic physics, their results were completely unexpected.  Instead of creating heavier elements, the neutrons split the nucleus, producing smaller, lighter elements such as barium among the decay products.  At the time it was thought improbable that a neutron could split the nucleus of an atom.  The experiments were quickly confirmed, and the first recognized instance of nuclear fission had been achieved.

It was quickly realized that if enough neutrons were emitted by the fission reaction it could create a chain reaction, releasing an enormous amount of energy in the process.  By 1942 the first sustained nuclear fission reaction had taken place in Chicago.  Hahn was awarded the Nobel Prize in Chemistry in 1944 “for his discovery of the fission of heavy nuclei”.

An Explosive Impact on Civilization

Nuclear fission is the process of splitting an atom into smaller fragments.  The sum of the masses of the fragments is slightly less than the mass of the original atom, usually by about 0.1%.  The missing mass is converted into energy according to Albert Einstein’s equation E = mc².
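The arithmetic behind that 0.1% figure is worth seeing once. The following rough sketch applies E = mc² to a uranium-235 nucleus; the 0.1% mass defect comes from the text, and the constants are standard values, so the result is an order-of-magnitude illustration rather than a precise nuclear calculation.

```python
# Rough mass-to-energy arithmetic for a single fission event.
c = 2.998e8      # speed of light, m/s
u = 1.6605e-27   # atomic mass unit, kg

m_uranium = 235 * u        # approximate mass of a U-235 nucleus
dm = 0.001 * m_uranium     # ~0.1% of the mass disappears as energy

E = dm * c**2              # Einstein's E = mc^2, in joules
E_MeV = E / 1.602e-13      # convert joules to mega-electron-volts

print(f"Energy per fission: {E:.2e} J  (~{E_MeV:.0f} MeV)")
```

The result, a few hundred MeV per atom, is roughly a hundred million times the energy released by a typical chemical reaction, which is why a chain reaction is so powerful.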

The discovery of nuclear fission ushered in the atomic age, leading to inventions such as nuclear power and the atomic bomb, with world-changing consequences.  Almost immediately after the discovery, scientists realized the immense power that could be unleashed by splitting the atom.  In 1939, a group of influential scientists including Albert Einstein drafted a letter to President Franklin D. Roosevelt warning of the potential military applications of nuclear fission and urging the United States government to initiate its own nuclear research program.  They speculated that Nazi Germany might be developing nuclear weapons of its own.

Cooling reactors of a nuclear power plant
(Credit: Wikimedia Commons)

In response, an Advisory Committee on Uranium was formed which eventually led to the creation of the Manhattan Project. The Manhattan Project was officially launched in 1942 and led by J. Robert Oppenheimer at a secret facility in Los Alamos, New Mexico.  The result of the massive scientific and engineering project was the development of the world’s first atomic bomb, which was ultimately used against Japan at the end of World War II.

A more positive benefit to civilization than atomic weapons is the development of nuclear energy.  Nuclear energy produces extremely low amounts of greenhouse gases, making it a much cleaner alternative to fossil fuels.  If humanity is to solve its climate crisis in the coming century, nuclear energy may prove to be a saving technology.


1913: Bohr Model of the Atom

In 1913 Niels Bohr proposed a model of the atom based on the new quantum physics, which helped solve problems of previous atomic models based on classical physics. His proposal came to be known as the Bohr model of the atom.

Earlier Models of Atomic Structure

Bohr’s Model of the Atom

Prior to Bohr’s model there were various competing models of the atom in use.  In 1897 J.J. Thomson discovered the electron through his experiments with cathode ray tubes, setting the stage for the development of his plum pudding model, in which electrons were embedded in a positively charged sphere, akin to plums scattered in a pudding.

Shortly afterward, Ernest Rutherford announced the surprising discovery of the atomic nucleus, which radically transformed our understanding of the structure of the atom.  The discovery of the nucleus paved the way for Rutherford’s nuclear model of the atom, which placed the nucleus at the center with electrons orbiting around it, akin to planets orbiting the Sun in our Solar System.  Unfortunately, both of these models had shortcomings because they were based entirely on classical physics.

The most pressing issue was the stability of the atom. According to electromagnetic theory, an electron accelerating around the nucleus of an atom should emit radiation, resulting in a continuous loss of energy.  This loss of energy would cause the electron to slow down and spiral into the nucleus almost instantly.  Clearly this does not happen in the real world, as atoms are stable.  To solve this problem, Bohr turned to the emerging quantum physics.

Bohr’s Model of the Atom

Bohr’s model resolved this problem by requiring that electron orbits be consistent with Max Planck’s quantum theory of radiation.  At the turn of the 20th century Planck had introduced the revolutionary concept of quantized energy to explain the spectrum of black-body radiation.  His key insight was that energy is emitted or absorbed by matter in discrete units, or “quanta,” rather than continuously as predicted by classical physics.  This insight laid the groundwork for Bohr’s work on the structure of the atom.

In Bohr’s model of the atom, electrons can occupy only certain orbits with specific amounts of energy, which he referred to as energy shells or energy levels.  Atoms emit or absorb radiation only when electrons abruptly jump between these orbits.  Using Planck’s constant, the frequency of photons, and the electron’s mass and charge, Bohr was able to derive an accurate mathematical formula for the spectrum of the hydrogen atom.
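A short sketch shows what that formula predicts. In the Bohr model the n-th orbit of hydrogen has energy Eₙ = −13.6 eV / n², and a jump between orbits emits a photon carrying the energy difference; the constants below are standard textbook values, not figures from the original article.

```python
def energy_level(n):
    """Energy of the n-th Bohr orbit of hydrogen, in electron-volts."""
    return -13.6 / n**2

h = 4.1357e-15   # Planck's constant, in eV*s
c = 2.998e8      # speed of light, in m/s

# Photon emitted when the electron drops from orbit n=3 to n=2,
# which produces the red H-alpha line of the Balmer series:
dE = energy_level(3) - energy_level(2)   # energy released, in eV
wavelength = h * c / dE                  # lambda = hc / E

print(f"H-alpha wavelength: {wavelength * 1e9:.0f} nm")   # ~656 nm
```

The computed wavelength of about 656 nm matches the observed red line in hydrogen's spectrum, which is exactly the agreement that made Bohr's model so convincing.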

This was a huge improvement on previous models because it incorporated the new quantum physics; however, there were still a few problems with Bohr’s model.  It was not a particularly useful description of atoms other than hydrogen, and it failed to account for the Zeeman effect in hydrogen.  It was eventually refined and superseded by the quantum theory developed in the work of Werner Heisenberg, Erwin Schrödinger, Max Born, and many others.

Impacts of Bohr’s Model of the Atom

Bohr’s model of the atom, with its quantized energy states, was nothing short of revolutionary. It had several significant impacts on the understanding of atomic structure and on the development of quantum mechanics.  Some of the key impacts of Bohr’s model include:

Bohr’s Model of the Atom provided the theoretical framework for understanding the spectral lines of different elements
(Credit: www.webbtelescope.org)
  • Explanation of atomic spectra: Bohr’s model was successful in explaining the discrete line spectra observed in the emission and absorption of light by atoms.  It provided a theoretical framework for understanding the spectral lines of different elements.
  • Development of quantum theory:  Bohr’s model was crucial in the early development of quantum theory.  It provided one of the first examples of the application of quantum principles to the behavior of electrons in atoms. 
  • Influence on atomic theory: Bohr’s model spurred further research and inspired subsequent scientists, including Werner Heisenberg, Erwin Schrödinger, and Max Born.  These scientists went on to develop more sophisticated quantum mechanical models.
  • Practical technological applications:  Bohr’s model has helped in the development of technologies such as lasers, semiconductors, and nuclear energy.   These technologies depend on an understanding of atomic behavior.  

Overall, Bohr’s model of the atom had a significant impact on physics as a whole.  Its development marks a transition between classical physics and quantum mechanics.  While the model has since been superseded by a more comprehensive quantum mechanical one, its foundational role in the development of atomic theory and quantum mechanics remains important.


1781: Discovery of Uranus

In 1781 Sir William Herschel announced the discovery of a new planet, which was named Uranus in the tradition of naming planets after classical mythology. The discovery of Uranus, the seventh planet from our Sun, was a pivotal moment in the history of astronomy. In antiquity, the planets were the seven visible points of light that moved across the fixed background of the stars: the Sun and the Moon, along with the classical planets Mercury, Venus, Mars, Jupiter, and Saturn. The discovery of Uranus marked the first time a new planet had been found since ancient times and ushered in a new era of exploration within our solar system.

Sir William Herschel Makes a Monumental Discovery

Photo of Uranus
(Credit: NASA)

Herschel was a German-born astronomer who resided in England.  Born in 1738, he was a polymath with a keen interest in music, mathematics, and of course astronomy.  In 1757 he moved to England, where he worked as an organist in Bath. He began his foray into astronomy with a simple homemade telescope and took to observing the stars. He would eventually construct more than 400 telescopes during his lifetime, including a great 40-foot telescope, most of them superior to those otherwise available at the time.

On the night of March 13, 1781, Herschel was conducting a routine survey of the night sky using a 6.2-inch aperture telescope he had constructed himself.  During this survey he stumbled upon a faint, nonstellar object.  Herschel initially reported it as a comet because of its dimness and slow movement across the sky, but he continued to observe it over the following nights.  As the months passed, Herschel and the astronomers he consulted began to suspect otherwise: no tail was visible, and its motion suggested a planet.  It became clear the object was a planet once its orbit was calculated by Pierre-Simon Laplace and Alexis Bouvard, two French mathematicians and astronomers, whose calculations confirmed that Uranus followed a nearly circular orbit around the Sun, consistent with a planet rather than a comet.  For the first time in history, our solar system had expanded beyond the six previously known planets.

The discovery of Uranus sparked a debate over its name.  Herschel, as the discoverer, felt he had the right to name the planet, and he proposed “Georgium Sidus,” or “George’s Star,” in honor of King George III.  This name was not well received in the international community, however, as it broke with the tradition of naming planets after classical mythology.  Several alternative names were suggested, but in the end Uranus was chosen and eventually accepted as the planet’s name.

The Planet Uranus and the Impact of its Discovery

Uranus is the seventh planet from the Sun, approximately 2.6 billion kilometers from Earth.  It takes about 84 years to complete a full orbit around the Sun.  Its mass is about fifteen times Earth’s, and its diameter about four times Earth’s, making it the third largest planet in our solar system.  Accompanying the planet are thirteen rings and 27 named moons.  Composed mostly of rock and ice, it is one of the coldest planets in the solar system, with an average temperature of -216 Celsius.  This is due to its low core temperature: unlike Jupiter and Saturn, it does not generate much internal heat.

The discovery of Uranus marked a new era in the exploration of our Solar System.  Most significantly, it showed that our Solar System was much larger than previously thought.  This provided much motivation for further exploration. Perturbations in Uranus’s orbit could not be explained by Newton’s laws of gravity if the only known planets were considered. This led to a hypothesis that there might be other unknown planets leading to the discrepancies. The discovery of Neptune in 1846 and later Pluto in 1930 confirmed this hypothesis, although Pluto was later reclassified as a dwarf planet in 2006. The discovery of Neptune was a triumph for Newton’s laws of gravity because its position was predicted based on the gravitational influence it had on Uranus.

The discovery of a planet more distant than Saturn also motivated advances in telescope technology, as astronomers recognized the need for more powerful instruments to study distant objects.  Lastly, the naming controversy that followed highlighted the significance of tradition in the scientific community.


3000 BCE: Number Systems

The invention of number systems marked another high point for civilization. Their development led to formal mathematics, just as the development of writing systems led to reading and literature. These two tools are responsible for preserving and transmitting the vast accumulated knowledge humanity has attained over the past 5,000 years, and the disciplines they enabled form the foundation of our modern academic curriculum.

For tens of thousands of years there was no organized number system. People counted things using their fingers and toes, if they needed to count anything at all.  Around 25,000 years ago, there is evidence that people started placing marks on wood and bone, a practice known as a tally system, to keep track of things.  Later on, people used markers, counters, or tokens in what is called a token system; these tokens corresponded directly to the goods they represented. Tally systems, followed by token systems, were proto-numeral systems. They proved useful for counting and keeping track of smaller amounts, but they were not very practical for counting large numbers or attempting more complex mathematical operations.

When humans were hunter-gatherers there was no pressing need to tally items. People lived in smaller groups and generally shared goods within the community, and human emotions such as resentment and distrust were sufficient to regulate fairness. With the advent of agriculture the human condition changed. Soon there were sprawling city-states with tens of thousands of people and a division of labor. Accountants were needed to record debts and taxes owed. This necessity provided the spark to devise a more capable system for keeping track of items than a tally system.

Number Systems Take Their Position in Civilization

Babylonian Sexagesimal Number System

Around 3000 BCE the Babylonians developed one of the first known positional number systems.  It was written in cuneiform and was a sexagesimal (base 60) number system.  The major achievement of the Babylonian system over previous number systems was that it was positional: the same symbol could represent different orders of magnitude depending on where it was located within the number.

The Babylonian system was a significant advancement in the development of mathematics. It provided for the addition and subtraction of numbers and allowed for fractions. It did have some shortcomings, however, one of which was the absence of a symbol for zero. Today we use a base 10 positional number system, yet relics of the base 60 system remain in our culture: the circle has 360 degrees, and there are 60 seconds in a minute.
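The idea of positional notation can be sketched in a few lines of code: the same digit-decomposition procedure works in any base, including the Babylonians' base 60. The example number 4000 here is an arbitrary illustration.

```python
def to_base(n, base=60):
    """Return the digits of n in the given base, most significant first."""
    digits = []
    while n > 0:
        digits.append(n % base)   # the current lowest-order digit
        n //= base                # shift down one order of magnitude
    return list(reversed(digits)) or [0]

# 4000 = 1*3600 + 6*60 + 40, so its sexagesimal digits are 1, 6, 40:
print(to_base(4000))       # [1, 6, 40]

# Our familiar decimal system is the same idea with base 10:
print(to_base(4000, 10))   # [4, 0, 0, 0]
```

Each position is worth one more power of the base than the one to its right, which is exactly what lets a small set of symbols represent arbitrarily large numbers.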

Many other civilizations further developed number systems. The Chinese, Egyptians, Aztecs, Mayans, and Inca all made use of them. The Greeks in particular showed an intense interest in mathematics. When the conquests of Alexander the Great spread Greek culture throughout the ancient world, it marked a turning point in science and math that still lingers, along with so much else from Greek culture, with us today.

The Story of our Number System

The number system we use today is referred to as Arabic numerals, despite its oldest preserved samples being discovered in India and dating from around 250 BCE. It is uncertain whether this system developed entirely within India or had some later Phoenician and Persian influence. What is certain is that the Arabs fully developed and institutionalized the system. A book written around 820 by the mathematician Al-Khwarizmi provides the oldest fully developed description of it. Titled On the Calculation with Hindu Numerals, it is responsible for introducing this Hindu-Arabic numeral system to Europe.

Various Styles of Arabic Numerals
(Credit: Wikimedia Commons)

The Arabs designed different sets of symbols, which can be divided into two main groups: East Arabic numerals and West Arabic numerals. Although the Arabic language is written from right to left, Arabic numerals are arranged from left to right. The European numeral system was primarily modeled on the now extinct West Arabic numeral system.

The Importance of Number Systems

We would be lost in our world without numbers. They are used to represent goods and things. They allow the measurement of objects. They are used in the tracking of time. But perhaps most importantly, number systems are necessary for mathematics, the bedrock of science.

Pythagorean Theorem
(Credit: www.mathworld.wolfram.com)

Much of the world can be expressed in mathematics. Many great scientists have echoed the idea that nature speaks to us in the language of mathematics, which is why science depends so heavily on math. Math has wide-ranging applications in engineering, accounting and finance, navigation, physics and cosmology, and computing. Geometry and calculus allow us to construct buildings to live in. Algebra allows us to calculate our loan payments when we purchase that new home. The examples of the benefits of using math, just like our numbers, are endless.


1869: Mendeleev’s Periodic Table of Elements

The periodic table of the elements is a cornerstone of modern chemistry and an iconic visual representation almost all schoolchildren are familiar with.  It was first published in 1869 by the Russian chemist Dmitri Mendeleev.  Aside from a few changes, we still use Mendeleev’s system of organization for our modern periodic table.

The Development and Organization of Mendeleev’s Periodic Table of Elements

A total of 63 elements had been discovered and isolated by 1869.  As the number of known elements increased, several scientists began to notice relationships among some of the elements and patterns in how they combined with each other.  Scientists had been trying for decades to develop a classification system for the known elements, but no agreed-upon system had been reached.  For example, an English scientist named John Newlands proposed the Law of Octaves in 1865, having noticed that every eighth element shared similar characteristics when arranged by atomic weight.  There were limitations to his law, however, and it was not accepted by all scientists of the time.

Mendeleev’s Periodic Table of Elements

Dmitri Mendeleev changed this when he published his periodic table of elements in 1869. He had begun work two years earlier on a chemistry textbook titled Principles of Chemistry, and it was his research for this textbook that led him to relate the chemical properties of the elements to their atomic weights.

He organized his table in order of increasing atomic weight, then placed elements with similar properties underneath each other.  In a few instances where it made sense, he swapped elements out of the order of increasing atomic weight to better line up their chemical properties. In doing so he inadvertently arranged his table by increasing atomic number rather than atomic weight. By the time his table was finished he had discovered what is now called the Periodic Law: the physical and chemical properties of the elements repeat in a periodic manner.
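One concrete case of those swaps involves tellurium and iodine: tellurium is heavier than iodine, yet it must come first so that iodine lines up with the chemically similar halogens. The snippet below, using modern values for the atomic numbers and weights, shows that sorting by atomic number preserves that order while sorting by weight does not.

```python
elements = [
    # (symbol, atomic number, atomic weight)
    ("Sb", 51, 121.76),
    ("Te", 52, 127.60),   # heavier than iodine...
    ("I",  53, 126.90),   # ...yet placed before it in the table
    ("Xe", 54, 131.29),
]

by_number = [sym for sym, _, _ in sorted(elements, key=lambda e: e[1])]
by_weight = [sym for sym, _, _ in sorted(elements, key=lambda e: e[2])]

print("By atomic number:", by_number)   # ['Sb', 'Te', 'I', 'Xe']
print("By atomic weight:", by_weight)   # ['Sb', 'I', 'Te', 'Xe']
```

Mendeleev trusted the chemistry over the weights, which is why his ordering survived once atomic number was understood.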

Mendeleev’s true genius lay in the fact that he left spaces in his table, correctly predicting the existence of elements that had not yet been discovered.  He even predicted the properties of these missing elements based on their positions in the table; for instance, he correctly predicted the existence of the elements that would later be known as gallium, scandium, and germanium.  Although his table was initially met with skepticism, when these elements were eventually discovered and their properties closely matched Mendeleev’s predictions, it provided strong validation of his table and quickly led to its acceptance.

The Modern Periodic Table

Mendeleev’s table has continued to evolve as new elements were discovered and the understanding of atomic structure increased. The modern table is a product of the collaborative efforts of many scientists and involves a few important changes and additions.

Most notably, thanks to the work of Henry Moseley, we now organize the table by atomic number, the total number of protons in the nucleus.  This change came about as a result of the discovery of isotopes and led to the realization that atomic number is the fundamental basis for the organization of the elements.  In addition, the modern periodic table now includes over 100 elements, up from the 63 known when Mendeleev’s first table was published.

The Modern Periodic Table of Elements: The table today contains 118 known chemical elements
(Credit: American Chemical Society)

There are a few other changes and additions to make note of.

  • Noble gases: the original table did not include the specific group of noble gases as these elements were not yet discovered.
  • Electron structure: the modern periodic table is often presented with electron configurations for each element.
  • Improved measurements: significant advances in technology have allowed for more precise measurements of atomic weight and structure.
  • Filling of d- and f-blocks: these blocks were not fully understood during Mendeleev’s time and have been refined to reflect their electronic structures and chemical properties.
  • More comprehensive periodic trends:  the modern periodic table provides more information about periodic trends such as atomic radius, ionization energy, and electron affinity.

The Importance of the Periodic Table

The periodic table is an indispensable tool for chemists and educators worldwide.  Its organization captures the complexity of the natural world in a simple framework that clearly shows the relationships, properties, and reactivity of the chemical elements.

The periodic table also provides a wealth of information about the chemical elements.  It contains information about atomic structure and weight, electron configuration, valence electrons, and chemical reactivity.  All of this information provides insights for studying the elements and manipulating matter at the molecular level.  One area of study where this information is particularly useful is materials science.  By understanding the periodic trends, scientists have been able to design and engineer new materials with specific characteristics for a wide range of applications in electronics, energy production, agriculture, and medicine.


1824: Carnot Cycle

The Carnot cycle is the idealized cycle of operations that gives the maximum theoretical efficiency for any engine that utilizes heat.  It was first proposed by the French physicist Sadi Carnot in 1824 and later expanded upon by other scientists.

Reflections on the Motive Power of Fire

Nicolas Leonard Sadi Carnot is often referred to as the “father of thermodynamics”.  In 1824, at the age of 27, he published his only book, Reflections on the Motive Power of Fire, in which he introduced the concept of the Carnot cycle while laying the foundations for a new discipline: thermodynamics.  By this time the steam engine’s industrial and economic importance had been established, although little scientific work had been done on it. Carnot’s book was one of the first scientific studies of steam engines.

Carnot explored some key ideas in his book. He recognized heat as a form of energy that can be converted from one form to another. This concept eventually led to the first law of thermodynamics, which states that energy is conserved in any thermodynamic process. He introduced the notion of an idealized heat engine, known as the Carnot engine, as a theoretical construct for studying the maximum efficiency of heat conversion. This engine could serve as a reference point against which all real working engines could be compared. Additionally, he noted that the efficiency of a heat engine depends on the temperature difference between the source and the sink, and that maximum efficiency is achieved only when the engine operates in a reversible manner.

To understand the principles of an ideal heat engine, Carnot devised the Carnot cycle, a series of four reversible processes. It is a theoretical cycle that provides an upper limit on the efficiency that any classical thermodynamic engine can achieve during the conversion of heat into work.  The Carnot heat engine is not a practical engine that can be built, because its processes are perfectly reversible – an idealization that cannot be realized in practice.

The Four Stages of the Carnot Cycle

The Carnot cycle consists of several operations.  First, the engine absorbs energy in the form of heat from a hot reservoir.  The heat does work, causing an expansion.  Next comes compression, during which heat is given out.  The net result is that heat is taken from a hot source to a cooler one, while some of the heat does work.  The Carnot cycle establishes the highest possible efficiency – measured by dividing the work done by the heat absorbed from the hot reservoir – for any heat engine.  In the real world, the efficiency of an engine is never as high as that predicted by the Carnot cycle due to factors such as friction.
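The efficiency limit described above can be written as efficiency = 1 – T_cold/T_hot, with both temperatures measured on an absolute scale. Here is a minimal Python sketch (the reservoir temperatures are illustrative values, not figures from Carnot’s book):

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum possible efficiency of a heat engine operating between
    two reservoirs, per the Carnot limit: eta = 1 - T_cold / T_hot.
    Temperatures must be absolute (kelvin)."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require 0 < T_cold < T_hot (in kelvin)")
    return 1.0 - t_cold_k / t_hot_k

# Illustrative figures: hot reservoir at 500 K, cold reservoir at 300 K.
eta = carnot_efficiency(500.0, 300.0)
print(f"Carnot limit: {eta:.0%}")  # → Carnot limit: 40%
```

No real engine operating between these two temperatures can exceed this bound; friction and other irreversibilities only push real efficiency lower.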

Here are the four stages of the Carnot cycle.

  1. Isothermal expansion – the gas expands, doing work on its surroundings as heat is transferred from the hot reservoir at a constant temperature; work done = heat supplied.
  2. Adiabatic expansion – the gas continues to expand and do work on its surroundings with no heat exchange, causing its temperature to fall; work done > heat supplied.
  3. Isothermal compression – the surroundings compress the gas as heat is transferred to a low-temperature reservoir at a constant temperature.
  4. Adiabatic compression – the surroundings compress the gas with no heat exchange, returning the system to its original state (before stage 1).

The Importance of the Carnot Cycle

Sadi Carnot’s work brought about a revolution in our understanding of the underlying principles of heat transfer and energy conversion. Most directly, it set the stage for subsequent advancements in the field of thermodynamics. Work on the Carnot cycle heat engine led to the thermodynamic principle of entropy, a fundamental concept related to the second law of thermodynamics.  Rudolf Clausius introduced the term entropy in the mid-19th century to explain the irreversible nature of energy transfer and transformations in physical systems. Clausius observed that heat naturally flows from a higher temperature region to a lower temperature region and that the process is irreversible. He recognized the importance of Carnot’s work with heat and sought to put it on a more rigorous, mathematical footing.
Another Rudolf was later inspired by the Carnot cycle while developing a more efficient engine. In 1892 Rudolf Diesel submitted design patents for an internal combustion engine.  He knew gasoline engines were very inefficient, wasting around 90% of their heat in the process. He sought to make an engine closer to the theoretical framework of the Carnot cycle engine.

Continue reading more about the exciting history of science!

820s: Algebra

Algebra is the study of mathematics using a combination of symbols and values and the rules for manipulating those symbols and values. In its most basic form, it involves using equations to find the unknown. Linear equations, the quadratic formula, functions, and much more are all familiar examples of algebra. Algebra became recognized as a separate branch of mathematics thanks to the work of the Persian mathematician Muhammad ibn Musa al-Khwarizmi.

The Roots of Algebra

The Quadratic Formula with Examples
(Credit: www.onlinemathlearning.com)

The history of any field of mathematics rides on a curvy road, and this is no less true for algebra. The roots of algebra can be traced back to Babylonian and Greek mathematics, at least as far back as 2000 BCE.  We have evidence from clay tablets that Babylonian mathematicians were working with the same ideas as algebra. Their representation was not the same, and the symbols they used were unique to their culture, but the fundamental spirit of algebra is evident. The Babylonians used sophisticated arithmetic methods to solve what we would now call algebraic problems. The Egyptians also worked with algebraic ideas, but they were much less advanced than the Babylonians and did not progress much past solving linear equations.

The next big reservoir of algebraic thought came courtesy of the Greek mathematicians, in particular Diophantus of Alexandria. Diophantus lived in Alexandria, Egypt, in the 3rd century. Little is known of his life beyond his works and his age. He authored a thirteen-book series titled Arithmetica that unfortunately has not survived in its full form. The portions that have survived show algebraic equations being solved. Diophantus and the Greeks devised a system of geometric algebra, using geometric figures such as squares to solve equations.

The Indian mathematician Brahmagupta was another person who influenced the development of algebra. Brahmagupta lived during the 7th century in northwest India. He wrote many influential works with a focus on mathematics and astronomy. His most famous work, Brahmasphutasiddhanta, provided solutions to linear and quadratic equations and is one of the earliest known texts to treat zero as a number. Much of his work traveled from India to the Middle East, and it was not known in Western Europe until many centuries later.

The Compendious Book on Calculation by Completion and Balancing


These earlier systems, especially the Greek and Indian, provided the inspiration for the Persian mathematician al-Khwarizmi. Al-Khwarizmi was born around 780 and was fortunate enough to have studied and worked at the House of Wisdom in Baghdad. The House of Wisdom was an enormous library and a major intellectual center of the time. In the 820s he wrote The Compendious Book on Calculation by Completion and Balancing, forming the foundation of algebra and establishing it as an independent discipline from arithmetic and geometry.

The Arabic title of his work is Al-jabr wa’l muqabalah, and it is from “al-jabr” that we get the term algebra. As the title indicates the text stresses the completion and balancing of equations. Here is a simple example of each type of operation:

  1. Completion – Take the equation x+6=36. To complete this equation, we subtract 6 from each side to get x=36-6, or x=30.
  2. Balancing – Take the equation x+y=y+30. To balance this equation we cancel y from both sides and get x=30.

His treatise is important because it presented the first systematic solution of linear and quadratic equations.
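Al-Khwarizmi solved quadratic equations by completing the square, the method that the modern quadratic formula generalizes. Here is a minimal Python sketch (the function name and example coefficients are our own illustration):

```python
import math

def solve_quadratic(a: float, b: float, c: float) -> tuple:
    """Real roots of a*x**2 + b*x + c = 0 via the quadratic formula,
    x = (-b ± sqrt(b**2 - 4ac)) / 2a, which generalizes
    al-Khwarizmi's completing-the-square method."""
    if a == 0:
        raise ValueError("not quadratic: a must be nonzero")
    disc = b * b - 4 * a * c  # the discriminant
    if disc < 0:
        return ()  # no real roots
    root = math.sqrt(disc)
    return ((-b - root) / (2 * a), (-b + root) / (2 * a))

# x**2 - 5x + 6 = 0 factors as (x - 2)(x - 3), so the roots are 2 and 3.
print(solve_quadratic(1, -5, 6))  # (2.0, 3.0)
```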

Algebra Today

Today algebra is used in a variety of mathematical fields, practical applications, and everyday life situations. Numbers and equations are used in everyday life whether we realize it or not. We use algebra in finance when we calculate loan interest, our return on investment, or a currency exchange rate. We use algebra when calculating ratios. Ratios are relationships between different quantities. Twice as many guests are now showing up to your party? We need to balance the equation and add twice as much of each ingredient to that soup we are cooking. Are you a United States resident traveling outside the country? Most of the world uses the metric system for measurements, and we’d use algebra to convert those measurements. We use algebra in statistics, graphing, computer coding, measuring calculations such as area, volume, and mass, and more.
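A couple of the everyday examples above – recipe scaling and metric conversion – can be sketched in a few lines of Python (the recipe quantities are invented for illustration):

```python
def scale_recipe(ingredients: dict, factor: float) -> dict:
    """Scale every ingredient quantity by the same ratio."""
    return {name: qty * factor for name, qty in ingredients.items()}

def km_to_miles(km: float) -> float:
    """Metric-to-US conversion: 1 km is about 0.621371 miles."""
    return km * 0.621371

# Twice as many guests means twice as much of each ingredient.
soup = {"broth_cups": 4, "carrots": 2, "onions": 1}
print(scale_recipe(soup, 2))      # {'broth_cups': 8, 'carrots': 4, 'onions': 2}
print(round(km_to_miles(10), 2))  # a 10 km trip is about 6.21 miles
```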

Intuitively, we use algebra all the time when we solve for unknown variables. Abstractly, algebra helps us develop our critical thinking and problem-solving skills. Lastly, many other branches of mathematics depend on algebra. Finding the area under a curve requires the use of calculus, and calculus would not be possible without algebra.

Continue reading more about the exciting history of science!

1848: Absolute Zero

Everyone is familiar with the concept of temperature.  Temperature is a way to describe how hot or cold something is.  But what is it that determines how hot or cold something is?

All of matter is made of atoms, and those atoms are always moving.  Temperature then, is a measure of the kinetic energy (the energy of motion) of the particles in a substance or system.  The faster the atoms move, the higher the temperature.  Temperature also determines the direction of heat transfer, which is always from objects of a higher temperature to objects of a lower temperature.

Absolute zero is the lowest temperature theoretically possible.  It corresponds to a bone-chilling -459.67 degrees on the Fahrenheit scale and -273.15 on the Celsius scale.  At this temperature there is the complete absence of thermal energy, as the particles of a substance have no kinetic energy.

The History of Absolute Zero

The roots of the idea of absolute zero can be traced back to the 17th century, when scientists began to explore the behavior of gases.  In 1662, Robert Boyle formulated Boyle’s Law, which states that the volume of a gas is inversely proportional to its pressure at a constant temperature.  This law laid the foundation for the study of gases and eventually led to the concept of absolute zero.

Absolute Zero Temperature Scale
Absolute Zero Temperature Scale

Over the next 200 years additional discoveries were made that brought scientists closer and closer to the concept of an absolute zero temperature point.  Then in 1848 the distinguished British scientist William Thomson (later Lord Kelvin) published a paper titled On an Absolute Thermometric Scale, in which he made the case for a new temperature scale with absolute zero as its lower limit. At the time, temperature was measured on various scales, such as the Celsius and Fahrenheit scales.  However, these scales had certain limitations and were based on arbitrary reference points.  Thomson recognized the need for a temperature scale that would provide a universal standard and be based on fundamental physical principles.  Scientists could now rely on a scale for temperature measurements without the need for negative numbers.

Thomson’s key insight was to base his new scale on the behavior of an ideal gas.  According to the pressure law for gases (later associated with Gay-Lussac), the pressure of an ideal gas is directly proportional to its temperature when the volume is held constant.  Thomson realized that the temperature at which the pressure of such a gas would fall to zero must represent the absolute zero of temperature.  He correctly calculated its value and used the Celsius degree as the scale’s unit increment.

Absolute zero is the temperature at which all atoms would stop moving and kinetic energy equals zero. This temperature has never been achieved in the laboratory, but scientists have come close. Sophisticated setups that use laser beams to trap clouds of atoms, held together by magnetic fields generated by coils, have cooled elements such as helium to within fractions of a degree of absolute zero.  The current world record for the coldest temperature was set by a team of researchers at Stanford University in 2015. They used sophisticated laser beams to slow rubidium atoms, cooling them to an incredible 50 trillionths of a degree, or 0.00000000005 degrees Celsius, above absolute zero! This is extremely impressive since, according to theory, absolute zero can never actually be reached.

Thomson’s temperature scale was later named the Kelvin Scale in his honor, and the kelvin is the International System of Units (SI) base unit of temperature.
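The relationship between the scales is simple arithmetic: the kelvin uses the Celsius degree as its increment but starts counting from absolute zero, while Fahrenheit uses a different increment and offset. A minimal Python sketch:

```python
ABSOLUTE_ZERO_C = -273.15  # absolute zero on the Celsius scale

def celsius_to_kelvin(c: float) -> float:
    """The Kelvin scale shares the Celsius degree as its unit
    increment but places its zero at absolute zero."""
    if c < ABSOLUTE_ZERO_C:
        raise ValueError("temperature below absolute zero")
    return c - ABSOLUTE_ZERO_C

def celsius_to_fahrenheit(c: float) -> float:
    """Standard Celsius-to-Fahrenheit conversion."""
    return c * 9 / 5 + 32

# Absolute zero expressed on each scale, matching the figures above.
print(celsius_to_kelvin(ABSOLUTE_ZERO_C))              # 0.0 (kelvin)
print(round(celsius_to_fahrenheit(ABSOLUTE_ZERO_C), 2))  # -459.67 (°F)
```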

Practical Uses of Absolute Zero

The concept of absolute zero is relevant to many modern technologies, such as cryogenics and quantum computing.  Below is a summary of its applications:

  1. Cryogenics – cryogenics is the study of very low temperatures. Its technologies are used in various industries such as medical science, where they assist in the preservation and storage of biological materials.
  2. Superconductivity – superconductivity is the phenomenon where certain materials conduct electric current with zero electrical resistance. Superconductivity is needed in several fields including medical imaging (MRI) and particle accelerator technologies.
  3. Quantum Computing – at very low temperatures quantum mechanical effects become more pronounced. In order to create and manipulate qubits, the basic unit of quantum information, quantum computing systems require extremely low temperatures.
  4. Space Exploration – extremely low temperatures are encountered in deep space. Understanding the properties of materials at these temperatures is crucial for designing spacecraft components.

As you can see, absolute zero holds profound implications for various fields of study and cutting-edge technology.

Continue reading more about the exciting history of science!

1866: Laws of Inheritance

The laws of inheritance are a set of fundamental principles that govern the transmission of genetic traits from one generation to the next. Their discovery and understanding have changed our view of life while having a profound impact on a diverse range of fields such as medicine, agriculture, and biotechnology. The ideas behind the laws of inheritance, the theory of evolution by natural selection, and population genetics have formed what scientists call the modern synthesis, a cornerstone of modern biology.

Gregor Mendel and the Pea Plant Experiments

For most of history, people’s understanding of inheritance came from anecdotal evidence and observations of certain traits being passed down from parents to offspring. It wasn’t until the mid-19th century that the Augustinian monk Gregor Mendel conducted his now famous experiments with pea plants and established the principles of heredity. Prior to Mendel’s experiments, the prevailing theory of inheritance suggested a blending of traits and characteristics from both parents in their offspring.

Gregor Mendel’s pea plant experiment
(Credit: Encyclopaedia Britannica)

In 1866 Mendel published Experiments in Plant Hybridization, which explained his pea plant experiments and the resulting laws of inheritance.  His work was first read to the Natural History Society of Brünn and then published in the Proceedings of the Natural History Society of Brünn.

During the years 1856 to 1863 Mendel cultivated over 28,000 plants and tested for seven specific traits.  The traits he tested for were:

  • Pea shape (round or wrinkled)
  • Pea color (green or yellow)
  • Pod shape (constricted or inflated)
  • Pod color (green or yellow)
  • Flower color (purple or white)
  • Plant size (tall or dwarf)
  • Position of flowers (axial or terminal)

The results of his careful experimentation allowed Mendel to formulate some general laws of inheritance. His three laws of inheritance are:

  • Law of Segregation – allele pairs (alternative forms of a gene) segregate during gamete (sex cell: sperm or egg) formation. Stated differently: each organism inherits two alleles for each trait, but only one of these alleles is randomly passed on when the gametes are produced.
  • Law of Independent Assortment – allele pairs separate independently during the formation of gametes.
  • Law of Dominance – when the two alleles of a pair are different, one will be dominant while the other will be recessive.
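The Laws of Segregation and Dominance can be illustrated with a simple simulation of a monohybrid cross. This is a sketch under assumed notation (A for a dominant allele, a for recessive): crossing two heterozygous (Aa) parents should yield the classic 3:1 dominant-to-recessive phenotype ratio.

```python
import random
from collections import Counter

def gamete(parent: str) -> str:
    """Law of Segregation: each gamete randomly receives exactly
    one of the parent's two alleles."""
    return random.choice(parent)

def cross(p1: str, p2: str) -> str:
    """Offspring genotype: one allele from each parent."""
    return "".join(sorted(gamete(p1) + gamete(p2)))

def phenotype(genotype: str) -> str:
    """Law of Dominance: 'A' masks 'a' whenever it is present."""
    return "dominant" if "A" in genotype else "recessive"

random.seed(42)  # fixed seed so the run is repeatable
offspring = [cross("Aa", "Aa") for _ in range(10_000)]
counts = Counter(phenotype(g) for g in offspring)
print(counts["dominant"] / counts["recessive"])  # prints a value close to 3
```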

Mendel’s laws of inheritance suggested a particulate inheritance of traits in which traits are passed from one generation to the next in discrete packets.  As already noted, this differed from the most popular theory at the time which suggested a blending of characteristics in which traits are blended from one generation to the next.

Unfortunately for the progress of science, Mendel’s work went largely unnoticed and forgotten during his lifetime.  This was for a few reasons.  First, he lived in relative isolation at the Augustinian St. Thomas’s Abbey, in what is now the Czech Republic, and did not have a network of scientific colleagues.  He published his work in a relatively obscure scientific journal and did not have the means to promote his findings.  His work, in a sense, was also ahead of its time.  The scientific community was simply focused on other areas of study during his lifetime, and the concept of discrete hereditary units (now called genes) did not fit in with the prevailing scientific paradigm.  Lastly, Mendel did little follow-up to his work and soon shifted his attention to administrative and educational duties within the abbey.  It wasn’t until the turn of the 20th century that his work was rediscovered and popularized independently by three scientists – Hugo de Vries, Carl Correns, and Erich von Tschermak.

A Journey into Genetics

Mendel’s laws of inheritance laid the groundwork for the 20th century field of genetics.  The field of genetics is the study of heredity that incorporates the structure and function of genes as the mechanism of biological inheritance.  

The emergence of molecular genetics began to take shape after it was discovered that the mechanism of hereditary transfer was contained in nucleic acids.  The race was on to discover the mechanism by which nucleic acids transferred the hereditary material.  The final breakthrough culminated with the discovery of the double-helical structure of DNA by James Watson and Francis Crick in 1953, as it provided the definitive explanation for how genetic information is encoded and transmitted within living organisms.  

The field of genetics continues to advance into the 21st century at a blistering pace.  In addition to unraveling the fundamental principles of life, scientists are now able to exploit the mechanics of genes and are learning novel ways to edit them to cure disease.  As of late 2023, the United States Food and Drug Administration (FDA) and medical regulators in the United Kingdom have approved the world’s first gene-editing treatment for Sickle Cell Disease using a gene-editing tool called CRISPR.  CRISPR technology has the potential to revolutionize genetics and various related fields through its precise genome editing capabilities, potentially adding yet another milestone to the exciting history of science!

Continue reading more about the exciting history of science!