Setting things out
Set theory is the branch of mathematics concerned with placing objects into groups called sets. Sets can be defined as containing:
1. numbers (e.g. all odd numbers less than 100)
2. objects (e.g. the set of different types of dogs)
3. ideas (e.g. the set of problems that can be solved by a computer)
Boolean algebra applies naturally to set theory, as in the example of the shaggy and yappy dogs on page 145.
John Venn (1834–1923) was a British priest and logician who, besides designing and building a cricket-ball-bowling machine, is best known for Venn diagrams, which show set theory in a visual way. To use our earlier example, if you are considering all types of dog, then the rectangular box represents every type of dog. I introduced two sets: x, the set of shaggy dogs, and y, the set of yappy dogs. We saw that some shaggy dogs are also yappy, so those two sets need to overlap.
The overlap in the middle represents x AND y. x OR y would be any dog within the two circles. NOT x would be any unshaggy dog.
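These operations are exactly what a programmer does with sets. Here is a minimal sketch in Python – the dogs and their names are invented for illustration:

```python
# A hypothetical universe of dogs, each identified by name.
all_dogs = {"Rex", "Fifi", "Bruno", "Pip", "Moss"}

x = {"Rex", "Bruno", "Pip"}   # the set of shaggy dogs
y = {"Pip", "Fifi"}           # the set of yappy dogs

print(x & y)          # x AND y: dogs that are both shaggy and yappy -> {'Pip'}
print(x | y)          # x OR y: dogs that are shaggy or yappy (or both)
print(all_dogs - x)   # NOT x: every dog that is not shaggy
```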
To bring things back into more mathematical territory, the German mathematician Georg Cantor (1845–1918) was very interested in infinite sets, which, as the name suggests, are sets containing an infinite number of things.
Consider the set of positive whole numbers (1, 2, 3…), which mathematicians call the natural numbers. This is an infinite set because we can keep on counting for ever. Then consider the set of numbers that can be anything at all, including fractions, negative numbers and the irrational numbers like π, e and ϕ that we saw earlier (mathematicians call this continuous list of numbers the real numbers). This is also an infinite set, yet Cantor was able to show that it contains more numbers than the set of natural numbers: both sets are infinite, but they are not the same size. Therefore we have the idea that there are different infinities for different infinite sets. This sort of business is the reason many mathematicians consider infinity to be a concept, rather than a number.
Cantor said that the natural numbers are countably infinite because, although the counting never ends, every number in the set is eventually reached if you count for long enough. The real numbers are uncountably infinite: Cantor’s famous diagonal argument shows that any attempt to list them all, in any order, must leave some real numbers out, so there is no way of counting through them even in principle.
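The diagonal argument itself can be sketched in a few lines of Python: given any claimed list of the real numbers between 0 and 1, build a new number whose nth digit differs from the nth digit of the nth entry, so it cannot appear anywhere on the list. The sample list here is invented and, unlike a real one, finite:

```python
# A purported "list of all the reals between 0 and 1", truncated for illustration.
listing = [
    "0.500000...",
    "0.333333...",
    "0.142857...",
    "0.718281...",
]

def diagonal(listing):
    """Build a number that differs from the nth entry in its nth digit."""
    digits = []
    for n, s in enumerate(listing):
        d = s[2 + n]                        # nth digit after the decimal point
        digits.append("6" if d == "5" else "5")
    return "0." + "".join(digits)

print(diagonal(listing))  # differs from every listed number -> 0.6555
```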
Cantor developed the continuum hypothesis, which states:
There is no set that has more members than the set of integers, but fewer members than the set of real numbers.
It has since been proved mathematically (by Kurt Gödel and Paul Cohen) that this hypothesis can be neither proved nor disproved within standard set theory, and what to make of that strange state of affairs remains one of the great questions in mathematics.
ALAN TURING (1912–54)
Turing was a brilliant British mathematician and scientist who is well known for the instrumental role he played in the Allied effort to break the German Enigma cipher, a coding machine used by the Nazis during the Second World War.
Memory machine
Prior to the Second World War, Turing was a mathematics fellow at Cambridge University. He worked on the Entscheidungsproblem, a challenge set by the German mathematician David Hilbert (see here) that asked whether every mathematical statement could be settled by an algorithm: a mechanical procedure that churns out a ‘true’ or ‘false’ answer with no need for human insight. Turing was able to show that this is not possible by introducing the concept of an idealized computer called a Turing machine.
A Turing machine is a computer with an infinite memory that can be fed any amount of data. The machine modifies the data according to a simple set of mathematical rules and, if it ever finishes, gives an output. Turing showed that there is no general way to tell whether a Turing machine will reach an answer or carry on working at the problem for ever – and from this it follows that the all-purpose algorithm Hilbert asked for cannot exist.
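To make the idea concrete, here is a minimal sketch of a Turing machine simulator in Python; the particular machine, which adds one to a binary number, is an invented example rather than one of Turing’s own:

```python
# Rules: (state, symbol read) -> (symbol to write, head movement, new state).
# This machine adds 1 to a binary number, starting at its rightmost digit.
RULES = {
    ("carry", "1"): ("0", -1, "carry"),   # 1 plus a carry is 0, carry the 1 left
    ("carry", "0"): ("1", -1, "done"),    # 0 plus a carry is 1, finished
    ("carry", " "): ("1", -1, "done"),    # ran off the left edge: write a new digit
}

def run(tape, pos, state="carry"):
    cells = dict(enumerate(tape))         # the 'infinite' tape as a sparse dict
    while state != "done":
        symbol = cells.get(pos, " ")      # blank cells read as a space
        write, move, state = RULES[(state, symbol)]
        cells[pos] = write
        pos += move
    out = "".join(cells.get(i, " ") for i in range(min(cells), max(cells) + 1))
    return out.strip()

print(run("1011", pos=3))   # 1011 + 1 = 1100
```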
The theoretical Turing machine was very important in establishing computer science. Turing’s research into computing took him to Princeton University in the United States, where he worked towards a PhD in mathematics. Here Turing built one of the first electrical computers using Boolean logic (see here), a crucial development towards what came next.
Unpicking the code
During the Second World War Turing was part of the Government Code and Cypher School based at Bletchley Park in Milton Keynes, UK, where he was set to work on breaking the Enigma cipher. While there Turing developed something called the Bombe, an electro-mechanical machine that could work its way through the huge number of possible settings of the Enigma machine far faster than a human cryptanalyst could. The Bombe relied on the fact that the Enigma would never encrypt a letter as itself – if you typed the letter ‘q’ into the Enigma, ‘q’ would never be given as an output. The Bombe would try each setting, and when an output letter matched its input letter that setting would be eliminated and the next one tried. The second trick was to use a crib: an educated guess as to what the first words of an intercepted message might be.
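The elimination principle is easy to sketch in Python: slide a guessed crib along the intercepted message and discard any alignment where some letter would have had to encrypt to itself. The intercept below is invented, and a real Bombe did far more than this, but the underlying logic is the same:

```python
def possible_positions(ciphertext, crib):
    """Positions where the crib could sit: since Enigma never maps a letter
    to itself, any alignment with a matching letter is impossible."""
    hits = []
    for start in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[start:start + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            hits.append(start)            # no letter coincides: still possible
    return hits

intercept = "QFZWRWIVTYRESXBFOGKUHQBAISE"   # invented ciphertext
crib = "WETTERBERICHT"                      # German for 'weather report'
print(possible_positions(intercept, crib))
```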
The work of Turing and others at Bletchley Park was hugely important to the Allies and certainly shortened the war. However, the work was top secret and Turing could therefore receive little recognition for his efforts.
Man v machine
After the war Turing continued his work on electronic computers, turning his hand to artificial intelligence – the question of whether a sufficiently powerful and fast computer could be considered intelligent. During his investigations, Turing devised what has become known as the Turing test: a computer is to be considered intelligent if a human being communicating with it cannot tell that it is not human. Turing’s work on computers laid the foundations for the digital revolution we now enjoy (and also take for granted). The computer I’m typing this book on, for example, has its origins in a Turing machine.
Into the chaos
In 1952 Turing turned his hand to biology. In many living things there is a stage when cells change from being very similar to one another to becoming more differentiated. For example, in a developing embryo a group of identical stem cells transforms into the specialized cells that go on to form the body’s organs. Turing was able to show that this process, called morphogenesis, is underpinned by simple mathematical rules that can, nonetheless, produce very complex animals. The idea was well ahead of its time and many people thought Turing was wasted on such research. However, many years after his death, his work on morphogenesis would be recognized as one of the first glimpses of chaos mathematics.
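A minimal sketch of the idea in Python, using the Gray–Scott equations – a standard modern reaction-diffusion system in the same family as Turing’s work, not his original equations – with conventional parameter values. Two chemicals spread and react along a row of cells, and the uniform starting state breaks up into a pattern all by itself:

```python
import numpy as np

n = 200                                    # number of cells in a 1-D row
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060    # standard Gray-Scott parameters
u, v = np.ones(n), np.zeros(n)
u[90:110], v[90:110] = 0.5, 0.25           # a small disturbance in the middle

def laplacian(a):                          # diffusion on a ring of cells
    return np.roll(a, 1) - 2 * a + np.roll(a, -1)

for _ in range(10000):
    uvv = u * v * v                        # the reaction between the chemicals
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# Print a crude picture: '#' marks cells where the second chemical concentrates.
print("".join("#" if x > 0.2 else "." for x in v))
```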
Unfortunately, in 1952 Turing was convicted of gross indecency for homosexual acts, which were illegal at that time. Turing chose to undergo a course of oestrogen injections rather than serve a prison sentence. Horrific side effects of the injections led Turing to become depressed, and in 1954 he was found dead of cyanide poisoning. Turing, the founder of modern computing, the unsung war hero, was dead, most likely by his own hand, at the age of forty-one. We can only ponder what discoveries were denied to us by his tragically premature demise.
MAPPING THINGS OUT
Computers were not immediately adopted by mathematicians because many detractors felt that an elegant proof of a problem should be shown formally, rather than by number-crunching. One of the first conjectures proved with the assistance of computing power was called the four-colour theorem.
In the nineteenth century South African mathematician Francis Guthrie (1831–99) was shading in a map of England’s counties when he noticed that each county could be distinguished from its neighbours using just four colours. Guthrie posited this as the four-colour conjecture, which, put more formally, states that for a flat two-dimensional map, on which any regions that share a border cannot be the same colour, no more than four colours are ever needed. It is important to note that regions meeting only at a point do not share a border and can therefore be the same colour, and that every region on the map must be a single connected piece, free to take any colour required (unlike the part of Russia where old Königsberg is, which must share the colour of the rest of the country).
One of the main difficulties with this problem lies in the fact that it is highly visual; there is an infinite number of maps one could draw to test the conjecture. However, we can represent each region as a point using graph theory (much as Euler did with the Königsberg problem).
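The translation from map to graph is easy to sketch in Python: regions become points, shared borders become connections, and a short backtracking search looks for a colouring. The ‘map’ below is invented for illustration:

```python
# An invented 'map': each region listed with the regions it borders.
borders = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"A", "B", "C", "E"},
    "E": {"D"},
}
COLOURS = ["red", "green", "blue", "yellow"]

def colour(regions, colouring=None):
    """Backtracking search: assign colours so that no neighbours match."""
    colouring = colouring or {}
    if len(colouring) == len(regions):
        return colouring
    region = next(r for r in regions if r not in colouring)
    for c in COLOURS:
        if all(colouring.get(nb) != c for nb in regions[region]):
            result = colour(regions, {**colouring, region: c})
            if result:
                return result
    return None        # failure here would mean more than four colours needed

print(colour(borders))   # A, B, C, D all touch, so all four colours get used
```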
Representing maps as graphs allowed mathematicians to start tackling the problem in earnest, because various different maps could be shown to have effectively the same graph. Croatian mathematician Danilo Blanuša (1903–87) discovered two maps that needed more than three colours. These elusive maps are known as snarks, and they helped to demonstrate that the minimum number of colours required must be at least four. There are eight known snarks, the last of which was discovered in 1989.
Without a doubt
In the 1970s two researchers in the United States, Kenneth Appel (1932–) and Wolfgang Haken (1928–), then tackled the four-colour conjecture. They discovered that the most complicated part of any map – any area that might need lots of colours, and which might, therefore, disprove the four-colour conjecture – could always be shown as one of 1,936 possible situations. All of these 1,936 situations could be modelled as a graph and, using a computer, Appel and Haken showed that they could all be drawn using just four colours.
So Appel and Haken had shown that any map must contain one of the 1,936 standard configurations, and that each of those can be coloured with four colours – therefore no counter-example can exist.
This was the first theorem proved with the assistance of a computer, but many mathematicians were unimpressed because the exhaustive proof required hundreds of pages of calculations. This meant that checking the proof would take so long it was necessary to trust that the computer’s work was correct, which was not an assumption many mathematicians would make at the time.
A PIECE OF THE PIE
π, as every schoolchild knows, is a special number that you get by dividing a circle’s circumference (perimeter) by its diameter (the distance across the middle of the circle). Because all circles are mathematically similar (in proportion) you always generate the same value, no matter the size of the circle.
Mathematicians have always been fascinated by π. It is an irrational number (see here) and it is a very useful constant in mathematics that occurs not only in questions about circles but also in all kinds of problems in geometry and calculus. Throughout the history of mathematics we have seen people trying to calculate π to increasing levels of accuracy (see box here). My calculator says:
π = 3.141592654
Since the advent of mechanical and electronic computers, the ability to perform the necessary calculations quickly and accurately has improved exponentially. The Chudnovsky brothers in the USA were the first to get π to 1 billion decimal places, using a home-made supercomputer in the 1980s. At the time of writing the record belonged to French computer scientist Fabrice Bellard, who reached 2.7 trillion (2,700,000,000,000) decimal places in 2009.
π Crunching
Here’s how the value of π has been calculated across the centuries:
In 1900 BC the Ancient Egyptians calculated a value of 256/81 = 3.1605.
The ancient Babylonians had the handier 25/8 = 3.125.
We have inferred from the Bible an approximation of 3.
We saw that Archimedes placed π between 3.1408 and 3.1429, and he also gave us the common school approximation of 22/7.
In China in the fifth century AD, Zu Chongzhi used 355/113 = 3.1415929.
In c. AD 1400 Indian mathematician Madhava calculated π to eleven decimal places: 3.14159265359.
The 100-decimal-place mark was reached by English astronomy professor John Machin in 1706, using a formula built from arctangents (sketched after this list).
Swiss mathematician Johann Lambert proved π is irrational in 1768.
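Machin’s approach can be sketched in Python. His formula, π = 16 arctan(1/5) − 4 arctan(1/239), converges quickly, and Python’s whole-number arithmetic lets us carry as many digits as we like – here 100, matching Machin’s record:

```python
def arctan_inverse(x, digits):
    """arctan(1/x) scaled up by 10**(digits + 10), via the Gregory series
    arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    scale = 10 ** (digits + 10)          # ten guard digits against rounding
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        n += 2
        sign = -sign
        term = scale // (x ** n)
        total += sign * term // n
    return total

digits = 100
pi = 16 * arctan_inverse(5, digits) - 4 * arctan_inverse(239, digits)
print("3." + str(pi)[1:digits + 1])      # 3.14159265358979...
```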
Into the unknown
Chaos theory is the branch of mathematics that deals with unpredictable behaviour. Although this field has really opened up since the advent of computers, chaotic equations were first noticed during Newton’s time.
One of the main aims of Newton’s work was to create a system of equations that explained the motion of the planets. As we know, the planets move around the sun in almost circular orbits. Astronomers had noticed, however, that the orbits would wobble slightly, apparently at random, from time to time.
Leonhard Euler (see here) developed a concept that became known as the three-body problem in an attempt to predict the somewhat irregular orbit of the moon. The irregularity arises because the moon’s orbit is influenced by the gravity of both the earth and the sun. The force the sun exerts on the moon varies according to where the moon is in its orbit around the earth, and as a result the moon’s orbit fluctuates. Euler was able to work out the governing equations for the situation in which two of the bodies (the earth and the sun) are in a fixed relationship when compared to the third (the moon). French mathematician Henri Poincaré (1854–1912) tried to extend this to a more general solution that could be applied to the whole solar system. He did not succeed, but he was able to show that the orbits would never settle into a regular pattern.
The motion of the planets was not the only phenomenon scientists had observed to be irregular. The mathematics of turbulence had long eluded mathematicians. When a gas or a liquid flows, the motion of a particle within it can be explained mathematically under certain circumstances – when the speed of the fluid is relatively low. However, as the speed increases, the motion of a particle, particularly around an obstacle, becomes impossible to predict. This turbulence is highly chaotic. The study of the motion of gases and liquids is called fluid mechanics and is important to human beings in all sorts of situations, including transport, electricity generation, and even in understanding the flow of blood around our bodies.
The weather is perhaps the biggest earthly example of turbulence. In the 1960s American meteorologist Edward Lorenz (1917–2008) was modelling the movement of air using a computer when he noticed that, if he changed the initial conditions of his simulations by a negligible amount, as time went by the weather predicted by his simulations would vary widely. This led Lorenz to coin the term ‘the butterfly effect’: the notion that a minuscule change in an air pattern, much like the change induced by a butterfly flapping its wings, could lead to a hurricane happening elsewhere in the world.
It is this sensitivity to initial conditions that causes unpredictable, chaotic behaviour. Even when we completely understand the motion of the particles involved, we can never gather enough detail in our measurements of their speed or mass or temperature to be able to predict an accurate long-term outcome.
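Lorenz’s discovery is easy to reproduce with a short Python sketch: run his three weather-like equations twice from starting points that differ by one part in a billion, and watch the two runs part company. The step size and starting values here are arbitrary choices:

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One small step of Lorenz's 1963 system, using simple Euler integration."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)             # one starting point...
b = (1.000000001, 1.0, 1.0)     # ...and a neighbour, nudged by a billionth

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:        # report how far apart the two runs have drifted
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:4.0f}: separation {gap:.2e}")
```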
The same applies to Turing’s observations on morphogenesis – although the formation of stripes on a zebra is governed by very simple mathematical and chemical rules, immeasurably tiny differences between individual zebras mean that their stripes vary hugely.
Weather Forecast
The weather is chaotic – we will never be able to predict it accurately more than a few days in advance, if that, and even then the true outcome could be significantly different from the forecast. The Great Storm in Britain in 1987 brought the worst winds for hundreds of years, and yet it was not anticipated.
The computer has been key in the study of chaos mathematics because it allows you to take a model of a situation and run it forward, recalculating the many variables at each stage. Without a computer it can be incredibly time consuming to perform each step, called an iteration. Computers are good for iterative processes, which helps in the next branch of mathematics we shall look at: fractals.
Under the microscope
Although the term ‘fractal’ wasn’t coined until 1975, mathematicians have been fascinated by them for hundreds of years. A fractal is a geometric figure that has what mathematicians call self-similarity: no matter at what scale you look at the image, you still see the same features.
A coastline is a good example of a fractal. Imagine you had a satellite image of a coastline. Without any clues like buildings, trees or boats, you would find it hard to tell whether you were looking at 1,000 miles of coast or 10 miles of an island. This is because the features – the river inlets, headlands, bays, etc. – look very similar at different scales. The same is true of many features of the natural landscape. Astronauts on the moon found it very hard to gauge the size of boulders – without the haze of atmosphere, they could not tell whether they were looking at a car-sized boulder 100m away or a much larger boulder 500m off.
Everyday Fractals
Many trees and plants have a branching, fractal-like structure. Ferns are especially fractal – each leaflet looks like a smaller version of the whole frond.
The CGI (computer-generated imagery) that we enjoy in today’s games and films often uses fractal algorithms to make landscapes, foliage, clouds and even skin and hair look realistic.
Famous fractals
Swedish mathematician Helge von Koch (1870–1924) devised one of the first mathematical fractals, known as the Koch snowflake, which follows a very simple set of rules: you start with an equilateral triangle, and then with each iteration you add another equilateral triangle to the middle third of each line in the diagram. This builds up a snowflake structure as shown:
Besides looking pretty, this snowflake has some very interesting properties. It has the self-similarity property – if you zoom in on an edge, you cannot tell how many iterations have been performed. And if you continued the iterations ad infinitum, the snowflake would have an infinite perimeter, even though it encloses only a finite area.
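A few lines of Python make this claim concrete: with each iteration the perimeter grows by a factor of 4/3 and so heads off to infinity, while the added area shrinks fast enough for the total to settle down. The starting triangle here has sides of length 1:

```python
import math

side, segments = 1.0, 3          # start: an equilateral triangle, sides of length 1
area = math.sqrt(3) / 4          # area of that starting triangle

for iteration in range(1, 9):
    # Each segment sprouts a new equilateral triangle on its middle third...
    area += segments * math.sqrt(3) / 4 * (side / 3) ** 2
    # ...and is replaced by four segments, each a third as long.
    segments, side = segments * 4, side / 3
    print(f"iteration {iteration}: perimeter = {segments * side:8.3f}, "
          f"area = {area:.6f}")   # perimeter -> infinity, area -> 2*sqrt(3)/5
```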