Wednesday, 19 December 2012
A reassuring article by Tom Holland in Saturday's Guardian discusses the supposed imminent end of the world predicted by the Mayan calendar. Holland refers to a recently discovered Mayan reference to the date (21 December 2012 in our calendar). The king Jaguar Paw was defeated in battle in 695 CE, and to restore the confidence of his allies he associated his time with the distant future by invoking 2012. Holland says this was "designed to place his defeat in a reassuringly cosmological context. Bad news was being veiled behind a recitation of numbers. George Osborne would surely have approved."
So we have an interesting example of the use of number as propaganda to assert one's place in the cosmos.
Incidentally, if you are worried about the end of the world this week (and Holland says that a Mayan inscription refers to the world's still existing in 4772, so there wasn't unanimity in the alleged prediction), the article suggests actions that the Mayans might have taken to reduce the risk. These include "piercing their tongues with thorns, and stabbing their penises with stingray spikes". So those who take this seriously know what they should do.
Friday, 7 December 2012
My (current) favourite infinity paradox
I remember one of my undergraduate tutors telling me about this paradox, of which I have just been reminded by Littlewood's Miscellany.
At one minute to noon the numbers 1 to 10 are put into a box, and the number 1 is removed.
At 1/2 minute to noon the numbers 11 to 20 are added to the box, and 2 is removed.
At 1/3 minute to noon the numbers 21 to 30 are added to the box, and 3 is removed.
And so on.
How many numbers are in the box at noon? The answer is obviously 9+9+9+9+... which looks as if it should be infinite. But in fact there are no numbers in the box, because if you suggest that number n might be there, I point out that it was taken out at 1/n of a minute before noon. The box is empty: nine times infinity is zero.
One has to be careful in dealing with infinity!
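If you want to see both halves of the argument in action, here is a minimal sketch in Python (the cut-off N is my own finite stand-in for the infinite process):

```python
# Simulate the first N steps: at step n, put in the numbers
# 10n-9 .. 10n and take out the number n.
N = 1000
box = set()
for n in range(1, N + 1):
    box.update(range(10 * n - 9, 10 * n + 1))  # ten numbers in
    box.discard(n)                             # one number out

print(len(box))  # 9 * N: the count grows without bound...
print(min(box))  # N + 1: ...yet every number up to N has already gone
```

After any finite number of steps the box holds 9N numbers, but the smallest of them keeps receding: exactly the tension the paradox exploits.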
Sunday, 2 December 2012
Box paradoxes
It's longer than I would have liked since I last posted - which is because I've spent the last two weekends at maths conferences. First there was MathsJam - which was every bit as good as I expected when I wrote about it recently. Then there was the IMA's Early Careers Mathematicians Conference at Greenwich - where we had the chance to play with lots of unusual puzzles and games from the collections of David Singmaster and Laurie Brokenshire CBE and to try out wonderful linkages with Danny Brown.
There is a lot from MathsJam I could write about - not least a wonderful piece of graph theory from Colin Wright, who showed that factorising large numbers can be reduced to a graph colouring problem! My own short talk was about the two-box paradox (attributed to Schrodinger, in a slightly different form) that I wrote a blog post about in October.
This set me thinking about paradoxes involving boxes. There are several: the Monty Hall problem, Newcomb's Paradox, and Schrodinger's Cat come immediately to mind. What is nice is that they all tell us different things.
Schrodinger's Cat expresses concisely the quantum concept of the superposition of states.
The two-box problem I previously discussed tells us about the difficulty of selecting randomly from an infinite set without being very specific about how you do it.
Newcomb's paradox offers the player the contents of two boxes. They can choose to take box B only, or both boxes A and B. Box A definitely contains one thousand pounds. An infallible predictor has chosen the contents of Box B in advance. If the predictor predicted that the player would take both boxes, they put nothing in Box B. If they predicted that the player would take Box B only, they placed one million pounds in Box B. What should the player do? At the point at which they make their choice, the contents of both boxes are fixed, so logically they get more if they take both boxes. But can they ignore the predictor's infallibility? This paradox, I think, shows us that no such predictor can exist.
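One way to see the two arguments collide is to weaken the infallibility and give the predictor an accuracy p. The amounts below are from the statement above; the accuracy parameter is my own addition, a sketch rather than anything canonical:

```python
def expected_winnings(p, a=1_000, b=1_000_000):
    # Expected winnings when the predictor is right with probability p.
    # Box A always holds a; Box B holds b or nothing.
    one_box = p * b              # B was filled iff one-boxing was foreseen
    two_box = a + (1 - p) * b    # B is full only when the predictor erred
    return one_box, two_box

for p in (0.5, 0.5005, 0.6, 1.0):
    print(p, expected_winnings(p))
```

The strategies break even at p = 0.5005; for a genuinely infallible predictor (p = 1) one-boxing wins a million against a thousand, even though the dominance argument always favours taking both boxes.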
The Monty Hall paradox can be viewed as a disguised version of Martin Gardner's prisoners paradox. Gardner wrote of three prisoners, held in solitary confinement, who know that two are to be executed the next day. Each has a 2/3 chance of dying. A thinks he can improve his odds - he points out to the guard that at least one of the other two must die, so that being told which of B or C will die isn't going to give him new information. But when the guard says C will die, A reckons that his chance of survival is now 1/2 since it is either him or B. Of course, A is wrong (B's chance has improved) and I have a theory that it is memories of Gardner's article, where A's odds don't change, that led many mathematicians (myself included) to jump quickly to the wrong answer.
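A few lines of simulation settle the Monty Hall question empirically (a sketch, assuming the standard rules: the host always opens a losing door other than the one you picked):

```python
import random

def trial(switch):
    doors = [0, 0, 1]            # one door hides the prize
    random.shuffle(doors)
    pick = random.randrange(3)
    # The host opens a door that is neither the pick nor the prize
    opened = next(d for d in range(3) if d != pick and doors[d] == 0)
    if switch:
        pick = next(d for d in range(3) if d not in (pick, opened))
    return doors[pick]

n = 100_000
print(sum(trial(True) for _ in range(n)) / n)   # about 2/3 by switching
print(sum(trial(False) for _ in range(n)) / n)  # about 1/3 by sticking
```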
There is an excellent novel about mathematicians by Sue Woolfe, Leaning towards infinity, which contains a devastating account of the appalling treatment of women by male mathematicians at a fictional conference. I have never seen such behaviour at a maths conference and I found the account implausible until I read in Jason Rosenhouse's account in The Monty Hall Problem of the treatment the journalist Marilyn Vos Savant received from (some) mathematicians when she wrote about this problem. The responses would have been appalling even if Vos Savant had been wrong, but in fact she got it right and her critics didn't. So one useful lesson from the Monty Hall paradox is that on occasion even mathematicians can be jerks.
Sunday, 11 November 2012
Why a flawed proof is worrying
In preparation for a class I have been looking at Alfred Bray Kempe's 1879 "proof" of the Four Colour Theorem. Until Percy Heawood pointed out the flaw in Kempe's proof eleven years later it was accepted. We had a clear, relatively straightforward proof of the result: anyone could check it for themselves (very different from the position now, when we all accept Appel and Haken's computer proof!) The error in Kempe's proof is subtle: in preparing the false proof to present to my class, I found it very persuasive! (The story is told in Robin Wilson's excellent book Four Colours Suffice, published as Four Colors Suffice in parts of the world where modern spelling hasn't yet arrived.)
False "proofs" are worrying. As a teenager I was turned off geometry by a well-known fake proof that all triangles are isosceles. The proof relies on an incorrect diagram. After seeing that I found it hard to accept any geometric proof: I became suspicious of geometric arguments (which I think was good) and turned my back on geometry (which was not so good for an aspiring mathematician!) I wasn't worried so much by trick algebraic proofs that 1=2, which relied on division by 0 or on an ill-based induction argument: I appreciated these jokes, but geometric "proofs" like E.A. Maxwell's "Fallacy of the empty circle" destroyed my enjoyment of geometry.
So Kempe's proof worries me because it is so plausible. If a simple argument can be accepted by the leading mathematicians of the day, where do we stand with today's proofs which are accessible only to specialists and which the rest of us cannot take on trust?
Many proofs contain errors but these errors are usually fixable. Kempe's wasn't (although a proof of the weaker Five Colour Theorem was salvaged) and that seems a warning against complacency.
Wilson's book records that Haken's son presented a seminar on the Appel-Haken proof at Berkeley which led to fierce discussion. Those over 40 were reluctant to accept the computer's role in the proof: those under 40 were more suspicious of a proof relying on 700 pages of human calculation. As I have grown older I have gone in the reverse direction. Whereas I once had concerns about computer proofs, I am much less sure now than I was thirty years ago that there is a gold standard for proofs. If we cannot be sure of a proof we have checked for ourselves (and that is what the story of Kempe's proof suggests) then where is mathematical certainty?
False "proofs" are worrying. As a teenager I was turned off geometry by a well-known fake proof that all triangles are isosceles. The proof relies on an incorrect diagram. After seeing that I found it hard to accept any geometric proof: I became suspicious of geometric arguments (which I think was good) and turned my back on geometry (which was not so good for an aspiring mathematician!) I wasn't worried so much by trick algebraic proofs that 1=2, which relied on division by 0 or on an ill-based induction argument: I appreciated these jokes, but geometric "proofs" like E.A. Maxwell's "Fallacy of the empty circle" destroyed my enjoyment of geometry.
So Kempe's proof worries me because it is so plausible. If a simple argument can be accepted by the leading mathematicians of the day, where do we stand with today's proofs which are accessible only to specialists and which the rest of us cannot take on trust?
Many proofs contain errors but these errors are usually fixable. Kempe's wasn't (although a proof of the weaker Five Colour Theorem was salvaged) and that seems a warning against complacency.
Wilson's book records that Haken's son presented a seminar on the Appel-Haken proof at Berkeley which led to fierce discussion. Those under 40 were reluctant to accept the computer's role in the proof: those under 40 were more suspicious of a proof relying on 700 pages of human calculation. As I have grown older I have gone in the reverse direction. Whereas I once had concerns about computer proofs, I am much less sure now than I was thirty years ago that there is a gold standard for proofs. If we cannot be sure of a proof we have checked for ourselves (and that is what the story of Kempe's proof suggests) then where is mathematical certainty?
Sunday, 4 November 2012
The MathsJam conference is coming - what can I talk about?
The MathsJam weekend conference is only two weeks away! It's only been going for two years but it's established as a highlight of the UK mathematical year. Over 100 of us get together to spend a weekend talking mathematics, showing off tricks and toys, and generally sharing our delight in the subject. The diverse attendance - all ages, amateurs and professionals, pure mathematicians and applied, ... - contributes to the joy of the MathsJam weekend.
The maths community is greatly indebted to those who set up and run this wonderful event - the founder Colin Wright, whose efforts ensure the smooth organisation of the event, and his supporting team of David Bedford, James Grime, Matt Parker and Rob Eastaway. Colin tirelessly puts a huge amount of work into the event: mathematicians are wonderful people, but not necessarily easy to organise an event for! The proceedings owe much in spirit to Martin Gardner: the whole mathematical world would have been very different without him.
At the MathsJam weekend all the talks are five minutes - time to present an inspiring idea but not time to teach, as Colin puts it. And I can't think what to talk about! Partly because I used up my best ideas at previous MathsJams. Partly because I am so heavily engaged in preparing new teaching material for a new final year course that my mathematical creativity is all going into teaching just now. I have a few ideas but am hoping for major inspiration in the next few days.
Hope to see you near Crewe later this month!
Sunday, 28 October 2012
The Two-Box Paradox - what does random mean?
John Barrow finished his marvellous talk to the IMA London Branch this week with a baffling paradox. I can pick either of two boxes and keep the contents of the one I choose. I am told that one box contains twice as much money as the other.
Suppose I choose box A, and on opening it I find that it contains an amount x. I am asked if I wish to switch. What is the expected outcome? It is equally likely that I have chosen the smaller or the larger amount. If I stick with my choice, I gain x. If I switch, my expected gain is (1/2)(x/2) + (1/2)(2x) = 5x/4, which is more than x, so my rational choice is to switch. But that applies whatever the value of x, so regardless of what I find when I open the box, I am going to switch. So why didn't I just choose box B in the first place? Well, because the same argument would have required me to switch to A on opening that box.
(A variation could allow the sums to be negative, with the same rule. In that case logic tells me I stick if I find that x is negative and switch if x is positive.)
Apparently this paradox was attributed to Schrodinger by the mathematician J.E. Littlewood, who mentions it in his Miscellany.
Professor Barrow addressed the apparent paradox by saying that it assumes that money is infinitely divisible. But that is not the case: if we are dealing in notes and I find that box A contains £1, I know that must be the smaller amount because there is no smaller value of money that box A could contain. That is a valid point, but it doesn't seem to me to be a wholly satisfying resolution.
Suppose the boxes contain not money but quantities of gold, so that there is no limit to how far it can be divided. Then there is no lowest possible value. Of course, in our physical world, matter cannot be divided infinitely, but in an infinite universe with different laws of physics one could postulate that gold is infinitely divisible and then how do we resolve the paradox? I don't quite believe that the Two-Box Paradox is proof that the laws of the Universe must be based on an atomic theory, so that infinite divisibility is impossible.
Coincidentally I had asked students an apparently very different problem (taken from Chris Maslanka's column in the Guardian a few weeks ago). If I choose three consecutive integers randomly, what is the probability that their product is divisible by 48? I'll leave that for you to solve: the answer is in one sense quite straightforward, but in another it is problematic.
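If you want to check your answer empirically: the product n(n+1)(n+2) taken modulo 48 is periodic in n with period 48, so counting over a single period gives the exact probability. A minimal sketch (it prints a raw count rather than the final fraction, so as not to give the game away entirely):

```python
# n(n+1)(n+2) mod 48 repeats with period 48, so one period suffices.
hits = sum(1 for n in range(1, 49) if n * (n + 1) * (n + 2) % 48 == 0)
print(hits, "out of 48")
```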
What is the probability that a randomly chosen integer is divisible by 7? You might think it is obviously one in 7. One of my mathematical heroes, the group theorist William Burnside, got involved in a bitter argument with R.A. Fisher over exactly this question. If you choose an integer randomly from the set {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,...} it looks as if there is a one in seven chance that it is a multiple of 7. But we can also write the set of all positive integers as {1,7,2,14,3,21,4,28,5,35,6,42,8,49,9,56,10,63,11,...}, alternating numbers not divisible by 7 with those which are. This is again the set of positive integers: we can see that every positive integer is listed. But every second integer in the list is divisible by 7, so it looks like the probability that a random integer is a multiple of 7 is as high as 1/2.
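You can watch the two answers emerge as running frequencies; a sketch, with the reordered list generated exactly as in the paragraph above (non-multiples of 7 interleaved with multiples):

```python
from itertools import count, islice

def natural_order():
    return count(1)

def interleaved_order():
    non_mult = (n for n in count(1) if n % 7 != 0)
    mult = count(7, 7)
    while True:
        yield next(non_mult)
        yield next(mult)

for order in (natural_order, interleaved_order):
    first = list(islice(order(), 1_000_000))
    print(sum(n % 7 == 0 for n in first) / len(first))
# roughly 0.1429 (one in seven) and then exactly 0.5,
# for two enumerations of the same set of integers
```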
Choosing from infinite sets is problematic. It's not that an event with probability 0 cannot happen - it can. Spin a bicycle wheel and let it come to rest: the bottom-most point is a random point of the circumference. There are infinitely many points on the circumference (in a non-atomic universe, anyway) so the probability that the wheel came to rest on that point was 0. So either an event of probability 0 has happened or the wheel cannot stop spinning. And there is no such thing as perpetual motion!
But choosing a random integer from the infinite possibilities presents problems. If all integers are equally likely, then the probability that the chosen integer n is less than, say, one million is infinitesimally small. The probability that it is less than one googol, or one googolplex, is still infinitesimally small. It's inconceivable that a random integer could be identified by name (or in any other way) in the lifetime of the universe so far. With probability one, n has more digits than one could write using all the atoms in the universe. So asking whether n is divisible by 7 is not really a sensible question. The "divisible by 48?" problem I mentioned above needs to be specified much more carefully (and then the intuitive answer is correct).
To work out the true odds in the case of the Two-Box Paradox one needs to know how the contents of the boxes were chosen. The paradox assumes the values are chosen from an infinite range, and that's where the problems lie.
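Here is a minimal sketch of that point, with an explicit and entirely made-up prior: the smaller amount is drawn uniformly from 1 to 100. Once the prior is fixed, blanket switching gains nothing, and the only strategy that beats sticking is one that uses knowledge of the prior:

```python
import random

def play(switch_rule, trials=1_000_000):
    total = 0
    for _ in range(trials):
        small = random.randint(1, 100)   # assumed prior on the smaller sum
        mine, other = random.choice([(small, 2 * small), (2 * small, small)])
        total += other if switch_rule(mine) else mine
    return total / trials

print(play(lambda x: False))      # always stick
print(play(lambda x: True))       # always switch: same average as sticking
print(play(lambda x: x <= 100))   # switch only if x could be the smaller amount
```

The third rule does better, but only because it knows the range from which the amounts were drawn; the 5x/4 argument quietly assumes a uniform prior over all possible amounts, and no such prior exists.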
Here's another one. I have chosen a random number in the range [0,1]. What is the probability that it is rational? A pure mathematician would say 0 - there are many more irrational numbers than rational. But I've just tried this with my computer. I'm using Excel: that surely has high enough precision for every practical purpose. In ten million trials I have yet to generate an irrational number. So the theoretical probability is 0 but my estimate from my experiments is 1. Can computer simulation really be that inaccurate?
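The experiment is less surprising once you notice that a floating-point number is a finite binary fraction, so everything the spreadsheet can produce is rational. A sketch of the same point in Python (my stand-in for Excel here):

```python
import random
from fractions import Fraction

x = random.random()        # a 53-bit binary fraction in [0, 1)
exact = Fraction(x)        # every float converts to an exact rational
print(exact)               # numerator over a power of two
print(exact.denominator)   # always a power of 2: never irrational
```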
That may seem a frivolous example but I think it gets to the heart of the Two-Box paradox. Random selection from infinite possibilities needs to be handled very carefully.
Sunday, 21 October 2012
A slight indisposition for me, a disaster for Scottish football
On Tuesday I was mildly unwell - a seasonal cold, so I spent the day in bed. This was a disappointment because I missed a trip to Thorpe Park with our new students (and I was genuinely sorry to miss it, despite what you are probably thinking). On Tuesday evening the Scottish football team played Belgium - a highly fancied squad - in Brussels in a World Cup qualifying match and - another disappointment for me - they lost 2-0.
If I had not been mildly unwell on Tuesday then Scotland would have beaten Belgium. (1)
You may think that statement is unlikely, but it is absolutely true. At least if you are a mathematician, that is. By the rules of propositional logic, the proposition "If p then q" is true whenever p is false. Since the proposition "I was not mildly unwell on Tuesday" is false, statement (1) is true.
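The full truth table makes the convention explicit; in Python terms, "if p then q" is the Boolean (not p) or q (a quick sketch):

```python
for p in (True, False):
    for q in (True, False):
        print(p, q, (not p) or q)
# Only p=True, q=False makes the implication false; whenever the
# antecedent p is false, "if p then q" is true regardless of q.
```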
In fact, we can strengthen it. This statement is also true:
If I had not been mildly unwell on Tuesday, then Scotland would have beaten Belgium 5-0. (2)
And so is this one:
If I had not been mildly unwell on Tuesday, then Scotland would have beaten Belgium 6-0. (3)
It might appear that statements (2) and (3) contradict each other. They don't. They are both true, and their mutual truth merely confirms that the proposition "I was not mildly unwell on Tuesday" is false. (Who needs doctor's notes?)
What this tells us is that mathematical language sometimes does not agree with everyday language. Many people would question the truth of statement (1): in everyday language the sentence appears to have meaning. In propositional logic if the antecedent ("I was not mildly unwell on Tuesday") is false then it has no bearing on any other proposition. In English statement (1) implies a connection, and many people would regard the statement as meaningful (and false). Mathematical logic suggests that this is misguided: when we move down one branch of the universal wave-function, it is meaningless to speculate on what might have happened on the branches not travelled. It's very human to do so, and makes for good pub conversations, but ultimately we are wrong to imagine that statements like (1) are meaningful.
If only I had finished this post five seconds earlier, I would have won millions of pounds in the lottery!
Sunday, 14 October 2012
Maths is everywhere - and nowhere
I'm very much looking forward to John Barrow's forthcoming talk to the IMA London Branch. John is an excellent speaker whose talks are always entertaining and full of mathematical interest. (As are his books, whether they are on the mathematics of sport, or cosmology and the infinite.) His title is "Maths is everywhere". I have no idea what examples he will be referring to, but I am sure they are fascinating. The talk is on Tuesday October 23rd at University College, London: if you are in the area I strongly recommend it. Details are on the IMA website.
The title "Maths is everywhere" is absolutely true, and the variety of maths that we use daily never ceases to impress me. For example, in planning my journey to UCL next Tuesday I will be using graph theory (to plan my journey), statistics and probability (to allow for train delays), multi-objective optimisation (I want my route to be quick, convenient and cheap!). Quite apart from all the engineering mathematics that has gone into the building of London's transport infrastructure, there is the massive effort of timetabling of trains and tubes (one of our recent graduates works in scheduling in the railway industry). There is the mathematics of social networking which makes events like this possible, and the mathematics of weather forecasting which will help me make an informed decision as to whether I take an umbrella. There's the mathematics of optics, without which I would have enormous difficulty navigating London, so dependent am I on my spectacles. And the same mathematics will be used by the data projector which will project Professor Barrow's examples.
So mathematics certainly is everywhere. That impresses me, but, since at heart I am a pure mathematician, I don't really care too much. I still have an instinctive view that mathematics is about eternal truth, independent both of the physical world and of the human mind. Seventeen would be prime if no being with the ability to count had ever existed. Fermat's theorem that x^p and x have the same remainder when divided by a prime p would be true in any universe that could possibly exist, regardless of the laws of physics. I am increasingly aware that this view is naive and probably untenable, but that's what my heart tells me. If there were no mathematicians, if there were no universe, mathematics would be no less true.
So mathematics is everywhere, but if there were nowhere for mathematics to be, it would still exist.
The title "Maths is everywhere" is absolutely true, and the variety of maths that we use daily never ceases to impress me. For example, in planning my journey to UCL next Tuesday I will be using graph theory (to plan my journey), statistics and probability (to allow for train delays), multi-objective optimisation (I want my route to be quick, convenient and cheap!). Quite apart from all the engineering mathematics that has gone into the building of London's transport infrastructure, there is the massive effort of timetabling of trains and tubes (one of our recent graduates works in scheduling in the railway industry). There is the mathematics of social networking which makes events like this possible, and the mathematics of weather forecasting which will help me make an informed decision as to whether I take an umbrella. There's the mathematics of optics, without which I would have enormous difficulty navigating London, so dependent am I on my spectacles. And the same mathematics will be used by the data projector which will project Professor Barrow's examples.
So mathematics certainly is everywhere. That impresses me, but, since at heart I am a pure mathematician, I don't really care too much. I still have an instinctive view that mathematics is about eternal truth, independent both of the physical world and of the human mind. Seventeen would be prime if no being with the ability to count had ever existed. Fermat's theorem that x^p and x have the same remainder when divided by p would be true in any universe that could possibly exist, regardless of the laws of physics. I am increasingly aware that this view is naive and probably untenable, but that's what my heart tells me. If there were no mathematicians, if there were no universe, mathematics would be no less true.
So mathematics is everywhere, but if there were nowhere for mathematics to be, it would still exist.
Sunday, 7 October 2012
Why does it take so long to learn mathematics?
I'm teaching graph theory this year. It was one of my favourite areas of mathematics when I was a student. It contains many gems, ranging from Euler's solution to the problem of the seven bridges of Konigsberg to the power of Ramsey's Theorem. The arguments seem to me to be unusually varied, and often sufficiently elementary that great depth of study is not required.
I have had very little contact with graph theory in the time since I graduated. As an undergraduate I used Robin Wilson's Introduction to Graph Theory, and I am now using it as the basis of my course. I remember enjoying the book in my youth, and finding it approachable, but I don't remember finding the material as straightforward as it now seems. (My students aren't finding it entirely straightforward, either, but that may be my fault.)
Why is this? I don't think I'm a better mathematician than I was 35 years ago. In terms of solving exam questions, I would not perform as I did when I was twenty. Even with practice, I am sure I could not get back to that level, and not only because I no longer value that kind of cleverness enough to put the effort in. I now have a much better general understanding of mathematics and how it all fits together, but I no longer have the ability to master detail that I once did.
Perhaps Wilson's book (which has gone through four more editions since my undergraduate days) has improved, but, with all due respect to its distinguished author, I doubt if it has really changed sufficiently to make a difference. (Pure mathematics doesn't change much: theorems that are proved generally remain proved, the Four-Colour Theorem notwithstanding.)
Learning mathematics takes time, and it has always astonished me how much better I understand material when I go back to it, months or years later, than when I first studied it. As John von Neumann is said to have told a student who complained that they didn't understand a piece of mathematics, "You don't understand mathematics, laddie, you get used to it." Even if I haven't looked at Philip Hall's Marriage Theorem, for example, for 35 years, the proof seems much simpler to me now than it did when I was first immersed in the subject area.
Perhaps I am misremembering my difficulties as a student: perhaps I didn't find it as difficult as I now remember it. Certainly I had little understanding of how an area of mathematics fitted together: my learning at University consisted of reading strings of definitions and theorems, with little idea where it was all going, making sure I understood each result before going on to the next one, until, perhaps, in the last lecture of the course the lecturer would say something like "and so we have now classified all Lie algebras" and I would suddenly find out what the point of it all had been. I now feel that I would have been a much more effective mathematician if I had read more superficially, skipping proofs until I understood the context, but since I got good marks as an undergraduate I had no incentive to adopt what I now feel would have been a much better strategy.
But I think it is the case with mathematics, much more than with many other disciplines, that time is essential to understanding. Things we struggle with become much simpler when we return to them months later. This is why modularisation of mathematics studies is so pernicious. Examining students in the same semester as they have learned an advanced mathematics topic is, I feel, grossly unfair. It forces our exams to be superficial and makes it impossible to test deep understanding. At least, although my graph theory course finishes in December, the exam is not till May. I suspect my students don't like that, but they are likely to do much better than if they faced the same exam immediately after the final lecture.
Sunday, 30 September 2012
Relatively Prime
Relatively Prime is a new maths podcast from Samuel Hansen. I'm delighted that an early episode, The Toolbox, begins by discussing my favourite area of mathematics, game theory (the subject of my very first post in this blog). Samuel interviews the eminent writer on game theory, Steven J. Brams, whose book The Presidential Election Game I devoured when it first came out many years ago. Brams argues that Shakespeare had an intuitive understanding of game theory and discusses applications of the mathematics in Hamlet and Macbeth.
It is particularly fitting that game theory features in Relatively Prime because game theory contributed to the podcast's very existence. Samuel raised the funding for Relatively Prime through Kickstarter, a crowd-funding scheme. Various individuals pledged funds to make the project happen. If Samuel reached his target funding before the deadline, these pledges would be called in; otherwise they would lapse.
I was keen that Relatively Prime should be funded. I admired Samuel's previous work. With Peter Rowlett, he is responsible for the excellent weekly Math/Maths Podcast which has, amazingly, now been going for over two years. (In the interests of transparency I should note that I have contributed to this podcast on a couple of occasions.) Even though Samuel is responsible for the incorrect abbreviation for "Mathematics" in the title of that podcast, I was prepared to consider making a small financial contribution to make Relatively Prime happen.
So consider my situation last summer. I was willing to contribute (a little) to the project, but wasn't sure my contribution would be necessary. From my perspective, the ideal outcome would be that Samuel raised enough from other well-wishers, and I could enjoy the result without paying a penny. But I would rather make a contribution than see the project fail.
No doubt many others are in the same position. We can see how much has been pledged, and if five minutes before the deadline the project had been 5 dollars short, I would certainly have jumped in. But if the project (which sought $8000) were $5000 short just before the deadline, then it's not worth my while wasting time pledging to a lost cause. How do potential donors play it?
Well, I did what no doubt many others did - wait for some time to see what was happening. With a few days to go, there were enough backers to show that there was some interest, but as the deadline day dawned the project was still a long way short, though word of mouth was building up. The last few hours were remarkably tense as the pledges started coming in, and suddenly the momentum built up and Samuel reached his target. I was one of the many who contributed on the last day. About 140 people, I think, contributed to Relatively Prime.
As a potential backer one uses game theory here to help achieve the desired outcome, which is that the project gets funded but one doesn't pay more than one's fair share when others were also willing to contribute. How does it work? Nothing significant is going to happen until fairly close to the deadline. I think the intuitive game theorist (like Shakespeare, apparently) would fix a maximum contribution they were prepared to make. They would make a very small contribution at a relatively early stage, to encourage other potential backers by showing that there is some interest, and would then wait until a few hours before the deadline, perhaps committing a little more early on the final day but then hoping that others' contributions will come in. At the last minute, if necessary, they would be prepared to go up to their limit if the target hasn't been reached, but they wouldn't throw all their money in early because that way other intending funders might be let off the hook.
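For what it's worth, a toy model of this threshold strategy behaves just as described: nothing much happens early, then the target is hit in a late rush, if at all. Everything below (the number of backers, the caps, the token pledge) is my own made-up assumption:

```python
import random

def campaign(n_backers=140, target=8000, token=5):
    caps = [random.uniform(20, 150) for _ in range(n_backers)]
    pledged = n_backers * token   # small early pledges to show interest
    # At the deadline, backers top up towards their personal caps
    # only while the target is still unmet.
    for cap in caps:
        if pledged >= target:
            break
        pledged += max(0, cap - token)
    return pledged

print(campaign())   # usually just past 8000: success arrives at the last minute
```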
Of course this game theory meant that if Samuel were to reach his target, it would happen at the very last minute (as indeed happened on 3 August last year). The cost of all this game theory was a very nervous few days for Samuel when he was convinced his project would fail. So it is entirely appropriate that Relatively Prime has opened with an illuminating discussion of this fascinating branch of mathematics.
Thursday, 20 September 2012
A mathematical device
Thanks to an ingenious mathematical invention we can now tell the time in the garden.
(Added 22/9/12) It seems fairly accurate though I haven't found the switch for the British Summer Time setting yet. Must be on the back where it's inaccessible!
Sunday, 16 September 2012
Amazing mathematics - the abc conjecture
I heard earlier this week that the solution to an outstanding mathematical problem, the proof of the abc conjecture, has been announced. (The proof is claimed by Shinichi Mochizuki of Kyoto University, and the ideas are so new and deep that it will take a long time for mathematicians to digest the work and be convinced of the validity of the proof, but the claim is being taken seriously.) Why did this news on Twitter make me happy?
You might think this would have been because it was a problem I have been thinking about for years, that I have attempted to solve myself, that I have lost sleep on. Or perhaps because I am aware of the ramifications for mathematics, the new vistas that will be opened up, the new opportunities that will arise.
Well, I had heard of the abc conjecture. Perhaps I once even knew what it was, but I had forgotten. I have no knowledge of the subject area, the applications (if any) or the methods used.
Yet I am excited by the news of the possible proof! How is it that a development about a problem which I know nothing about, which I couldn't even describe in the most general terms, can matter to me?
Well, here (from the excellent Wikipedia article on the conjecture) is a statement of the conjecture: it asserts that the answer to the following question is "yes": "For every ε > 0, are there only finitely many triples (a, b, c) of coprime positive integers with a + b = c such that c > d^(1+ε), where d denotes the product of the distinct prime factors of abc?" What does this mean? Basically if the conjecture is true, then (as I understand it) it means that only exceptionally is the product of the distinct prime factors of abc significantly less than c.
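A small computation gives a feel for what "exceptionally" means. This sketch uses sympy for factorisation; the triples are standard small examples of my own choosing, not from the article:

```python
from math import log, prod
from sympy import primefactors

def rad(n):
    # The product of the distinct prime factors of n
    return prod(primefactors(n))

for a, b in [(1, 8), (5, 27), (1, 4374)]:
    c = a + b
    d = rad(a * b * c)
    print(f"{a} + {b} = {c}: rad(abc) = {d}, quality = {log(c) / log(d):.3f}")
# For most coprime triples rad(abc) exceeds c (quality below 1); the
# conjecture says quality above 1 + epsilon happens only finitely often.
```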
Now this is, frankly, quite an obscure statement about numbers, and I cannot envisage any life-changing applications. What I find wonderful is that human minds like mine can prove this statement. I cannot even begin to imagine how one would set about proving such a conjecture. I still have an instinctive belief (which, rationally, I know is rather naive) that mathematical facts are true regardless of the nature of the human brain, the laws of nature, and so on: thirteen would still be a prime if the human race had never existed, if the laws of physics were totally different, if no sentient creature had ever come into being. That a mind like mine can establish such necessary, deep facts is amazing, a glimpse of something much more true than anything else in our existence.
Sunday, 2 September 2012
A lottery oddity
There is a minor news item today about a curious result in the UK National Lottery. Five people who each chose the six correct numbers shared £4.8 million, getting £968,000 each, but the single "runner-up", who matched five of the six main balls plus the bonus ball, won nearly £1.5 million - half as much again as the winners!
This is very unusual: it required several people to pick the same six winning numbers while only a single person chose any of the six possible "runner-up" combinations. Apparently the winning numbers were 15, 30, 36, 39, 41 and 49 and the bonus number was 2.
I am curious about the distribution of the prizes. There are six "runner-up" combinations - any five of the six main balls plus the bonus ball - while there is only one winning selection, so I would have expected that the pot for the winner would be six times that for the runner-up. Instead it's just over three times. That makes this unexpected result rather more likely than if the relative size of the pots had been proportional to the odds.
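For reference, the relative likelihoods per ticket are easy to compute (a sketch, assuming the standard 6-from-49 rules):

```python
from math import comb

total = comb(49, 6)        # all ways to draw the six main balls
p_jackpot = 1 / total      # match all six numbers
p_runner_up = 6 / total    # choose 5 of the 6 winners; the 6th must be the bonus
print(p_runner_up / p_jackpot)   # 6: six runner-up combinations per jackpot one
```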
Of course, this is good publicity for the lottery (and, as the spokesman says, good news for six people who've won sizeable sums of money).
Monday, 27 August 2012
Idiosyncracies?
Saturday's Guardian newspaper feature about Julian Assange contained an amusing quote from a friend from his university days (Assange studied mathematics amongst other subjects): "I've often heard it remarked in the press that Julian has some idiosyncrasies. The people who make such remarks tend not to have hung around mathematics departments very much."
It's perhaps not surprising that mathematicians are associated with eccentricity. We love to tell stories about the idiosyncrasies of Erdos and Godel. Alexander Masters' recent book about the mathematician Simon Norton, The Genius in my Basement is an outstanding reflection on the biographer's art which doesn't do much for the public image of mathematicians. On the other hand, fictional mathematicians who are leading characters in novels such as Iain Banks's The Steep Approach to Garbadale and Ann Lingard's The Embalmer's Book of Recipes are reasonably normal people.
Are mathematicians more eccentric than other creative people? I suspect not: I am sure one can find just as many writers, painters, composers, actors, ... Is it perhaps just the abstract nature of our subject, and the difficulty of talking about the technicalities to non-mathematicians (or even mathematicians who specialise in different areas) which results in a focus on eccentricity? We can't tell our non-mathematical friends about a mathematician's brilliant ideas so we end up talking about their amusing eccentricities.
I'm not sure how far I have convinced myself!
Wednesday, 8 August 2012
Maths history and anecdotes
I have been meaning to write something in response to two recent blog posts on history of mathematics and anecdote. Dennis Des Chene (aka "Scaliger") wrote "On bad anecdotes and good fun" and Peter Rowlett responded with "Mathematics: a culture of historical inaccuracy". I'm also grateful to Thony Christie of the always excellent The Renaissance Mathematicus blog, whose Twitter comment brought Scaliger's blog to my attention.
Scaliger writes about the anecdote that Euler spouted a piece of mathematical nonsense claiming to prove the existence of God, embarrassing Diderot in front of the Empress. The anecdote, as commonly told, is far from true, so why do mathematicians still tell it without qualification? Rowlett wonders about the role of such anecdotes, which are arguably part of the culture of mathematics, in training mathematicians.
These are deep issues. I love anecdotes. I like serious history, too, but I am useless as a historian because I don't know enough. Whenever I have worked on historical topics, I have found that, the more I research, the less I feel able to say anything because I am increasingly aware that I do not know enough of the story. I admire both historians, who as a result of years of study are able to enlarge our understanding, and popular writers, who can so often find interesting angles on history without being intimidated as I am by the knowledge of their limitations.
Anecdotes appeal to me when they seem to tell me something, and I don't think this is always entirely illusory. It doesn't take much thought to see that most anecdotes are constructions. As a teenager I bought a book of literary anecdotes which began with the rather rude "Switter Swatter" story about Sir Walter Ralegh (which is the subject of a contemporary round you can watch being performed here). Is this anecdote true? It's documented at the time, but how did it get into the public domain? Only, presumably, from Sir Walter himself, and I don't take as gospel any stories by young men about their romantic successes. So despite its impeccable historical credentials I have some scepticism about this story. Most anecdotes are spread because they show someone in a good (perhaps self-deprecating) light, or because they present a negative image of someone's enemy.
Constructions may not be accurate. In his cricket report in last Saturday's Guardian, Vic Marks tells the story of the Durham wicket-keeper Chris Scott, who in 1994 dropped Brian Lara, the best batsman in the world, when he had made 18. Scott (Marks, being a good journalist, adds "it is said"!) moaned "I bet he goes on to get a hundred". Lara in fact went on to 501 not out, the highest score ever made in first-class cricket. It's quite likely that Scott made exactly this comment at the time, and I'm not suggesting that the story is not true as it stands. But even if he didn't say it at the time, he might afterwards have said something like "When I dropped him I thought that it would be just my luck if he made another hundred", and in the telling, the story at some stage changed to "Scott said at the time that ...". The way memory works seems to be that we reconstruct our memories rather than retrieving them, and anecdotal memories get rewritten so that we come to "remember" what happened in the revised form. Consequently one should interpret even the most "authentic" anecdotes as a slightly idealised form of history.
What I would argue, I guess, is that anecdotes are valuable in passing on the culture of mathematics. We don't need to express uncertainty about their "truth" because we should all know that anecdotes are not exactly true. When I first read the Euler story as a teenager I may have believed it to be literally true, but as I became more experienced in such matters I realised that a more nuanced understanding was required. It's good that the background is now readily accessible, but we want to train young mathematicians who can read a story like this and enjoy it without necessarily believing it!
Academics apply rigour when they need to. Having been at an Oxbridge high table when World Cup football was being discussed, I can say with certainty that rigorous thinkers in one field don't necessarily bring the same quality of analysis to other matters. We all apply different standards of rigour in different parts of our lives: that's part of being human.
A final comment: I don't believe it is only mathematicians who take a cavalier approach to the history of their subject. All disciplines and professions have their cultures, built on anecdote and dubious history. Mathematics is no different in that respect.
Sunday, 29 July 2012
Proof that I am a *pure* mathematician
I have always been drawn towards pure, rather than applied, mathematics. As a student I had no feel for mechanics: I could solve the equations but I had no physical intuition. I always felt out of my depth in any kind of applied mathematics, whereas I felt at home with abstract pure mathematics.
Now that I teach mathematics I have sometimes regretted my preference. I see the value of physical applications; I enjoy reading about relativity and quantum theory, and feel that I have gained some understanding of topics to which, thirty years ago, I couldn't relate at all.
Recently I have been looking at Mark Levi's two marvellous books, The Mathematical Mechanic and Why Cats Land on Their Feet. These are books about mechanics. The former uses physics to "prove" pure mathematical propositions, for example using an argument about a rotating fishtank to "prove" Pythagoras's Theorem. (The inverted commas reflect my pure mathematical reluctance to accept that physics can be used in this way!) The latter presents physical paradoxes and their resolution - for example, if, sitting in a seat attached rigidly to the frame of a spacecraft which is stationary in outer space, I push a balloon away from me, what happens to the spaceship?
These books are wonderful. Despite my comment about physics and proof above, I find the contents beautiful and astonishing. But they are also over my head. I have to read carefully and think deeply to get the point, and I need to be told why the paradoxes in the second volume are paradoxical, because my physical understanding is so limited that I don't "get" them easily.
So what these books confirm is that I have very little intuition about the physics of the world around us. Had I had a different upbringing that might not have been the case, but I have to accept that my mathematical talents do not extend at all to mechanics. I don't particularly regret that - the pleasure of pure mathematics compensates - except when I realise that my appreciation of Levi's books is so limited. I feel as I imagine someone who knew Matisse's paintings only from monochrome reproductions would feel - there is much to enjoy, but I will never be able fully to appreciate them.
This helps me understand why I enjoy quantum theory. I can't do mechanics because I have no physical intuition - but quantum theory makes a nonsense of our intuitions so my weakness isn't a problem!
Can I finish by encouraging you to seek out Levi's books, if you don't already know them? If you're an applied mathematician you'll love them; pure mathematicians will also find things to wonder at. Even if I can't fully appreciate them, I can still admire and enjoy.
Tuesday, 17 July 2012
Maths and technology - from the HE Maths Ideas Exchange
This post is late because I spent an exciting but exhausting weekend in Sheffield, first at the CETL-MSOR conference on mathematics teaching in HE and then at the Ideas Exchange weekend organised by Peter Rowlett of the MSOR Network. Both were extremely productive, not just for the inspiring presentations but also for the informal discussions with colleagues from across the sector. I'm very grateful for the valuable and productive conversations I've had over the last few days, and especially to Dagmar Waller and Peter Rowlett for organising these events.
The Ideas Exchange provides an opportunity for each participant to put forward, in five minutes, an idea for subsequent discussion with sympathetic colleagues. It was suggested that this might be something which one had tried and wished to share, an idea that one was planning to implement and wanted advice on, or a mad suggestion that, with refinement and suggestions from others, might just turn into something workable. This was the second such Ideas Exchange weekend - my report on the first was published in MSOR Connections (and one of the best pieces of news from the CETL-MSOR Conference is that Connections will continue). That they work so well is due to the openness and friendliness of the participants: it's a delight to spend time with people so committed to teaching mathematics effectively.
There is a lot I could write about from both events (and much that I have still to digest). But this post will focus on one of my "ideas" - not actually mine at all, in fact.
At my University's teaching and learning conference one speaker mentioned a proposal that every degree course should have a compulsory final year module on "how new technology will change this subject" or something similar. (I didn't catch the name of the person to whom this proposal was attributed.) Should maths degrees contain such a module?
My first reaction was that it would be hard to make a case to colleagues for dropping their favourite courses in Complex Analysis or the Analytical Algebraic Topology of Locally Euclidean Parametrization of Infinitely Differentiable Riemannian Manifolds or whatever. These courses obviously give students skills and techniques which are immediately valuable to a wide range of employers in a way which thinking about possible applications of technology and mathematics could not possibly match.
But should our maths graduates be thinking more about how technology will change mathematics? I think there is a case. For one thing, new technology such as apps on mobile devices is making life better in many small (and some big) ways. I'd like our maths graduates to be among the leaders in this field, but I'm not sure that our teaching particularly promotes this kind of creative thinking. I'm also struck that, despite many university mathematicians' preference for chalk and blackboards, IT is changing maths in ways which we don't articulate to our students. At the BMC last year Sir Timothy Gowers noted that Wikipedia is an invaluable tool for mathematicians, greatly facilitating the practice of our subject. In his recent LMS Popular Lecture Sir Timothy talked about the emergence of computers as research assistants. He, Terence Tao and many others use blogs as a tool for mathematical collaboration, and Gowers's Polymath project perhaps shows how mathematics will be done by humans in the years before the field is taken over by creative computer research mathematicians.
It certainly seems to me that my childhood view of mathematics as an individual activity, which was perhaps only rarely a reality (Andrew Wiles?), is no longer tenable: technology is making mathematics an increasingly collaborative venture. And we should perhaps be making more of this in our undergraduate teaching.
Monday, 9 July 2012
Mathematics and Tennis
Now that a fascinating fortnight at Wimbledon has finished, it's perhaps appropriate to consider the contributions that the mathematics of tennis made to our enjoyment. I read long ago (I forget where) an analysis showing that the scoring system at tennis - dividing the match into games and sets - is remarkably effective at maximising the interest for spectators, creating points of excitement throughout the match: simply counting up the total points won wouldn't be nearly so exciting. As I recall, the argument is that this is why tennis attracts a bigger audience than other racket sports.
The opening chapter of Julian Havil's wonderful book Nonplussed presents three tennis paradoxes. The best (most surprising) of these is that if you are playing a strong server (the probability that the server wins a point on their serve is just over 90%) then you are more likely to break serve from 40-30 or 30-15 down than you are at love all. (There is an assumption that the probability of winning the point is independent of the score.)
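Here is a minimal Python sketch (mine, not Havil's) for checking this kind of claim: it computes the probability of a break from any score, under the same independence assumption. The value of p below is purely illustrative.

    from functools import lru_cache

    def break_probability(p, s=0, r=0):
        """Probability the receiver breaks serve from s server points and
        r receiver points (0,1,2,3 = love,15,30,40), assuming the server
        wins each point independently with probability p."""
        q = 1.0 - p

        @lru_cache(maxsize=None)
        def w(s, r):
            if s >= 3 and r >= 3:
                if s == r:                              # deuce
                    return q * q / (p * p + q * q)
                s, r = (3, 2) if s > r else (2, 3)      # advantage: same dynamics
            if s == 4:
                return 0.0                              # server holds
            if r == 4:
                return 1.0                              # receiver breaks
            return p * w(s + 1, r) + q * w(s, r + 1)

        return w(s, r)

    p = 0.95                                    # an illustrative strong server
    print(break_probability(p))                 # break from love all
    print(break_probability(p, s=3, r=2))       # break from 40-30 down: larger for this p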
Of course in any knock-out tournament, even if the better player wins every match it is by no means certain that the top two players will meet in the final. If they are in the same half of the draw, they meet earlier and, in a random draw with 2^n players, the probability of that is (2^(n-1)-1)/((2^n)-1) which is only just under 1/2. Seeding addresses this problem. But seeding can create its own issues. Suppose we have three top players, equally good, all some way better than any of their other competitors. Then two will be in the same half of the draw, and the third will have double the chances of winning of each of the other two. If this victory is then used to determine the seeding for the next tournament, that player will carry this advantage forward, and will win more tournaments than their equally-good adversaries.
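That formula is easy to sanity-check, both exactly and by simulation - a quick sketch of my own:

    import random
    from fractions import Fraction

    def same_half_exact(n):
        """P(the top two players land in the same half of a draw of 2**n)."""
        # Fix one player's slot; 2**(n-1) - 1 of the other 2**n - 1 slots share it.
        return Fraction(2**(n - 1) - 1, 2**n - 1)

    def same_half_simulated(n, trials=100_000):
        size, half = 2**n, 2**(n - 1)
        hits = 0
        for _ in range(trials):
            a, b = random.sample(range(size), 2)    # slots of the two top players
            hits += (a < half) == (b < half)
        return hits / trials

    for n in (3, 5, 7):                             # 8, 32 and 128 players
        print(n, same_half_exact(n), round(same_half_simulated(n), 3))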
The system for challenging line-calls brings in a new dimension. In each set each player can make at least three challenges when they believe a ball was wrongly called out (or in). Successful challenges don't count, but unsuccessful ones are deducted from the allowance of three. If the call is shown to be wrong, then depending on the circumstance the point is replayed or the point is won by the player challenging. The challenge must be made immediately - if you think the ball was out and you return it, you've lost your right to challenge.
How sure do you have to be that the call was wrong before you should challenge? There are obviously a lot of factors that influence the decision. How many challenges one has left - presumably one wants one in reserve for the dubious call in the decisive game still to come. The stage of the set - if the set is almost finished and you haven't used any challenges, there is little to be gained by saving them. The importance of the point - if the disputed call is resulting in a crucial service break you might challenge even if you are sure the call is correct! The gain from a successful challenge - if you are going to win, rather than lose, a point, the challenge is more worthwhile than if you are going to replay it. Yesterday Andy Murray challenged a call that his first serve was out - would that be a better challenge than a second serve called out? I would guess that the threshold estimate of probability of success before you challenge is strongly affected by all these circumstances.
There are other factors too. Sometimes you might just want a moment to regroup, and it might be worth using a challenge just to get a short break. And how do you decide whether to challenge or to play the ball? If you think your opponent's shot is just out, but you can return it, do you stop and challenge or do you play on? The decision must be made instantly!
It would be fascinating to know how much professional tennis players think about these things when using their challenges. I doubt if they do probability calculations in their heads, so do they have heuristics and, if so, how good are they? Are some players better tacticians in using challenges than others?
Monday, 2 July 2012
The LMS Popular Lectures - Gowers and Penrose
Last week was exceptionally busy, but one highlight was the opportunity to hear two of today's greatest mathematicians, Sir Timothy Gowers and Sir Roger Penrose, deliver the London Mathematical Society's Popular Lectures in central London. (There is another opportunity to hear them in Birmingham on 26 September, and the lectures are normally made available on DVD).
This was a wonderful opportunity to hear two top mathematicians talking about their views of mathematics. Both talks took their theme from the Turing centenary, a reminder of just how important a contribution Turing made to the culture and practice of contemporary mathematics.
Gowers was talking about the possibility of computers becoming mathematicians, in the sense of creating and proving theorems. He was particularly interesting in his analysis of two fairly well-known mathematics puzzles which can be solved by the right insight. What Gowers did here was to show how these insights can be understood, not as a sudden creative spark from nowhere, but as the consequence of an understandable line of mathematical thinking. This was fascinating, and empowering: insights like these don't need incomprehensible strokes of brilliant genius but could be accessible if you think hard enough about the problem.
Gowers predicted that in 50 years' time computers will be able to do mathematical research better than humans. He didn't discuss how mathematicians will react to this. If pure mathematics can be created by computers, will people still want to do it? This kind of mathematics is already an esoteric pursuit: if you are doing world-leading pure mathematics it is unlikely that more than a small number of people are actively engaging with your work. If computers are doing it better, will human mathematicians still see it as a worthwhile activity to devote their lives to?
Penrose talked about consciousness. His argument, as I understand it, is that human consciousness is not subject to the limitations that Gödel's Theorems show apply to formal systems, and that we need a new theory that goes beyond quantum theory in order to understand consciousness. He presented a toy model universe to show that the universe need not be computable - the toy model certainly proved that it is possible to have uncomputable universes - but I am unconvinced by his arguments.
Although I am not qualified to disagree with Penrose, who knows much more and has thought much more deeply about these subjects than I have, I don't think his arguments are valid. I do not understand why he thinks Gödel's Theorems do not apply to human thought. His assertion seems to be based on a claim that humans can always jump "outside the system" to see implications that are not formally consequences of a logical system, so that there can be no "Gödel sentence" which is true but which human consciousness cannot prove to be true. I don't understand why he can be confident of this, so I don't see that consciousness presents a problem for quantum theory. Nor do I share the worries he expressed about the Measurement Problem in quantum theory. Everett's interpretation, that we view only one branch of the universal wavefunction, seems to me to be a perfectly logical solution to the Measurement Problem - we don't have to accept it, but it does show that a solution is possible, in the same way that Penrose's toy model uncomputable universe makes its point.
Regardless of whether I was persuaded by the speakers, this was an exciting and stimulating event which kept a huge audience engaged for the whole of a long evening. Well done the LMS!
Saturday, 23 June 2012
In praise of Knuth - the joy of algorithms
Since I am taking part in a Turing-related Lates evening at the Science Museum on Wednesday 27 June, in which my contribution will look at a curious mathematical algorithm and its applications in computing, I have been thinking about some of the algorithms I have used in programming, and this post will describe one particular favourite (not the algorithm I will be demonstrating on Wednesday).
I used to be a keen bridge player. Sometimes in competitions hands were "computer-dealt" and people noticed that the card distributions seemed to be less regular than in hands shuffled and dealt physically. Good players even adjusted their play according to whether the hand was dealt by a human or a computer. The moral is that human shuffling isn't perfectly random.
Anyway, I wanted to programme the shuffle of a pack of cards so that I could use computers to explore strategies for playing various forms of patience ("solitaire" to Americans). In particular a student project verified that the probability of clock patience coming out is indeed 1/13, as I had proved as a schoolboy. How do you get a computer to shuffle a pack of cards (or equivalently to generate a random permutation of the numbers 1 to 52, or more generally 1 to n)? (In this post I am using "random" to mean that all possibilities are equally likely, which is what one hopes for in shuffling cards.)
I thought the obvious approach was to generate successive random (equally likely) integers between 1 and n, simply rejecting those that had already been drawn. As we go through the shuffle, we are more and more likely to generate random numbers corresponding to cards already dealt, so towards the end it can take a long time to find an undealt card - when there is only one number left to be drawn, the probability of hitting it is only 1/n and the expected number of draws required to get it is n. In practice this algorithm seemed to work OK for a pack of 52 cards - the probability that it takes an absurdly long time to get the last few cards is in fact quite small, and some clumsy ad hoc adjustments could improve it. The result was a messy piece of code which might take some time to complete the shuffle for a large pack.
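In outline (a tidied-up sketch of the idea, not my original code), the rejection approach looks like this:

    import random

    def rejection_shuffle(n):
        """Deal 1..n in random order by drawing uniformly and rejecting repeats.
        Correct, but the total number of draws is about n log n on average,
        with the last undealt card alone taking about n draws."""
        dealt, seen = [], set()
        while len(dealt) < n:
            card = random.randint(1, n)     # uniform over the whole range every time
            if card not in seen:            # reject cards already dealt
                seen.add(card)
                dealt.append(card)
        return dealt

    print(rejection_shuffle(52))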
Then a colleague (thanks, Don) told me about Knuth's shuffling algorithm (which it turns out is a variation of one devised by Fisher and Yates before the computer era). It's beautiful. To shuffle n cards, you simply do the following: for each i from 1 to n-1 in turn, swap card i with a card chosen uniformly at random from positions i to n (possibly card i itself). It's easy to see that the result is a random permutation of the pack and that all possible permutations are equally likely. We have a simple algorithm, very easy to programme, that generates a random shuffle in a fixed number of steps.
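A minimal sketch of the algorithm (using 0-based positions, as in most programming languages):

    import random

    def knuth_shuffle(cards):
        """Fisher-Yates / Knuth shuffle: every permutation equally likely,
        and exactly n - 1 swaps whatever the pack size."""
        n = len(cards)
        for i in range(n - 1):
            j = random.randint(i, n - 1)            # uniform over the unshuffled tail
            cards[i], cards[j] = cards[j], cards[i]
        return cards

    print(knuth_shuffle(list(range(1, 53))))

The restriction of the swap to the unshuffled tail matters: swapping each card instead with a position chosen from the whole pack gives n^n equally likely histories, which cannot be spread uniformly over the n! permutations, since n! does not divide n^n for n > 2.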
One of the joys of mathematics can be finding that there is a beautiful, simple solution to a problem on which one has spent a lot of time working with messy, unsatisfactory methods. This was such an example. I remember how happy I felt when I learned about this algorithm!
Of course, the downside is that I am reminded that people like Knuth always seem to produce beautiful, simple algorithms in contrast to my clumsy efforts. It's a bit like watching Messi play football - how many can aspire to play like that? - except that I am left with the frustrating feeling that I should have been able to come up with the algorithm myself. Luckily perhaps I have now reached an age where I can appreciate beautiful mathematics without beating myself up over my failure to do it for myself.
Tuesday, 19 June 2012
When conceding a goal doesn't matter
As a follow-up to yesterday's post, here's more on the maths of football tournament mini-groups.
In last night's UEFA Euro 2012 matches, Croatia played Spain and Italy played Ireland. Because Ireland had lost both their previous matches and the other two matches had been drawn, it was likely that the tie-breaking rules would come into effect and the score in the Croatia-Spain match would be crucial. Assuming Italy beat Ireland, the winners of Croatia-Spain would go through and the losers would be out. Croatia and Spain would both qualify if they drew 2-2, Spain and Italy if the result was 0-0, while if Spain and Croatia drew 1-1 then Italy would have to beat Ireland by two goals and score three times to deny Croatia.
So we're in the last couple of minutes of both matches: Spain v Croatia is goalless and Italy lead Ireland 1-0. At this point Spain and Italy are on course to qualify. Croatia need a goal, but the interesting thing is that conceding a goal doesn't damage Croatia. In fact Spain score just before the end, but Croatia still need just one goal to go through, and the Spanish goal makes no difference to them. At this point there is no likely situation in which a 1-0 win for Croatia is better for them than a 1-1 draw, unless Italy were to score two more goals in injury time. (In fact Italy scored one more to win 2-0, so perhaps that wasn't an impossible scenario.)
Indeed, perhaps Croatia are more likely to score if Spain have scored, since Spain are likely to be less worried about conceding an equaliser which won't put them out.
Spain's goal did change the order of the top two in the group, but, coming when it did, it made no difference at all to Croatia's chances.
I am reminded of the Scottish Premier Division in 1990/91. Rangers and Aberdeen are fighting it out with two matches left. In those days it was two points for a win (not three) and one for a draw, and if two teams finished level then the deciding factor was goal difference, with goals scored being decisive if both teams had the same goal difference. The last match of the season is Rangers against Aberdeen. With two matches left, Rangers are two points above Aberdeen. Aberdeen are winning their penultimate match but, in injury time, Rangers are trailing Motherwell 1-0. If it finishes like that, the teams will go into their last match level on points. Rangers will have scored 60 and conceded 21; Aberdeen will have scored 62 and conceded 25. Rangers' goal difference is better and they need only draw the last match against Aberdeen to win the league.
So what do Rangers do? They throw caution to the wind in seeking an equaliser. Had they got an equaliser they would have gone into the last match a point above Aberdeen, and needed a draw to win the league. So scoring an equaliser would not have benefited them in the slightest: they need exactly the same from the last match as if they lose 1-0 to Motherwell. But in fact the reckless attack allowed Motherwell to score twice more in injury time. Rangers lost 3-0, their goal difference is now the same as Aberdeen's, they have scored fewer goals, and now they need to win the final match if they are to win the championship. The reckless pursuit of an equaliser which could make no difference has actually made the league title much harder to win.
(Rangers beat Aberdeen 2-0 so this mathematical madness didn't actually cost them the championship.)
Monday, 18 June 2012
The maths of "The Group of Death"
Tonight Spain play Croatia in the UEFA Euro 2012 football tournament. At this stage teams are playing in groups of four - each team plays each of the other three (six matches) - with the top two in each group progressing to the next round. The results of the first four matches mean that if Spain and Croatia draw tonight and both teams score at least twice, then Italy are knocked out whether or not they win their match against Ireland.
While sporting integrity (a strange concept currently being invoked in discussion about the future of Rangers in the Scottish Premier League) will ensure that Spain and Croatia both play to win, history suggests that if the score is 2-2 with five minutes remaining it would be irrational for either side to take risks in attempting to score a late winner. A similar situation arose in 1982 when West Germany played Austria in the World Cup. A one-goal victory for the Germans would ensure that both sides progressed. West Germany scored an early goal and thereafter, according to Wikipedia, "Onlookers noted that both teams played as though they were content with the result", the 1-0 outcome seeing both teams through at the expense of Algeria. There have been other examples in international and club matches. Robert Axelrod writes about an English league match in his book about the Prisoners' Dilemma and the mathematics of altruism, The Evolution of Co-operation.
In fact there is a lot of (relatively simple) mathematics in sporting league tables. Quite possibly my interest in mathematics began around my eighth birthday in 1965 when Kilmarnock had to beat Hearts 2-0 in the final match to win the league on goal average (the ratio of goals for to goals against), a much more complicated tie-break mechanism than the goal difference used now.
The current rules for deciding ties in mini-leagues in tournaments like Euro 2012 are that, rather than looking at goal difference over all matches, we look at the results of the matches involving the teams who have tied. So if A and B finish with the same number of points (three for each win and one for each draw), and A won the match between A and B, then A finish above B. This avoids the situation which eliminated Scotland in the 1974 World Cup, when all three of Scotland, Yugoslavia and Brazil beat Zaire while the matches between those three were drawn. Scotland played the weak Zaire side first, which meant that Brazil knew exactly how many goals they had to score against Zaire to finish above Scotland.
While the newer rules work well in some circumstances, they still give rise to situations like tonight's Spain-Croatia match.
As a boy I loved mathematical puzzles based around sporting league tables - typically one was given an incomplete table and had to work out the results of every match. These appealed to my twin loves of sport and logic. I noticed however that on the final edition of Dara O'Briain's School of Hard Sums, when such a problem was presented, it didn't enthuse the students on the programme: perhaps these problems no longer have wide appeal. Nevertheless I still enjoy the mathematics of league tables.
For example: (1) in mini-leagues of four teams, as in Euro 2012, can a team win two matches out of three and still not qualify for the next round? (2) Can a team win one and lose two, and still qualify? (3) What is the smallest number of points with which a team can qualify?
Answers: (1) Yes, if three teams win two matches each and the fourth loses all three - this would have happened if Denmark had beaten Germany in the "Group of Death" last night, since Germany would have been out despite winning their first two matches. (2) Yes, if one team wins three matches and the other three each beat one of the others - this would have happened if Netherlands had beaten Portugal last night. (3) A team can go through with one defeat and two draws, if one team wins all three matches and the other three matches are all drawn.
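These answers can be checked by brute force. Here's a sketch (mine) which runs over all 3^6 win/draw/loss outcomes, looking only at points (so it checks (1) and (3); (2) also needs the tie-breaks):

    from itertools import combinations, product

    teams = range(4)
    matches = list(combinations(teams, 2))            # the six fixtures

    def final_tables():
        for outcome in product((0, 1, 2), repeat=6):  # 0 home win, 1 draw, 2 away win
            pts = [0, 0, 0, 0]
            for (a, b), result in zip(matches, outcome):
                if result == 0:
                    pts[a] += 3
                elif result == 2:
                    pts[b] += 3
                else:
                    pts[a] += 1
                    pts[b] += 1
            yield sorted(pts, reverse=True)

    # (1) three teams can all finish on six points, so one of them goes out:
    print(any(t[:3] == [6, 6, 6] for t in final_tables()))   # True
    # (3) the second-placed team can finish with as few as two points:
    print(min(t[1] for t in final_tables()))                 # 2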
One point which was obvious when league tables were displayed during the TV coverage of last night's matches is that, under today's tie-breaking rules, traditional league tables (recording numbers of matches, wins, draws, losses, goals for and against, and points) do not contain enough information to show which teams will qualify. If both last night's matches had been drawn, Denmark and Portugal would each have had four points. Portugal would have finished second, and qualified, because they beat Denmark in the match between the two teams. But one couldn't have deduced that from the group table. Exactly the same numbers could have arisen in the group table with Denmark beating Portugal and drawing with Netherlands, and Portugal beating Netherlands.
So whereas once a group table gave all the information you needed to decide the order of the teams, this is no longer the case. How could we create an improved group table which actually included all the information we need?
Sunday, 10 June 2012
A tree which is not a tree
Graph theory is perhaps my favourite area of mathematics. It features diverse and appealing modes of reasoning, and many of the results are relatively accessible, rather than requiring a lot of theory.
A tree is a special kind of graph, like that above (the numbers serve only to identify the vertices). Technically a connected graph is a tree if there is exactly one path connecting any two vertices, or equivalently if the graph contains no cycle. In lay terms this means a diagram where points are joined by lines in such a way that there is no loop. The branching is like the branches of a tree (in the natural sense of the word) or of a family tree, as one sees in the diagram below, where one of the vertices is chosen as a root.
There are lots of results one can prove about graph-theoretical trees. For example, a tree with n vertices must have exactly n-1 edges. Trees are particularly useful in computer science. But what I am interested in today is the definition.
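As a small illustration (my own sketch), the n-1 edges result gives a practical test: a graph is a tree exactly when it is connected and has one fewer edge than it has vertices.

    from collections import deque

    def is_tree(adj):
        """adj: dict mapping each vertex to a list of its neighbours."""
        n = len(adj)
        edges = sum(len(nbrs) for nbrs in adj.values()) // 2
        if edges != n - 1:                  # a tree has exactly n - 1 edges
            return False
        start = next(iter(adj))             # check connectivity by breadth-first search
        seen = {start}
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
        return len(seen) == n

    path = {1: [2], 2: [1, 3], 3: [2]}              # a path is a tree
    triangle = {1: [2, 3], 2: [1, 3], 3: [1, 2]}    # a cycle is not
    print(is_tree(path), is_tree(triangle))         # True False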
Books like Robin Wilson's excellent Introduction to Graph Theory motivate the definition by referring to family trees and botanical trees. Now there is a nice sycamore tree in our garden:
But this is not a tree in the graph theory sense! It contains a cycle:
Two squirrels could climb to the top of the tree taking different routes! And that's impossible in a graph which is a tree, where there is only one route between any two points.
So what's gone wrong? Why is my tree not a tree? Why does the mathematical definition which seeks to encapsulate the "tree-ness" of trees, exclude this perfectly respectable specimen of a tree in our garden?
Well, mathematics is about abstraction. The definition of "tree" abstracts something typical of real trees, which does reflect the nature of branching (our sycamore is slightly unusual). It leads to a very rich branch of graph theory, with many applications. So it's a good definition. (But it doesn't exactly fit family trees either. If you go back a few generations you get connections which give loops, when the same person is an ancestor in two different lines. Otherwise we would each have far more ancestors than all the humans who have ever lived - forty generations back, a loop-free family tree would require 2^40, around a trillion, distinct ancestors.)
Here's another illustration. If I take a square piece of paper, it has four corners. Let's get rid of the corners by cutting them off with scissors (cutting off a triangular piece at each corner). I've taken four away from four: mathematically I might expect to have zero corners left, but in fact I have eight since the remaining paper forms an octagon. So 4-4=8 in this situation. (Peter Flom's blog has some other examples where 2+2 is not 4.)
So does maths fail? No. When I was younger I'd have felt that these examples of arithmetic failure were contrived, obviously artificial and not to be taken seriously. I now think I was wrong. These are real examples: it's mathematics that is contrived. We abstract the rules of arithmetic from a world in which they don't always apply, to create a system which is enormously useful. The power of mathematics comes from simplifying the real world. The mathematical idea of a tree is powerful, even if it excludes some legitimate trees. Arithmetic is powerful even if it doesn't apply to some real-life situations.
So it's amusing that our sycamore tree is not a tree, but it isn't a flaw in graph theory!
Thursday, 7 June 2012
Dara's Big Mistake
I've just watched the latest (and final) episode of Dara O'Briain's School of Hard Sums, with Andi Osho again demonstrating a remarkable ability to get straight to the solution by what appeared to be inspired guesswork. If I were doing undergraduate groupwork, I'd want her in my team!
SPOILER ALERT: If you're planning to watch this programme (it's repeated over the next few days) you may not want to read this yet, as I give some hints as to what happens (though I don't give away the maths).
The final problem involves runners going round a race-track at different speeds and asks how long it is after the start before all three runners are again together. Dara produces reams of incomprehensible algebra to get an answer in which he doesn't have full confidence - he presents it reluctantly, convinced that he has gone wrong. His delight when he finds his answer is correct was the highlight of the series.
I think Dara got it badly wrong. He didn't check his answer! His algebra was complex but it gave a solution: it would have taken only a moment to see where the three runners were at the time in question and establish that they were all together. (OK, that's only a partial check: it might not have been the first such time, but it would at least have shown that his answer was plausible.)
In any maths problem, if you can check your answer is valid, it gives you confidence in your solution. I worked for many years in the mathematical modelling of industrial plant. There were essentially two kinds of problems. Engineers wanted either to find the steady state of the model - the normal plant operating condition when nothing is happening to disturb it - or to find out what happened dynamically when some disturbance was applied (say if a pipe broke and the cooling system failed).
The models were essentially defined by a set of differential equations. Finding the steady state involved finding values such that all the derivatives were zero - that is, finding a zero of a system of a couple of thousand non-linear equations. Not easy - sometimes the equation-solving algorithms failed and we were reduced to starting from a random initial state and running the model for a very long time in the hope that it would naturally arrive at the steady state. But if the algorithm gave a solution, you could test it by plugging the numbers into the equations and making sure they all gave you zero.
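To make that check concrete, here is a minimal sketch in Python - a two-equation toy standing in for the couple of thousand, with scipy standing in for our industrial solvers (neither, of course, is what we actually used):

    import numpy as np
    from scipy.optimize import fsolve

    # A toy "plant model": the steady state is a zero of this
    # nonlinear system (each entry is one derivative set to zero).
    def residuals(x):
        u, v = x
        return [u**2 + v - 3.0,
                u - v**2 + 1.0]

    steady_state = fsolve(residuals, x0=[1.0, 1.0])

    # The cheap but invaluable check: plug the answer back in and
    # make sure every residual really is (numerically) zero.
    print(steady_state, residuals(steady_state))
    assert np.allclose(residuals(steady_state), 0.0, atol=1e-8)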
But running dynamically involved numerical integration, and there is no simple way to check your answer! Since numerical integration can give completely incorrect results (try integrating dx/dt = -1000x by Euler's method with step-size 0.01, say, to see output which is not only quantitatively but also qualitatively completely wrong) this is a serious problem. (There are ways one establishes confidence in the output from numerical integration, but they aren't as simple as plugging the answers in to the equations to check that you get zero!)
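A few lines of Python show just how badly this can go. With step size h = 0.01, each Euler step multiplies x by (1 - 1000h) = -9, so the computed "solution" flips sign and explodes while the true solution decays quietly to zero:

    # Euler's method on dx/dt = -1000x with x(0) = 1.
    h = 0.01
    x = 1.0
    for step in range(1, 11):
        x = x + h * (-1000.0 * x)   # one Euler step: x <- x * (1 - 1000h)
        print(step, x)
    # Prints -9, 81, -729, ... ; the true solution exp(-1000t) is
    # already indistinguishable from zero after the first step.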
Dara's algebra may or may not have been right. But if he had checked where the runners were after the time he had deduced, he could have had a lot more confidence in his calculations. Of course, it wouldn't have made such good TV.
Saturday, 2 June 2012
Atiyah and Villani at Tate Modern - the value of blackboards
A wonderfully stimulating afternoon at the Tate Modern gallery today, for a mathematical event in a series entitled "Topology". The afternoon had two parts - a showing of the film "Au Bonheur des Maths" by Raymond Depardon and Claudine Nougaret, followed by a discussion between Michael Atiyah and Cédric Villani, two mathematical superstars, chaired by Ian Stewart.
The film comprised a series of short statements by mathematicians, essentially presented in black and white as talking heads, though Villani alone used the blackboard to communicate (more on this later). They all had interesting things to say. I was slightly puzzled by Atiyah's comment afterwards on the diversity of the mathematicians: most seemed to be white, male and middle-aged, and presenting similar views of the subject.
The discussion, ably chaired by Ian Stewart, was absorbing. It helped that the questions from the audience were all interesting (or were made interesting by Villani and Atiyah). It was being recorded, and I hope the result will be made widely available.
There were so many interesting ideas that I can only pick out a few that resonated with me. One was the extent to which mathematics is a frustrating activity: 'You spend most of your time thinking "I can't understand this"' (and if that's the experience of Stewart, Atiyah and Villani, what is it for the rest of us?) This is something that must influence the characters of mathematicians: how can you be arrogant when you are always wrestling with problems you can't solve? It must be off-putting for undergraduates who found mathematics easy at school to find at university that there are hard problems on which they get stuck: appreciating that being stuck is the natural state of even the greatest mathematicians, and isn't a sign of one's inadequacy, might help keep them in the profession. So I'll be quoting this to my students.
I was really impressed by the evident breadth of knowledge of both Atiyah and Villani. They had read Kepler; Villani told us about Voltaire's comment that Kepler's achievement was a humiliation for philosophy in that such a deluded thinker could make such great discoveries. Both seemed keen on reading the writings of the great mathematicians of the past. Villani made fascinating comments about the different ways in which the composers Xenakis and Ligeti made use of maths, and showed how a work of Ligeti refutes a remark in Euler's writing on music. When there is such a focus on research outputs that reading outside one's immediate subject can seem like a luxury one can't afford, I found Atiyah and Villani inspirational in their range of serious interests.
There were insights into similarities between mathematicians and artists - both proceed by breaking rules (for example, non-Euclidean geometry arising by discarding a previously unchallenged axiom) and both work with a mix of individual and collaborative activity. There was wisdom on teaching - the personality of the teacher is the most important factor, and Villani commented that saying that we need more people to study science is not the best way to encourage young scientists.
Villani was asked about the "enemies" a mathematician faces, and identified three - lack of imagination; a priori ideas, which you have to eliminate in order to make progress; and too much information, hiding the things that really matter in fog. Stewart told a fascinating anecdote about a friend discussing three-dimensional topology over the phone with a mathematician who had been blind since birth (a remarkable idea in itself) and noting that the blind mathematician was obviously visualising the problem very clearly but without taking any single point of view - a very interesting concept. There was discussion of ugliness in mathematics - with, I think, agreement that a good proof is about understanding, and a proof which doesn't tell you why something is true is inherently unsatisfactory.
Artists in the audience asked some of the most interesting questions. One was about symbols - how do mathematicians choose their symbols? Villani's answer made me aware just how important the choice of symbols can be - they permit a level of automatic checking, and most symbols carry meaning for the reader which helps comprehension. (I remember struggling with the German letters symbolising group varieties in an algebra book: I couldn't immediately tell which was which and, not surprisingly, I never got on top of the material. My fault, not the author's: I should have spent time copying the letters until I could read them at a glance!)
The final question, again an insight into how the practice of mathematics relates to that of the visual artist, was about the "extraordinary mark-making" mathematicians do on blackboards, and whether this can survive in the digital age. Atiyah agreed on the importance of the hand in doing maths, and said that communication of mathematics involves body movement, props, blackboards - the garnish that makes something palatable. Villani, noting that blackboards are now not always available in lecture rooms, felt that the blackboard is unrivalled: much more than with a computer, one can improvise, erase, keep some material and lose other bits; it forces you not to overflow; and when you're frustrated, you can bang your head on it!
This was an amazing afternoon, rich in provocative ideas. The mathematicians' passion came across; there were insights about mathematics, art and music. If these notes are garbled, it was because there was so much to think about! I do hope the recording will be made available, even though it will probably show that I have misunderstood and misinterpreted a lot of what was said. I have rarely spent such a stimulating afternoon in a gallery (and that is a very strong statement).
Monday, 28 May 2012
Vampires, maidens and Angela, my old matchbox computer
As promised in my post a fortnight ago, I'm returning to the Vampires and Maidens problem from Dara O'Briain's School of Hard Sums. Thinking about it took me back to the matchbox computer I made as a teenager: its modus operandi would have solved this problem.
Like so many of my mathematical interests, this one began with a Scientific American column by Martin Gardner (or perhaps I encountered it in one of his books). The theme was that a machine can learn by experience. I had a very limited knowledge of computers, naturally, and one of the things people said at the time distinguished human from machine intelligence was that a machine's programme is hardwired, so it cannot learn from its experience. Gardner's article showed that this is not so.
His example (if I remember correctly - I am often surprised how wrong my memories are when I check up, so I won't!) was a simple game with chess pawns on a 3x3 board. I implemented it for Nim's Game, with three piles of three matchsticks. (Why matches? Because I had a lot of matches to use up, as we will see.) In Nim's Game, players take it in turn to remove as many matches as they wish (but at least one) from any single pile, and the player who takes the last match loses. (Or wins - counter-intuitively, it makes little difference to the strategy which version you choose.)
There is a mathematical strategy for Nim's Game, but my "computer" didn't do it that way. I enumerated all the essentially different positions, from (0,0,1) to (3,3,3). I reckon there are nineteen of them, with (0,0,1) being an immediate loss for the player about to move. These days one would do this on a real computer, but this was long before teenagers had access to computers so, following Gardner, I labelled an empty matchbox for each position. In each box I placed a counter for each legal move from that position, so for example the box for (3,3,3) would contain counters for (2,3,3), (1,3,3) and (0,3,3), representing the three essentially different moves: one can take 1, 2 or 3 matches from any one pile.
The matchbox "computer" is now ready to play against a human. When it is the computer's turn, one opens the box for the current position and draws a counter randomly: that is the computer's move. The human then tales their turn, and then opens the box for the next position and draws another counter.
The beauty of this is that the machine can learn from its mistakes! When the machine loses, we discard the last counter chosen. (All other counters are replaced in the boxes they were drawn from.) Consequently the computer will never make the same final losing move twice. So, for example, consider the box (0,0,2). There are two moves: to (0,0,0) (immediate defeat, since the machine has taken the last match) or (0,0,1) (immediate victory). The first time the machine reaches this position, the counter for (0,0,0) may be drawn. The machine will lose that game (from a position where any human would have won), but that counter will be discarded, and the machine will always win from that position in future.
When a box is empty, that means the machine knows that there is no move from that position that does not result in eventual defeat (assuming perfect play from the opponent), so it resigns. As a consequence, counters from boxes which allow the opponent to move to that position will begin to be removed.
After a reasonable number of games, the only counters left in any boxes will be those which never lead to defeat. The machine will start playing randomly, but will end by playing perfectly!
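For anyone who wants to watch this happen without a drawer full of matchboxes, here is a rough simulation in Python of the scheme just described. The training regime - a purely random opponent and 20,000 games - is my own choice for the sketch, not part of the original device:

    import random
    from itertools import combinations_with_replacement

    def moves(pos):
        # All positions reachable in one move: take 1..k matches from
        # one pile. Positions are kept as sorted triples, so mirror-image
        # positions share a single matchbox.
        result = set()
        for i in range(3):
            for take in range(1, pos[i] + 1):
                nxt = list(pos)
                nxt[i] -= take
                result.add(tuple(sorted(nxt)))
        return result

    # One matchbox per position - nineteen of them, as above - each
    # starting with a counter for every legal move from that position.
    boxes = {pos: list(moves(pos))
             for pos in combinations_with_replacement(range(4), 3)
             if pos != (0, 0, 0)}

    def play_game():
        # Machine (moving first) against a random opponent. Whoever
        # moves to (0,0,0) took the last match and loses. Returns the
        # machine's drawn counters and whether it lost.
        pos, record, machine_turn = (3, 3, 3), [], True
        while pos != (0, 0, 0):
            if machine_turn:
                if not boxes[pos]:                # empty box: resign
                    return record, True
                counter = random.choice(boxes[pos])
                record.append((pos, counter))
                pos = counter
            else:
                pos = random.choice(sorted(moves(pos)))
            machine_turn = not machine_turn
        return record, not machine_turn           # machine lost iff it moved last

    for game in range(20000):
        record, machine_lost = play_game()
        if machine_lost and record:
            pos, counter = record[-1]
            boxes[pos].remove(counter)             # discard the fatal counter

    # After training, typically only the winning move (0,0,1) survives here.
    print(boxes[(0, 0, 2)])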
This fascinated me. I called my machine Angela, for "A Nim's Game Experience Learning Automaton". (I'm not sure "automaton" is right for a device which requires one physically to draw counters from matchboxes, but it served the needs of the acronym.)
What does this have to do with the Vampires and Maidens problem? Remember that we have three vampires and three maidens at the foot of a building, needing to get to the top in a lift which can only carry two people at a time and which needs one person inside to operate it. If at any time vampires outnumber maidens in any location, then something unspeakable happens. How can all six safely get to the top?
Well, the matchbox computer idea will solve this. As suggested in my previous post on this problem, we can enumerate the possible positions - these will be of the form (v,m,l) where v is the number of vampires at the bottom, m is the number of maidens at the bottom and l is the position of the lift. So v and m are between 0 and 3, and l is -1 for the lift at ground level and +1 for the lift at the top. The problem is to go from (3,3,-1) to (0,0,1), where we fail if at any stage the maidens at either location are outnumbered - at the bottom, if m is non-zero and m < v, and correspondingly at the top.
Now create matchboxes for each position, and use counters to indicate possible moves. From the starting position (3,3,-1) there will be five possible moves, to (2,3,1), (1,3,1), (3,2,1), (3,1,1) and (2,2,1). (Two of these leave maidens outnumbered at the bottom.) We then begin at the starting point and draw a counter, move to the indicated matchbox and draw another. Any time we reach a failing position, we discard the last counter drawn.
Of course, the counter we draw may take us back to a position we were in earlier. To avoid looping for ever, we discard any counter which returns us to a state we have previously encountered.
The result is that, eventually, drawing counters at random and backtracking when we fail, we are left with a solution to the puzzle.
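Here is one way to realise that search on a computer - a sketch in Python in which drawing counters at random and discarding the failures becomes a randomised depth-first search. The state encoding is as above; the details of the code are my own:

    import random

    def lift_moves(state):
        # All states reachable by one lift trip carrying 1 or 2 people.
        # state = (v, m, l): vampires and maidens at the bottom, and
        # the lift's position (-1 = bottom, +1 = top).
        v, m, l = state
        here_v, here_m = (v, m) if l == -1 else (3 - v, 3 - m)
        result = []
        for dv in range(min(here_v, 2) + 1):
            for dm in range(min(here_m, 2) + 1):
                if 1 <= dv + dm <= 2:
                    if l == -1:
                        result.append((v - dv, m - dm, +1))
                    else:
                        result.append((v + dv, m + dm, -1))
        return result

    def safe(state):
        # Maidens must not be outnumbered wherever any maidens stand.
        v, m, _ = state
        bottom_ok = (m == 0) or (v <= m)
        top_ok = (3 - m == 0) or (3 - v <= 3 - m)
        return bottom_ok and top_ok

    def solve(state=(3, 3, -1), visited=None):
        # Draw counters in random order; discard any that fail or
        # revisit a state on this path; back up when a box is empty.
        visited = (visited or set()) | {state}
        if state == (0, 0, +1):
            return [state]
        counters = lift_moves(state)
        random.shuffle(counters)              # "draw a counter randomly"
        for nxt in counters:
            if safe(nxt) and nxt not in visited:
                rest = solve(nxt, visited)
                if rest:
                    return [state] + rest
        return None                           # empty box: backtrack

    print(solve())   # a sequence of states from (3,3,-1) to (0,0,1)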
What is wonderful, or appalling, about this is that, as with my Nim's Game device, we solve the problem with no deep understanding of the strategy. On the one hand it is amazing that we can find solutions to difficult problems without any serious thinking about the problem: on the other hand, the standard strategy for playing Nim's Game (which I already knew from earlier articles by Martin Gardner) is better maths!
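(For the record, here is a sketch of that standard strategy - the usual nim-sum rule, adjusted for the version where taking the last match loses. This is emphatically not how Angela worked; see Gardner for the theory.)

    from functools import reduce
    from operator import xor

    def misere_nim_move(piles):
        # Return a winning next position in misere Nim (last match
        # loses), or None if the position is already lost: play the
        # ordinary nim-sum strategy, except that when only piles of
        # size 1 would remain, leave an odd number of them.
        for i, pile in enumerate(piles):
            for take in range(1, pile + 1):
                after = list(piles)
                after[i] -= take
                if max(after) <= 1:
                    good = sum(after) % 2 == 1         # odd number of 1-piles
                else:
                    good = reduce(xor, after) == 0     # zero nim-sum
                if good:
                    return after
        return None

    print(misere_nim_move([3, 3, 3]))   # [0, 3, 3]: empty one pile entirely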
So is it better to solve problems by mathematical thinking or by the brute force approach of my matchbox computer?