Editor’s Note
On 23rd June, the world observed the 106th birth anniversary of the renowned British mathematician Alan M. Turing. He is celebrated today as the father of modern computing and a pioneer in the field of Artificial Intelligence (AI). In addition to being a mathematician and logician, he was a computer scientist, a cryptanalyst and, towards the end of his life, even a theoretical biologist (working on morphogenesis).
Turing’s work during the war has been the focus of popular culture. However, his work before and after the war has received far less recognition, even though his work on computability theory was genuinely paradigm-shifting. In this article, Rohit Arora discusses one of his most prominent contributions to the question of decidability, and to whether there are theoretical limits to what machines can do: the halting problem.
Alan Turing is best known for cracking Nazi Germany’s Enigma machine – beautifully scripted in the 2014 movie The Imitation Game [1]. This achievement deserves all the praise it gets, and it is easy to appreciate once we understand how the Enigma machine worked and how Turing and his colleagues were able to crack it.
The Enigma machine was a typewriter-like device that generated polyalphabetic substitution ciphers to encrypt transmitted radio messages. For a simple example of a substitution cipher, consider the famous 1968 movie 2001: A Space Odyssey (if you haven’t seen it, I strongly urge you to!). The AI in the film is named HAL: each letter of IBM mapped to the letter one position to its left in the alphabet. So if I understand the mapping rule, when you say HAL, I know that you mean IBM.
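Just for fun, here is that one-letter-shift rule as a tiny Python sketch (a toy illustration added for this article, not anything Enigma actually used):

```python
def decode_hal(ciphertext):
    # Shift each letter one position forward in the alphabet,
    # so 'H' -> 'I', 'A' -> 'B', 'L' -> 'M'.
    return "".join(chr((ord(c) - ord("A") + 1) % 26 + ord("A")) for c in ciphertext)

print(decode_hal("HAL"))  # prints IBM
```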
Now imagine that instead of such a simple mapping rule, you have an encryption rule where any letter could be mapped to any other letter. Even better, every time you type the same letter, it gets assigned to a different letter, and every time you arrive at an answer, the encryption changes.
Voilà! That is how the enigmatic Enigma worked!
The complexity of an encryption scheme can be gauged by the number of possible combinations one must try to arrive at the right solution. The technology was revolutionary for its time, and the wartime urgency for a more secure mode of communication led the Germans to keep augmenting the complexity of the encryption.
That meant the number of possible combinations for a decryption key was of the order of 10^18 to 10^19. For an entomological reference, the number of insects on Earth is roughly 10^19. A brute-force search over a space of this size remains intractable in many cases even today.
While this may have seemed like an unbreakable phantom code in the 1930s and 40s, there were shortcomings that the Nazis did not foresee. For instance, no letter could map to itself, i.e., an ‘A’ could only map to one of the 25 letters ‘B’ to ‘Z’. As another example, the Nazis transmitted encrypted weather reports which always followed the same pattern of information.
Turing and his colleagues, recruited by the British government to decode Enigma, focused on exactly these weaknesses. This allowed them to reduce the complexity of the code significantly and enabled them to build a machine that reversed Enigma (called the Bombe).
Today, a similar task could be accomplished by training a predictive model on data from captured Enigma machines and using it to unravel the hidden messages.
Without the work of Turing and his colleagues, the Allied Forces might have lost the war. Unfortunately, instead of being honored for his noteworthy contribution, after the war Turing was subjected to humiliation for being a homosexual, which was, back then, against the law in the UK. Devastated by this professional and personal tragedy, he committed suicide in June 1954, at his prime.
Image source: Wikimedia Commons. Enigma (L) and Bombe (R)
The Halting Problem
If you were asked to write a series of instructions to find all prime numbers between 1 and 1000, you could complete the task in a finite amount of time. Such a task is an example of a decidable problem. Essentially, if it is possible to design a series of instructions (an algorithm) that answers a question within a finite time, the problem is said to be decidable. However, not every problem is decidable. Over the course of mathematical and scientific development, it was long assumed that with better mathematical tools and physical knowledge every question could be answered.
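As a minimal sketch, here is one (of many possible) such series of instructions in Python:

```python
def primes_up_to(n):
    # Trial division: a number is prime if no smaller prime divides it.
    primes = []
    for candidate in range(2, n + 1):
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
    return primes

print(primes_up_to(1000))  # halts after a finite (and very short) time
```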
From Poincaré’s work on the trajectories of celestial bodies to the development of quantum mechanics to Gödel’s incompleteness theorems, the late nineteenth and early twentieth centuries gave rise to questions about undecidability and indeterminism. Undecidability is the logical analog of a larger physical question about what we can or cannot know [2].
It should be noted that a problem or a function is decidable as long as it can be evaluated, i.e., efficiency is not a factor (in theory). The works of David Hilbert and Kurt Gödel on this topic are essential reading if you wish to explore logic and decidability further [3].
In fact, it was David Hilbert who posed the Entscheidungsproblem (German for “decision problem”), which Alonzo Church and Alan Turing, independently of each other and around the same time in 1935–36, showed to have no solution (see the Church–Turing thesis).
While Church’s proof relied on previous work by Stephen Kleene, Turing’s proof was more intuitive. Part of Turing’s proof was what we now know in computability theory as the “halting problem”.
The halting problem was one of the first problems proven to be undecidable. Explicitly, it states that it is impossible to write an algorithm which can decide, in a finite amount of time, whether a program running on a specific input will ever halt or will go on forever. Note that the halting problem does not say that this is impossible for every individual program; we can easily write such a checker for many simple programs. It merely means that no single algorithm will work for all programs.
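To get a feel for why the general case is so hard, here is a sketch of the famous Collatz iteration: a function simple enough to write in a few lines, for which nobody has yet been able to prove whether it halts on every positive input:

```python
def collatz(n):
    # Halve even numbers; map odd n to 3n + 1.
    # Whether this loop halts for every n > 0 is the (open) Collatz conjecture,
    # yet it has halted for every value anyone has ever tried.
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz(27))  # prints 111
```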
Turing’s proof of this result involves an understanding of a Turing machine – essentially a theoretical construct of a modern-day computer. The proof of the halting problem is a proof by contradiction: we assume that what we want to prove is not true, and this leads to a logical contradiction or paradox, which tells us that our assumption was wrong.
I’m a fan of logical paradoxes. A classic example of a logical paradox is an inconsistency resulting from self-reference. The proof of the halting problem uses a self-reference approach, and therefore it is vital to understand what self-reference means. Consider the following statement, which is a form of the Liar Paradox:
“This statement is false.”
If the statement is indeed false, then that means it is true; but if the statement is true, then it states that it is false. This is one of the simplest (and oldest) forms of self-reference paradox and is directly relevant to Turing’s proof of the halting problem.
There are proposed solutions to such paradoxes, and we will discuss them at length in future articles.
For now, let us try to understand Turing’s proof with an example, and then write this example out more programmatically as pseudocode in Python.
The reason we want to do this in a more contemporary fashion is that it might facilitate the understanding of the proof (as opposed to using Turing machines, which deserve an article or two of their own).
Note that in Python, anything after a # is a comment and has no effect on the function.
To see how a function is written in Python, consider a function sqr that returns the square of its input x. In Python, it would be written as:
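```python
def sqr(x):
    # Return the square of the input x.
    return x * x

print(sqr(4))  # prints 16
```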
Now let us assume that a halting function exists which can always tell whether any function f on input x will halt (stop) or will go on infinitely [4].
Simply put, f is just minding its own business and performing some action on x, but the halting function is observing them: when f does something to x, the halting function decides whether whatever f is doing to x will ever stop, or whether it will go on infinitely.
This is the superpower of the halting function, and it can do this for any f and x.
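In our Python pseudocode, the assumed function would look something like this (the body is left unwritten; it is precisely the thing we are assuming into existence):

```python
def halting_function(f, x):
    # Assumed oracle: decides whether f(x) ever halts.
    # Returns True if f(x) halts, False if f(x) runs forever.
    # No real body can be written here; that is the whole point.
    ...
```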
The above definition makes it clear that if a halting function were to exist (remember, we are assuming that it does), this is what it would look like. It will return True if it decides that f(x) will halt, and False if it decides that f(x) will never stop.
It is important to note that, the way we have defined it, the halting function itself will always halt after giving a True or False response.
So now we are armed with halting_function(), and we want to sell it as a product, marketing its ability to always make the right halting decision. We are confident that it works for any function f and its input x because we have tested it on a bunch of (f, x) pairs and it works every time.
Then along comes someone more clever than us who wants to ruin our business, and writes another function, call it paradox(), defined as follows:
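```python
def paradox(f):
    # Ask the oracle whether f, fed its own code as input, halts.
    if halting_function(f, f):
        while True:   # f(f) halts? Then loop forever.
            pass
    else:
        return        # f(f) runs forever? Then halt immediately.
```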
This one is a bit of a head-scratcher because someone more clever than us wrote it, so let us try to follow the logic to the end.
Two things to note here: first, the only reason paradox() exists is that we claimed halting_function() exists; second, it is perfectly reasonable to pass one function f1 as an argument to another function f2, i.e., f2(f1). f2 will merely perform its designated operation on the function fed to it as an argument.
We observe the following:
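1. paradox() takes any function f as its input.
2. It feeds f to halting_function() twice, once as the function and once as the input, effectively asking: does f(f) halt?
3. If halting_function(f, f) returns True (i.e., it decides that f(f) halts), paradox(f) deliberately enters an infinite loop.
4. If halting_function(f, f) returns False (i.e., it decides that f(f) runs forever), paradox(f) halts immediately.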
Next, the clever person did something even more clever: they fed paradox() to itself. Why not? As noted above, functions can be passed as arguments to other functions, so a function can also be passed as an argument to itself.
Any paradox(f) can lead to only one of two outcomes, because it has two branches to choose from (see points 3 and 4 above).
So paradox(paradox) will either halt or go into an infinite loop. Let’s work our way back to the conditions that lead to each of these outcomes:
So what we are essentially saying is that paradox(paradox) will halt if halting_function(paradox, paradox) says it will never halt, and it will not halt if halting_function(paradox, paradox) says it will halt.
Both results contradict the very conditions that generate them, and all of this follows from our assumption that halting_function() exists. Since paradox(paradox) leads to a genuine contradiction, we conclude that halting_function() cannot exist.
So the clever person has successfully destroyed our business model.
Why Care About The Halting Problem?
We saw an exhilarating proof [5] of the halting problem above, one that goes to show the genius of Alan Turing. With that being said, it is fair to ask why one should care about the halting problem, and what its consequences are for the future of computing.
One of the most straightforward interpretations of the halting problem is that it is not possible to write an algorithm which will reliably check whether your code will halt. If you had a bug in your code that led to an infinite loop (think of the annoying rotating hourglass, or the BSoD, on your Windows machine), at what point would you give up waiting to see whether the offending program will halt on its own?
You could write an algorithm which takes care of that particular bug, but what the halting problem tells you is that you cannot write an algorithm which will reliably find and fix all the bugs in all your programs.
Can you write an anti-virus program which will protect your computer from all viruses and will never itself do anything malicious? The halting problem says you cannot.
These kinds of practical questions are at the heart of how we approach modern computing, and the undecidability of the halting problem sets some hard limits.
Aside from the more straightforward (and perhaps even obvious) implications of the halting problem, there are issues in machine ethics which appear impossible to resolve precisely because of the undecidability of the halting problem. As we enter the age of autonomous machines and robots, it is no secret that we are not well prepared for it economically, politically and, more importantly, ethically.
For instance, can we trust autonomous drones and weapons to choose the right target if (and when) they are used in a war zone? In 2014, Englert et al. published a paper making the case that we cannot. While a thorough reading of the article is highly recommended, let us briefly examine one of the scenarios presented in it: a modified version of the famous philosophical and ethical thought experiment known as the Trolley Problem.
The following is a simplified breakdown of the scenario under consideration:
- An uncontrolled trolley is racing down an abandoned track, completely harmless. However, there is a lever at a junction which, when pulled, will divert the trolley onto a different track, at the end of which stands a group of workers who will be severely hurt if the trolley crashes into them.
- A person who is known to be evil is now at the lever and can pull it to divert the trolley towards the workers. You can intervene and stop this person by shooting them before they can pull the lever, thus saving the workers’ lives.
- Now consider the following replacements in this scenario: (i) you are replaced by a robot; (ii) a piece of software now controls the lever.
- The evil person wrote the code for the software, so it could very well (and intentionally) malfunction and hurl the trolley towards the workers.
- The evil person swears they have had a change of heart and that the software is clean. They offer to let the robot check the source code before the software is used.
- The robot would have to be equipped with an algorithm which can analyze the code for bugs that may make the software run forever. As a consequence of the halting problem, we know that such a general algorithm is impossible, so the robot will not be able to make this decision reliably. In this case, should it stop the evil person or let them proceed?
Computer science and AI have come a long way since Turing and have effectively taken over the lives of many across the globe. But the transformation has been remarkably quick, and there are many unanswered questions about our relationship with these technologies.
It is also worth noting that we are still in the nascent stages of this transformation, so it is safe to assume that further philosophical, ethical and scientific questions will pile up. There are things which we don’t know that we don’t know. There is a lot more to learn.
Alan Turing tackled questions which became the foundation of this technology and our understanding of computing theory.
I will continue to discuss more topics like this in future articles.
References
1. It is worth pointing out that the title of the movie is based on the concept introduced by Alan Turing in his 1950 paper Computing Machinery and Intelligence, in which he attempts to design a test to see if machines can think or “imitate” a human.
2. Recommended reading on this topic is What We Cannot Know by Marcus du Sautoy.
3. Some aspects of their work will be discussed in future articles.
4. It is a bit of a mental exercise to think of a function which will go on infinitely, but it is encouraged.
5. Examples may be discussed in the comments. If you disagree, I urge you to read it again!
Recommended Reading
- Gödel, Kurt. On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Courier Corporation, 1992.
- Lloyd, Seth. “A Turing test for free will.” Phil. Trans. R. Soc. A 370.1971 (2012): 3597–3610.
- Turing, Alan. “Can digital computers think? (1951).” In B. Jack Copeland (ed.), The Essential Turing (2004): 476.
- Davis, Martin. The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions. Courier Corporation, 2004.
- Kanter, I. “Undecidability principle and the uncertainty principle even for classical systems.” Physical Review Letters 64.4 (1990): 332.
Author
Rohit Arora obtained his Ph.D. from the ENS in France. After his Ph.D., he worked as a postdoc in France in collaboration with a major pharmaceutical company. He is currently a research fellow at Beth Israel Deaconess Medical Center. His research focuses on understanding biological structure–function relationships and developing novel tools to make sense of “big data” in biology. He enjoys reading about the history of mathematics, geometry, and philosophy.
Scientific Content And Editing
Pawan Nandakishore is a Data Scientist working at Colaberry. He started out with a Ph.D. in Physics; one thing led to another, and he discovered the joy of working with machine learning and computer vision tools and techniques, which he applied throughout his Ph.D. and postdoc (which was in the super cool field of collective behavior – really, go read about it!). He now leads product development at Colaberry and in his spare time helps people make the transition to Data Science. If he has any spare time left during the week, he dabbles in various research problems in computer vision.
Rituparna Chakrabarti is the Editor-in-Chief at CSW. She pursued her Ph.D. in Neuroscience at Georg-August University (Göttingen, Germany) and is currently a post-doctoral fellow at the Center for Biostructural Imaging of Neurodegeneration (BIN), Göttingen. For her, the interface of science and art is THE place to be! To unwind, she plays the mandolin and eagerly looks for a corner at a coffee house to slide into with a good read or good company. Follow her on Twitter.
Illustrator
Vinita Bharat is a post-doc at Stanford University, USA, and was a Ph.D. student at the International Max Planck Research School (IMPRS, Göttingen, Germany). Her research focuses on cellular and molecular neuroscience. Other than enjoying ‘being a scientist’, she has also been working on science education; presenting science in an easy and fun way is what she loves doing through her platform “Fuzzy Synapse”. She is a fun, enthusiastic and curious person, passionate about traveling, who loves celebrations and bringing smiles to those around her. Follow her work as Fuzzy Synapse on Instagram, Facebook, and Twitter.
The contents of Club SciWri are the copyright of PhD Career Support Group for STEM PhDs {A US Non-Profit 501(c)3}. (PhDCSG is an initiative of the alumni of the Indian Institute of Science, Bangalore. The primary aim of this group is to build a NETWORK among scientists, engineers and entrepreneurs). This work by Club SciWri is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.