A while ago I took the time to “audit” online a Cambridge University course on computational complexity given by Tim Gowers.
In this course, Gowers takes the students through the proof by Razborov and Rudich that there is no “natural proof”, in a particular formal sense, that P ≠ NP, which I found very interesting.
This got me thinking about why this is a hard problem to solve, and I thought it might be useful to explain the intuitive reasons behind it.
So let’s think about this by looking at the Traveling Salesman Problem, since it is a fairly intuitive example of the hardest kind of problem in the complexity class NP, the problems known as NP-complete.
In this problem, you have to find the shortest path through a number of cities, visiting each city once and only once. This is a typical NP problem: the naïve approach to solving it would be to work through all the possible combinations, compute the value of each one (in this case its length), and pick the shortest.
The problem with that is that the number of combinations gets very large very fast, so it really isn’t practical to do this.
Let me show you what I mean. Let’s take the case where there are 5 cities: A, B, C, D, E. Any path through these cities can be described by a sequence of these letters. For example, CDEAB would be the sequence starting at city C, then going on to city D, etc. To figure out how many combinations there are, I have to use each of the letters once and only once, so there are five possible choices for the first slot, four for the second, and so on. This gives me 5 × 4 × 3 × 2 × 1 = 5! = 120.
In general, this brute force approach to solving the problem, which represents the worst case among solution methods, will take n! steps, where n is the number of cities we are considering. This gets very big, very fast. For example, 10! is equal to 3,628,800, so you can see that as you add cities it becomes vastly harder to solve the problem this way.
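To make this concrete, here is a minimal Python sketch of the brute-force approach: it enumerates all 120 orderings of five cities (with made-up coordinates, purely for illustration) and keeps the shortest.

```python
from itertools import permutations
from math import dist

# Hypothetical coordinates for five cities A..E (made up for illustration).
cities = {"A": (0, 0), "B": (3, 1), "C": (1, 4), "D": (5, 2), "E": (2, 2)}

def path_length(order):
    """Total length of the path visiting the cities in the given order."""
    return sum(dist(cities[a], cities[b]) for a, b in zip(order, order[1:]))

orders = list(permutations(cities))       # all 5! orderings
best = min(orders, key=path_length)       # check every one, keep the shortest

print(len(orders))                        # 120 (= 5!)
print("".join(best), round(path_length(best), 2))
```

Note that the only way this sketch “solves” the problem is by looking at every single ordering; nothing about the data is being exploited.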
Now it is very important to realize that this is the worst-case scenario only, or as it is described in mathematics, an “upper bound” on how hard the problem is. For example, if all the cities are in a straight line, you can solve the problem in many fewer steps: just start at one end and move to the next city in line, and so on.
This works because we know that on a straight line, the shortest path will lie along that line, so that if the cities are aligned ABCDE, we know right away that we aren’t going to have to consider any of the combinations that don’t follow the sequence on the line. In other words, there is a pattern to the data that allows us to ignore a number of combinations (in this case, all but two, which have equal length: ABCDE and EDCBA).
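The straight-line shortcut can be checked directly. In the sketch below (again with made-up coordinates, this time collinear and deliberately given out of order), sorting the cities by position yields the same length as the full brute-force search, while examining only one ordering instead of 120.

```python
from itertools import permutations
from math import dist

# Made-up cities lying on a straight line, given out of order on purpose.
cities = {"C": (2, 0), "A": (0, 0), "E": (4, 0), "B": (1, 0), "D": (3, 0)}

def path_length(order):
    return sum(dist(cities[a], cities[b]) for a, b in zip(order, order[1:]))

# The shortcut: one sort, no search through permutations at all.
in_line = sorted(cities, key=lambda c: cities[c][0])

# Brute force, for comparison only.
brute_best = min(permutations(cities), key=path_length)

print("".join(in_line), path_length(in_line))   # ABCDE 4.0
print(path_length(brute_best))                  # 4.0
```

The pattern in the data (collinearity) has collapsed the search from 120 candidates to one.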
But to say that some problem is in the class NP (and not in P) is essentially to say that there will always be some collection of cities you could choose that would require you to go through the worst case scenario to find the shortest one. (Technically, the actual worst case scenario for this problem is only exponential in complexity, but that is still pretty bad and doesn’t affect my argument here.)
That is, unless someone can prove that P=NP. This would mean that there was always a shortcut that allowed you to find the solution without having to work through the majority of the possible combinations. And this shortcut would almost certainly involve being able to consistently find some pattern in the data, just as we did with the straight line example, that simplifies the data down to a smaller number of cases.
So in a sense, it should be much easier to prove P=NP than that P ≠ NP. (That no one has proven the former yet is a pretty good reason to suspect that the latter is true, even though no one has proved it either.)
The reason is that to prove P=NP, you just need to show that there is always a pattern in the data that will simplify it, whereas to prove P ≠ NP, you would have to show that there can’t be such a pattern for some scenarios in the problem set.
Why is it easier to show that there is a pattern than that there can’t be a pattern? Well, if you have a pattern, you have a relatively small number of steps needed to describe that pattern, and it is much easier to work through those steps and convince yourself that you have covered all the bases. You can also try your pattern on all the hairiest known instances of the problem and show that it works.
In a sense, having a pattern simplifies the search space required for the proof in exactly the same way that having the pattern simplifies the search space to find the answer to the problem.
But how do you show that there is no possible pattern for some set of data?
Let’s say that I give you a list of digits and tell you that one of two scenarios generated these digits: either I am generating them randomly for each digit as you ask for them, or I am using some relatively simple-to-generate pattern to generate them. Here is a sample list:

1 5 9 2 6 5 3 5 8 9 7 9 3 2 3 8 4 6 2 6 4 3 3 8 3 2 7 9 5 0
Can you see a pattern? You could do a statistical test on the digits to see if they show the properties of randomness, and in this case, I think you would find that they do show those properties.
So, can you then conclude that they are random?
No, unfortunately not, since there is a pattern here: these are the digits of pi after the well-known first three 3.14…
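You can check this yourself. The sketch below runs a basic Pearson chi-square frequency test on the first few dozen digits of pi after 3.14. The statistic comes out well under the 5% critical value for 9 degrees of freedom (about 16.92), so the digit frequencies look consistent with randomness even though the sequence is completely determined.

```python
from collections import Counter

# The digits of pi after the leading 3.14: a fixed, fully determined sequence.
digits = "159265358979323846264338327950288419716939937510"

counts = Counter(digits)
n = len(digits)
expected = n / 10  # each digit 0-9 equally likely under the "random" hypothesis

# Pearson chi-square statistic over the ten digit frequencies.
chi2 = sum((counts.get(str(d), 0) - expected) ** 2 / expected
           for d in range(10))

print(round(chi2, 2))  # well below the 16.92 critical value (df=9, alpha=0.05)
```

Of course, a frequency test this simple only probes one property of randomness, which is exactly the point: passing statistical tests is evidence of no pattern, never proof.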
So the challenge is that you can never really be sure that there is no pattern to some data set; it could just be a really, really unobvious pattern that you haven’t thought of yet.
So you can see how very hard it would be to prove that there is always some configuration of cities that can’t be simplified by some pattern, which is only part of what you would have to do to prove that P ≠ NP.
This is why it is unlikely that anyone will find such a proof unless they can develop some hard-to-imagine technique that allows us to detect when there aren’t any patterns in a set of data.