*(Lance asked me to post an explanation of my response to his tweet about Rubik's cubes and airline seating (in reference to a recent New York Times article), so here goes...)*

The Rubik's Cube is often held up as an example of a really hard puzzle. Lots of smart people brag about figuring out how to solve the cube by themselves; particularly dextrous geeks hold speed-cubing tournaments. But from a computational complexity standpoint, solving the cube is utterly trivial: There are only a constant number of configurations, so we can build the entire configuration graph in O(1) time! Yes, it's a big constant, but it's just a constant. The same goes for any larger cube that anyone has actually constructed, either physically or virtually.

What about the n×n×n cube? Now the complexity status depends on the precise question. There are several algorithms to solve any solvable n×n×n cube in O(n²) turns; once you've solved the 5×5×5 cube, larger cubes are apparently just drudgery. Moreover, if the given cube is actually unsolvable, the same algorithms will produce one of a small number of canonical unsolvable configurations, such as everything in place but one corner cubelet rotated, or one edge cubelet flipped. (At least, this is true for the 3×3×3 and 4×4×4 cubes. I haven't verified this for larger cubes, but it looks like a tedious elementary exercise.) So the question "Is this cube solvable?" can be answered in polynomial time.
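For the standard 3×3×3 cube, the solvability test boils down to three classical invariants: the corner and edge permutations must have the same parity, the edge flips must sum to 0 mod 2, and the corner twists must sum to 0 mod 3. Here is a minimal sketch in Python; the function name and the piece-numbering conventions are my own, not from any particular cube library:

```python
def cube3_solvable(corner_perm, corner_ori, edge_perm, edge_ori):
    """Check the three solvability invariants of the 3x3x3 cube.

    corner_perm: permutation of 0..7, edge_perm: permutation of 0..11,
    corner_ori: twists in {0,1,2}, edge_ori: flips in {0,1}.
    """
    def parity(p):
        # parity of a permutation, computed from its cycle structure
        seen = [False] * len(p)
        par = 0
        for i in range(len(p)):
            if not seen[i]:
                j, length = i, 0
                while not seen[j]:
                    seen[j] = True
                    j = p[j]
                    length += 1
                par ^= (length - 1) & 1
        return par

    return (parity(corner_perm) == parity(edge_perm)  # total permutation is even
            and sum(edge_ori) % 2 == 0                # edge flips come in pairs
            and sum(corner_ori) % 3 == 0)             # corner twists cancel mod 3
```

Flipping a single edge or twisting a single corner violates exactly one invariant, which is why those configurations show up as the canonical unsolvable ones.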

But what about finding the *optimal* solution, that is, the solution with the fewest moves? This problem is *obviously* NP-hard, right? Surprisingly, this problem appears to be open! For a recent claim of openness, see the 2008 paper *A Survey of NP-Complete Puzzles* by Kendall, Parkes, and Spoerer.

The generalized 15-puzzle is a similar puzzle whose complexity status is known. A given configuration of the (n²-1)-puzzle is solvable if and only if the permutation of the pieces is even, so there is an easy algorithm to decide solvability in polynomial time. If the configuration is solvable, it can be solved in O(n³) moves, which can be found in O(n³) time. (Why people like to solve the 15-puzzle using graph searching is a longstanding and frustrating open problem.) Ratner and Warmuth proved that the corresponding optimization problem is NP-complete, by reduction from a special case of 3SAT; they explicitly left the complexity of Rubik's cube optimization as an open problem.
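The parity test can be sketched in a few lines. (The full classical criterion is slightly more detailed than the one-sentence version above: for even board width, the blank's row also enters the parity count.) The function name and input encoding here are my own:

```python
def solvable_puzzle(board, n):
    """Solvability test for the (n*n - 1)-puzzle.

    board: the n*n grid in reading order, with 0 marking the blank.
    """
    tiles = [t for t in board if t != 0]
    # count inversions among the numbered tiles
    inv = sum(tiles[i] > tiles[j]
              for i in range(len(tiles))
              for j in range(i + 1, len(tiles)))
    if n % 2 == 1:
        return inv % 2 == 0  # odd width: blank position is irrelevant
    # even width: the blank's row (counted from the bottom) joins the parity
    rows_from_bottom = n - 1 - board.index(0) // n
    return (inv + rows_from_bottom) % 2 == 0
```

This is the invariant behind Sam Loyd's famous unsolvable "14-15" swap: one inversion, blank in its home row, odd total.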

The 15-puzzle, Rubik's cube, and dozens of other puzzles are all instances of the following family of puzzles. Fix a set of permutations, called *generators*, of some finite set; given a single permutation π, your task is to express π as a product of a sequence of generators. For example, for the n×n×n Rubik's cube, the ground set is the set of 6n² cubelet faces, and the generators describe the effect of turning a single slice of the cube. Furst, Hopcroft, and Luks proved that an algorithm of Schreier and Sims solves the decision problem (Can π be written as a sequence of generators?) in polynomial time; the algorithm was later simplified and reanalyzed by Knuth. Jerrum proved that the corresponding optimization problem (Can π be written as a sequence of length at most k?) is PSPACE-complete. For some permutation puzzles, the shortest solution can have exponential length, so membership in NP is too much to hope for.
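For tiny ground sets, both the decision and the optimization problem can be solved naively by breadth-first search over the Cayley graph, which also makes the blow-up vivid: the number of reachable permutations can be factorial in the size of the ground set. A toy sketch (names and encoding mine):

```python
from collections import deque

def shortest_word(gens, target):
    """Find a shortest sequence of generator names whose product is `target`.

    Permutations are tuples: p[i] is the image of point i.
    gens: dict mapping generator names to permutations.
    Returns None if `target` is not in the generated group.
    """
    n = len(target)
    start = tuple(range(n))  # identity permutation
    words = {start: []}      # shortest generator word found for each state
    queue = deque([start])
    while queue:
        p = queue.popleft()
        if p == target:
            return words[p]
        for name, g in gens.items():
            q = tuple(g[x] for x in p)  # apply g after p
            if q not in words:
                words[q] = words[p] + [name]
                queue.append(q)
    return None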

Comments are welcome, of course, but why don't you post them at Theory Overflow instead of here?

Hi Jeff,

I notice that Chris Hardwick's article (linked from this blog post) suggests that there is a linear correlation between number of pieces in a given Rubik's Cube and the time to solve that cube. Is there a good theoretical explanation for why, at least for a good human Cube solver, the "time per piece" to solve is roughly constant when we vary the size of the cube?

Perhaps it's not too useful to model human Cube performance algorithmically--I'm just curious why Hardwick's time increase is much less severe than polynomial complexity would suggest.

Thanks,

Forrest

Posted by: Forrest Iandola | June 28, 2010 at 07:12 PM