Showing posts with label complexity.

Tuesday, March 20, 2012

A New CGT Blog and some complexity thoughts.

Oops, I missed an extra week in there. For some reason, these papers don't get graded unless I look at them... ;)

Fraser Stewart has a wonderful (relatively) new blog up at: http://combinatorialgametheory.com/.

I will continue with more regular posts! I promise!

I've been thinking a lot about the dichotomy of game complexity. Are there any rulesets with symmetric starting positions where the complexity of the game is NP-complete? (Usually they are PSPACE-hard or in P.)

If you drop the symmetric aspect, then it's easy to design games that are NP-complete. Take, for example, the following coloring game on graphs. Starting positions consist of a single red-colored vertex and a connected, uncolored graph. The red vertex is then connected to each vertex in the uncolored graph. In addition, there are some number, k, of additional uncolored vertices, each connected only to a single blue-colored vertex.

Play proceeds as in Col, meaning on their turn, a player chooses an uncolored vertex that is not adjacent to any vertex of their own color, then paints that vertex their color. This game is NP-complete because the blue player is essentially trying to find an independent set of size greater than k in the connected uncolored graph.
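For concreteness, here is a minimal sketch of this construction in Python (the representation and vertex names are mine, purely for illustration). Given the connected uncolored graph and k, it builds the starting position: one red vertex joined to everything, plus k pendant uncolored vertices, each attached only to its own blue vertex.

```python
def build_instance(H, k):
    """Build the coloring-game start position described above.

    H: adjacency dict of the connected, uncolored graph (vertex -> set of vertices).
    k: number of extra pendant gadgets.
    Returns (adj, colors): the full adjacency dict and the initial color map.
    """
    adj = {v: set(nbrs) for v, nbrs in H.items()}
    colors = {}

    # One red vertex joined to every vertex of H.
    adj["red"] = set(H)
    for v in H:
        adj[v].add("red")
    colors["red"] = "R"

    # k pendant uncolored vertices, each connected only to its own blue vertex.
    for i in range(k):
        u, b = ("pend", i), ("blue", i)
        adj[u] = {b}
        adj[b] = {u}
        colors[b] = "B"

    return adj, colors
```

Since every vertex of the original graph touches the red vertex, Red's only legal plays are the k pendant vertices, while Blue's plays must form an independent set in the original graph.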

These starting positions are not symmetric, however. Plain Col doesn't fit this description, because the starting positions in Col are entirely uncolored and thus are symmetric.

EDIT (3/29/12): Corrected some spelling.

Wednesday, November 30, 2011

NP-complete games?

Are there any "balanced" games that are known to be NP-complete? I know that some are NP-hard (Clobber, Col, Graph NoGo) but I don't know of any that are NP-complete...

By balanced, I mean games that have symmetric starting positions (G = -G).

Are there any like this where the hardness doesn't come from the difficulty of choosing an appropriate move?

Tuesday, November 8, 2011

Somewhat Random Musings

Yes, I didn't post at all last week. I am still very much catching up on things I missed while at Integers. Next week I will be at Supercomputing and will likely miss one post or both. I don't often have the desire to pause the teaching part of the semester to get research and blogging done, but this is turning out to be one of those semesters. Mostly, I want to make sure I write good posts about the game talks at Integers (or what I understood of them). I hope I can find extra time to spend on them, but it's more likely that they'll come out a bit less-than-perfect. Hopefully some of the speakers are readers and can comment on all the errors.

While at Integers, I spent a lot of time struggling with NoGo, only to run into problems I encountered at BIRS in January. While there, I found that NoGo on a graph is NP-hard, but was neither able to show that Graph NoGo was PSPACE-complete, nor show any hardness for standard NoGo (on a grid). The same thing happened last month: no new progress. So I tried flipping it around and started looking for an efficient algorithm for NoGo. Either Neil McKay or Alex Fink (I think it was Neil) asked me about it, and I told him what I was doing. He was surprised I had given up on computational hardness so quickly. His comment made sense: I have more experience finding hardness results than showing efficient algorithms for problems (though I would argue that hardness reductions ARE efficient algorithms). Research-wise, you strive for results! So, you should spend your time conquering problems you're good at. Instead, I was trying something a bit different.

Luckily my job is far more focused on teaching than research, so the pressure to publish is less intense. It's very nice to know I can try a completely different tactic if I get frustrated with a problem!

... not that I had any luck with this!

On a related note, student presentations have started in my games class! One question that came up is: What does it mean for a game to be solved? I answered that there's an easy way to evaluate the game without drawing out the whole game tree. I hope that's a good enough answer. For myself, it means there is an efficient algorithm to solve the problem. I generally consider a completeness result to be "solving" the game, though perhaps it's the opposite: it shows the game (probably) cannot be solved!

Tuesday, October 25, 2011

The Difficulty of Making a Move in International Draughts

As I begin the trip down to the University of West Georgia for Integers, I want to be sure to report on another awesome thing I saw in Banff back in January.

For most games I'm interested in, the rules are simple. Deciding whether one move is legal is usually an easy task. Usually, it's even easy to list all the moves. (Some games, such as Phutball, may have an exponential number of moves, so this task may not be considered "easy".) The challenge usually arises in finding the best move, not just one that works.

This is not always the case for International Draughts. For my American comrades, Draughts is the word the rest of the world uses for variations of the game Checkers. International Draughts is a variant played on a 10x10 board that is one of many "pub games" popular in Europe. The game is different from "standard" draughts in three big ways:
  • Regular pieces may jump other pieces forward or backward.

  • Crowned (Kinged) pieces can move as many spaces in one direction as desired, even after jumping another piece.

  • A turn must include capture of as many opponents' pieces as possible.
This last rule is extremely interesting! Not only must your turn consist of a capture (jumping a piece) if possible, it must capture as many pieces as it can. If you watch the animation on the wikipedia page, it becomes clear that once you have a crowned piece, this maximizing move is not always trivial to find.
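To get a feel for why the maximum-capture rule creates a search problem, here is a hedged brute-force sketch (my own naming and representation) that counts the longest capture sequence available to a single plain piece. It deliberately omits crowned pieces' long-range "flying" jumps, which is where the real difficulty lives, and it ignores friendly pieces for brevity.

```python
def max_captures(pos, enemies, size=10):
    """Return the largest number of enemy pieces one capture sequence can take.

    pos: (row, col) of a plain piece; enemies: frozenset of enemy positions.
    Plain pieces jump diagonally, forward or backward (the first rule above);
    crowned pieces' "flying" jumps are omitted from this sketch.
    """
    best = 0
    for dr, dc in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        over = (pos[0] + dr, pos[1] + dc)          # square being jumped over
        land = (pos[0] + 2 * dr, pos[1] + 2 * dc)  # landing square
        if (over in enemies and land not in enemies
                and 0 <= land[0] < size and 0 <= land[1] < size):
            # Jumped pieces are removed, then we may continue capturing.
            best = max(best, 1 + max_captures(land, enemies - {over}, size))
    return best
```

A legal turn must realize this maximum, so in principle a player (or referee) faces exactly this search; for plain pieces it stays small, but the crowned-piece version is where the NP-hardness below comes in.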

In fact, this question was posed last year as a theoretical computer science question: Is it NP-hard to find a legal move? Bob Hearn rose to the challenge and showed that it is, in fact, an NP-hard question. Bob presented this result at BIRS. I've never seen a game where it's computationally difficult to figure out whether you made a legal move!

How is this policy enforced in the "real world" of international draughts?

Friday, October 14, 2011

Presenting Neighboring Nim to Undergrads

I replaced getting a post ready for Tuesday with preparing a talk for this past Monday. I spoke at our department colloquium about Neighboring Nim, describing the reduction from (Vertex) Geography. Two excellent things happened.

First, I had a great audience. While playing Neighboring Nim against them, they were very eager to make good plays against me. Will, a senior student of mine, immediately took on the role of surveyor, collecting move nominations from the crowd, then calling for a vote. The discussions were loud and boisterous. At one point, I made a very fast response to one of the audience's moves, using the Quick and Confident method. Unfortunately, I moved to an N position, with two P options. The audience quickly picked the board apart (it was by no means trivial) and found the two winning moves. The problem was that, individually, they didn't see the viability of the other move. So there were two groups arguing vehemently for two separate perfect moves. The debate nearly boiled over. Luckily, the audience finally agreed and they made one of the moves. It was very exciting to speak to such a charged group!

Second, I changed the order of my talk around regarding the proof of computational complexity. Usually I claim that a game is hard for some complexity class, explain what that means, then show the reduction. Talking about complexity classes to a general undergraduate audience is a bit of a speed bump, and often shakes people out of the rest of the talk, meaning they don't follow the steps of the reduction. This time, I just said that the two games were related, then explained the reduction. This kept the crowd interested, because it's a description of positions in a game they just learned how to play. At the end of the talk, I spoke briefly about the implications of the transformation: the PSPACE-hardness of Geography implies the PSPACE-hardness of Neighboring Nim. At that point, having seen the reduction in full, I hope it sank in. Although I wouldn't do this in the future when presenting to a primarily graduate-student/professor group, it seemed to work really well for people who hadn't seen computational complexity before.

The slides for the talk are here.

Next week is Fall Break at Wittenberg, so there will probably not be a post on Tuesday as I spend the time off preparing for Integers 2011 the week after. With any luck, once I'm there I'll get to post some notes each day.

Monday, September 5, 2011

An answer to an old Nimber question

Near to the start of this blog, I posed a question:

if the nimbers for a game are bounded above, does this imply that there is an efficient algorithm, or shortcut, to evaluate impartial games?

I recently realized there is strong evidence against this. It is true that any impartial ruleset where all the nimber values are either *0 or *1 is very easy to evaluate---all options from one position have the opposite value (otherwise there would be *2 states)---so all paths down the game tree have the same parity. All that's needed to create PSPACE-hard instances, however, is states with the value *2. Why is this? Well, we can change the rules for any game with higher values into an equivalently-winnable* game with maximum nimber *2. The plan here is to break down the options from a game when there are more than two. We do this by splitting those options into separate groups, then preventing the other player from being able to make any choices.

For example, we can transform the game G = {A, B, C} into the game G' = { { { A, B } }, { { C } } }.

Here's a diagram of the game trees, with the nimbers *0, *1, and *2 respectively assigned to A, B, and C.
For positions with more than 3 options, this splitting can be done in a recursive fashion. Thus, games with any range of nimbers can be transformed into a game with maximum nimber of *2.

* game states may not have the same value, but winning strategies from one translate into winning strategies in the other.
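The nimber bookkeeping above can be checked directly. Below is a small sketch (representation mine): a position is written as a tuple of its options, and the example verifies that G = {A, B, C} has nimber *3 while the transformed G' never exceeds *2, yet both are first-player wins.

```python
def mex(values):
    """Minimum excludant: the smallest non-negative integer not in values."""
    m = 0
    while m in values:
        m += 1
    return m

def nimber(pos):
    """Nimber of an impartial position, written as a tuple of its options."""
    return mex({nimber(opt) for opt in pos})

def max_nimber(pos):
    """Largest nimber appearing anywhere in the game tree."""
    return max([nimber(pos)] + [max_nimber(opt) for opt in pos])

# Positions with nimbers *0, *1, *2 to stand in for A, B, and C:
A = ()        # no options: *0
B = (A,)      # only option *0: *1
C = (A, B)    # options *0 and *1: *2

G = (A, B, C)                # nimber mex{0, 1, 2} = *3
Gp = (((A, B),), ((C,),))    # G' = {{{A, B}}, {{C}}}

assert nimber(G) == 3
assert nimber(Gp) == 2 and max_nimber(Gp) == 2
assert nimber(G) != 0 and nimber(Gp) != 0  # both are first-player wins
```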

Friday, September 2, 2011

Game Descriptions: NimG and Neighboring Nim

What happens if you restrict which games can be moved on in a game sum? This seems to be the idea behind NimG, a game played on a collection of Nim Heaps embedded in a graph. There are a few variants, but each turn a player does two things:

* traverse an edge of the graph adjacent to the last move, and
* remove sticks from the nim heap embedded in either the traversed edge or one of the vertices.

In the sticks-on-edges version, players remove sticks from the edges while traversing them during the turn. Naturally, if the heap on the edge is empty, it cannot be traversed. In other words, the player who starts their turn on a vertex where all incident edges have empty nim heaps loses. Masahiko Fukuyama studied this game a bunch and more recently Lindsay Erickson has looked into instances on the complete graph.

When sticks are embedded into the vertices, a move still consists of traversing one edge to arrive at a new vertex. Upon arriving at the new vertex, a move is made on the nim heap embedded there. Gwendolyn Stockman may have been the first person to analyze this game, in an REU advised by Alan Frieze and Juan Vera. This is the version that interests me: the vertices act like different heaps in a game sum, but the edges restrict which heap can be chosen each turn. In this ruleset, the complete-graph version is equivalent to standard Nim, provided each vertex also has a "self-loop" (an edge connected twice to the same vertex). In this game, you lose if all vertices adjacent to the current location contain empty nim heaps.

In order to avoid the self-loop issue, I altered the rules slightly and set out to do some analysis. Neighboring Nim is exactly the same game as (Vertex) NimG, but a player may choose not to traverse an edge and instead remove sticks from the current vertex (again).
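Lacking a standard starting position hasn't stopped me from checking small cases by computer. Here is a hedged brute-force sketch of Neighboring Nim (representation and names are mine): a token sits on a vertex, and a move removes sticks from the current vertex or from a neighbor, moving the token to whichever vertex was reduced.

```python
from functools import lru_cache

def neighboring_nim_win(adj, heaps, at):
    """True if the player to move wins Neighboring Nim (brute force).

    adj: tuple of tuples, adj[v] = neighbors of vertex v.
    heaps: tuple of heap sizes; at: current vertex of the token.
    A move: pick u among neighbors(at) or at itself with heaps[u] > 0,
    remove any positive number of sticks from u, and move the token to u.
    A player with no legal move loses.
    """
    @lru_cache(maxsize=None)
    def win(heaps, at):
        for u in adj[at] + (at,):
            for take in range(1, heaps[u] + 1):
                h = list(heaps)
                h[u] -= take
                if not win(tuple(h), u):
                    return True  # some option leaves the opponent losing
        return False
    return win(tuple(heaps), at)
```

For example, on two joined vertices with heaps (1, 1), the first player loses, matching *1 + *1 = 0 in ordinary Nim.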

I have actually played this game very rarely, mostly because I can't imagine a standard starting position. Nevertheless, I helped show that the game is PSPACE-hard. I was lucky enough to present this result at BIRS in January, which was quite pleasant. The paper should appear in Games of No Chance 5. (arXiv version)

The cool thing about the computational complexity of this game is that when every vertex has a heap of size at most 1, the game is the same as undirected vertex geography, which is solvable efficiently. In order to get PSPACE-hard instances, we needed most vertices to have a heap of size 1, with only a few having a second stick on them (fewer than 1 in 7 of the vertices need the extra stick)! It's very interesting that such a small rule change can result in such a big complexity change!

Tuesday, March 29, 2011

PSPACE problems versus NP problems

I've recently been paying attention to community theoretical computer science forums, including sites such as StackExchange. This question recently came up:

Are PSPACE-complete problems inherently less tractable than NP-complete problems?

Naturally, since we don't know whether PSPACE and NP are equal, there is not yet a formal answer to this question. Intuitively, however, this becomes a question of whether it is harder to figure out which player can win a game than to find the solution to a (one-player) puzzle. The link above provides a wonderful bunch of points, but let me mention a few specifically. :)

* Ryan Williams notes that some randomized game tree solvers are very effective, and perhaps solving games isn't quite as terrifying as many might think.

* Neil de Beaudrap notes that verifying solutions is still extremely important, which we know how to do for NP-complete problems. Aaron Sterling backs this up with a great comment about humans being able to remember and describe proofs.

* Lance Fortnow points out that we know how to create average-case PSPACE-complete problems, but don't yet know how to do this for NP.

* Suresh Venkat mentions that in practice, NP-solvers are actually often very fast, and NP-completeness is not always seen as a barrier to computing.

This is one of the few questions on the forum that I have actually read multiple answers to, and it's one I feel I can read over and over. Personally, I find it hard to believe that PSPACE isn't much harder than NP, and professionally I'm partly banking on it by spending my time finding PSPACE-complete games!

Tuesday, February 8, 2011

Why bother about Computational Complexity?

In a recent phone call with my Ph.D. advisor, Shang-Hua Teng, I was reminded how important the computational complexity is for combinatorial games. The punchline is very simple:
"Hard games give humans a chance to beat machines."
If no efficient algorithm exists to determine who wins a game, then a program cannot always efficiently choose the best move to make. Hardness results are evidence that this is the case (although we still don't know that no efficient algorithm can solve NP-hard or PSPACE-hard problems). These games give us some sense of safety that the computer doesn't already have the game won.

This also applies to the concept of experts and novices. It's not too hard to become an expert in an easy game like Nim. Learn the evaluation trick (computable in polynomial time) and you're good to go. At this point, if you play against someone who doesn't know that simple trick, you can beat them every time (unless they are very lucky). In a game such as Go, much more practice is required to become an expert. There is no quick evaluation algorithm to learn. However, since the strategies are not as clear cut, there is a bigger chance that the novice will be able to beat the expert.
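The Nim "trick" mentioned above is just the XOR (nim-sum) of the heap sizes, computable in time linear in the number of heaps. A minimal version:

```python
from functools import reduce
from operator import xor

def nim_first_player_wins(heaps):
    """The classic Nim evaluation trick.

    The player to move wins (with perfect play) iff the XOR of all
    heap sizes, the "nim-sum", is nonzero.
    """
    return reduce(xor, heaps, 0) != 0
```

For example, heaps of 3, 5, and 6 have nim-sum 0, so the player to move loses against perfect play; change any one heap and the nim-sum becomes nonzero.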

When first designing Atropos, our goal was not just to find a 2-player game, but to find a (PSPACE) hard game. We changed the rules until we could get a proof of the hardness working. At that point we were quite satisfied! Some other games I've defined seem less impressive without a similar result. Learning that games are provably hard usually makes me want to play them more!

(I'm trying to keep track of game facts on this table, with quite a bit of bias towards computational complexity! Please let me know if there's something you'd like to add!)

Tuesday, February 1, 2011

Graph NoGo is NP-hard

Last week, I mentioned we had some computational complexity results concerning the game NoGo, which was the "new game" played at the game workshop at BIRS this year. After getting more comfortable with the game play, it felt like the game might be another great example of PSPACE-completeness. I put some effort into solving this, knowing it would be great to resolve the complexity while at the workshop.

Finding gadgets is tough, especially when dealing with the grid geometry of the NoGo board. Thus, I took a step backwards and considered NoGo played on a general graph. The rules to this game are still the same: each connected group of same-colored vertices must be adjacent to an uncolored vertex. Still, I had no luck finding a PSPACE-hardness reduction. Instead, I took another step backwards and considered showing NP-hardness for Graph NoGo.

Woohoo! Success! Here is the plan for this proof: we will reduce from the Independent Set problem, known to be NP-hard. Given an instance of the Independent Set problem (G, k), construct a Graph NoGo position, G', that only the Black player can move on, where each play corresponds to choosing a vertex in G, the original graph, to be part of the independent set. If the Black player can make k plays on the new game, G', then there will exist a corresponding independent set of k vertices in G. To turn this into a Graph NoGo position where the only question is "Can the Black player win if they are playing next?" instead of "Can the Black player make k plays?", we will add (k-1) unconnected nodes where only the White player can move. Now, in order for Black to win, they must be able to make k (or more) plays (assuming they are going first).

So, there are two things to show. First, we need gadgets in Graph NoGo where only one of the players can play. Second, we need to be able to glue those pieces together so that playing in one of these locations means you can't play in an "adjacent" one. As it turns out, neither of these is terribly hard.

As above, if G is our graph for the Independent Set problem, then for each vertex in G, we will use the following gadget for our game of Graph NoGo:
Here, x is the place Black can choose to play their stone in. The only other blank spot cannot be played by either Black or White. Notice that White cannot play on vertex x.

Now, for each edge in G connecting two vertices, we will connect only the x vertices of the gadgets with another gadget:
Now, if Black plays in one of the vertex-gadgets, he can't play in the neighboring ones. Thus, any set of Black plays on the Graph NoGo board G' corresponds to an independent set on G. For example, two adjacent vertices in G, x and y, and the edge that connects them will look like this:
Thus, if we can always efficiently answer the question "Can the Black player win this game of Graph NoGo?" then we can also solve Independent Set in polynomial time. Therefore, Graph NoGo is NP-hard.

One nice property of this reduction is that it preserves planarity of the graph. Since Independent Set is known to remain NP-hard on planar graphs, Graph NoGo is NP-hard even on planar graphs.

So, how do we now take either of those two steps forward, to reach a PSPACE result or an actual NoGo result? Perhaps next week...

Tuesday, January 25, 2011

Computational Complexity of Games

At BIRS this year, the game everyone was interested in was Go.

I guess it's hard to compete with that! Aside from Go, one of the big features was NoGo. We had the first ever NoGo world tournament, both for humans and computers. After those tournaments were over, Fan Xie, the reigning human champion, battled Bob Hearn's champion NoGo-playing program. When presented with a new game like NoGo, as a computer scientist I'm always interested in the computational complexity: how hard is it to determine which player has a winning move? How long would it take a computer program to figure this out?

For some games, we have a bad (or good, depending on how you see it) answer: a lot. For Chess, for example, an algorithm that can evaluate the "winnability" of every game requires an exponential number of steps in the worst case. When I say "exponential", I mean, "exponential in the amount of information needed to describe the game board". Thus, if I look at all Chess boards that can be described using 1,000,000 bits of information, some of those require around

a^(1,000,000)

steps to figure out who is going to win, where a is some constant greater than 1. This is not a feasible amount of time, even for computers. As the board descriptions get bigger, the number of steps increases by a multiplicative factor of a. We say that these problems are in the class EXPTIME because they take an exponential amount of time. (Figuring out who wins a game of Go is also one of these problems.) To be more general, we also include problems which can be solved in less time. Thus, the "complexity class" EXPTIME is the set of all problems which can be solved in an exponential amount of time (or less).

In general, if you want to figure out who wins a game by just trying all the options, this method usually takes an exponential amount of time because there are an exponential number of scenarios to reach the end of the game which must be tested.
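As a sketch of this exhaustive approach (my own minimal framing, not a serious solver): a position is a win for the player to move exactly when some option is a loss for the opponent, and checking this recursively visits a number of lines of play that is exponential in general.

```python
def first_player_wins(pos, options):
    """Brute-force win test: a position is a win iff some option is a loss.

    options(pos) yields the positions reachable in one move. With no
    memoization, this walks the entire game tree, which in general has
    exponentially many lines of play.
    """
    return any(not first_player_wins(opt, options)
               for opt in options(pos))

# Example ruleset: one-heap subtraction Nim where a move takes 1 or 2 sticks.
def take_1_or_2(n):
    return [n - t for t in (1, 2) if n >= t]
```

On this example ruleset, the brute-force search correctly reports that heaps whose size is a multiple of 3 are losses for the player to move.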

However, for some games, the player with a winning strategy can be figured out really quickly! In the game Nim, for example, there is a fast procedure ("trick") to figure out whether the next player can win. This can be done in a polynomial amount of time, meaning there is a polynomial function that describes the number of steps needed for a computer to determine who has a winning strategy, again in terms of the size of the information needed to describe the game state. Although polynomials can have large exponents, for any one given polynomial, that exponent remains constant. We say that problems that can be answered in a polynomial amount of time are in the complexity class P. Since these are both sets, and anything that can be solved in polynomial time can certainly be solved in exponential time, we know that P is a subset of EXPTIME; moreover, it is known that the two are not equal.

For some games, we don't know how long the fastest algorithm to determine winnability takes. Sometimes we describe these in terms of resources other than time. The complexity class PSPACE is the set of problems which can be solved with a polynomial amount of "space", meaning a polynomial amount of room to keep a record of important information during the algorithm. The space can be reused; erasing is allowed! Imagine that instead of restricting the amount of time you have to take a test, you are instead restricted in the amount of scrap paper you are given to do your calculations. It has been shown that P is a subset of PSPACE, which is in turn a subset of EXPTIME. It is not known whether either is equal to PSPACE (obviously both cannot be).

We can, however, say that some games are among the hardest problems in PSPACE: solving any problem in PSPACE can be reduced to solving them. These are known as PSPACE-complete. We show that a game (or any sort of computational problem) is PSPACE-complete by showing two things:
  • The game is in PSPACE
  • Any other problem in PSPACE can be efficiently rewritten as an equivalent instance of this game. (Known as hardness. Proving this means the game is PSPACE-hard.)
The notion of completeness works for any complexity class. Thus, for some class XXXXX, proving that a problem is in XXXXX and is XXXXX-hard means that the problem is XXXXX-complete. Chess, for example, is known to require an exponential amount of time because it has been shown to be EXPTIME-complete.

To show the second part (hardness) for a game, Y, we start with any other PSPACE-complete game, X, and show that any instance of X can be rewritten as an instance of Y where the next player wins Y exactly when the next player would win X. This is known as "finding a reduction" or "reducing X to Y". This is where the fun is! Check out this game table to see some known complexity results for games.

Why bother to show that a game is PSPACE-complete? Although we don't know whether P = PSPACE, it is likely that the two are not equal. Solutions to problems are generally considered "efficient" if there is a polynomial-time algorithm, so PSPACE-complete problems likely cannot be solved "efficiently".

This post has gotten long, but I realized I should explain a bit more about computational complexity before blundering forward into some complexity results from BIRS. Next week I will talk more about that, and especially how it relates to NoGo.

I'm sure I left lots out! Please help me out in the comments!

Friday, February 5, 2010

Game Description: Phutball

Michael Albert responded to my last post by mentioning the game Phutball as a classic result in exactly what I was looking for. Luckily, it is Friday and I haven't done a "game description" post in a while. Here goes!

Phutball is short for Philosopher's Football and is played on the intersections of a 19 x 15 grid (though other sizes are often used). On either end of the long side are the "end zones". There is one ball on the field and any number of "men" (I'll call them athletes). All of these objects reside on the intersections of the grid (see the wikipedia page for nice pictures). The goal of each player is to end a turn with the ball inside the opponents' end zone.

A turn consists of either creating a new athlete in any unoccupied intersection or moving the ball. The ball is moved by "jumping" one or more athletes, removing those athletes, then jumping again if the current player wishes. Each jump consists of moving the ball in any of the 8 directions, orthogonal or diagonal (diagonal intersections are considered adjacent), as long as there are one or more contiguous athletes in that direction. The ball is moved to the next open intersection in that direction, past the jumped athletes (the jump is not legal if no athlete is adjacent in that direction), and the athletes that were jumped are then removed.
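A hedged sketch of a single jump in code (my own representation; board-edge checks omitted for brevity) may make the move clearer: the ball flies over a contiguous line of athletes and lands on the first open intersection beyond them.

```python
def jump(ball, athletes, direction):
    """One Phutball jump, as described above (an illustrative sketch).

    ball: (row, col); athletes: frozenset of occupied intersections;
    direction: one of the 8 unit steps, e.g. (1, 0) or (-1, 1).
    Returns (new_ball, remaining_athletes), or None if the jump is illegal.
    """
    dr, dc = direction
    cur = (ball[0] + dr, ball[1] + dc)
    if cur not in athletes:          # must jump at least one athlete
        return None
    jumped = set()
    while cur in athletes:           # fly over the contiguous line of athletes
        jumped.add(cur)
        cur = (cur[0] + dr, cur[1] + dc)
    return cur, athletes - jumped    # land on the first open point beyond
```

A full turn would chain such jumps, choosing directions freely at each step, which is part of why the number of possible turns can explode.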

I have never played this game, so I have no idea how game play goes. Luckily, I have found some people here to play games with over lunch and this week have played my first ever games of Domineering and Cram! I think next week will consist of some Phutballing!

It relates to Tuesday's post because it is a game where figuring out whether you can win this turn is NP-complete. Furthermore, determining which player is winning has been shown to be PSPACE-hard.

I am looking forward to giving this game a try!

Tuesday, February 2, 2010

End States: Hardness without Randomness

My last post was about trying to show some games were (NP) hard to play even near the end. I have a few motivations for thinking this way, and some tie more into the basis of combinatorial games than others.

In the post I mentioned the game Cave Troll, which is a cool fantasy-setting board game. Each player controls a cohort of adventurers and monsters which meander around a cave, sometimes avoiding each other and sometimes ganging up on characters of other players. Every so often (determined randomly) the board is "scored" and players gain points based on the value of each room they have more adventurers in than those of other players.

The randomness comes from card drawing. Each player has their own small deck of cards, and when they draw the last card, the game ends after that card is played. Some cards are marked, and after enough of them are played, the board is scored. Other than this, there are no sources of randomness in the game. Players can only take actions on their own turns. Thus, if a player has only one card left in their deck, they know what that card is and can determine whether any sequence of actions on their turn will win the game. (Actions include moving characters one space or using their abilities, as well as drawing and playing one card. A player is limited to a certain number of actions per turn.) Without the randomness, this situation acts a lot more like a combinatorial game.

Thus, if the current player has only one card in their deck, we can ask the question: "Do you have a winning strategy this turn?" The hope is that this is NP-hard. (It would explain why my last turns always take too long!)

There is another reason to investigate end states instead of general states: I cannot think of another way to approach the analysis! Our game Dictator (sorry, no link yet!) is like this. This game simulates investigation of a set of ballots, until a dictator is found (or IIA is violated) as per Arrow's Theorem. Instead of trying to show hardness at any turn, I hope to show that it is hard to come up with a proof enabling you to win that turn.

In any case, there are a variety of games where the last few turns are meaningless, but some others where everything can change in the last few decisions! Which actual properties make these games different?

Monday, November 23, 2009

Initial Game Positions

Note: it's unlikely there will be a post on Wednesday, as I am traveling to Massachusetts for American Thanksgiving.

We know the following non-constructive result about Hex and Chomp: the first player has a winning strategy.

This only applies to games in the starting position, and for different reasons in each game (although both go by the "strategy-stealing argument" moniker). In Hex, we know that it is never bad to have more tiles of your color on the board, so it is naturally better to be playing first. In Chomp, we have a very special move from the first board. This move has, as children, all the other children of the initial position. Thus, if you go first, there are two possibilities: either one of those other children is a winning move (make it!) or none are, meaning that the special move is a winning move (make it!).

As mentioned, neither of these is constructive: neither hints at the winning strategy; they just show that such a strategy exists. We've noticed a similar situation with Atropos (at least as far as brute-force searches have allowed so far). A repeating pattern of winnability appears: boards with n open circles along a side have a winning strategy for the first player exactly when n is congruent to 0 or 3 modulo 4. The other two cases (1 and 2) have a winning strategy for the second player. This may seem a bit convoluted (especially since it's only been shown up through boards of size 11 or so) but it is the same as saying that the first player has a winning strategy when there's an even number of open circles on the initial board, and doesn't when there is an odd number.
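The translation between the two phrasings is a quick arithmetic check, assuming (as I read the board geometry) that a board with n open circles along a side contains n(n+1)/2 open circles in total:

```python
# A board with n open circles per side has n*(n+1)//2 open circles in all
# (assuming the open region forms a triangular array). Check that
# "n is 0 or 3 mod 4" matches "the total number of open circles is even":
for n in range(1, 200):
    total = n * (n + 1) // 2
    assert (n % 4 in (0, 3)) == (total % 2 == 0)
print("parity characterizations agree")
```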

In general, Atropos and Hex are difficult to analyze (both are PSPACE-complete) and Chomp has resisted all efforts at analysis. Could it be that for many games, the initial positions are just inherently easier than a general game state? To some, this is very disconcerting. In some cases, the general-state argument is a bit distracting: it could be that a winning strategy exists from the initial position and is easy to describe. Complexity results come from reductions that don't usually say anything about the difficulty of the game in common positions.

Could it be, then, that this easiness comes from the size of information needed to describe those initial positions? In a game of Hex, to describe an intermediate state, you need to list the color (or lack of) for each hexagon. With an initial state, however, you only need to specify the size of the game board.

Have a great Thanksgiving! Also, if you will happen to be in the South-Central Ohio area next week, I will be giving a talk here at Wittenberg on Matchmaker on Thursday (Dec. 3). More info about that soon!

Friday, October 30, 2009

Misere Hardness

I'm very pleased with the way my software engineering class is going. I've accidentally asked the students to code up some difficult games at times, but they've done a good job of figuring out exactly how to fix these issues. For the most part, using a game suite as the continuing project has worked extremely well!

Early on, I asked them to include both Nim and misere Nim. As with all misere games, misere Nim is the same game, except that the person who makes the last move loses. Thane Plambeck used to maintain Advances in Losing, a blog solely about misere games. My first combinatorial game, Atropos, was misere: the first player to create a 3-colored simplex (triangle) loses. The misere notion is very intuitive for many games.
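Nim is one of the few games where the misere version is fully solved; a minimal sketch of the classic rule (Bouton's analysis, with the well-known misere twist; the function name is mine):

```python
from functools import reduce
from operator import xor

def nim_first_player_wins(heaps, misere=False):
    """Does the player to move win Nim on these heaps with optimal play?
    Normal play: win iff the xor of the heap sizes is nonzero.
    Misere play: the same, unless every heap has size <= 1, in which
    case the outcome flips (win iff there are evenly many 1-heaps)."""
    nonzero = [h for h in heaps if h > 0]
    if misere and all(h == 1 for h in nonzero):
        return len(nonzero) % 2 == 0
    return reduce(xor, nonzero, 0) != 0
```

For example, 3-5-7 Nim is a first-player win under both conventions, since the xor is nonzero and some heap exceeds 1; the two conventions only disagree once every heap is down to a single stick.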

As we continue with the ongoing class project, I have kept adding difficult pieces, but at some point I stopped asking them to add more games to the suite. Oops! Currently, I've given them another tricky task: have users make moves through a series of mouse clicks (instead of entering choices in text boxes). I figured it was time to throw in some new games along with that. At the same time, I hope they realize they can apply "misereness" to any game, not just Nim.

Solution? They've already programmed simple versions of Kayles and Hex, so why not include misere Kayles and misere Hex? Just thinking about it, I became immediately interested in misere Hex. How does that game work? Whichever player first completes a bridge between their two sides loses? Weird! Who wins misere Hex? What is the complexity of this game?

In 1999, Jeffrey Lagarias and Daniel Sleator tackled that first question. They show that the immediate intuition---that the first player never has a winning strategy---is incorrect, and that winnability instead depends on the parity of the board size. They do this by showing that the losing player can stretch out play until the final hexagon has been colored. Thus, on boards with an even number of hexagons, the first player can win! The proof they give applies to games starting from the initial (empty) configuration.

Does this answer the complexity question? Not in any obvious way. To settle the complexity, an algorithm would have to work from any game state, and one can construct boards where play cannot be dragged out to fill every cell. (Give Red two nearly-complete paths, each missing a single hexagon, and let the rest of the board be filled in. If it's Red's turn, what can they do?) Still, it could be that the proof generalizes well into an algorithm.

Thinking from the complexity angle, as this blog (too?) often does, is there anything we can say about the relationship between the complexity of a game and its misere counterpart? Again, I am interested in considering Atropos, which is already misere. In the non-misere version, the first player to create a 3-colored triangle wins. In this impartial game, each player may color one circle any of the three available colors. The circles are laid out similarly to Hex---so that each has six neighbors---inside a triangle whose borders are pre-colored to satisfy the conditions of Sperner's Lemma. This means that a play along the border can win: there is always an open circle with two differently-colored neighbors, so simply paint that circle with the remaining color.

Thus, the strategy from the initial position is easy: choose one of those border circles and paint it the third color. Game over. In mid-game positions, however, a player must color a circle adjacent to the last-painted circle (if such an open circle exists). Thus, the game might not be so simple from an arbitrary position; the useful border circles may not be adjacent to the last play. (But then why wouldn't the first player have just won the game? Why would you ever reach such a situation?)

In general, are there any results about misere games and complexity? It's been a while since we phrased a question, so let's do it: (I've been marking the relevant posts with question# labels).

Question (2): Given a game G and its misere version, M, must determining the winner of at least one of the two be in P?

I think this is the best way to phrase the "What is the relationship between the complexity of G and M?" question. Consider Nim and misere Nim: both are easy. With Hex and Atropos, each has a PSPACE-complete version, but the opposite version looks like it might be easy. Is there a game I'm overlooking which is hard both normally and misere...ishly? (someone help me out with French!)

Of course, I'm still curious about misere Hex.

Question (3): What is the complexity of determining whether the current player in misere Hex has a winning strategy?

For those interested, user Zickzack at boardgamegeek.com has a good description and list of misere games.

Friday, October 16, 2009

Complexity of Games with Randomness

It is known that Hex is a PSPACE-hard game. (See this page for the "Hex ist PSPACE-vollstaendig" paper by Stefan Reisch.) Once we add a bit of randomness, however, the strategies suddenly become far easier! As I think I've mentioned before, if you change the game so that the current player is chosen by the flip of a coin, the winning strategy becomes simple.

I'm also interested in trying to analyze some games by removing their sources of randomness. Instead of relying on die rolls, for example, give each player the set of potential die rolls and let them choose one per turn, removing each chosen "roll" from their set of available options. Once all options are used up, either the player has no more moves or they receive a fresh set of fake rolls. I don't know of any analytical results like this, but I would love to see them!

On the other hand, I also have some interest in answering complexity questions of games that include randomness. For example: What is the probability that the current player has a winning strategy? Alternatively, rephrasing as a decision question, does the current player have a strategy that will allow them to win with probability > .5?
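One way to make that question concrete is an "expectimax" search: flip a fair coin before each move to choose the mover, let one player maximize and the other minimize the probability that player 1 wins, and recurse. A small sketch using random-turn Nim as the stand-in game (my own formulation, not from any paper I know of):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p1_win_prob(heaps):
    """Probability that player 1 wins random-turn Nim: before each move a
    fair coin picks the mover; a player who cannot move loses (normal play)."""
    moves = [tuple(sorted(heaps[:i] + (k,) + heaps[i + 1:]))
             for i, h in enumerate(heaps) for k in range(h)]
    if not moves:  # whoever the coin picks here cannot move and loses
        return 0.5
    vals = [p1_win_prob(m) for m in moves]
    # player 1 maximizes when picked; player 2 minimizes
    return 0.5 * max(vals) + 0.5 * min(vals)
```

For an impartial game like Nim this always returns exactly 0.5: both players face the same move set, so a short induction (every child has value 0.5, hence so does the parent) collapses the whole tree. The question only gets interesting for partisan games, which is consistent with random-turn Hex being the example where the strategy simplifies.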

I don't know of any results like this. Do they exist?

Wednesday, September 30, 2009

No Loops and EXPTIME versus PSPACE

Last time I commented on adding rules to unbounded-turn games to prevent stalemate-causing loops of play. On the other side, there is an aspect of games with a bounded number of turns: they are often shown to be PSPACE-complete rather than EXPTIME-complete (unless they aren't hard at all, as with Nim). Thus, if PSPACE does not equal EXPTIME, these games are in PSPACE and not in EXPTIME \ PSPACE (or EXPTIME - PSPACE, depending on which "set minus" notation you prefer).

Following our previous discussion of constraint logic, these two classes correspond to whether a game is bounded or unbounded. Thus, we would expect potentially hard games such as Oshi to be EXPTIME-complete, while games such as Tsuro are better candidates for PSPACE-completeness. (I'm not sure whether Tsuro is a hard game at all; I finally got to play a bit over the weekend. Perhaps it would be more difficult if I didn't have such a small hand of tiles.)

Interestingly, Go has some play variations which threaten to go against this. Using the ko rule, which prevents looping back to a position from the most recent turn, the game is EXPTIME-complete. Without this rule, however, it is only known that the game is PSPACE-hard, though not necessarily complete. Some versions of Go use the superko rule, which prevents moving to any position that ever previously existed. I personally do not know the complexity of this version. Sadly, I am not familiar with Go and thus can't say which rules are most often used.

Mostly I think it's interesting that this very obvious facet of games---whether or not they have an unbounded number of turns---is a strong intuitive tool in game complexity.

Friday, September 25, 2009

Constraint Logic: Modelling Computation as Games

The past few years I've been very interested in a family of games from Erik Demaine and Robert Hearn. This family, known as Constraint Logic, revolves around making moves which consist of flipping the direction of edges in a graph. The rules for when edges may be flipped differ only slightly for each version of the game in the family (whether the edge can ever be flipped back to its original state and the number of players in the game).

Nevertheless, they are able to divide the family into games that are complete for many different complexity classes (and one that is undecidable). The brilliant construction is potentially the cleanest way to characterize these classes in terms of combinatorial games. This version of the paper is very easy to digest and clearly describes the graphs used in all Constraint Logic games. The long version of the paper, which contains example reductions to various games, is Hearn's PhD thesis. The examples there show the nature of using this as a vehicle to show completeness in games: Constraint Logic is flexible enough to be used for any game (for example, it handles planar situations well) but is already a game itself.

I feel like I have kept running into different versions of this framework for the past few years (though I don't recall how) but I don't know whether it has actually been used yet by anyone other than its creators. Does anyone know of an appearance of Constraint Logic in another reduction? If so, I would love to hear about it!

Wednesday, September 9, 2009

Kayles of the planar variety

Before I knew much of anything about CGT, while looking into the complexity (of "Who can win?") of another game, I noticed a cool relationship: all of the "tricky" cases boiled down to solving a different graph game. In this game, a move consists of choosing a node, then removing it and all its neighbors from the graph. (Thus, under normal play, the player who can no longer select a node---because the graph is empty---loses.)
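Here's a brute-force sketch of that node-removal game, fine for toy graphs only since the search is exponential (the function name and bitmask encoding are mine):

```python
from functools import lru_cache

def first_player_wins(n, edges):
    """Solve the node-removal game on a graph with vertices 0..n-1:
    a move picks a surviving vertex and deletes it together with all
    of its surviving neighbors; under normal play, no move = loss."""
    adj = [0] * n
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u

    @lru_cache(maxsize=None)
    def wins(alive):  # 'alive' is a bitmask of remaining vertices
        for v in range(n):
            if alive >> v & 1:
                # does removing v and its neighbors leave a losing position?
                if not wins(alive & ~(1 << v) & ~adj[v]):
                    return True
        return False  # no move wins (or no move exists): mover loses

    return wins((1 << n) - 1)
```

For instance, on a 3-vertex path the first player wins by taking the middle vertex, which wipes out the whole graph; on two isolated vertices the second player takes the last one and wins.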

How interesting! I should look further into this game! It seemed simple, but perhaps not simple enough. I looked around, but didn't know the best places to look, and thus didn't find out that this exact game is known as Kayles until after I'd spent a lot of time rediscovering things that were already known. Oops.

As it turns out, Kayles is a very fundamental impartial game, and it's not surprising that my Atropos situations were looking like Kayles boards.

Just knowing this name, however, opened up my resources greatly! Sometimes it's not what you're Googling for, but whether you know what the thing you're Googling for is called. I was interested in the complexity of Planar Kayles (Kayles played only on planar graphs), which is still an open problem (as far as I know). Although we know from Schaefer that Kayles is PSPACE-complete on general graphs, some restricted classes of graphs are efficiently solvable.

The most recent example I know of is from Rudolf Fleischer and Gerhard Trippen, who showed that Kayles on star graphs of bounded degree is polynomial-time solvable. This, along with Bodlaender's result for graphs with small asteroidal number, are the two results I'm a bit familiar with. Still, neither answers the problem of playing Kayles on arbitrary planar graphs.

Is there a general intuition amongst Kayles enthusiasts that goes either way? Does anyone have a good conjecture about the complexity of Planar Kayles?

Monday, September 7, 2009

Adding Resources and Misereness

I mentioned early on that I wanted this blog to list lots of good resources for CGT material. I've gotten sidetracked saying a lot of other things, but today I will pass along much of what I've been emailed, and use it to update the surroundings of this blog.

Richard Nowakowski (one of the three authors of Lessons in Play) provides a link to articles from the Games of No Chance series, which he edited. Aside from this, his game page provides good links for "Budding Combinatorial Game Theorists", including Aaron Siegel's Combinatorial Game Suite.

Martin Mueller mentions that his team's Fuego Go-playing program beat a 9-dan professional last month. He provides some games on the Fuego page. (I'll quickly admit to not knowing (yet) the rules of Go.) I am not familiar with the program, and thus don't know whether it uses any randomization, as mentioned in the complexity blog.

Aviezri Fraenkel's selected CGT bibliography certainly deserves a link. It's been a long time since I saw this and said "Woah!"

I added a link to Thane Plambeck's misere games blog. Unlike the basic definition of a combinatorial game, a misere game is one in which the player who makes the last move loses. When I was first taught Nim (called "Sticks" by my 8th-grade teacher), I learned it as a misere game: the player who took the last "stick" lost.

Although I learned the 3-5-7 version of Nim thoroughly, I was ignorant of the evaluation trick until midway through college. In my freshman year, I challenged my linear algebra professor to a game and he proceeded to mentally process the trick and make all the right moves---until the end of the game. He was playing the standard, non-misere version, and at the end asked "Wait... which one of us wins?"

I learned that version of Nim, Krypto and Equations all at the same time; which games did you learn early on that helped spark your interest in CGT?