Team A and Team B are perennial football rivals. Every year they meet for a series of games. The first team to win four games gets to take home the Golden Teapot and keep it for a year. The teams are evenly matched except for a small home advantage. When playing at home, each team has a 51 per cent chance of winning. (And a 49 per cent chance of losing. No ties are allowed.) Every year, the first three games are played at the home of Team A, and the rest at the home of Team B. Which team is more likely to win the Golden Teapot?
Or, what is the probability that Team A wins the Golden Teapot?
To apply Markov chains to a probability puzzle, the puzzle must satisfy the Markov property (memorylessness). That holds here: each game's outcome depends only on where it is played, not on the results of earlier games.
However, Markov chains are still the wrong tool for the job. Tracking a first-to-four series needs a state for every score from (0,0) to (4,4), 25 states in all, and hence a 25x25 transition matrix. Worse, every game pushes the score strictly forward: the chain has no loops and every path ends in an absorbing state, which makes it rather boring as a Markov chain.
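Since the scores only move forward, a plain recursion over the score does the job with no matrix at all. A minimal sketch (the function name and structure are my own, not from any particular library):

```python
from functools import lru_cache

# First team to 4 wins takes the series. Games 1-3 are at Team A's
# home (A wins with probability 0.51); games 4-7 are at Team B's
# home (A wins with probability 0.49).

@lru_cache(maxsize=None)
def p_a_wins(a, b):
    """Probability that Team A wins the series from score (a, b)."""
    if a == 4:
        return 1.0
    if b == 4:
        return 0.0
    # Games played so far = a + b, so a + b < 3 means the next game
    # is still at Team A's home.
    p = 0.51 if a + b < 3 else 0.49
    return p * p_a_wins(a + 1, b) + (1 - p) * p_a_wins(a, b + 1)

print(f"Team A: {p_a_wins(0, 0):.15f}")
print(f"Team B: {1 - p_a_wins(0, 0):.15f}")
```

The memoization is optional at this size, but it turns the recursion into exactly the 25-state dynamic program the transition matrix would have encoded.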
Team A: 0.496874249699800
Team B: 0.503125750300200

So Team B is slightly more likely to take home the Golden Teapot: hosting the four potential later games outweighs hosting the first three.