A Markov Game Model for Valuing Player Actions in Ice Hockey

Master's thesis
Simon Fraser University
Keywords: Markov Game, Q-learning, sports analytics, ice hockey, player ranking
Abstract:


Evaluating player actions is important for general managers and coaches in the National Hockey League. Researchers have developed a variety of advanced statistics to assist them, but these statistics typically assign a fixed value to each action, failing to account for the context in which an action occurs or to look ahead to its long-term effects. I apply the Markov Game formalism to play-by-play events recorded in the National Hockey League to develop a novel approach to valuing player actions. The Markov Game formalism incorporates both context and lookahead across play-by-play sequences. A dynamic programming algorithm learns the values of the game states of the Markov Game model; these values quantify the impact of actions on goal scoring, receiving penalties, and winning games. Learning is based on a massive dataset containing over 2.8 million National Hockey League events. The impact of a player action varies widely depending on context, and the same action can have positive or negative effects. My results show that using context features and lookahead makes a substantial difference to action impact scores, and that accounting for context and lookahead increases the information captured by the model. Players are ranked according to the aggregate impact of their actions and compared with previous player metrics, such as plus-minus, total points, and salary, as well as recent advanced statistics.
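To make the abstract's approach concrete, the following is a minimal sketch of how a dynamic programming algorithm can learn state-action values from observed play-by-play sequences. The data format (lists of `(state, action, reward)` triples per sequence) and all function names are hypothetical illustrations, not the thesis's actual implementation; transition probabilities are estimated from observed frequencies, and values are propagated by iterating the expectation equation Q(s,a) = E[r] + Σ P(s',a' | s,a) · Q(s',a').

```python
from collections import defaultdict

def learn_q_values(sequences, n_iters=100):
    """Illustrative on-policy dynamic programming over play-by-play data.

    sequences: list of event sequences; each sequence is a list of
    (state, action, reward) triples (hypothetical format).
    Returns a dict mapping (state, action) pairs to learned Q-values.
    """
    # Estimate transition frequencies (s, a) -> (s', a') and
    # accumulate immediate rewards from the observed sequences.
    trans = defaultdict(lambda: defaultdict(int))
    reward_sum = defaultdict(float)
    count = defaultdict(int)
    for seq in sequences:
        for i, (s, a, r) in enumerate(seq):
            count[(s, a)] += 1
            reward_sum[(s, a)] += r
            if i + 1 < len(seq):  # terminal events have no successor
                s_next, a_next, _ = seq[i + 1]
                trans[(s, a)][(s_next, a_next)] += 1

    # Iterate the value-expectation update until (approximate) convergence:
    # Q(s,a) = mean reward + sum over successors of P(s',a'|s,a) * Q(s',a').
    Q = defaultdict(float)
    for _ in range(n_iters):
        for sa, n in count.items():
            expected = reward_sum[sa] / n
            for sa_next, c in trans[sa].items():
                expected += (c / n) * Q[sa_next]
            Q[sa] = expected
    return dict(Q)
```

On a toy sequence such as `[("even", "shot", 0), ("even", "goal", 1)]`, the reward for the goal propagates back so that the preceding shot also receives a positive value, illustrating the lookahead that fixed-value statistics lack.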