Deep Learning for Predicting Human Strategic Behavior

Master's thesis
University of British Columbia
Keywords: artificial intelligence, machine learning, behavioural game theory, deep learning

Computation is abundant, and we are surrounded by devices that grow faster each year. Artificial neural networks have been able to harness this computation, creating a leap forward in our capacity to build more intelligent programs. Nonetheless, modern deep neural networks, with their hundreds of millions of connections, are still orders of magnitude smaller than their biological counterparts in the human brain, and they require substantial amounts of computation to perform accurately. The reason is that every artificial neuron must be evaluated in order to produce an answer.
This thesis proposes an alternative formulation of deep neural networks in which the network learns from examples to regulate its own computation. In other words, given a query, it learns to turn off the regions of itself that are unnecessary, much as in biological brains, where only certain regions activate depending on the task at hand.
This is achieved by combining the frameworks of deep learning and reinforcement learning. The network is trained through deep learning to make correct predictions, while simultaneously being trained through reinforcement learning to turn off regions of itself. To this end, the network receives a high reward when it activates few regions while still predicting correctly, and is penalized for deactivating regions it needed. This creates a trade-off between accuracy and computation: the more computation a network's designer is willing to allow, the more accurate the network will be, at the cost of more time and energy. This thesis also proposes a way to control this trade-off through a single sparsity-ratio parameter.
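To make the idea concrete, the training signal described above can be sketched in miniature. The following is a toy illustration, not the thesis's actual architecture: a hypothetical "network" of eight gateable blocks, of which only the first two actually contribute to a correct prediction, and a per-block Bernoulli gating policy trained with a REINFORCE-style update. All names, block counts, and constants here are made up for the sketch; the sparsity penalty plays the role of the trade-off knob discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 gateable blocks; only the first 2 help the prediction.
n_blocks = 8
useful = np.array([1., 1., 0., 0., 0., 0., 0., 0.])

logits = np.zeros(n_blocks)   # gating policy: one Bernoulli logit per block
sparsity_penalty = 0.3        # stand-in for the sparsity-ratio parameter
lr, baseline = 0.5, 0.0


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


for _ in range(2000):
    p = sigmoid(logits)
    gates = (rng.random(n_blocks) < p).astype(float)  # sample an on/off mask
    # Toy "accuracy": fraction of the useful blocks that were left on.
    accuracy = (gates * useful).sum() / useful.sum()
    # Reward is high when the prediction is right AND few regions are active.
    reward = accuracy - sparsity_penalty * gates.mean()
    # REINFORCE update with a running baseline to reduce variance;
    # for a Bernoulli gate, the grad of the log-prob of the sample is (gates - p).
    logits += lr * (reward - baseline) * (gates - p)
    baseline += 0.05 * (reward - baseline)

p = sigmoid(logits)
# After training, the policy keeps the useful blocks on and shuts the rest off.
```

Raising `sparsity_penalty` pushes the policy toward deactivating more blocks (faster, less accurate); lowering it does the opposite, which is precisely the accuracy-versus-computation trade-off the thesis exposes through its sparsity-ratio parameter.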
In practice, as demonstrated in this thesis, this approach yields neural networks that run 5 to 10 times faster on tasks such as image recognition while retaining the accuracy of their fully evaluated counterparts. Such gains also allow intelligent systems to consume less energy, which makes deploying artificial neural networks on battery-powered devices, such as intelligent assistive devices, more appealing. It also potentially enables much larger neural networks to be trained, taking us a step closer to realistically training mammal-scale artificial networks.