By: Pierluigi Vito Amadori, Senior Engineer – Global R&D, Tim Bradley, Engineer II – Global R&D, Guy Moss, Senior Principal Engineer – Global R&D, Ayush Raina, Senior Machine Learning Engineer, Ryan Spick, Engineer – Global R&D

Achieving human-like behaviour in autonomous agents is a longstanding challenge in game development. We have been exploring how to train agents to play the iconic game “Doom 2” with Imitation Learning (IL), using raw pixel data as input. We also compare IL and Reinforcement Learning (RL) in terms of humanness, analysing camera movement and trajectory data.

Using behavioural cloning, our study examines whether individual models can learn distinct behavioural traits. By training agents to replicate human players with different play styles, we show that it is feasible to produce aggressive, passive, or distinctly human-like behaviour, in contrast to conventional artificial intelligence techniques. The proposed methods add nuanced depth and human-like qualities to video game agents.
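At its core, behavioural cloning treats imitation as supervised learning: the policy is trained to predict the human player's recorded action from the corresponding observation. The sketch below illustrates this with a linear softmax policy on synthetic data; the names, sizes, and architecture are illustrative assumptions, not the setup used in the paper (which trains on Doom 2 pixel frames).

```python
import numpy as np

rng = np.random.default_rng(0)

N_OBS = 16      # flattened observation size (real agents use pixel frames)
N_ACTIONS = 4   # e.g. forward, backward, turn left, turn right

def softmax(z):
    # Numerically stable softmax over the action dimension.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic "demonstrations": each action is correlated with one block of
# observation features, standing in for recorded human play.
obs = rng.normal(size=(512, N_OBS))
actions = obs.reshape(512, N_ACTIONS, -1).sum(axis=2).argmax(axis=1)

# Behavioural cloning = minimise cross-entropy between the policy's action
# distribution and the demonstrated actions.
W = np.zeros((N_OBS, N_ACTIONS))
lr = 0.5
for _ in range(200):
    probs = softmax(obs @ W)                       # policy over actions
    onehot = np.eye(N_ACTIONS)[actions]
    W -= lr * obs.T @ (probs - onehot) / len(obs)  # cross-entropy gradient step

accuracy = (softmax(obs @ W).argmax(axis=1) == actions).mean()
print(f"imitation accuracy on demonstrations: {accuracy:.2f}")
```

In practice, the linear layer would be replaced by a convolutional network over pixel frames, and separate datasets (or conditioning signals) per play style allow one model to capture multiple behavioural traits.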

The IL-trained agents perform on par with the average players in our dataset, surpassing the less adept ones. While they do not reach the ‘superhuman’ performance of prevalent RL approaches, our findings show that they exhibit substantially more human-like behavioural traits.

For more information, see our research paper: [2401.03993] Behavioural Cloning in VizDoom (arxiv.org)