New AI technology could change game preparation for Super Bowl teams
Players and coaches from the Philadelphia Eagles and Kansas City Chiefs will spend hours in film rooms this week in preparation for the Super Bowl. They study positions, plays and formations and try to identify which opponent tendencies they can exploit, while watching their own film to shore up weaknesses.
New artificial intelligence technology being developed by engineers at Brigham Young University could significantly reduce the time and costs that go into film study for Super Bowl teams (and all NFL and college football teams), while also improving game strategy by harnessing the power of big data.
BYU professor DJ Lee, master’s student Jacob Newman and Ph.D. students Andrew Sumsion and Shad Torrie use AI to automate the time-consuming process of manually analyzing and annotating game footage. Using deep learning and computer vision, the researchers have developed an algorithm that can consistently locate and label players from game film and determine the formation of the offensive team – a process that can consume the time of an array of video assistants.
“We had a conversation about this and realized: We could probably teach an algorithm to do this,” says Lee, professor of electrical and computer engineering. “So we set up a meeting with BYU Football to learn their process and we immediately knew, yes, we can do this a lot faster.”
Although still in the early stages of research, the team has already achieved over 90% accuracy in detecting and tagging players with their algorithm, along with 85% accuracy in determining formations. They believe the technology can ultimately eliminate the need for the inefficient and tedious practice of manual annotation and analysis of recorded video footage used by NFL and college teams.
Lee and Newman first watched real game footage of BYU’s football team. When they started analyzing it, they realized they needed some additional angles to properly train their algorithm. So they bought a copy of Madden 2020, which shows the field from above and behind the offense, and manually tagged 1,000 images and videos from the game.
They used that footage to train a deep learning algorithm to locate the players; those detections are then fed into a Residual Network framework to determine what position each player plays. Finally, their neural network uses the location and position information to determine which formation (out of more than 25 formations) the offense is using – from the Pistol Bunch TE to the I Form H Slot Open.
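The researchers have not published their code, but the final stage described above – mapping labeled player locations to a named formation – can be illustrated with a toy rule-based sketch. The earlier deep-learning stages are assumed to have already produced labeled (x, y) coordinates; all function names, labels and thresholds here are illustrative assumptions, not BYU's actual model.

```python
# Toy sketch of the pipeline's final stage: inferring an offensive
# formation from player positions that earlier stages (detection, then
# position classification via a residual network) are assumed to supply.

def classify_formation(players):
    """players: list of (label, x, y) tuples for the offense.

    x is the lateral coordinate across the field; y is depth behind
    the line of scrimmage.
    """
    # Gather the backfield: quarterback, fullback, running back.
    backs = [(x, y) for lbl, x, y in players if lbl in ("QB", "FB", "RB")]
    # Find the center as the reference point on the line of scrimmage.
    center = next((x, y) for lbl, x, y in players if lbl == "C")
    # I-formation: three backs stacked directly behind the center.
    stacked = all(abs(x - center[0]) < 0.5 for x, y in backs)
    if len(backs) == 3 and stacked:
        return "I Form"
    return "Other"

# A canonical I-formation: center at x = 0, backs stacked behind him.
offense = [("C", 0.0, 0.0), ("QB", 0.0, 1.0),
           ("FB", 0.0, 4.0), ("RB", 0.0, 7.0)]
print(classify_formation(offense))  # → I Form
```

A real system would replace these hand-written rules with a trained classifier over all 25+ formations, but the input/output shape of the stage is the same: labeled coordinates in, formation name out.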
Lee said the algorithm can identify formations with 99.5% accuracy if the player location and label information is correct. The I-formation, where four players line up one behind the other – center, quarterback, fullback and running back – proved to be one of the most challenging formations to identify.
Lee and Newman said the AI system could have applications in other sports as well. In baseball, for example, it could pinpoint players’ positions on the field and identify common patterns to help teams refine how they defend against particular hitters. Or it could be used to locate soccer players to help determine more efficient and effective formations.
“Once you have this data, you can do a lot more with it; you can take it to the next level,” Lee said. “Big data can help us know this team’s strategies, or that coach’s tendencies. It can help you know if they’re likely to go for it on 4th and 2 or if they’re going to punt. The idea of using AI for sports is really cool, and if we can give them even a 1% advantage, it’s worth it.”