
AI learns to outsmart humans in video games - and real life

Speed around a French village in the video game Gran Turismo and you might spot a Corvette behind you trying to catch your slipstream.
This image released by Sony Interactive Entertainment shows a scene from the video game Gran Turismo 7. Gran Turismo players have been competing against computer-driven race cars since the franchise launched in the 1990s, but the new AI driver that was unleashed last week on Gran Turismo 7 is smarter and faster because it's been trained using the latest AI methods. (Sony Interactive Entertainment via AP)

The technique of using the slipstream of a car ahead to speed up and overtake it is one favored by skilled players of PlayStation's realistic racing game.

But this Corvette driver is not being controlled by a human: it's GT Sophy, a powerful artificial intelligence agent built by PlayStation-maker Sony.

Gran Turismo players have been competing against computer-generated racecars since the franchise launched in the 1990s, but the new AI driver that was unleashed last week on Gran Turismo 7 is smarter and faster because it's been trained using the latest AI methods.

"Gran Turismo had a built-in AI existing from the beginning of the game, but it has a very narrow band of performance and it isn't very good," said Michael Spranger, chief operating officer of Sony AI. "It's very predictable. Once you get past a certain level, it doesn't really entice you anymore."

But now, he said, "this AI is going to put up a fight."

Visit an AI research lab at universities and companies like Sony, Google, Meta and Microsoft, and it's not unusual to find AI agents like Sophy racing cars, slinging angry birds at pigs, fighting epic interstellar battles or helping human gamers build new Minecraft worlds, all part of the job description for computer systems trying to learn how to get smarter in games.

But in some instances, they are also trying to learn how to get smarter in the real world. In a January paper, a University of Cambridge researcher who built an AI agent to control Pokémon characters argued it could "inspire all sorts of applications that require team management under conditions of extreme uncertainty, including managing a team of doctors, robots or employees in an ever-changing environment, like a pandemic-stricken region or a war zone."

And while that might sound like a kid making a case for playing three more hours of video games, the study of games has been used to advance AI research, and to train computers to solve complex problems, since the mid-20th century.

Initially, AI was tested on games like checkers and chess to see whether it could win at strategy games. Now a new branch of research is more focused on performing open-ended tasks in complex worlds and interacting with humans, not just beating them.

"Reality is like a super-complicated game," said Nicholas Sarantinos, who authored the Pokémon paper and recently turned down a doctoral offer at Oxford University to start an AI company aiming to help corporate workplaces set up more collaborative teams.

In the web-based Pokémon Showdown battle simulator, Sarantinos developed an algorithm to analyze a team of six Pokémon, predicting how they would perform based on all the possible battle scenarios ahead of them and their comparative strengths and weaknesses.
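The paper's exact method isn't reproduced here, but the core idea of scoring a team against every possible matchup can be sketched in miniature. Everything below is illustrative: a tiny rock-paper-scissors type chart stands in for real Pokémon data, and the scoring rule (worst case over opponents) is one plausible choice, not Sarantinos's actual algorithm.

```python
# Entirely illustrative sketch -- not Sarantinos's actual algorithm. A tiny
# rock-paper-scissors type chart stands in for real Pokémon data; the idea
# is scoring a team by checking every member against every possible opponent.

# EFFECTIVENESS[attacker][defender] = damage multiplier.
EFFECTIVENESS = {
    "fire":  {"fire": 0.5, "water": 0.5, "grass": 2.0},
    "water": {"fire": 2.0, "water": 0.5, "grass": 0.5},
    "grass": {"fire": 0.5, "water": 2.0, "grass": 0.5},
}

def matchup_score(member, opponent):
    """Net advantage of one team member: damage dealt minus damage taken."""
    return EFFECTIVENESS[member][opponent] - EFFECTIVENESS[opponent][member]

def team_score(team):
    """For each possible opponent, take the team's best answer; the team is
    only as strong as its weakest matchup (worst case over opponents)."""
    return min(max(matchup_score(m, opp) for m in team)
               for opp in EFFECTIVENESS)

balanced = ["fire", "water", "grass", "fire", "water", "grass"]
skewed = ["fire"] * 6
print(team_score(balanced), team_score(skewed))  # prints: 1.5 -1.5
```

The balanced team has a strong answer to every threat (score 1.5), while the one-type team can be exploited by water opponents (score -1.5). A real analysis would work over far richer battle state than a toy type chart, but the enumerate-and-score structure is the same.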

Microsoft, which owns the popular Minecraft game franchise as well as the Xbox game system, has tasked AI agents with a variety of activities, from steering clear of lava to chopping trees and making furnaces. Researchers hope some of their learnings could eventually play a role in real-world technology, such as how to get a home robot to take on certain chores without having to program it to do so.

While it "goes without stating" that real humans behave quite differently from fictional video game creatures, "the core ideas can still be used," Sarantinos said. "If you use psychology tests, you can take this information to conclude how well they can work together."

Amy Hoover, an assistant professor of informatics at the New Jersey Institute of Technology who's built algorithms for the digital card game Hearthstone, said "there really is a reason for studying games" but it is not always easy to explain.

"People aren't always understanding that the point is about the optimization method rather than the game," she said.

Games also offer a useful testbed for AI, including for some real-world applications in robotics or health care, since trying things out in a virtual world is safer, said Vanessa Volz, an AI researcher at the Danish startup Modl.ai, which builds AI systems for game development.

But, she adds, "it can get overhyped."

"It's probably not going to be one big breakthrough and that everything is going to be shifted to the real world," Volz said.

Japanese electronics giant Sony launched its own AI research division in 2020 with entertainment in mind, but its work has nonetheless attracted broader academic attention. A paper introducing Sophy last year made the cover of the prestigious science journal Nature, which said the approach could potentially carry over to other applications such as drones and self-driving vehicles.

The technology behind Sophy is based on an algorithmic method known as reinforcement learning, which trains the system by rewarding it when it gets something right as it runs virtual races thousands of times.

"The reward is going to tell you that, 'You're making progress. This is good,' or, 'You're off the track. Well, that's not good,'" Spranger said.
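Sony hasn't published Sophy's training code, but the reward loop Spranger describes is the heart of reinforcement learning and can be shown at toy scale. In the sketch below, everything (the one-dimensional "track," the step-size actions, the reward values) is made up for illustration: a tabular Q-learning agent runs the race thousands of times, is rewarded for finishing and penalized for going off the track, and gradually learns a policy.

```python
import random

# Toy Q-learning sketch, entirely hypothetical -- not Sony's GT Sophy.
# A "driver" on a one-dimensional track of positions 0..10 advances 1, 2 or
# 3 steps per move. Finishing exactly on 10 earns +10 ("this is good"),
# overshooting ends the run at -10 ("you're off the track"), and each move
# costs -1, so fewer moves are better.

TRACK_END = 10
ACTIONS = [1, 2, 3]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(TRACK_END) for a in ACTIONS}

def step(state, action):
    """Advance the car; return (next_state, reward, done)."""
    nxt = state + action
    if nxt == TRACK_END:
        return nxt, 10.0, True     # clean finish
    if nxt > TRACK_END:
        return nxt, -10.0, True    # ran off the track
    return nxt, -1.0, False        # still racing; every move has a cost

random.seed(0)
for _ in range(2000):              # run the race thousands of times
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)                     # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit
        nxt, reward, done = step(state, action)
        future = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * future - Q[(state, action)])
        state = nxt

# Replay the learned greedy policy from the starting line.
state, moves = 0, []
while state < TRACK_END:
    action = max(ACTIONS, key=lambda a: Q[(state, a)])
    moves.append(action)
    state += action
print(moves)  # step sizes the trained driver takes to finish the lap
```

The positive finish reward propagates backward through the Q-table over many races, so the greedy policy learns to reach the finish without running off the end. Sophy applies the same reward-driven principle at vastly larger scale, in a full physics-simulated race.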

The world's best Gran Turismo players are still finishing ahead of Sophy at tournaments, but average players will find it hard to beat, and they can adjust difficulty settings depending on how much they want to be challenged.

PlayStation players will only get to try racing against Sophy until March 31, on a limited number of circuits, so it can get some feedback and go back into testing. Peter Wurman, director of Sony AI America and project lead on GT Sophy, said it takes about two weeks for AI agents to train on 20 PlayStations.

"To get it spread throughout the whole game, it takes some more breakthroughs and some more time before we're ready for that," he said.

And to get it onto real streets or Formula One tracks? That could take a lot longer.

Self-driving car developers adopt similar machine-learning techniques, but "they don't hand over complete control of the car the way we are able to," Wurman said. "In a simulated world, there's nobody's life at risk. You know exactly the kinds of things you're going to see in the environment. There's no people crossing the road or anything like that."

Matt O'Brien, The Associated Press
