Jonathan Lee, director of sports performance technology in Intel’s Olympic Technology Group, has flown 11 hours and quarantined for 14 days to stitch skeletons together on a computer. And if it works as well as he hopes, it’ll be an amazing innovation for action replays at the 2020 Tokyo Olympics.
“Part of our artificial intelligence is [designed] to stitch together the right skeletons when you have up to eight or nine athletes running down the track,” Lee told Digital Trends from his hotel room in Tokyo’s Olympic Village, 5,000 miles from his home in San Francisco.
He is part of a crack team of Intel engineers that has been dispatched to the Tokyo Games to incorporate Intel’s 3D Athlete Tracking (or 3DAT, pronounced “three-dat”) technology into this year’s Olympic broadcasts. 3DAT’s breathtaking overlay visualizations will be made available during replays of the 100-meter, 200-meter, 4×100-meter relay, and hurdles events, taking place between July 30 and August 4.
“3D Athlete Tracking … is a technology we have developed here at Intel that allows us to take standard video of athletes and extract information about their form and motion,” said Lee. “We do this using A.I. and computer vision. [Using our technology, we can] recognize the different parts of the body, from the eyes and nose all the way down to the ankles and toes, and use this to construct a 3D skeleton of the athlete or, in some cases, multiple athletes. From those skeletons, we can then extract information like velocity, acceleration, and biomechanics.”
To put it simply, 3DAT fuses video taken from multiple 4K machine vision cameras with broadcast footage and uses it to create three-dimensional models of Olympians in action, which can then drive computer-generated replays. What gives this the edge over traditional video replays is that 3DAT can ingest its various sources of video footage and use them to generate motion-capture models that can be rotated in 3D space.
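Intel hasn’t published 3DAT’s internals, but the multi-camera fusion Lee describes rests on a textbook computer vision idea: once the same joint has been detected in two or more calibrated camera views, its 3D position can be recovered by triangulation. The sketch below (function name and setup are illustrative, not Intel’s code) shows the standard direct linear transform for one joint seen by two cameras:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover one joint's 3D position from two camera views using
    the direct linear transform (DLT).

    P1, P2: 3x4 camera projection matrices (calibration is assumed known).
    pt1, pt2: the joint's 2D image coordinates in each view.
    """
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean 3D coordinates
```

Run this once per joint, per frame, and the result is exactly the kind of 3D skeleton Lee describes; a real system would add more cameras, noise-robust estimation, and the tracking logic that keeps eight or nine skeletons from getting tangled.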
“You give the [broadcaster] the ability to, in essence, rotate, zoom, put the camera wherever they want to,” said Lee.
It’s not just rotating the “camera,” either. By extracting data like velocity and acceleration from the 3D models, 3DAT can overlay the models with added information such as heat maps to indicate how fast an athlete is running, when they hit their top speed, and how long they’re able to maintain this speed. It’s a level of eye-catching data visualization never before tried at the Olympic Games — or just about anywhere else, for that matter.
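The speed metrics behind those overlays fall straight out of the skeleton data: differentiate position over time to get velocity, take its magnitude for speed, then look for the peak and how long the athlete held near it. A minimal sketch of that computation (the function and its thresholds are assumptions for illustration, not 3DAT’s actual pipeline):

```python
import numpy as np

def speed_profile(positions, fps=60.0, hold_frac=0.98):
    """Derive speed stats from a tracked body point over time.

    positions: (N, 3) array of, say, the hip center's 3D position per frame.
    fps: capture frame rate.
    hold_frac: how close to top speed still counts as "holding" it.

    Returns per-frame speeds (m/s), top speed, and seconds spent
    within hold_frac of top speed.
    """
    velocities = np.diff(positions, axis=0) * fps      # finite-difference velocity
    speeds = np.linalg.norm(velocities, axis=1)        # scalar speed per frame
    top = speeds.max()
    held = np.count_nonzero(speeds >= hold_frac * top) / fps
    return speeds, top, held
```

The per-frame speeds are what a heat-map overlay would color-code along the athlete’s path; the top speed and hold duration are the headline numbers Lee mentions.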
“What you want is something that is useful and beautiful and helps the viewer at home really connect up with the athletes and understand something that they didn’t know before,” said Lee.
Motion capture is, of course, nothing new. It’s been used in Hollywood for years, most notably in some of the astonishing theatrical performances captured by companies like Weta Digital from actors like Andy Serkis (who has played everyone from Gollum in The Lord of the Rings to Caesar in Dawn of the Planet of the Apes to King Kong in, err, King Kong). Mo-cap is also frequently used in the gaming world to ensure that on-screen avatars move as much like real people as possible. But while many mo-cap suits are studded with sensors to capture the movement of individual limbs, 3DAT requires zero sensors.
The problem, Lee said, is that while motion capture suits are fine for certain scenarios, tracking elite-level athletes is not necessarily among them.
“Imagine you put a sensor on someone’s head, elbow, chest, and then you say, ‘All right, go do a high jump, and when you land, you’re gonna feel all these sensors being pressed into your body,’ right?” he said. “You can imagine that’s not necessarily a pleasant experience. Or [how about] a sprinter? If the sensor is placed right below the knee, that’s going to interfere with how they come out of the blocks and how they run.”
Instead, 3DAT relies entirely on computer vision and pose estimation algorithms to analyze the biomechanics of athletes’ movements. Lee said that this can be done accurately enough to capture even the smallest nuances of an athlete’s motion. No trackers required.
With this in mind, Lee doesn’t just see 3DAT as a data-viz tool for entertaining and informing the viewers at home. It’s also being used as a training tool for athletes to review their performance. “We’ve had three different elite-level coaches use the exact same phrase to describe this as the ‘holy grail of coaching,’” he said. One potentially transformative use case? Helping to diagnose injuries.
“One of the biggest things that affect athletes is soft tissue injuries,” Lee said. “These usually start to show up in the form of asymmetry. So perhaps your left and right stride length might become different, or the way that your hips move might [change], right? These are things that can happen even before an injury occurs. If you start to look at an athlete a little more longitudinally — say, at the beginning of the season, during, after, maybe during some sort of functional movement assessment — [an A.I.] can take over to recognize precursors for injuries, [so that the coaches and athletes] can ward them off before they happen.”
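The asymmetry Lee describes has a simple, widely used measure in biomechanics: a symmetry index comparing a left-side and right-side metric such as stride length. The sketch below uses one common formulation; it’s a generic heuristic, not anything Intel has said 3DAT computes:

```python
def symmetry_index(left, right):
    """Symmetry index (%) between a left- and right-side gait metric,
    e.g. mean stride length in meters. 0% means perfectly symmetric;
    larger values flag the kind of left/right drift that can precede
    a soft tissue injury.
    """
    return 100.0 * abs(left - right) / (0.5 * (left + right))
```

Tracked longitudinally — season start, mid-season, after a functional movement assessment — a rising index on a metric that used to sit near zero is exactly the kind of precursor a coach would want flagged.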
For this reason, Lee sees the future of 3DAT being increasingly wrapped up with artificial intelligence. “The question that comes up is what do you do with all this data?” he noted. “This is where I see the next frontier for 3DAT … There has to be a next level, whether that’s for injury prevention or for improving performance or helping rehabilitation … That next question is really the one that we’ll need to answer to be able to turn this from a technology that’s [really] cool into a technology that’s cool and [incredibly] useful.”
For now, though, Olympics viewers will have to settle for just “really cool.” Something tells us that will probably be enough. Coming soon to a gold medal event near you (or, at least, on your television).