AI for safer surgeries: Oklahoma specialists have developed a system that uses artificial intelligence to visualize superimposed and anatomically aligned 3D computed tomography data during surgery.
OU Health plastic and reconstructive surgeon Dr. Christian El Amm holds a model of the skull while demonstrating a surgical imaging device he developed in collaboration with energy company Baker Hughes.
A University of Oklahoma researcher and an Oklahoma City-based surgeon at OU Health conceived of using artificial intelligence to visualize superimposed, anatomically aligned 3D computed tomography data during surgery. The challenge was turning that idea into a system capable of augmenting each operation.
“Compared to a pilot flying an airplane, or even the average Google Maps user on his way to work, surgeons today have their tools behind their backs and hanging on the wall,” says Mohammad Abdul Mukit, a graduate research assistant in electrical and computer engineering at the University of Oklahoma. His research focuses on applying computer vision, augmented reality, and artificial intelligence to surgical operations.
“A Google Maps user or pilot gets constant real-time updates on where they are, what to do next, and other vital data that help them make split-second decisions,” he explained. “They don’t have to plan a trip for days or memorize every turn and detail of every landmark along the way. They just do it.”
Surgeons today, on the other hand, have to do careful surgical planning, memorize the specifics of each unique case and know all the necessary steps to ensure the safest possible surgery. They then perform complex procedures for hours, with no targeting or guidance devices or head-mounted displays at their disposal.
“They have to grope their way to the target and hope it goes as they planned,” Mukit said. “Through our research, we aim to change that process forever. We’re creating ‘Google Maps for Surgery.’”
To make this vision a reality, Mukit and OU Health plastic and reconstructive surgeon Dr. Christian El Amm have been working together since 2019. However, this journey began in 2018 when El Amm partnered with energy technology company Baker Hughes.
BH specializes in using augmented reality/mixed reality and computed tomography to create 3D reconstructions of rock samples. For geologists and oil and gas companies, this visualization is extremely useful because it helps them plan and execute drilling operations efficiently.
This technology caught El Amm’s eye. He envisioned that this technology, combined with artificial intelligence, could allow him to visualize superimposed and anatomically aligned 3D CT data during surgery. This could also be used to view the reconstruction steps he planned during surgery without losing sight of the patient.
However, several key challenges had to be met for the prototype mixed reality system to be ready for use in surgery.
“Over the course of a year of collaboration, the BH team created solutions to problems that had been unsolved up to that point,” Mukit recalls. “They implemented a client/server system. The server, a high-end computer equipped with RGBD cameras, did all the computer vision work to estimate the patient’s head pose in six degrees of freedom.
“It then transferred the stored CT data to the client device, the Microsoft HoloLens 1, for anatomically aligned imaging,” he continues. “BH developed a proprietary compression algorithm that allowed them to transmit large amounts of CT data. BH also integrated a proprietary artificial intelligence mechanism for pose estimation.”
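A head pose like the one the server estimated is commonly parameterized as six numbers, three rotations and three translations, packed into a single 4x4 rigid transform that maps CT coordinates into the tracked head frame. The sketch below (Python with NumPy; the actual BH implementation is proprietary, and all function names here are illustrative) shows that composition:

```python
import numpy as np

def pose_matrix(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 rigid transform from six degrees of freedom:
    rotations (radians) about the x, y, z axes plus a translation."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # Z-Y-X rotation order (a convention choice)
    T[:3, 3] = [tx, ty, tz]
    return T

def align_ct_points(points, pose):
    """Map CT landmark points (N x 3, scanner space) through the pose
    so the overlay stays anatomically aligned with the tracked head."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (pose @ homogeneous.T).T[:, :3]
```

In a live system a fresh pose arrives every frame; only this small matrix changes per update, while the heavyweight CT volume needs to be transmitted only once.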
It was a complex engineering project done in a very short time. After the prototype was completed, the team better understood the limitations of such a setup and the need for a better system.
“The prototype system was somewhat impractical for a surgical setting, but it was necessary to better understand our needs,” Mukit said. “First, the system could not estimate the head’s position in a surgical setting, where most of the patient’s body apart from the head is covered by surgical drapes. Second, the system required time-consuming camera-calibration steps each time we exited the application.
“This was a problem because, in our experience, surgeons only accept devices that work from the start,” he continues. “They don’t have time to fumble with technology while they’re focused on vital procedures. The ability to control the system with voice commands is also very important to us. It is a key element in surgical procedures, because surgeons always have their hands full. They won’t contaminate their hands by touching a computer to control the system, or remove the headset to recalibrate it.”

The team realized that a new, more comfortable and seamless system was needed.
“I started working on building a better system from scratch in 2019 as soon as the official partnership with BH ended,” Mukit said. “Since then, we’ve moved most of the basic tasks to the edge, to the head-mounted display itself. We’ve also used CT data to train and implement machine learning models, which have become more robust in estimating head position than before.”
“We developed ‘markerless tracking,’ which allows us to overlay CT scans or other images with artificial intelligence instead of cumbersome markers to point the way,” he added. “Then we eliminated the need for manual camera calibration.”
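Markerless alignment of this kind ultimately comes down to estimating a rigid transform between anatomical landmarks detected in the live view and the same landmarks in the CT scan. Below is a minimal closed-form sketch of that registration step, using the classic Kabsch (SVD-based) fit in Python with NumPy; the team’s actual learned pipeline is not public, so this only illustrates the general technique:

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid fit (Kabsch algorithm) between two matched
    N x 3 landmark sets. Returns R, t such that target ≈ source @ R.T + t."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

Given landmark correspondences, this yields the overlay transform in one shot, with no physical markers attached to the patient.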
Finally, they added voice commands. According to Mukit, all of these steps made the app/system “plug-and-play” for surgeons.
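A hands-free control layer like the one described can be sketched as a simple mapping from recognized phrases to handlers. The phrases and state fields below are hypothetical, not the system’s real vocabulary:

```python
# Registry of spoken phrases -> handler functions (illustrative only).
COMMANDS = {}

def command(phrase):
    """Decorator that registers a handler for a recognized phrase."""
    def register(fn):
        COMMANDS[phrase] = fn
        return fn
    return register

@command("show overlay")
def show_overlay(state):
    state["overlay_visible"] = True
    return state

@command("next step")
def next_step(state):
    # Advance to the next planned reconstruction step.
    state["step"] += 1
    return state

def dispatch(transcript, state):
    """Route a recognized phrase to its handler; ignore unknown speech
    so stray conversation in the operating room has no effect."""
    handler = COMMANDS.get(transcript.strip().lower())
    return handler(state) if handler else state
```

Keeping unknown phrases as no-ops matters in an operating room, where most speech is not directed at the headset.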
“Because of its convenience and usefulness, the app has been very warmly received by OU Medicine surgeons,” he said. “Suddenly, ideas, feature requests, and inquiries started coming in from various medical experts. That’s when I realized we had something special on our hands and that we had only scratched the surface. We began developing these features for each unique genre of surgery.”
Gradually, this allowed the team to enrich the system with a variety of useful features and led to unique innovations, he added.
El Amm began using the device during surgical procedures to improve the safety and effectiveness of complex reconstructions. Many of his patients turn to him for craniofacial reconstruction after traumatic injuries; others have congenital deformities.
So far, he has used the device in several cases, including the reconstruction of a patient’s ear. The system took a mirror image of the patient’s intact ear and projected it onto the opposite side, allowing El Amm to attach the reconstructed ear accurately. Previously, he had to cut out a template of the ear and judge its placement with the naked eye.
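The mirroring step itself amounts to reflecting the intact ear’s surface points across the head’s midline (sagittal) plane. A minimal NumPy sketch, assuming a head-centered coordinate frame in which the x axis runs left to right (an assumption; the real system’s conventions are not public):

```python
import numpy as np

def mirror_sagittal(points, midline_x=0.0):
    """Reflect a surface point cloud (N x 3) across the sagittal plane
    x = midline_x, producing a template for the opposite side.
    Note: a reflection flips surface orientation, so a real mesh would
    also need its triangle winding (or normals) reversed afterward."""
    mirrored = points.copy()
    mirrored[:, 0] = 2.0 * midline_x - mirrored[:, 0]
    return mirrored
```

Applying the reflection twice returns the original geometry, which is a handy sanity check for the transform.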
In another surgical case that required an 18-step facial reconstruction, the device superimposed CT scans on top of the patient’s actual bones.
“Each of these bones had to be cut and moved in the exact direction,” El Amm says. “The device allowed us to see the bones individually, and then it mapped every cut and every movement, which allowed the surgeon to make sure he went through all of these steps. It’s essentially going through all the steps of surgery in virtual reality.”
ADVICE FOR OTHERS
“When you change the way you see the world, you change the world you see,” Mukit said. “That’s what mixed reality was created for. MR is the next general-purpose computer. Powerful technology will no longer be in your pockets or behind your desks.

“Thanks to MR, it will be integrated with your human self,” he continued. “It will change the way you see problems, which in turn will lead to new and creative ways of solving them with AI. I think we’re going to see another technological revolution in the next few years, especially after a mixed reality headset is introduced in 2023 that will reportedly be lighter than any other visor on the market.”
Almost every industry is now incorporating mixed reality headsets into their businesses – and rightly so, as the benefits are clear, he added.
“The technology is already mature enough for countless possible applications in almost every industry, and especially in healthcare,” he concluded. “Mixed reality hasn’t fully entered this industry yet. We’ve only scratched the surface, and within months we’ve already seen such an overwhelming tsunami of ideas from experts. Ideas that can now be easily implemented.”
These application scenarios range from education and training to improving the safety, speed, and cost-effectiveness of surgeries for both surgeons and patients. “The time to move to mixed reality is now.”