Proprio’s OR navigation system could improve orthopedic surgeries (Medical Design & Outsourcing)
Proprio is rolling its AI-powered surgical navigation system into operating rooms to collect data that will ultimately help surgeons improve how they perform procedures.
The Seattle-based startup said it has placed its Paradigm system in several U.S. operating rooms to capture surgical data that will be useful for accelerating the system’s development.
“Our data-informed platform allows all members of the surgical team access to the right information, at the right time, in the right environment,” Proprio co-founder and CEO Gabriel Jones said in a statement. “By passively capturing data in the background of surgery, we understand and quantify surgery better and are positioned to create the most data-rich platform for surgeons.”
For now, the system’s sensors are only watching and recording in select hospitals, but Proprio will activate the surgical navigation capabilities after getting a green light from the FDA. The company said it expects a commercial release in 2024.
How Proprio’s Paradigm system works
The system uses light field computer vision and AI to help surgeons visualize the patient’s anatomy and the surgical space in three dimensions without radiation. Sensors and four cameras in different positions monitor the procedure in real time, stitching the views together so a surgeon can visualize the anatomy from different angles and around obstructions.
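To make the multi-camera idea concrete, here is a rough sketch of how 3D points seen by several calibrated cameras can be pulled into one shared operating-room coordinate frame. This is not Proprio’s actual pipeline; the camera poses, point clouds and function names below are invented for illustration.

```python
import numpy as np

# Hypothetical sketch: fuse 3D points observed by several calibrated cameras
# into one shared "surgical scene" frame. The poses and points are invented
# placeholders, not Proprio's data or API.

def fuse_views(views):
    """views: list of (R, t, points) where R (3x3) and t (3,) map camera
    coordinates into the shared OR frame, and points is an (N, 3) array
    of 3D points observed by that camera."""
    fused = []
    for R, t, points in views:
        # Transform each camera's points into the common frame.
        fused.append(points @ R.T + t)
    # Concatenate all views into a single point cloud; a real system would
    # also de-duplicate overlapping surfaces and mesh the result.
    return np.vstack(fused)

# Two toy cameras looking at the same region from different positions.
rng = np.random.default_rng(0)
patch = rng.normal(size=(100, 3))          # surface points in the OR frame
R_a, t_a = np.eye(3), np.zeros(3)          # camera A sits at the frame origin
R_b = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)  # camera B rotated 90 degrees
t_b = np.array([0.5, 0.0, 0.0])
patch_in_b = (patch - t_b) @ R_b           # the same points as camera B sees them

scene = fuse_views([(R_a, t_a, patch), (R_b, t_b, patch_in_b)])
print(scene.shape)   # (200, 3): both views merged in the shared frame
```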
Carls, who was once VP of R&D for Medtronic Spine and now explores how to apply Proprio’s technology to specific orthopedic procedures, explained the technology in an interview with Medical Design & Outsourcing.
The system is currently focused on navigating spinal procedures. It will allow surgeons to digitally map and visualize the surgery site in the operating room, and then take the information outside of the OR for post-op analysis. The system can monitor which implants are used and the length of the total procedure and its specific portions, and it can peer deeper into the body than a surgeon can easily see.
“We can see down very small corridors. I liken it to, say, a 22 mm hole, if you will,” Carls said. “We can see down that because we are projecting infrared light as well. It can see pretty far. It doesn’t have to be a completely wide-open procedure. We only need a pinkie fingernail’s surface area to register the locations of this anatomy.”
The system can also update the anatomy in real time as bones move, helping surgeons position vertebral bodies.
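As an illustration of that kind of real-time update (again, an assumption-laden sketch rather than Proprio’s algorithm), a system that has registered a small patch of exposed bone can estimate the bone’s rigid motion from that patch alone and apply the same motion to the full digitized vertebra:

```python
import numpy as np

# Hypothetical sketch of real-time anatomy updating: track a small registered
# patch of bone, estimate its rigid motion, and move the whole vertebra model
# with it. Toy data throughout; not Proprio's implementation.

def rigid_transform(src, dst):
    """Estimate R and t so that dst ~= src @ R.T + t (Kabsch/Procrustes)
    from matched (N, 3) point sets."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t

rng = np.random.default_rng(1)
vertebra_model = rng.normal(size=(500, 3))           # full digitized vertebra (toy data)
patch_idx = rng.choice(500, size=20, replace=False)  # small visible patch of bone

# Simulate the bone moving: a small rotation plus a shift.
theta = np.radians(5)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.0, 2.0, 0.5])
patch_before = vertebra_model[patch_idx]
patch_after = patch_before @ R_true.T + t_true       # where the cameras now see the patch

# Recover the motion from the patch alone and update the whole model.
R_est, t_est = rigid_transform(patch_before, patch_after)
vertebra_updated = vertebra_model @ R_est.T + t_est
print(np.allclose(vertebra_updated[patch_idx], patch_after, atol=1e-6))  # True
```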
What’s ahead for Proprio
There’s also potential for computers to analyze and act on the surgical data in the operating room in real time, Carls said, creating something like a digital twin of the surgical event.
“Your mind could go wild on what you could do if you had all that information in real-time digitized where a system could be applying AI and machine learning and bringing it right back in the OR in real-time,” he said. “That’s kind of the magic of light field. It allows us to live between two different worlds: the world that is physically in front of the surgeon as they’re trying to accomplish some sort of task, and then the digital world that can be overlaid or presented to them in a way that they previously have not had access to.”
For example, if a surgeon is placing a pedicle screw and the system detects a deviation from prior successful surgeries, the system might one day be able to display a warning.
“Because we’ve digitized the entire scene — the anatomy, the soft tissue, everything there — and we are also understanding the exact location of the instrument or implant that the surgeon is placing into the spine, we can do a boundary analysis on what happens if these two cross and provide a warning if you’re one or two millimeters away, before they make that critical error,” Carls said.
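A minimal sketch of that kind of proximity check, assuming the digitized anatomy is represented as a point cloud in millimeters and the instrument by its tracked tip position; the threshold and names are illustrative assumptions, not Proprio’s implementation:

```python
import numpy as np

# Illustrative proximity check in the spirit of the "boundary analysis"
# Carls describes. The 2 mm threshold and data structures are assumptions.

def proximity_warning(tip, boundary_points, warn_mm=2.0):
    """Return (distance_mm, warn) where distance_mm is the distance from the
    tracked instrument tip to the nearest digitized boundary point."""
    d = np.linalg.norm(boundary_points - tip, axis=1).min()
    return d, d <= warn_mm

# Toy digitized boundary (e.g., the medial wall of a pedicle) in millimeters.
boundary = np.array([[0.0, y, z] for y in np.arange(-5, 5, 0.5)
                                 for z in np.arange(-5, 5, 0.5)])

tip = np.array([1.5, 0.2, -0.3])             # tracked screw/instrument tip
dist, warn = proximity_warning(tip, boundary)
if warn:
    print(f"WARNING: instrument {dist:.1f} mm from boundary")
else:
    print(f"OK: {dist:.1f} mm clearance")
```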
Proprio could eventually expand the technology beyond spinal surgeries to other orthopedic procedures such as hips, knees, ankles and wrists. Further developments could integrate different imaging modalities, combining hard bone imagery from CT scans with soft tissue imagery from MRI scans.
“That’s computing power a couple more notches above, but we would be able to help with soft tissue procedures as well as hard tissue procedures,” Carls said. “I think all of that is possible.”