Doctors who need 3D medical imaging of a patient have limited choices today. Non-X-ray technologies produce one- or two-star-quality 3D imaging products, while X-ray technologies produce top, five-star-quality products. However, a new 3D X-ray CT machine costs over $1,000,000, and a CT scan exposes the patient to a significant dose of ionizing radiation.
The Single Safe Scan device can produce a fairly good, three- or four-star-quality 3D imaging product. It exposes the patient to a minimal amount of radiation, approximately the same as a set of dental X-rays. A new device would cost about €100,000–€250,000.
What distinguishes the Single Safe Scan device from a conventional CT scanner? The main difference is the type of radiation used: fluoroscopy versus radiography. The subcomponents come from off-the-shelf fluoroscopy C-arm machines, which typically cost under €100,000. Six seconds of fluoroscopy radiation is all it takes to produce a 3D model. As for image quality, Dean Janes had a saying: "you should have seen how CT scan images looked when they first came out." Meaning that the algorithms we initially used to produce 3D models can always be improved.
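Our actual reconstruction algorithms are proprietary, but as a rough illustration of the kind of 2D-to-3D transform involved, the sketch below implements textbook filtered back-projection (the classic CT reconstruction method, not Imaging3's algorithm) in plain NumPy. It simulates parallel-beam X-ray projections of a toy 2D phantom and reconstructs the slice from them; all names and parameters are illustrative.

```python
import numpy as np

def rotate_nn(img, theta):
    """Rotate a square image by theta radians (nearest-neighbor sampling)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.indices((n, n))
    x, y = xs - c, ys - c
    # inverse mapping: for each output pixel, find its source coordinate
    xr = np.cos(theta) * x + np.sin(theta) * y + c
    yr = -np.sin(theta) * x + np.cos(theta) * y + c
    xi = np.clip(np.round(xr).astype(int), 0, n - 1)
    yi = np.clip(np.round(yr).astype(int), 0, n - 1)
    out = img[yi, xi]
    out[(xr < 0) | (xr > n - 1) | (yr < 0) | (yr > n - 1)] = 0.0
    return out

def forward_project(img, angles):
    """Simulate parallel-beam projections: one detector row per angle."""
    return np.array([rotate_nn(img, a).sum(axis=0) for a in angles])

def fbp_reconstruct(sinogram, angles):
    """Ramp-filter each projection in the frequency domain, then smear back."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))  # ramp filter |f|
    recon = np.zeros((n, n))
    for proj, a in zip(sinogram, angles):
        filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
        recon += rotate_nn(np.tile(filtered, (n, 1)), -a)
    return recon * np.pi / (2 * len(angles))

# Toy phantom: a bright disk off-center in an empty field
n = 64
yy, xx = np.indices((n, n))
phantom = (((xx - 40) ** 2 + (yy - 28) ** 2) < 64).astype(float)

angles = np.linspace(0, np.pi, 90, endpoint=False)
recon = fbp_reconstruct(forward_project(phantom, angles), angles)

# The reconstruction should correlate strongly with the phantom
corr = np.corrcoef(recon.ravel(), phantom.ravel())[0, 1]
print(f"correlation with phantom: {corr:.2f}")
```

A production system would use cone-beam geometry, interpolated rotation, and apodized filters, but the principle is the same: each X-ray frame constrains the volume along its ray paths, and better filtering and more frames steadily improve image quality.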
With the Single Safe Scan device, a doctor has the option to request a diagnostic 3D image at low cost without exposing the patient to a large dose of harmful X-ray radiation. If the diagnostic 3D image indicates that further study and a more detailed, robust view are needed, a conventional CT scan can be ordered at a higher price and higher X-ray exposure.
Cost will probably matter more to doctors than exposure to harmful radiation. At Imaging3 we interviewed a prominent local neurosurgeon and showed him a demo of the Single Safe Scan device. He told us flat out that doctors do not care one bit about the radiation they expose a patient to; to them, the quality of the image is paramount.
We are looking for a Mittelstand-type company that would be interested in developing the hardware for the Single Safe Scan device. This involves mounting an X-ray generator/scanner combination on a gantry that rotates very fast.
We bring knowledge and experience to the table: our staff developed the first version of the DVIS for Imaging3.
We would develop, and your firm would own, the computer hardware and software that drives the machine. Your firm would engineer, manufacture, and sell the hardware.
We would develop and own the computer software that performs the 2D-to-3D transform and the AI applications. The transforms would run on a render farm on the Polish/Russian border. Link to a proposed render farm here. The medical customer would only need an inexpensive (under $1,000) NVIDIA graphics card in their computer to view and manipulate the 3D model, with no need to buy an expensive supercomputer that will be obsolete in two years.
The Imaging3 DVIS was originally intended to act as a continuous-scan device, and real-time 3D imaging actually worked! See YouTube video of a demo run at 50% speed link here. However, in engineering terms the device was a proof of concept: it could only refresh the 3D image once per second, nowhere near the minimum of roughly 12 refreshes per second needed to perceive continuous motion. A production version of the machine, while possible, would require much more investment, including hiring a team of engineers and scientists from various disciplines. In his final year at Imaging3, Dean Janes was developing the hardware for a follow-on second version of the device, intended as a production model better suited to this task. Then the Securities and Exchange Commission (SEC) tried to shut the company down, and Dean was ousted as CEO. What remained of the version 2 device he was building was scattered around the Imaging3 office/garage on Hollywood Way and probably ended up as scrap metal and parts sold to some storage hunter after the rent went unpaid. Unless an eager investor with seriously deep pockets and patience comes along, real-time 3D imaging is outside the scope of our offer.
The real-time imaging version of the product may not be for humans at all. Having worked with the proof-of-concept prototype, we saw human limitations become apparent: the machine produces too much information too quickly for a person to absorb. So why develop it for doctors in the first place? Real-time imaging would be better suited to guiding robotic end effectors, acting as a 3D machine-vision system similar to the LIDAR used to guide autonomous vehicles. A machine could process and benefit from the incoming information in a timely manner. We would only be interested in working on such a system for applications such as trash sorting or food harvesting, processing, and preparation.