Arthroscopic surgery is a trending technique among physicians, but it is not widely adopted due to the high cost of training and the lack of training facilities in some countries. The operation consists of manipulating surgical tools beneath the skin, with vision supported by a special camera and light source. With this in mind, we developed an arthroscopy simulator for knee-related pathologies based on interactive visualization and haptic devices: the user interacts with a 3D environment by moving a force-feedback-enabled haptic device, and those movements drive a surgery simulator displayed on a PC screen. During the planning and execution of this project, I researched and implemented a solution for detecting virtual collisions between the surgical tools and human tissue; my assignment, therefore, was to develop the collision detection algorithms.
The goal of collision detection software is to indicate when different objects in a scene are colliding. To perform this operation, each object in the scene is represented by a mesh of small triangles connecting its vertices, which approximates the real surface well when the triangles are small enough (i.e., when the vertices are close enough). One approach to efficient collision computation compares simple boxes that enclose the objects under study: when two enclosing boxes overlap, each box is split into two smaller boxes, each containing roughly half of the original geometry, and the process continues recursively until reaching a box that contains a single 3D triangle, at which point detection moves to testing whether the two triangles actually touch. This tree-based representation admits different ways to fit the bounding boxes. The most basic is the \ac{AABB}, in which the box axes are aligned with the 3D coordinate system. A more precise method is the \ac{OBB}, in which the box axes are aligned with the object itself, providing a tighter fit than the \ac{AABB} approach at the cost of extra resources during detection, since the axes of different objects do not share a common reference frame.
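The box-pruning idea above can be sketched in C. This is an illustrative fragment, not the project's actual code: the \ac{AABB} overlap test checks interval overlap on each axis, and a recursive descent over two bounding-box trees prunes whole subtrees whose boxes do not overlap. The `TriTest` callback stands in for a triangle-triangle intersection routine, which is assumed to be supplied elsewhere.

```c
#include <stdbool.h>
#include <stddef.h>

/* Axis-aligned bounding box: min/max corner per axis. */
typedef struct { float min[3], max[3]; } AABB;

/* Two AABBs overlap iff their intervals overlap on every axis. */
bool aabb_overlap(const AABB *a, const AABB *b) {
    for (int i = 0; i < 3; ++i)
        if (a->max[i] < b->min[i] || b->max[i] < a->min[i])
            return false;
    return true;
}

/* Binary tree node: leaves hold a triangle index, internal nodes two children. */
typedef struct BVHNode {
    AABB box;
    struct BVHNode *left, *right;   /* both NULL for leaves */
    int tri;                        /* valid only at leaves */
} BVHNode;

/* Hypothetical triangle-triangle intersection test supplied by the caller. */
typedef bool (*TriTest)(int tri_a, int tri_b, void *ctx);

/* Descend both trees in tandem, pruning whenever boxes do not overlap. */
bool bvh_collide(const BVHNode *a, const BVHNode *b,
                 TriTest tri_test, void *ctx) {
    if (!aabb_overlap(&a->box, &b->box))
        return false;                      /* whole subtrees pruned here */
    bool a_leaf = (a->left == NULL), b_leaf = (b->left == NULL);
    if (a_leaf && b_leaf)
        return tri_test(a->tri, b->tri, ctx);
    if (!a_leaf)                           /* split side a first (simplified) */
        return bvh_collide(a->left,  b, tri_test, ctx) ||
               bvh_collide(a->right, b, tri_test, ctx);
    return bvh_collide(a, b->left,  tri_test, ctx) ||
           bvh_collide(a, b->right, tri_test, ctx);
}
```

An \ac{OBB} variant would replace `aabb_overlap` with a separating-axis test between arbitrarily oriented boxes, which is tighter but costlier per node, as noted above.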
These candidate algorithms were programmed in the C language and shown to accurately detect collisions between triangle-based objects. However, as the triangle count grows (as expected in high-resolution models), collision detection takes longer than the time budget required to preserve a sense of real-time interaction (33 [ms]). Those algorithms were therefore redesigned to run on the \ac{GPU} using the \ac{CUDA} platform. Parallelizing a CPU-based implementation is not straightforward, but several ideas together exploit the parallel capabilities of \acp{GPU}: redefining the binary tree with a power-of-two branching factor (4, 8, 16, or 32 children per node), having threads access shared memory locations, detecting collisions for multiple objects at once, checking bounding boxes massively in parallel, and keeping the collision and graphical structures in the same memory space. The result is that a scene with complex objects runs with a real-time feel.
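Two of the ideas above, a wider power-of-two branching factor and a GPU-friendly memory layout, can be illustrated with a minimal C sketch (the actual implementation was in \ac{CUDA}; names and layout here are assumptions for illustration). The tree is flattened into one contiguous array, with the children of node `i` stored at `ARITY*i + 1` through `ARITY*i + ARITY`, so the whole hierarchy can be copied to device memory as-is, and the query uses an explicit stack instead of recursion, the form that maps naturally onto a GPU thread.

```c
#include <stdbool.h>

#define ARITY 4          /* power-of-two branching factor, as described above */
#define MAX_STACK 64

typedef struct { float min[3], max[3]; } Box;

/* Flattened tree in heap order: node i has children ARITY*i+1 .. ARITY*i+ARITY.
   first_leaf marks the index where leaf nodes begin. */
typedef struct {
    const Box *boxes;    /* one box per node, contiguous */
    int n_nodes;
    int first_leaf;
} FlatBVH;

static bool box_overlap(const Box *a, const Box *b) {
    for (int i = 0; i < 3; ++i)
        if (a->max[i] < b->min[i] || b->max[i] < a->min[i]) return false;
    return true;
}

/* Iterative query: collect indices of leaves whose boxes overlap q.
   An explicit stack replaces recursion. Returns the number of hits written. */
int flat_bvh_query(const FlatBVH *t, const Box *q, int *hits, int max_hits) {
    int stack[MAX_STACK], top = 0, n_hits = 0;
    stack[top++] = 0;                      /* start at the root */
    while (top > 0) {
        int node = stack[--top];
        if (node >= t->n_nodes || !box_overlap(&t->boxes[node], q))
            continue;                      /* prune non-overlapping subtrees */
        if (node >= t->first_leaf) {       /* leaf: record a candidate */
            if (n_hits < max_hits) hits[n_hits++] = node;
        } else {
            for (int c = 1; c <= ARITY; ++c)
                if (top < MAX_STACK) stack[top++] = ARITY * node + c;
        }
    }
    return n_hits;
}
```

On the GPU, one such query would run per thread, with each thread handling a different query box, so that many bounding-box checks proceed in parallel, consistent with the massive bounding-box check described above.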
In addition to the above, a visual debugging platform was created to ease code testing during development. The visual debugger helps the designer quickly verify that the 3D object under examination is processed correctly at each processing step (deformation, collision, auxiliary graphical tools, haptic-space representation).