A new AI-driven medical device, known as the Wound Viewer (WV), has demonstrated clinically validated capabilities in the automated classification of chronic wounds. The device integrates advanced imaging technology with a proprietary artificial intelligence algorithm to provide precise wound measurement and assessment. This innovation addresses the critical need for reliable and efficient wound care solutions, potentially improving patient outcomes and reducing healthcare costs.
The WV device underwent a clinical trial (OC 15194) approved by the Ethical Committee of the Azienda Ospedaliera Universitaria San Luigi Gonzaga in Italy. The study enrolled 150 patients, a sample size large enough to support statistically meaningful conclusions for this type of clinical trial. The device is built around a custom-designed electronic board equipped with a five-megapixel color CMOS camera, 16 high-precision infrared (IR) distance sensors, and four white LEDs. These components work in concert to capture high-resolution wound images under uniform, shadow-free lighting, a prerequisite for accurate analysis.
How the Wound Viewer Works
The WV device operates through a series of automated steps (a code sketch of this pipeline follows the list):
- The device is pointed towards the wound, maintaining a parallel orientation to the surface.
- Sixteen IR distance sensors calibrate the focal ratio of the camera.
- The device automatically identifies the wound in the picture through Regions of Interest (ROIs).
- The algorithm analyzes the wounds within the ROIs.
- Relevant information, including wound area, depth, tissue segmentation, and Wound Bed Preparation (WBP) score classification, is displayed on the device's touch screen.
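To make the flow concrete, here is a minimal sketch of the pipeline in Python. Every function name, constant, and the redness-threshold rule is hypothetical: the device firmware is proprietary and the source describes no API, so the CNN and DT-CNN stages are replaced with trivial stand-ins.

```python
"""Minimal sketch of the WV acquisition/analysis pipeline.
All names and constants are illustrative, not the device's actual code."""
import numpy as np

def calibrate_focus(ir_distances_mm):
    # The IR sensors report distance to the wound; the mean reading
    # stands in for the focus/focal-ratio calibration step.
    return float(np.mean(ir_distances_mm))

def detect_rois(image):
    # Stand-in for the first sub-network (the CNN ROI detector); here the
    # whole frame is returned as a single ROI.
    h, w, _ = image.shape
    return [(0, 0, w, h)]

def segment_wound(roi):
    # Stand-in for the DT-CNN segmenter: a naive redness threshold.
    r, g, b = (roi[..., c].astype(int) for c in range(3))
    return (r - g > 40) & (r - b > 40)

image = np.zeros((480, 640, 3), dtype=np.uint8)
image[200:280, 300:380] = (180, 60, 50)          # synthetic "wound" patch
focus_mm = calibrate_focus([102, 98, 101, 99])   # four sample IR readings
for x, y, w, h in detect_rois(image):
    mask = segment_wound(image[y:y+h, x:x+w])
    print(f"ROI at ({x},{y}): {mask.sum()} wound pixels at {focus_mm:.0f} mm")
```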
The device estimates wound area by detecting and counting wound pixels, then converting that count into physical units with a scale factor derived from the CMOS camera's focal distance, as measured by the IR sensors. The parameters computed by the device are central to wound diagnosis and to predicting healing time. The WV system also includes a secure cloud system for sharing images and data with the medical team, facilitating remote clinical consultation. Additionally, the device implements a compliant Electronic Medical Record (EMR) to monitor wound evolution and therapy effectiveness through quantitative indicators.
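As a worked example of the pixel-count-to-area conversion: the linear scale model and every number below are assumptions chosen for illustration, not published device calibration parameters.

```python
# Worked example of converting a pixel count to a physical wound area.
pixel_count = 12_000                         # pixels in the binary wound mask
distance_mm = 150.0                          # mean of the IR distance readings
mm_per_pixel = 0.04 * (distance_mm / 100.0)  # assumed: pixel scale grows linearly with distance
area_mm2 = pixel_count * mm_per_pixel ** 2   # each pixel covers mm_per_pixel^2 of skin
print(f"area = {area_mm2:.0f} mm^2 ({area_mm2 / 100:.2f} cm^2)")
```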
Neuromorphic Methodology for Wound Analysis
The WV device employs a neuromorphic methodology for wound analysis, with algorithms composed of two sub-networks in a waterfall (cascade) configuration. The first sub-network is a multi-layered convolutional neural network (CNN) that detects and extracts ROIs from the image; its core is a two-dimensional convolutional layer with 24 filters, a 9x9 kernel, and a stride of 2, sized to capture complex spatial patterns. The network was trained on a dataset of approximately 1,500 images drawn from open-source datasets and previously collected images, manually segmented and classified according to the WBP scale.
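A rough PyTorch sketch of such a layer is shown below. Only the 24 filters, 9x9 kernel, and stride 2 come from the description above; the 1x1 scoring head that turns the features into a coarse "wound-ness" map is an assumption, since the source does not detail how ROIs are read out.

```python
import torch
import torch.nn as nn

class ROIProposer(nn.Module):
    """Illustrative sketch of the first sub-network: a 2-D convolution
    with 24 filters (9x9 kernel, stride 2) feeding an assumed 1x1
    scoring head that produces a coarse candidate-region map."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 24, kernel_size=9, stride=2, padding=4)
        self.score = nn.Conv2d(24, 1, kernel_size=1)

    def forward(self, x):
        return torch.sigmoid(self.score(torch.relu(self.conv(x))))

heat = ROIProposer()(torch.randn(1, 3, 224, 224))
print(heat.shape)   # torch.Size([1, 1, 112, 112]) -- high values mark candidate ROIs
```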
The second sub-network is a Discrete-Time Cellular Neural Network (DT-CNN) applied to the extracted ROIs to segment the wound and produce the relevant measurements. This network uses memristor elements, which mimic synapse-like dynamics and offer non-volatility and ease of programming. The DT-CNN is trained with a Binary Pixel Interaction (BPI) algorithm, a biologically inspired supervised learning scheme used to classify the different tissue types within the wound bed based on color analysis.
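To illustrate what a memristor-like synaptic weight buys the network, here is a toy model: the class name, constants, and update rule are all hypothetical, capturing only the two properties the text cites (non-volatile storage and simple pulse-based programming).

```python
class MemristiveSynapse:
    """Toy model of a memristor-like synapse: its conductance (the weight)
    is nudged by programming pulses and persists between uses.
    All constants are illustrative, not measured device values."""
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, step=0.05):
        self.g, self.g_min, self.g_max, self.step = g, g_min, g_max, step

    def program(self, polarity):
        # A +1 pulse potentiates, a -1 pulse depresses, clipped to device limits.
        self.g = min(self.g_max, max(self.g_min, self.g + polarity * self.step))

    def transmit(self, v):
        # Output signal scales with the stored conductance.
        return self.g * v

s = MemristiveSynapse()
s.program(+1); s.program(+1)
print(s.transmit(1.0))   # ~0.6 -- the programmed weight is retained
```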
Memristor-Based Cellular Automata and BPI Algorithm
The DT-CNN can be considered a memristor-based Cellular Automaton (CA), in which each cell evolves based on its own state, its surrounding cells, and external inputs. The CA system is defined by its dimension, the space of states the cells can assume, the neighborhood index, and the transition function applied at each generation. The BPI training algorithm, applied to the CA, is a supervised learning model that adjusts synaptic weights to minimize errors in classifying input patterns.
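A discrete-time CA update in this spirit is sketched below, under assumed binary states, a 3x3 neighborhood, and a zero boundary condition; the paper's actual state space and transition rule are not given, so the majority rule here is purely an example.

```python
import numpy as np

def ca_step(state, transition):
    """One synchronous generation of a 2-D cellular automaton: each cell's
    next state is a function of its 3x3 neighborhood (self included)."""
    h, w = state.shape
    padded = np.pad(state, 1)            # zero boundary condition (assumed)
    nxt = np.empty_like(state)
    for i in range(h):
        for j in range(w):
            nxt[i, j] = transition(padded[i:i + 3, j:j + 3])
    return nxt

# Example transition rule: a cell switches on when its neighborhood majority is on.
majority = lambda neighborhood: int(neighborhood.sum() >= 5)
state = (np.random.rand(8, 8) > 0.5).astype(int)
print(ca_step(state, majority))
```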
The architecture of the BPI-CA model is a three-layer cellular network, with each layer corresponding to one of the image's three digital channels (Red, Green, Blue). Each cell takes as input the brightness value of the corresponding pixel in 8-bit digital form. The network is trained on a dataset of over 1,500 wound images, allowing it to identify a sub-space in which color combinations and their mathematical relations distinguish chronic wounds from the background. Wound identification is then performed by fixing the synaptic weights and sliding the three-layered template across the ROI of a generic wound image, generating a binary mask that highlights the wound pixels.
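A minimal sketch of the mask-generation step follows. The per-channel weights and threshold are invented placeholders (the trained BPI-CA values are not published), and the pixel-wise weighting is a simplification of sliding the three-layered template over the ROI.

```python
import numpy as np

# Assumed per-channel synaptic weights and firing threshold; the trained
# BPI-CA values are not published.
W = np.array([0.9, -0.4, -0.5])   # weights for the R, G, B layers
THETA = 20.0                      # threshold in 8-bit brightness units

def bpi_mask(roi_rgb):
    """Weight each pixel's 8-bit R, G, B brightness with the fixed template
    and threshold the result into a binary wound mask."""
    activation = roi_rgb.astype(float) @ W
    return (activation > THETA).astype(np.uint8)

roi = np.zeros((4, 4, 3), dtype=np.uint8)
roi[1:3, 1:3] = (190, 70, 60)     # reddish "wound" pixels
print(bpi_mask(roi))              # 1s mark wound pixels, 0s the background
```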
By integrating advanced imaging technology with AI-driven analysis, the Wound Viewer offers a clinically validated solution for precise and automated wound classification, potentially transforming wound care management and improving patient outcomes.