New ways to separate noise from signal boost image quality
Image processing software maker Visionary.ai has launched a new real-time video denoiser designed to improve video quality.
According to EdgeIR, the algorithms developed by Visionary are sufficiently lightweight to be deployed on cost-effective silicon and to run at the edge.
“In very low light, when there are few photons for an image sensor to capture, noise is the limiting factor,” explains Visionary Chief Technology Officer Yoav Taeib.
“For human vision applications, this noise adds speckles, blurs, and distortion to images, and for machine vision, it reduces the accuracy of object recognition,” he says.
To capture substantially more photons would require an image sensor and connected lens that are proportionally bigger, driving up costs.
“An AI-based approach that uses the raw image data and uses a sophisticated algorithm to separate the noise from the image signal is a more effective way to extend camera performance,” the CTO says.
(License Plate Recognition by Night. Source: Visionary.ai)
Visionary says it benchmarked its denoiser against other approaches to noise reduction. According to the company, the only close competitor in terms of results was Restormer, which reportedly required more processing power and took 212 times longer to execute.
Visionary CEO Oren Debbi said that, judging from the level of interest the company is receiving, it is evident that noise and low-light performance are issues previous approaches have been unable to resolve adequately.
“We expect that, over the next seven years, AI-based approaches to image noise reduction and performance improvement will become the new standard in the electronics industry.”
Reducing the noise contained in images is considered key to effective edge-based AI processing, such as in biometric applications.
Gwangju Institute publishes image denoising research
Another approach to denoising images was recently developed by the Gwangju Institute of Science and Technology, in collaboration with VinAI Research and the University of Waterloo, which has been working on this topic for some time.
The new research describes a self-supervised, post-correction network that improves the denoising performance without relying on a reference.
According to a press release by the Gwangju Institute, the model is designed for scenarios in which the test image is substantially different from images used for training.
The method typically used to render high-quality, realistic images is known as “path tracing.” The noise inherent in this Monte Carlo rendering technique is usually removed with a denoiser based on supervised machine learning, according to a new research paper.
In this learning framework, the machine learning model is first pre-trained with noisy and clean image pairs and then applied to the noisy test image to be rendered.
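The paper's actual network is not reproduced here, but the supervised framework it describes — pre-train on noisy/clean image pairs, then apply the trained model to an unseen noisy test image — can be illustrated with a minimal NumPy sketch. As a stand-in for a real neural denoiser, this example fits a single 3x3 linear filter by least squares; the synthetic "smooth" images and noise level are assumptions for the demo only.

```python
import numpy as np

rng = np.random.default_rng(0)

def patches3x3(img):
    """Extract the 3x3 neighborhood of every pixel (edge-padded)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    cols = [p[dy:dy + h, dx:dx + w].ravel() for dy in range(3) for dx in range(3)]
    return np.stack(cols, axis=1)  # shape (h*w, 9)

def smooth(img):
    """Cheap box blur so pixels are spatially correlated, like real images."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)) / 9.0

# --- pre-training phase: supervised pairs of noisy inputs and clean targets ---
clean = np.stack([smooth(smooth(rng.random((24, 24)))) for _ in range(6)])
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

X = np.concatenate([patches3x3(n) for n in noisy])  # inputs: noisy 3x3 patches
y = np.concatenate([c.ravel() for c in clean])      # targets: clean pixels
kernel, *_ = np.linalg.lstsq(X, y, rcond=None)      # the "trained model"

# --- inference phase: apply the trained filter to an unseen noisy test image ---
test_clean = smooth(smooth(rng.random((24, 24))))
test_noisy = test_clean + rng.normal(0.0, 0.1, test_clean.shape)
denoised = (patches3x3(test_noisy) @ kernel).reshape(24, 24)

mse_in = np.mean((test_noisy - test_clean) ** 2)
mse_out = np.mean((denoised - test_clean) ** 2)
print(f"noisy MSE {mse_in:.5f} -> denoised MSE {mse_out:.5f}")
```

The sketch also shows the failure mode Moon describes: the filter only works well when test images resemble the training distribution, and assembling the noisy/clean training pairs is the expensive step.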
However, associate professor Bochang Moon of the Gwangju Institute, one of the project researchers, says that existing methods fall short on two counts: the test images can differ substantially from the training data, and acquiring the training dataset to pre-train the network takes a long time.
“What is needed is a neural network that can be trained with only test images on the fly without the need for pre-training,” Moon says.
The researcher also claimed the team’s approach is the first one that does not rely on pretraining with an external dataset and can reportedly be trained on the fly to produce high-quality images in roughly 12 seconds.
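The team's post-correction network itself is not shown in the article, but the general idea of self-supervised denoising without a clean reference can be sketched. The example below uses a classic "blind-spot" trick (in the spirit of methods like Noise2Self, not the paper's actual architecture, which is an assumption of this illustration): fit a predictor of each noisy pixel from its 8 neighbors, using only the single noisy test image. Because the noise is independent from pixel to pixel, only the underlying signal is predictable from neighbors, so the prediction denoises.

```python
import numpy as np

rng = np.random.default_rng(1)

def neighbors8(img):
    """The 8 surrounding pixels of each location, excluding the pixel itself."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    offs = [(dy, dx) for dy in range(3) for dx in range(3) if (dy, dx) != (1, 1)]
    return np.stack([p[dy:dy + h, dx:dx + w].ravel() for dy, dx in offs], axis=1)

def smooth(img):
    """Box blur used only to synthesize a spatially correlated 'clean' image."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)) / 9.0

# Only one noisy test image is available: no clean reference, no pre-training.
clean = smooth(smooth(rng.random((32, 32))))  # ground truth, used only to score
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

# Blind-spot self-supervision: the target for each pixel is its own noisy value,
# but the model never sees that pixel -- only its neighbors. The per-pixel noise
# is unpredictable from neighbors, so the fit converges toward the clean signal.
X = neighbors8(noisy)
y = noisy.ravel()
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
denoised = (X @ weights).reshape(noisy.shape)

mse_in = np.mean((noisy - clean) ** 2)
mse_out = np.mean((denoised - clean) ** 2)
print(f"noisy MSE {mse_in:.5f} -> self-supervised MSE {mse_out:.5f}")
```

This is trained "on the fly" from the test image alone, which is the property Moon highlights; the real network and its reported ~12-second training time are, of course, far more sophisticated than this linear sketch.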
“This, in effect, will shorten the production time and improve the quality of offline rendering-based content such as animation and movies,” he says.
The collaboration between the Gwangju Institute and the University of Waterloo comes roughly two years after VinAI Research developed facial recognition technology with mask-detection capabilities, making it one of the first companies to do so at the beginning of the pandemic.
Wondering who was the last? Arguably Apple, which released the iOS 15.4 beta in January 2022, with Face ID usable for biometric authentication with a mask and without an Apple Watch.