Chooch AI speeds up AI vision for facial biometrics with Nvidia, edge processing

Chooch AI, a provider of technology and services for visual AI, announced that it has achieved several breakthroughs in the use of real-time AI on video streams in partnership with chip firm Nvidia and systems integration partner Convergint Technologies. The advancements portend improvements in the speed and accuracy of biometric identification systems.

Among the developments, Chooch AI said that edge AI deployments can now achieve “extreme response-time performance” of twenty milliseconds on multiple video feeds with extremely high accuracy for a wide variety of use cases.

Pre-trained models for workplace safety, fire detection, crowd analysis and more are available to customers and do not require customer data. The company notes that there are over 8,000 AI classes, and eight models running on the Nvidia Jetson Nano achieve accuracy of over 90 percent; models can run inference on up to five camera streams simultaneously with the Nvidia Jetson AGX Xavier products. Also of note: the AI models are deployable to devices from a single dashboard, which is also where real-time alerts and reports can be configured.
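
As a rough illustration of running detection across several camera streams on an edge device, the sketch below loops over a handful of video feeds and times a per-frame detection call; the stream URLs and the detect() function are hypothetical placeholders rather than Chooch’s actual models or API.

```python
# Hypothetical multi-stream inference loop for an edge device (e.g. a Jetson).
# The RTSP URLs and detect() are placeholders, not Chooch's API.
import time
import cv2

STREAMS = ["rtsp://camera-1/stream", "rtsp://camera-2/stream"]  # hypothetical feeds

def detect(frame):
    """Stand-in for an on-device vision model; returns a list of detections."""
    return []

captures = [cv2.VideoCapture(url) for url in STREAMS]

while True:
    for cap in captures:
        ok, frame = cap.read()
        if not ok:
            continue  # dropped frame or unreachable stream
        start = time.perf_counter()
        detections = detect(frame)
        latency_ms = (time.perf_counter() - start) * 1000
        print(f"{len(detections)} detections in {latency_ms:.1f} ms")
```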

Biometric Update interviewed Emrah Gultekin, co-founder and CEO of Chooch AI, to find out more about the company’s technology and partnership with Nvidia. This interview has been edited for length and clarity.

What is driving some biometric identification systems to use AI models at the “edge”?

The basic issue here is why we moved inferencing to the edge. The grand theme is cost, and being able to stream all of this video at the edge. If you do inferencing in the cloud, it’s just impossible at a large scale. That’s what we have seen. Bringing inferencing to the edge without any loss of accuracy, or even making it more accurate because you can track better on the edge, has been a focus of our work.

The second issue here is being able to do the training very quickly. When we talk about pre-trained models, or pre-trained perceptions as we call them, that is key. We’re down to a few hours of training for new models, so it’s almost like a pre-trained model. At the same time, you need to be able to tweak the models or train new models very quickly; you need to feed the AI to learn these things on a grand scale. So, you constantly need to train. We’re doing that on the back end as well, and we’ve brought the training process, including data collection and adaptation, down to hours rather than weeks or months.
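
A minimal sketch of that kind of fast adaptation, assuming a generic PyTorch/torchvision transfer-learning setup rather than Chooch’s own training pipeline: a pre-trained backbone is frozen and only a small new classification head is trained on the customer’s classes, which is one common way to get a usable model in hours rather than weeks.

```python
# Generic transfer-learning sketch (assumed setup, not Chooch's pipeline):
# freeze a pre-trained backbone and train only a new classification head.
import torch
import torch.nn as nn
from torchvision import models

NUM_NEW_CLASSES = 5  # hypothetical number of classes for the new use case

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                      # keep pre-trained weights fixed
model.fc = nn.Linear(model.fc.in_features, NUM_NEW_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step over a batch of labeled images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```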

How has the market changed this year with the pandemic? How have you responded, and how have you adjusted the product and marketing strategy?

The pandemic has accelerated camera-based public safety, so we’ve gotten a lot of traction on the PPE front. Touchless boarding and touchless check-in have come in post-COVID. We’re seeing a general acceleration of interest in AI deployments across the field, just because people are more remote and looking for assistance, so it has been a good ride for us post-COVID. But at the same time, some of our clients are also wondering if they’re going to survive, like the airlines or retail, so it’s kind of a double-edged sword in that sense. We think our technology can help with efficiencies in the workplace, and we’re constantly deploying it.

You introduced a Mobile SDK last year. What traction have you been seeing for that in 2020?

That [SDK] was made for developers on our platform. A lot of the traction there is biometric related, such as facial authentication with liveness detection, as well as general biometrics with liveness detection. It was easy for us to put that on the edge initially. A lot of the facial software and facial platforms are more developed than general object detection or even OCR, so we were able to put a lot of the biometric stuff on the edge quite early.

Can you address how you have adapted your business with the advancements in chip technology and this move to the edge?

Chooch AI has been using Amazon Web Services T4 instances in the cloud. We were pushed onto the edge by our clients; it’s basically a very market-driven issue. The accuracy of the previous edge processors was so low, below 60 percent, that we were not using them. Now, with anything from the Nvidia Jetson Nano to the AGX Xavier, you’re able to deploy with the exact same accuracy as you would have in the cloud. That’s been a huge revolution.
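
One common pattern for carrying the same trained model from cloud GPUs to a Jetson-class device, sketched below as an assumed workflow rather than a description of Chooch’s actual deployment, is to export it to ONNX and then build a TensorRT engine on the target.

```python
# Assumed cloud-to-edge export path (not Chooch's documented workflow):
# export a trained PyTorch model to ONNX, then compile it on the Jetson.
import torch
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
dummy = torch.randn(1, 3, 224, 224)   # example input used for tracing

torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["image"], output_names=["logits"],
    opset_version=17,
)
# On the edge device, the ONNX file can then be turned into a TensorRT engine, e.g.:
#   trtexec --onnx=model.onnx --saveEngine=model.plan --fp16
```

Running the engine in FP16 on a Jetson typically keeps accuracy close to the cloud-hosted model while fitting the device’s memory and power budget.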

The business model had been built around charging customers for custom training and an onboarding fee, and then charging for API calls. We’re still doing that for our cloud business. But anywhere you have very heavy video streaming, you need to put these models on edge processors, either on the edge [device] or on premise. You still have some of the custom training and, if necessary, integration, but it’s no longer an API call; you’re charging a per-camera and per-device licensing fee. So, the business model has changed a lot, and it’s also not costing us anything once the software is embedded in the machine.

In terms of applications and deployment, what is the biggest challenge right now?

The processing power has increased phenomenally over just the last year, and that has made it easier for us to deploy on the edge, although some of the components are still kind of messy. We’re working to make sure that these are all packaged properly so that people can just flash them onto their edge devices immediately. We’re in the process of making that very easy for our clients, and we’re working with Nvidia on that as well. These things will resolve over the next 12 to 24 months, and we’re working hard with our partners to resolve them.

It seems the crux of the issue is that, now that we’ve had all these tremendous advances in hardware processing power, the next step is getting the different software components (the operating system, applications, drivers and so on) integrated properly. And then managing the process of flashing the devices is perhaps another issue as well.

That’s exactly it. You have these powerful processors, but if the software is not integrated properly, the whole system doesn’t work. That has been a challenge, but it’s normal for this to happen because these are very new processors. And then you have the link from the camera feed and the device to the cloud, which needs to be synced up as well for updates and so forth.
