Is edge-to-cloud the future of facial recognition systems?

The growing demand for edge computing is driven by its integration of cutting-edge advancements such as complex AI algorithms, intelligent edge devices, and wireless connectivity options like 5G and other cellular networks. Take facial recognition, for instance: edge computing offers operators a distinct advantage by creating a streamlined ecosystem that runs AI-based video analytics directly on the edge device.
Many businesses need applications that operate in remote, far-edge locations where installing lengthy fiber connections is impractical. Edge computing lets these companies deploy smart (4K) cameras that process images with AI algorithms and transmit only essential data to centralized servers over advanced wireless connections.
However, when examining various architectural approaches for deploying facial recognition applications at the edge, it’s crucial to consider factors such as hardware costs, deployment expenses, network requirements, and bandwidth constraints. All of these aspects must also prioritize security and maintenance, as most facial recognition tasks are mission-critical for operators.
In cases involving straightforward image processing or object detection applications, the biometric processes can be handled either on smart cameras or on edge devices like AI boxes or industrial PCs equipped with robust graphics processing capabilities.
Differences between architectures for deploying facial recognition applications
In the conventional approach, a centralized server handles tasks such as video decoding, face detection, template extraction, and template matching, with security cameras sending their images to the server for processing. In public spaces where security operations are in place, numerous cameras are strategically deployed, generating a substantial amount of data every second. Expanding the number of cameras, however, creates a significant bottleneck in the network infrastructure, demanding higher bandwidth, which translates to increased costs for the operator.
One solution to tackle this issue involves shifting all biometric operations to be processed at the edge, utilizing edge devices like smart cameras or AI boxes. By doing so, the considerable volume of real-time images generated by security cameras can be processed locally, with only small amounts of information transmitted to the centralized server, responsible for database management and monitoring tasks. This approach alleviates the burden on the network and bandwidth, particularly crucial in edge computing scenarios with limited resources. This edge-to-cloud approach equips operators to construct their infrastructure using various edge devices and distributed edge resources.
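This division of labor can be sketched in a few lines. The detector and template extractor below are stand-ins for a vendor SDK (the function names, bounding box, and 512-float template size are illustrative assumptions, not any particular product's API); the point is that only compact templates, not raw frames, leave the device.

```python
import struct

FRAME_BYTES = 3840 * 2160 * 3     # one uncompressed 4K RGB frame (assumed)
TEMPLATE_FLOATS = 512             # illustrative embedding size; real SDKs vary

def detect_faces(frame):
    """Stand-in for the on-device face detector; returns bounding boxes."""
    return [(100, 100, 160, 160)]           # placeholder detection

def extract_template(frame, box):
    """Stand-in for template extraction; returns a compact byte payload."""
    return struct.pack(f"{TEMPLATE_FLOATS}f", *([0.0] * TEMPLATE_FLOATS))

def process_on_edge(frame):
    """The full biometric pipeline runs locally; only templates are emitted."""
    return [extract_template(frame, box) for box in detect_faces(frame)]

frame = bytes(FRAME_BYTES)                  # dummy raw frame
uplink = sum(len(t) for t in process_on_edge(frame))
print(f"uplink per frame: {uplink} B vs {FRAME_BYTES} B for the raw frame")
```

Under these assumptions, each detection sends about 2 KB upstream instead of a multi-megabyte frame, which is where the network savings come from.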
Cost savings: Choose the right architecture for your facial recognition deployment
Looking at it from a cost perspective, let’s consider a facial recognition system with 100 cameras deployed in the field. In a centralized server setup, processing the image streams from these 100 cameras requires 100 processing cores, which means a cluster of servers with a price tag that can reach tens of thousands of dollars.
In contrast, if we examine the same system using smart cameras or edge devices, each of the 100 cameras possesses its own biometric processing capabilities, and the cost of each of these units is a fraction of what a centralized server would cost. In an edge scenario, the CCTV cameras can connect to an AI box or industrial PC, which incorporates powerful graphics cores, such as those offered by Nvidia.
In such facial recognition systems, the centralized server approach can cost roughly twice as much as the combined cost of smart cameras, or of CCTV cameras paired with AI boxes. This rule of thumb is a useful framework for understanding the financial dynamics of large-scale facial recognition deployments, especially in public safety projects.
During a recent webinar hosted by Innovatrics, the company presented various deployment scenarios, providing approximate cost estimates and details about the hardware platforms employed. The objective was to illustrate the disparities in costs across different facial recognition systems. Based on the information shared in the webinar, let’s explore the various deployment alternatives.
Parameters | Centralized Server | Smart Camera | Edge Device (AI Box) |
Server/Device | $20,000 each | $500 each | $1,000 each |
100x CCTV cameras | $400 each | N/A | $400 each |
Total | $100,000 | $50,000 | $50,000 |
Assumptions:
- Centralized server – Intel Xeon Gold 6242 (40 cores)
- Smart camera – Hanwha with Ambarella CV22
- Edge device – Axiomtek xRSC-101 with Hailo-8 (1 device per 10 CCTV cameras)
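The per-unit figures and assumptions above can be turned into a small cost model. This is a sketch of the arithmetic implied by the table, assuming one processing core per camera stream and rounding device counts up; the prices are the ones listed in the table, not vendor quotes.

```python
# Per-unit prices from the table above (illustrative, not vendor quotes)
SERVER_COST = 20_000       # Intel Xeon Gold 6242 server, 40 cores
SMART_CAM_COST = 500       # Hanwha smart camera with Ambarella CV22
AI_BOX_COST = 1_000        # Axiomtek xRSC-101 with Hailo-8
CCTV_COST = 400            # plain CCTV camera
CORES_PER_SERVER = 40      # assumes one core per camera stream
CAMS_PER_BOX = 10          # 1 AI box per 10 CCTV cameras

def ceil_div(a, b):
    return -(-a // b)      # round up: partial devices must still be bought

def centralized(n_cams):
    """Servers sized to the stream count, plus plain CCTV cameras."""
    return ceil_div(n_cams, CORES_PER_SERVER) * SERVER_COST + n_cams * CCTV_COST

def smart_camera(n_cams):
    """Every camera carries its own biometric processing."""
    return n_cams * SMART_CAM_COST

def edge_box(n_cams):
    """Plain CCTV cameras feeding shared AI boxes."""
    return ceil_div(n_cams, CAMS_PER_BOX) * AI_BOX_COST + n_cams * CCTV_COST

for n in (100, 800):
    print(n, centralized(n), smart_camera(n), edge_box(n))
# 100 cameras -> $100,000 / $50,000 / $50,000
# 800 cameras -> $720,000 / $400,000 / $400,000
```

With these assumptions the model reproduces both deployment tables in the article, and makes it easy to test other fleet sizes.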
In a recent implementation of a facial recognition system by Corsight, approximately 800 cameras were installed throughout the city’s public transit system. The software analyzed the footage, cross-referencing it with a database of over five thousand individuals subject to active court orders within the city. Within this system, let’s examine the cost-saving aspect related to different infrastructure choices.
Parameters | Centralized Server | Smart Camera | Edge Device (AI Box) |
Server/Device | $20,000 each | $500 each | $1,000 each |
800x CCTV cameras | $400 each | N/A | $400 each |
Total | $720,000 | $400,000 | $400,000 |
In this particular scenario, the centralized server approach is not exactly twice the cost of smart cameras or edge devices ($720,000 versus $400,000, a factor of 1.8), but it still incurs a substantial hardware deployment expense. In mission-critical applications, such as using facial recognition to identify individuals with active court orders, a hybrid approach proves advantageous, as it offers centralized security and monitoring capabilities.
Innovatrics, a software company specializing in biometric solutions for government and enterprise applications, discussed implementations of SmartFace in the webinar. SmartFace is the company’s software toolkit tailored for the development of embedded biometric and computer vision solutions, particularly focusing on face detection and object recognition. According to Innovatrics, the SmartFace software has a compact footprint while delivering high performance, making it well-suited for deployment at the network edge on OEM devices.
Furthermore, SmartFace adopts an edge-to-cloud architecture wherein video streams undergo pre-processing on edge devices located at each camera site. This approach reduces bandwidth demands and conserves server resources. Because of its cascaded architecture, the solution can be scaled while maximizing the utilization of server resources.
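A back-of-envelope calculation shows why pre-processing at each camera site conserves bandwidth at the server. The figures below are assumptions for illustration (a typical compressed 4K stream bitrate and a small per-detection payload), not SmartFace specifications.

```python
# Assumed figures for illustration, not SmartFace specifications
RAW_STREAM_MBPS = 25        # typical compressed 4K video stream bitrate
DETECTIONS_PER_SEC = 5      # faces detected per camera per second
PAYLOAD_BYTES = 2_048       # template plus metadata sent per detection
CAMERAS = 100

edge_mbps_per_cam = DETECTIONS_PER_SEC * PAYLOAD_BYTES * 8 / 1e6
total_raw = CAMERAS * RAW_STREAM_MBPS        # all streams sent to the server
total_edge = CAMERAS * edge_mbps_per_cam     # only detection payloads sent
print(f"centralized uplink: {total_raw} Mbps, edge uplink: {total_edge:.1f} Mbps")
```

Under these assumptions, 100 cameras streaming raw video would need roughly 2.5 Gbps into the server, versus under 10 Mbps when only detection payloads are forwarded, which is the kind of disparity the cascaded architecture exploits.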
Innovatrics also offers customers a customized approach to the SmartFace software solution: the company says its in-house engineers are available to tailor the software kit to the specific requirements of each client.
Conclusion
Edge-to-cloud doesn’t impose a rigid infrastructure that restricts operators to a specific way of deploying facial recognition applications. The hybrid approach, which involves distributing tasks between the centralized server and edge devices, can be essential for certain companies seeking more centralized control.
However, for organizations requiring uninterrupted system operation, opting for smart cameras or edge deployment becomes advantageous, as a failure in one edge device won’t lead to a complete system failure.
In the coming years, businesses are likely to adopt a hybrid edge-to-cloud approach for facial recognition, as network and bandwidth usage, simpler hardware deployment, and lower critical infrastructure costs have all become important considerations.
About the author
Abhishek Jadhav is a Master’s graduate in Electrical Engineering and a technology and science writer at EdgeIR.
This is one of the more in-depth analyses of edge computing, but it only considers the hardware aspects of the design. Other significant aspects also need consideration, such as the cost and effectiveness of the software components. You normally don’t get face detection, quality assurance, template generation and 1:many matching deployed across 800 devices of a distributed system for the same cost as a centralised server-based approach. Rarely, if ever, are the face recognition algorithm capabilities the same in embedded systems, as these devices quite often run a cut-down version of the algorithm in order to run on the embedded processor. There are also the evidentiary requirements of recording and storing the original video for court purposes: if you need to transmit the original video to the central VMS as well, there may be no reduced bandwidth requirement compared to a centralised system. There is much more to consider than just the hardware infrastructure; face recognition software costs, and whether the embedded algorithms match the accuracy of the centralised server deployment, also need to be considered.
Excellent approach to comparing on premise VMS with AI edge solutions.
An additional consideration is for the edge to be the camera itself. That eliminates the edge box as a single point of failure.
Today’s camera chipsets do not cover all the AI of an AI Gateway box, but ROI may depend on the needs of a project. People and vehicle detection are already being done exceptionally well on higher-end camera chipsets like Ambarella, and are now well supported even on M-Star and Novatec. Cameras are getting more and more intelligent and can handle many projects’ real needs without an extra box. Platforms like https://IPTECHVIEW.com can be used to centralize camera management and get the results needed, either with the platform’s cloud VMS or with local on-premise solutions.