On 2 August 2024, the European Artificial Intelligence Act (AI Act) came into force – the world's first comprehensive law regulating artificial intelligence (AI). The Act provides for a staggered timetable, running until 2026, under which its individual regulatory sections become applicable. This article looks at how the new legal framework will affect video security technology and biometric facial recognition, and at the specific requirements companies need to consider.
What is the EU AI Act?
The EU AI Act is a law that regulates the use of AI systems in the EU. At its core is a risk-based approach that assigns AI systems to one of four risk levels: unacceptable, high, limited and minimal. Depending on the level of risk, different requirements must be met to ensure the safe and ethical use of AI technologies. AI systems that pose an unacceptable risk are banned outright.
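To make the four-tier model easier to grasp, here is a minimal Python sketch that maps each risk level to the kind of obligations attached to it. The obligation lists are an illustrative, non-exhaustive paraphrase of the Act, not the legal text, and all names in the code are our own.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict pre-market obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Illustrative paraphrase of the obligations per tier (not the legal text).
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["banned outright"],
    RiskLevel.HIGH: [
        "risk management system",
        "technical documentation",
        "logging and record-keeping",
        "human oversight",
        "conformity assessment before market placement",
    ],
    RiskLevel.LIMITED: ["inform users that they are interacting with AI"],
    RiskLevel.MINIMAL: ["no mandatory requirements"],
}

for level in RiskLevel:
    print(f"{level.value}: {'; '.join(OBLIGATIONS[level])}")
```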
Video security technology and facial recognition: an overview
The use of AI-based facial recognition systems is addressed specifically by the EU AI Act. A basic distinction is made between real-time remote identification and retrospective (post) identification. Both applications fall into the “high risk” category, which means that strict requirements must be met.
1. Biometric facial recognition
The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is generally prohibited unless specific exceptions apply, such as the targeted search for missing persons or the prevention of terrorist attacks. These exceptions are narrowly defined and subject to strict conditions.
Biometric facial recognition outside law enforcement:
For any other use of biometric systems, further rules apply, in particular the GDPR (above all Article 9 GDPR on the processing of special categories of personal data). Biometric data is generally subject to strict data protection requirements, which the EU AI Act supplements.
2. “Normal” video security technology and AI-based video analytics
Non-biometric AI applications in video security, such as AI-based video analytics, are also regulated by the EU AI Act. These systems are usually classified as high risk, which means that they must undergo extensive testing and conformity checks before being placed on the market.
Video security and AI for critical infrastructure
Companies operating in the critical infrastructure sector need to be particularly careful. AI systems used in this area are by definition “high risk” and must meet the cybersecurity, data protection and data quality requirements outlined above.
Implications for video security companies
The EU AI Act is an important milestone for companies that develop or operate video security systems. On the one hand, the strict regulations and requirements mean more effort when developing and deploying new systems; on the other hand, they provide a clear guideline for the trustworthy and ethical use of AI technology. Compliance and documentation are becoming key issues for vendors and operators. It is essential to adapt internal processes to meet the requirements of the EU AI Act. This includes precise logging of all processes, transparent documentation and rigorous security checks.
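As a minimal sketch of what such logging could look like in practice, the following Python example writes one structured audit record per AI detection in a hypothetical video analytics pipeline. The function name, field set and file path are illustrative assumptions, not a format prescribed by the EU AI Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger for an AI video analytics pipeline.
# The AI Act requires high-risk systems to support record-keeping;
# the concrete schema below is our own illustration, not a mandated format.
audit_log = logging.getLogger("ai_act_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))

def log_inference_event(camera_id: str, model_version: str,
                        detection: str, confidence: float,
                        operator_reviewed: bool) -> None:
    """Write one structured audit record per AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "camera_id": camera_id,
        "model_version": model_version,   # traceability of the deployed model
        "detection": detection,
        "confidence": confidence,
        "operator_reviewed": operator_reviewed,  # human-oversight trail
    }
    audit_log.info(json.dumps(record))

# Example: record a perimeter-intrusion detection flagged for review
log_inference_event("cam-07", "analytics-v2.3.1",
                    "person_in_restricted_zone", 0.91,
                    operator_reviewed=False)
```

Appending one machine-readable record per decision keeps the audit trail both human-reviewable and easy to hand over during a conformity assessment.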
Conclusion: Businesses need to act
Companies using AI technologies in video security need to take a close look at the EU AI Act. The requirements are demanding, but they also offer an opportunity to stand out in the market with a high level of transparency and security. To meet the new legal requirements, companies should take early steps to adapt their products and internal processes.
Strategic recommendations:
Companies should prepare for the upcoming changes to avoid legal risks and hefty fines for non-compliance, and to reap the benefits of legally compliant and ethical use of AI. The EU AI Act is a clear step towards the responsible use of AI and an opportunity to increase trust and transparency in the use of video security technologies.