...and the impact of the European Union's AI Act on the use of artificial intelligence. The new rules on the use of artificial intelligence (AI Act), recently approved by all EU member states, aim to protect fundamental rights, democracy and the rule of law from high-risk AI systems. At the same time, they are intended to boost innovation in this area.
The regulation sets out obligations for AI systems depending on their potential risks and effects. Certain applications will be prohibited outright, namely those that threaten citizens' rights, such as biometric categorization based on sensitive characteristics and the untargeted scraping of facial images from the internet. Emotion recognition systems, including in the workplace, will also be banned in future.

Other AI systems are classified as high-risk and carry their own obligations. This category covers AI systems used in critical infrastructure, education or employment, as well as AI systems for services such as healthcare or banking. Such systems must assess and mitigate risks, keep usage logs, be transparent and accurate, and remain under human oversight.

General-purpose AI systems, and the models on which they are based, must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training.

The new rules also provide for the promotion of innovation and targeted support for small and medium-sized enterprises. Member states must set up regulatory sandboxes ("real-world laboratories") in which testing can take place under real conditions. These must be accessible to small and medium-sized enterprises and start-ups so that they can develop and train innovative AI systems before bringing them to market. Once formally adopted by the Council, the regulation will enter into force 20 days after its publication in the Official Journal of the EU.
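The tiered scheme described above can be sketched as a simple lookup. This is purely illustrative: the tier names and the mapping below are assumptions drawn from the examples in this article, not from the legal text, and a real compliance assessment would of course require legal analysis of the regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Illustrative mapping of application types to AI Act risk tiers,
# based solely on the examples named in this article.
RISK_TIERS = {
    "biometric categorization (sensitive characteristics)": RiskTier.PROHIBITED,
    "untargeted facial image scraping": RiskTier.PROHIBITED,
    "emotion recognition in the workplace": RiskTier.PROHIBITED,
    "critical infrastructure": RiskTier.HIGH,
    "education": RiskTier.HIGH,
    "employment": RiskTier.HIGH,
    "healthcare services": RiskTier.HIGH,
    "banking services": RiskTier.HIGH,
}

def classify(application: str) -> RiskTier:
    """Return the risk tier for a known application type; default to MINIMAL."""
    return RISK_TIERS.get(application, RiskTier.MINIMAL)

print(classify("education").value)                  # high-risk
print(classify("customer-service chatbot").value)   # minimal risk
```

Defaulting unknown applications to the minimal tier mirrors the article's point that typical conversational-AI uses fall outside the stricter categories; in practice the default judgment would have to go the other way until an application has been assessed.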
Further development and clarification of the AI Act
Initial reactions to the regulation have come from the Swiss company Spitch AG, a provider of conversational AI whose business customers include companies within the EU. Spitch expects the AI Act to have a predominantly positive impact on the use of AI systems and recommends that companies and authorities scrutinize their own AI applications under the new risk criteria of the EU regulation and make adjustments where necessary. Spitch also assumes that AI applications will have to be re-examined "from time to time" for compliance with the legal requirements, as the AI Act is expected to be further developed and clarified by the European standardization bodies. In the company's view, none of today's typical application areas for AI-based voice and text dialog systems (conversational AI) in customer service falls per se under the established risk categories; chatbots, voice analytics or knowledge databases could, however, fall into the "limited risk" or "minimal risk" categories.
For most conversational-AI application areas, the additional requirements must be met within 12 to 36 months of the AI Act entering into force, expected in May 2024. Providers of AI systems must now ensure that these deadlines are met.