
European Perspectives: Assessing the Readiness of European Companies for the EU AI Act


Now that the European Union Artificial Intelligence Act has officially come into force as of 1 August 2024, let's explore what this means for European organizations within the Union and beyond.


Key takeaways from the regulation:


  • Risk-based approach: AI systems will be regulated based on their potential risks and impact. High-risk AI applications will face stricter scrutiny and obligations.

  • Focus on fundamental rights: The regulation prioritizes the protection of fundamental rights, democracy, the rule of law, and environmental sustainability.

  • Boosting innovation: While ensuring safety, the regulation also aims to foster innovation and establish Europe as a leader in AI.


EU AI Act: Key Restrictions and Requirements


The EU's new AI Act bans applications that threaten citizens' rights, such as biometric categorization based on sensitive characteristics and untargeted scraping of facial images. It also forbids emotion recognition in workplaces and schools, social scoring, and AI systems that manipulate human behavior in ways incompatible with the values enshrined in Article 2 of the Treaty on European Union, signed in Maastricht, the Netherlands, on 7 February 1992.


Law enforcement's use of biometric identification is heavily restricted, requiring authorization in specific cases. High-risk AI systems, like those used in critical infrastructure or healthcare, must undergo risk assessments and ensure transparency.


Additionally, the Act mandates transparency for general-purpose AI, including compliance with copyright law and risk mitigation for powerful models. Deepfakes must be clearly labeled.


Finally, the Act supports innovation through regulatory sandboxes for SMEs and startups to test AI before market release.


Current Readiness Levels

Deloitte's recent survey reveals that many European companies are still in the early stages of preparing for the AI Act.


Here are some key findings from the report:

  • Lack of Awareness: A significant number of companies remain unaware of the specifics of the AI Act. Approximately 56% of respondents in Deloitte's survey indicated that they had limited knowledge regarding the legislation and its implications for their operations.

  • Limited Preparedness: Only 15% of surveyed organizations reported being fully prepared for the AI Act. Many companies are still assessing their current AI systems and determining how they align with the new regulations.

  • Investment Challenges: Companies expressed concerns about the potential costs associated with compliance. Nearly 40% of respondents indicated that they lacked the necessary budget to implement changes required by the AI Act.

  • Focus on Governance: There is a growing acknowledgment of the need for robust AI governance frameworks. Many organizations are beginning to invest in developing ethical guidelines and risk management strategies to meet the expected regulatory standards.


Challenges Ahead

As companies gear up for compliance, several challenges loom:


[Figure: Deloitte survey — "On a scale of 1 to 5, how intensively is your company engaging with the 'AI Act'?"]
  1. Regulatory Complexity: The multifaceted nature of the AI Act may require businesses to navigate complex legal landscapes, especially those operating across different EU member states. Companies need to ensure they understand not only the EU-wide regulations but also any additional local laws.

  2. Technical Implementation: This one deserves a closer look. Implementing the European Union Artificial Intelligence Act poses a variety of technical challenges for organizations. As businesses strive to align their AI systems with the new regulatory requirements, they encounter several hurdles that demand careful planning and execution. Below are some of the key ones:

  • Legacy Systems and Lagging Cloud/Big Data Adoption

Globally and in Europe, many large enterprises run on legacy systems (20 years old or more) that may not be fully compatible with modern AI technologies or the requirements outlined in the AI Act. In its latest State of the Digital Decade report, the European Commission found that European businesses' adoption of AI, cloud, and big data technologies in 2023 fell significantly short of the 75% target set by the Digital Decade initiative. Current projections indicate that by 2030 only 64% of businesses will use cloud computing, 50% big data, and a mere 17% AI.


Integrating advanced AI capabilities into these existing systems can be a complex and resource-intensive process, requiring significant technical expertise. Organizations must evaluate whether to upgrade their legacy systems or build new architectures that comply with the AI Act.

  • Data Quality and Management

The quality and management of data are critical to the effectiveness of AI systems. The AI Act emphasizes the importance of training AI models on high-quality, unbiased data. Companies face challenges in ensuring that their data is accurate, representative, and free from bias. This process may involve extensive data cleaning, preprocessing, and validation efforts to meet regulatory standards.
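As one concrete illustration, a minimal representativeness check might look like the sketch below (pure Python; the `age_band` attribute and the 10% threshold are invented for illustration, not requirements from the Act):

```python
from collections import Counter

def representation_report(records, group_key, threshold=0.10):
    """Report each group's share of the dataset and flag groups whose
    share falls below a minimum representation threshold."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {
            "share": round(n / total, 3),
            "under_represented": n / total < threshold,
        }
        for group, n in counts.items()
    }

# Hypothetical training records grouped by an 'age_band' attribute.
data = (
    [{"age_band": "18-34"}] * 70
    + [{"age_band": "35-54"}] * 25
    + [{"age_band": "55+"}] * 5
)
report = representation_report(data, "age_band")
print(report["55+"])  # the 55+ group is flagged as under-represented
```

In practice this kind of check would feed into a larger data-cleaning and validation pipeline, but even a simple share-per-group report makes skew visible early.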

  • Risk Assessment and Classification

The AI Act requires organizations to assess their AI systems to categorize them based on risk levels. This classification process involves developing robust risk assessment methodologies and tools to evaluate potential impacts on safety, privacy, and fundamental rights. Companies may need to invest in specialized software solutions or collaborate with experts to develop effective risk assessment frameworks.
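A first pass at such a methodology can be as simple as a lookup against the Act's risk tiers. The sketch below is deliberately simplified — the real categories, domain lists, and exemptions in the Act are far more detailed, and the set contents here are illustrative:

```python
# Simplified tiers loosely following the Act's risk-based approach.
PROHIBITED_PRACTICES = {
    "social_scoring",
    "untargeted_facial_scraping",
    "behavioral_manipulation",
}
HIGH_RISK_DOMAINS = {
    "critical_infrastructure",
    "healthcare",
    "employment",
    "education",
    "law_enforcement",
}

def classify_risk(practice: str, domain: str, interacts_with_humans: bool) -> str:
    """Map a use case onto a coarse risk tier."""
    if practice in PROHIBITED_PRACTICES:
        return "prohibited"
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if interacts_with_humans:
        return "limited"  # transparency duties, e.g. chatbots
    return "minimal"

print(classify_risk("patient_triage", "healthcare", True))  # high
```

The value of encoding even a coarse mapping like this is that every AI system in the organization's inventory gets a recorded, reviewable tier rather than an ad-hoc judgment.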

  • Transparent AI Algorithms

Transparency is a cornerstone of the AI Act. Organizations are expected to ensure that their AI algorithms are interpretable and explainable. However, many AI models, particularly those based on deep learning, operate as black boxes, making it difficult to provide clear explanations of their decision-making processes. Companies must invest in research and development to adopt techniques that enhance algorithm transparency, such as explainable AI (XAI).
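One widely used post-hoc explainability technique, permutation importance, measures how much a model's accuracy drops when a single feature is shuffled. A standard-library sketch (the toy model and data are invented for illustration) looks like this:

```python
import random

def permutation_importance(model, rows, labels, feature_idx,
                           metric, n_repeats=20, seed=0):
    """Estimate a feature's contribution as the average drop in the
    metric after randomly shuffling that feature's column."""
    rng = random.Random(seed)
    baseline = metric(model, rows, labels)
    drops = []
    for _ in range(n_repeats):
        shuffled = [list(r) for r in rows]
        column = [r[feature_idx] for r in shuffled]
        rng.shuffle(column)
        for row, value in zip(shuffled, column):
            row[feature_idx] = value
        drops.append(baseline - metric(model, shuffled, labels))
    return sum(drops) / n_repeats

# Toy classifier: decides solely on feature 0; feature 1 is noise.
model = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda m, rs, ys: sum(m(r) == y for r, y in zip(rs, ys)) / len(ys)

rows = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3]]
labels = [1, 1, 0, 0]
imp_used = permutation_importance(model, rows, labels, 0, accuracy)
imp_noise = permutation_importance(model, rows, labels, 1, accuracy)
```

Because the toy model ignores feature 1 entirely, shuffling it changes nothing, while shuffling feature 0 degrades accuracy — exactly the kind of evidence a transparency report can cite about which inputs actually drive decisions.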

  • Monitoring and Continuous Compliance

The AI Act mandates ongoing monitoring of AI systems to ensure compliance over time. Organizations face challenges in implementing systems that can continuously evaluate AI performance, detect anomalies, and provide real-time reporting. This may involve deploying sophisticated monitoring tools and establishing feedback loops that allow for timely adjustments to AI models and processes.
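As a deliberately simple illustration, a monitoring loop might compare recent model scores against a reference window and raise an alert when the mean shifts; the 0.15 tolerance and the score values below are arbitrary assumptions:

```python
def mean_shift_alert(reference, recent, tolerance=0.15):
    """Flag potential drift when the mean of recent model scores moves
    more than `tolerance` away from the reference window's mean."""
    ref_mean = sum(reference) / len(reference)
    cur_mean = sum(recent) / len(recent)
    shift = abs(cur_mean - ref_mean)
    return {"shift": round(shift, 3), "alert": shift > tolerance}

# Scores collected at deployment vs. scores from the latest window.
reference = [0.48, 0.52, 0.50, 0.49, 0.51]
recent = [0.81, 0.78, 0.80, 0.79, 0.82]
status = mean_shift_alert(reference, recent)
print(status)  # the 0.3 shift exceeds the tolerance, so alert is True
```

Production monitoring would use richer statistics (distribution tests, per-segment metrics) and wire alerts into an incident process, but the feedback-loop shape is the same.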

  • Cross-Border Data Sharing and Compliance

For companies operating in multiple EU member states, navigating cross-border data sharing can be a significant challenge. The AI Act may impose different requirements depending on local regulations, necessitating a deep understanding of both EU directives and member-state laws. Organizations must develop comprehensive data-sharing protocols that comply with the AI Act while respecting regional data protection regulations, such as the General Data Protection Regulation (GDPR).

  • Resource Allocation and Skills Gap

Implementing the AI Act requires specialized knowledge and skills, which may be lacking in many organizations. Companies must allocate resources for training existing staff, hiring new talent, or engaging external experts to assist with compliance efforts. This resource allocation can strain budgets, especially for smaller organizations that may not have the financial flexibility to invest heavily in compliance initiatives.

  • Developing Robust Documentation Practices

The AI Act requires organizations to maintain comprehensive documentation related to their AI systems, including design choices, training data sources, risk assessments, and compliance measures. Establishing effective documentation practices can be challenging, as it requires meticulous record-keeping and clear communication across teams. Companies must implement processes that facilitate the creation and maintenance of accurate documentation to meet regulatory requirements.
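To make this tangible, a minimal machine-readable compliance record could be sketched as follows — the field names are illustrative, not the Act's official documentation schema:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AISystemRecord:
    """Minimal, illustrative documentation record for one AI system."""
    system_name: str
    risk_tier: str
    training_data_sources: list = field(default_factory=list)
    design_choices: dict = field(default_factory=dict)
    risk_assessments: list = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the record for audit trails or regulator requests."""
        return json.dumps(asdict(self), indent=2, sort_keys=True)

record = AISystemRecord(
    system_name="triage-assist",
    risk_tier="high",
    training_data_sources=["hospital_admissions_2019_2023"],
    design_choices={"model_family": "gradient_boosting"},
)
print(record.to_json())
```

Keeping records in a structured, serializable form like this (rather than scattered documents) is what makes "comprehensive documentation" maintainable across teams.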


  3. Data Protection Concerns: With the Act emphasizing transparency and accountability, companies must ensure they have robust data management practices in place. This includes effectively managing personal data used in AI systems in compliance with the EU's comprehensive data privacy regulation, the General Data Protection Regulation (GDPR), which has now been in force for six years.


Steps Towards Readiness

To successfully navigate the challenges associated with the AI Act, companies can take the following steps:


  • Increase Awareness and Education: Organizations should prioritize educating employees across the organization about the AI Act and its implications. Workshops and training sessions can help bridge knowledge gaps and foster a culture of compliance.

  • Conduct Risk Assessments: Companies need to conduct thorough assessments of their AI systems to determine which applications fall under the high-risk category. This assessment will guide their compliance efforts.

  • Develop Governance Frameworks: Investing in AI governance frameworks is crucial. Establishing clear ethical guidelines and accountability structures will help businesses align their operations with regulatory expectations.

  • Collaborate with Experts: Engaging with legal and AI experts can provide valuable insights and guidance on navigating the complexities of the AI Act. Consulting firms can offer tailored advice on compliance strategies.


Conclusion

The EU AI Act represents a significant leap towards regulated AI practices in Europe. While many European companies are still grappling with the implications of the legislation, there is a growing recognition of the need for proactive measures. By enhancing awareness, investing in governance, and conducting thorough risk assessments, European companies can better prepare for the changes ahead. The journey towards compliance may be challenging, but it ultimately paves the way for responsible and ethical AI deployment that aligns with societal expectations.


 
 
 

© 2021 Nestor Global Consulting. All rights reserved.
