
AI & Partners

Helping you achieve EU AI Act compliance

About Us

Helping you de-risk and cut costs.

AI & Partners has been working with clients since the initial publication of the EU AI Act proposal in 2021, and has built a reputation as the go-to AI-focused professional services firm in Europe. Each member of our team is a subject matter expert in their niche. They stay up to date on developments in their respective industries, so that you receive the most specific and accurate state-of-the-art service available. Whatever your needs, you will always receive standards above and beyond the market norm.

Our vision is to be Europe’s most professional professional-services firm: to provide assurance for companies as they develop, distribute, use, import, deploy, operate and interact with artificial intelligence systems, with both confidence and certainty.

As an ethical and fair professional services firm, AI & Partners holds itself to the most robust standards when interacting with AI. AI is a powerful technology that can help people and businesses when it is developed, deployed and implemented responsibly. Our internal and external teams follow the EU AI Act principles, and we encourage clients using our AI regulatory platform to follow them as well.

What is the EU AI Act?

A European legal framework for AI, addressing the fundamental rights and safety risks specific to AI systems, comes into force in Q1 2024.

The regulatory framework aims to address these problems (e.g. safety and security risks, legal uncertainty for companies) in order to ensure the proper functioning of the single market by creating the conditions for the development and use of AI systems.


Specific objectives are:

  1. ensuring that AI systems placed on the market and used are safe and respect the existing law on fundamental rights and Union values;

  2. supporting legal certainty to facilitate investment and innovation in AI;

  3. enhancing governance and effective enforcement of the existing law on fundamental rights and safety requirements applicable to AI systems; and

  4. facilitating the development of a single market for lawful, safe and trustworthy AI systems and preventing market fragmentation.

What Are The Key Objectives?

Providing the required assurances for your AI systems across five key areas:

  1. Safety: demonstrate the safety and security of your AI systems to external parties

  2. Compliance: obtain evidence of compliant innovation

  3. Governance: implement market-recognised governance & enforcement mechanism

  4. Development: safeguard lawful, safe and trustworthy applications

  5. Commercial: continued use without constraints

How Can We Help You?

  1. Providing advisory services: We provide advisory services to help our clients understand the EU AI Act and how it will impact their business.  We do this by identifying areas of the business that may need to be restructured, identifying new opportunities or risks that arise from the regulation, and developing strategies to comply with the EU AI Act.

  2. Implementing compliance programs: We help our clients implement compliance programs to meet the requirements of the EU AI Act.  We do this by developing policies and procedures, training employees, and creating monitoring and reporting systems.

  3. Conducting assessments: We conduct assessments of our clients' current compliance with the EU AI Act to identify gaps and areas for improvement.  We do this by reviewing documentation, interviewing employees, and analysing data.

  4. Providing technology solutions: We also provide technology solutions to help our clients comply with the EU AI Act.  We do this by developing software or implementing new systems to help our clients manage data, track compliance, or automate processes.

What Are Your Obligations?

From 2024 onwards, horizontal mandatory requirements come into force for any developer, provider and/or operator/user of AI systems. Different requirements apply depending on the type of AI system and its risk category, and they must be satisfied before any high-risk AI system can be placed on the market or otherwise put into service.

Examples

Providers

  • Ensure compliance with the AI requirements: conduct a conformity assessment to demonstrate compliance

  • Registration: register AI systems in a public database

  • Post-market monitoring: implement post-market monitoring systems

Users

  • Human Oversight: continuous monitoring of the AI system

  • Documentation: keep the minimum required documentation, including input data in the case of self-learning systems

  • Data Protection Impact Assessment: use the information provided by the Provider as an input to perform a Data Protection Impact Assessment (when required)

Why Now?

A few reasons drive this need: 

  1. Firms have to prepare now to achieve compliance by (if not before) the end of the transition period in 2025.

  2. Society expects rapid answers to the rapid disruption caused by generative AI.

  3. Firms need to build critical mass now in order to have maximum market impact.

  4. Firms have a first mover advantage in getting their compliance infrastructure in a state fit for the EU AI Act.

What Are Our Clients' Needs and Concerns?

As a company: 

  1. Prepare compliance with the EU AI Act.

  2. Differentiate from competitors.

  3. Prevent risks to reputation.

  4. Build trust and engagement with customers, employees and stakeholders.

  5. Progressively adopt good practices through concrete levers for improvement.

  6. Align practices within an ecosystem (i.e. AI system interoperability).

  7. Easily specify required product characteristics in RFQs.

What Do Our Services Cover?

Our pre-audits cover the EU AI Act, the most groundbreaking regulation since the GDPR. It represents a coordinated European approach to the human and ethical implications of AI, together with ensuring a well-functioning internal market for AI systems, and applies to all in-scope businesses by 2024.


Specifically, it:

  • sets harmonised rules for the development, placement on the market and use of AI systems in the EU following a proportionate risk-based approach

  • proposes a single future-proof definition of AI

  • establishes specific restrictions and safeguards in relation to certain uses of remote biometric identification systems

  • lays down a solid risk methodology to define “high-risk” AI systems that pose significant risks to the health and safety or fundamental rights of persons

  • sets out horizontal mandatory requirements for AI systems to comply with

