Artificial intelligence has rapidly gone from an experimental technology to a business essential. In just three short years, AI has been adopted by 78% of organisations for at least one business function, and 84% of CEOs plan to increase their investment in 2025. Against this backdrop, the European Union has introduced the world's first comprehensive regulatory framework for AI: the EU AI Act.
The Act marks a decisive step toward balancing innovation with accountability and provides a legal foundation for how AI systems are developed, deployed, and governed, directly underpinning organisational trust and responsible use. For businesses, this is both a challenge and an opportunity: those that act now to align with its requirements can turn compliance into a competitive advantage, while those that don't risk financial penalties of up to €35 million or 7% of global annual turnover, whichever is higher.
What the EU AI Act is and why it matters
The EU AI Act regulates how AI systems are designed, placed on the market, and used in the EU, with extraterritorial reach wherever their outputs are used in the Union. It sits alongside GDPR: GDPR governs lawful personal data processing; the AI Act governs the AI system lifecycle — data quality, documentation, human oversight, robustness, transparency, and post-market monitoring. In practice, most organisations will need to demonstrate compliance with both whenever AI processes personal data.
‘The EU AI Act requires organisations to ensure their staff have a sufficient level of AI literacy, taking into account their technical knowledge, experience, and the context in which AI is used. This means employees need to understand what AI is, how the specific systems they use work, the opportunities and risks involved, and their legal and ethical implications,’ says Integrity360 CTO Richard Ford.
To operationalise this, organisations should deliver role-based AI literacy programmes to all stakeholders, ensuring a clear understanding of the ethical, legal, and operational risks involved.
The AI Act entered into force on 1 August 2024. Prohibitions on unacceptable-risk uses began on 2 February 2025, with broader obligations phasing in from 2 August 2026 and some high-risk requirements extending to August 2027. Use the current window to build inventories, controls, testing, and evidence so you are audit-ready as enforcement tightens.
Unacceptable and high risk: what is in scope?
Unacceptable risk (prohibited outright)
Banned applications include:
• Cognitive behavioural manipulation of people or specific vulnerable groups, for example a voice-activated toy that encourages dangerous behaviour in children
• Social scoring, classifying people based on behaviour, socio-economic status, or personal characteristics
• Biometric categorisation of people based on sensitive characteristics, such as political opinions, religious beliefs, or sexual orientation
• Real-time and remote biometric identification systems in public spaces, such as facial recognition
Law-enforcement carve-outs are narrow. Real-time remote biometric identification can be permitted only for a limited set of serious cases, and after-the-fact ("post") remote biometric identification can be used only to prosecute serious crimes and only with court approval.
A further prohibited practice is emotion inference in the workplace: for example, an AI that monitors live signals such as faces, voices, messages, and biometrics to infer staff emotions during work, raising significant privacy, accuracy, and bias concerns.
High risk (strictly regulated)
High-risk systems are those that could negatively affect safety or fundamental rights. They fall into two buckets:
• AI used in products covered by EU product-safety law, for example toys, aviation, automotive, medical devices, and lifts
• AI used in specific areas that must be registered in an EU database: management and operation of critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; access to essential private and public services and benefits; law enforcement; migration, asylum, and border control management; and assistance in legal interpretation and application of the law
All high-risk systems require lifecycle controls and must be assessed before market entry and throughout their lifecycle. Individuals will be able to lodge complaints with national authorities.
The other risk categories are:
• Limited risk: AI that triggers basic transparency duties, such as telling users they are interacting with AI and labelling AI-generated or manipulated content (chatbots and deepfakes, for example), without heavier conformity or registration obligations.
• Minimal risk: AI with negligible impact, such as spam filters or simple recommenders, faces no additional legal duties under the Act, though providers are encouraged to follow voluntary codes of conduct and good practice.
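To make the four tiers concrete, the sketch below shows how an internal register might tag systems by tier and surface the headline obligations each one carries. This is a minimal sketch in Python; the `RiskTier` enum and `OBLIGATIONS` mapping are illustrative shorthand for the Act's categories, not official terminology.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (illustrative labels, not legal terms of art)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright, e.g. social scoring
    HIGH = "high"                  # strictly regulated, e.g. employment or critical infrastructure
    LIMITED = "limited"            # transparency duties, e.g. chatbots and deepfakes
    MINIMAL = "minimal"            # no additional duties, e.g. spam filters

# Headline obligations per tier, summarising the Act at a very high level
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["do not build or deploy"],
    RiskTier.HIGH: [
        "assess conformity before market entry and throughout the lifecycle",
        "register in the EU database where required",
        "maintain human oversight and lifecycle controls",
    ],
    RiskTier.LIMITED: ["disclose AI interaction", "label AI-generated or manipulated content"],
    RiskTier.MINIMAL: ["voluntary codes of conduct and good practice"],
}

def headline_obligations(tier: RiskTier) -> list[str]:
    """Look up the summary obligations for a given tier."""
    return OBLIGATIONS[tier]
```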
How it compares to global approaches
Globally, the EU is setting the benchmark while other jurisdictions move toward interoperability rather than strict equivalence. The UK favours a regulator-led, principles-based model for now, while the United States remains a mix of federal direction, agency enforcement, and state laws. For multinationals whose AI systems or outputs reach the EU, converging on AI Act-aligned controls is the simplest way to stay ahead.
Preparing your organisation for compliance
Readiness begins with visibility. Organisations need to identify where AI is being used across departments, map out data flows, and assess risk exposure. Establishing strong data governance practices — including data classification, lifecycle tracking, and third-party assurance — will be essential.
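As a starting point, that visibility can be captured in a lightweight register. The sketch below uses hypothetical field names to show the kind of record worth keeping per AI use case, so that risk exposure can be queried rather than guessed.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in an AI inventory; field names are illustrative, not mandated by the Act."""
    name: str                       # e.g. "CV screening assistant"
    owner: str                      # accountable business owner
    department: str
    risk_tier: str                  # unacceptable / high / limited / minimal
    processes_personal_data: bool   # True also triggers GDPR obligations
    data_sources: list[str] = field(default_factory=list)
    third_party_models: list[str] = field(default_factory=list)

inventory = [
    AIUseCase(name="CV screening assistant", owner="HR Director", department="HR",
              risk_tier="high", processes_personal_data=True,
              data_sources=["applicant CVs"], third_party_models=["vendor-llm-v1"]),
    AIUseCase(name="Inbox spam filter", owner="IT Manager", department="IT",
              risk_tier="minimal", processes_personal_data=True),
]

# Surface the highest-exposure systems first when planning remediation
high_risk = [u.name for u in inventory if u.risk_tier == "high"]
```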
Equally important is the ability to demonstrate explainability and accountability. This requires investment in AI observability tools capable of tracking model performance, detecting bias, and logging decision-making processes. Such measures not only support compliance but also drive innovation by helping teams understand how and why AI systems behave as they do.
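Observability need not start with a heavyweight platform. As a minimal sketch, assuming decisions are logged as JSON lines and that a simple selection-rate gap is an acceptable first bias signal, logging and measurement might look like this:

```python
import json
import time
from statistics import mean

def log_decision(model_id: str, inputs: dict, output: str, scores: dict,
                 path: str = "decisions.jsonl") -> None:
    """Append one model decision to a JSONL audit log (a sketch, not a full observability stack)."""
    record = {"ts": time.time(), "model": model_id,
              "inputs": inputs, "output": output, "scores": scores}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def selection_rate_gap(decisions: list[dict], group_key: str,
                       positive: str = "approve") -> float:
    """Crude bias signal: the spread in positive-outcome rates across groups.

    'positive' is a hypothetical favourable-outcome label; adapt it to your system.
    """
    groups: dict[str, list[int]] = {}
    for d in decisions:
        groups.setdefault(d["inputs"][group_key], []).append(
            1 if d["output"] == positive else 0)
    rates = [mean(outcomes) for outcomes in groups.values()]
    return max(rates) - min(rates) if rates else 0.0
```

A gap that drifts past an agreed threshold then becomes a trigger for review, rather than a debate after the fact.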
Organisations should also integrate compliance into their broader security strategy. Identity management, access control, and supply chain security all play a role in safeguarding the integrity of AI operations. Partnering with experts to conduct readiness assessments can help align governance models with the evolving regulatory landscape.
The role of trust, data governance, and identity security
Trust is the licence to operate at scale. The Act embeds trust through transparency, oversight, and market surveillance. Practically, that means:
• Data governance: lawful sources and licensing, quality rules, bias controls, lineage from ingestion to inference, and retention policies that match the intended purpose
• Identity and access: strong authentication, least-privilege access to models, datasets, and pipelines, and tamper-evident audit trails (one way to build such a trail is sketched after this list)
• Human oversight: clear intervention points, escalation paths, and authority to suspend or roll back models when risk thresholds are hit
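On the tamper-evident point, one common construction is a hash chain, where each entry commits to the one before it so any later alteration is detectable. This is a minimal sketch assuming SHA-256 and JSON-serialisable entries; a production system would also need secure storage and key management.

```python
import hashlib
import json
import time

def append_entry(log: list[dict], actor: str, action: str) -> dict:
    """Append a hash-chained audit entry; altering any earlier entry breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "actor": actor, "action": action, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify(log: list[dict]) -> bool:
    """Recompute every hash to confirm the trail has not been tampered with."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("ts", "actor", "action", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

# Usage: record a model promotion, then check chain integrity
audit_log: list[dict] = []
append_entry(audit_log, actor="ml-engineer", action="promoted model v3 to production")
assert verify(audit_log)
```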
Where ISO 42001 fits
ISO/IEC 42001 is the management-system standard for AI. It turns policies into practice by defining scope, roles, and controls, much as ISO 27001 does for information security. By providing a framework for managing AI risks, it helps organisations meet the EU AI Act's requirements, creating the necessary infrastructure for documentation, risk assessment, and transparency.
In practical terms, a 42001-aligned AI management system gives you: an inventory of AI uses mapped to risk; documented data lineage and quality controls; model cards and release gates; human oversight plans; post-market monitoring; and auditable incident and corrective-action logs.
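For instance, a model card can be as simple as a structured record kept under version control. The schema below is illustrative, since neither ISO/IEC 42001 nor the AI Act prescribes one, and all values are hypothetical.

```python
# A minimal model card, assembled as evidence within an AI management system.
# The schema and every value here are illustrative, not prescribed by any standard.
model_card = {
    "model": "credit-scoring-v3",
    "intended_purpose": "pre-screening of consumer credit applications",
    "risk_tier": "high",
    "training_data": {
        "sources": ["internal loan book 2018-2024"],
        "quality_checks": ["deduplication", "label audit"],
    },
    "performance": {"auc": 0.87, "evaluated_on": "held-out 2024 cohort"},
    "bias_evaluation": {"metric": "selection rate gap", "threshold": 0.05},
    "human_oversight": "credit officer reviews every declined application",
    "release_gate": "sign-off by model risk committee required",
}
```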
Gain a competitive advantage
While compliance may initially appear as a cost, readiness under the EU AI Act offers a strategic edge. Businesses that can prove responsible AI practices will gain a clear market differentiator, particularly in industries where transparency and ethical standards are critical.
Establishing a culture of accountability will not only reduce the risk of non-compliance penalties but also foster innovation. Adaptive governance, real-time data observability, and continuous improvement cycles will enable organisations to scale AI confidently and securely.
In an age where AI drives competitiveness, organisations that prepare for regulation now will outpace competitors that wait. The EU AI Act is not just a legal requirement; it is a framework for trust, innovation, and the responsible evolution of artificial intelligence.
How Integrity360 can help
AI readiness assessment
We baseline your current AI estate against the AI Act, map risks, and create a prioritised remediation plan with clear ownership.
Governance, risk and compliance
We design and implement an AI risk management framework, harmonised with your existing security, privacy, and SDLC processes. That includes policies, controls, evidence plans, and playbooks.
Data governance and identity security
We classify data, define lineage and retention, and harden access to models, datasets, tooling, and pipelines with robust identity controls.
Third-party and supply chain assurance
We catalogue providers and models, flow down contract clauses aligned to the Act, and establish ongoing assurance.
If your organisation requires assistance with defending against AI threats or complying with the EU AI Act, get in touch with the experts at Integrity360.