A year ago, "AI regulation" was mostly hypothetical from a practical compliance standpoint. The EU AI Act had been passed but not yet in force; U.S. executive orders on AI were substantial in ambition but thin in enforcement mechanism; international coordination was aspirational. In early 2026, that has changed materially. The compliance clock is running, and builders who have been treating AI governance as a future problem are finding it has arrived.
This post is a practical breakdown of the current regulatory landscape — where it's binding now, where it's binding soon, and what it actually requires of AI developers and deployers.
The EU AI Act: Where Things Stand
The EU AI Act entered into force in August 2024, with a phased implementation timeline. As of early 2026, the following obligations are live or imminent:
Prohibited practices (February 2025): Certain AI applications are banned outright in the EU: social scoring by public authorities, real-time biometric identification in public spaces by law enforcement (with narrow exceptions), subliminal manipulation, and AI that exploits vulnerable groups. If you're building in any of these categories and operating in EU markets, you are already non-compliant.
General-Purpose AI (GPAI) model obligations (August 2025): Providers of GPAI models — essentially foundation models available via API or open release — must provide technical documentation, comply with EU copyright law (respecting opt-outs under the TDM exception), and publish summaries of training data. For GPAI models designated as having "systemic risk" (trained with more than 10^25 FLOPs, roughly GPT-4 scale and above), additional obligations apply: adversarial testing, incident reporting to the European AI Office, and cybersecurity measures.
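To make the 10^25 FLOPs threshold concrete, here is a rough back-of-envelope check using the widely cited approximation that training compute is about 6 × parameters × training tokens. The model sizes and token counts below are illustrative assumptions, not disclosed figures for any real model.

```python
# Rough check of whether a training run crosses the EU AI Act's
# 10^25 FLOP "systemic risk" threshold for GPAI models.
# Uses the common approximation: training FLOPs ~= 6 * params * tokens.
# The example figures below are illustrative assumptions, not real disclosures.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the AI Act's GPAI provisions

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the 6ND rule of thumb."""
    return 6 * params * tokens

runs = {
    "7B model, 2T tokens":    training_flops(7e9, 2e12),    # ~8.4e22
    "70B model, 15T tokens":  training_flops(70e9, 15e12),  # ~6.3e24
    "400B model, 15T tokens": training_flops(400e9, 15e12), # ~3.6e25
}

for name, flops in runs.items():
    flag = "ABOVE threshold" if flops >= SYSTEMIC_RISK_THRESHOLD else "below threshold"
    print(f"{name}: {flops:.2e} FLOPs -> {flag}")
```

The takeaway from the arithmetic: under this approximation, the designation catches only the very largest current training runs, but token counts and parameter counts are both rising fast enough that more providers will cross the line over time.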
High-risk AI system obligations (August 2026): This is the most consequential category for most enterprise builders. AI systems used in sectors including healthcare, employment, education, credit scoring, insurance, law enforcement, and the management of critical infrastructure are classified as high-risk. These systems require conformity assessments, technical documentation, risk management systems, data governance controls, transparency toward users, and registration in the EU database. This deadline is approaching and the compliance work is substantial.
The practical implication: if you're selling AI-enabled products into EU markets in the high-risk categories, you need to be deep into compliance work now. Conformity assessments aren't a box-checking exercise — they require documented evidence of testing, bias evaluation, data quality assessment, and technical architecture documentation that takes months to produce properly.
The EU AI Office (established under the Act) is standing up its supervisory capacity, and enforcement is expected to become real in 2026–2027. The fines structure is serious even at the scale of major technology companies: up to 3% of global annual turnover (or €15 million, whichever is higher) for most violations, and up to 7% (or €35 million) for violations of the prohibited practices provisions.
The U.S. Position: A Significant Pivot
The U.S. AI regulatory environment has undergone a meaningful policy shift. President Biden's 2023 Executive Order on AI — which directed federal agencies to develop safety guidance, established reporting requirements for frontier model developers, and tasked NIST with extending its AI Risk Management Framework to generative AI — was rescinded by the Trump administration in early 2025.
The stated rationale was that Biden's AI governance approach was hampering American AI competitiveness relative to China, and that safety requirements were being implemented by the federal government without sufficient congressional authority. The replacement executive order emphasized AI deployment for national security and economic competitiveness, with reduced emphasis on safety standardization.
This does not mean AI is unregulated in the United States. Federal agencies retain existing authority under sector-specific laws: the FDA regulates AI-enabled medical devices, the CFPB has authority over AI in consumer financial products, the EEOC has issued guidance on AI in employment decisions, and the FTC can pursue deceptive AI practices under its consumer protection authority. The absence of a comprehensive federal AI law means that enforcement is fragmented, sector-specific, and dependent on how aggressively different agencies exercise existing authority under a given administration.
State-level activity has accelerated to fill the federal gap. California has been the most prolific: SB 1047 (which would have required safety testing for frontier models) was vetoed by Governor Newsom, but other California bills targeting AI in employment, healthcare, and high-stakes decisions have passed. Colorado, Texas, and several other states have enacted or are considering their own AI governance requirements. The emerging patchwork of state laws is creating compliance complexity for companies operating nationally.
For AI builders in the U.S., the practical picture is: reduced federal coordination on AI safety, increased state-level fragmentation, and ongoing sector-specific enforcement by agencies that have existing authority. The absence of comprehensive federal law does not equal a permissive environment — it means uncertain, inconsistent enforcement by multiple authorities.
China's Approach: Specific, Rapid, Centralized
China has taken a different approach to AI regulation — not comprehensive risk-based frameworks, but a series of specific regulations targeting particular AI application types. The Generative AI Regulation (effective August 2023), the Deep Synthesis Regulation (targeting synthetic media), and the Recommendation Algorithm Regulation have collectively created a detailed compliance framework for Chinese AI deployments.
Key features of China's approach that builders targeting Chinese markets need to understand:
Security assessments are required for generative AI services offered to the public in China, administered by the Cyberspace Administration of China (CAC). Training data must comply with data localization requirements. There are content requirements ensuring AI outputs support "core socialist values" — which has significant implications for what topics models can engage with.
China's regulatory approach is faster-moving and less predictable than the EU's structured legislative process, but it is very much enforced. Western AI companies operating in China or providing services to Chinese users face genuine compliance challenges, and the technology sovereignty dimension — China's preference for domestically produced AI systems in sensitive applications — creates additional market access barriers.
International Coordination: Aspirational but Limited
The G7 Hiroshima AI Process, the UK AI Safety Summit series (starting with Bletchley Park in 2023, continued in Seoul and Paris), and UNESCO's Recommendation on the Ethics of AI have all produced international soft-law frameworks and voluntary commitments. The Bletchley Declaration's commitment to cooperative safety evaluation of frontier models has led to the establishment of AI Safety Institutes in the UK, U.S., Canada, Japan, and Korea.
The AI Safety Institutes (AISIs) represent meaningful international coordination on one important question: the safety evaluation of frontier models before public release. The UK AISI and its U.S. counterpart (reorganized in 2025 as the Center for AI Standards and Innovation amid changed political conditions in Washington) have developed evaluation frameworks for dangerous capability assessment — particularly biosecurity, cybersecurity, and CBRN (chemical, biological, radiological, nuclear) risks from advanced models.
However, international AI governance remains fundamentally voluntary and limited in scope. There is no international AI governance treaty, no equivalent of the IAEA for AI systems, and the geopolitical competition among the U.S., China, and the EU makes coordinated governance politically difficult. This fragmented global landscape creates genuine compliance challenges for AI companies operating across jurisdictions.
What Builders Actually Need to Do
For organizations building AI products in 2026, the regulatory landscape translates to a concrete set of obligations and prudent practices.
Know your risk classification under the EU AI Act if you're selling into European markets. The high-risk category list is specific — read it carefully and honestly. If you're in a gray area, document your classification reasoning. Misclassifying a high-risk system as lower-risk is a compliance failure.
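One lightweight way to make that reasoning auditable is to record it as structured data rather than as a memo buried in a shared drive. The sketch below is a minimal illustration; the field names and categories are my own assumptions, not terminology or a template from the Act.

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal sketch of a risk-classification record for an AI system.
# Field names and categories are illustrative assumptions, not official
# terminology from the EU AI Act.

@dataclass
class RiskClassificationRecord:
    system_name: str
    intended_purpose: str
    classification: str              # e.g. "high-risk", "limited-risk", "minimal-risk"
    annex_iii_category: str | None   # which high-risk category applies, if any
    reasoning: str                   # why this classification was chosen
    reviewed_by: str
    review_date: date
    open_questions: list[str] = field(default_factory=list)

record = RiskClassificationRecord(
    system_name="resume-screening-assistant",
    intended_purpose="Rank inbound job applications for recruiter review",
    classification="high-risk",
    annex_iii_category="employment / worker management",
    reasoning="System influences access to employment, a listed high-risk use.",
    reviewed_by="legal + ML leads",
    review_date=date(2026, 1, 15),
)
```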
Build documentation infrastructure now. The EU AI Act's technical documentation requirements, and the informal documentation best practices that U.S. sector regulators increasingly expect, require systematic record-keeping: training data provenance, model validation results, bias testing, security assessments, and ongoing performance monitoring. Retrofitting this documentation onto a production system is painful and expensive — build the practices during development.
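As one illustration of building the practices during development, a team might log every evaluation run to an append-only record from day one. This sketch uses a simple JSON-lines file; the record schema is an illustrative assumption, not a regulatory format.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Sketch of an append-only evaluation log, kept from the start of development
# so documentation does not have to be retrofitted later. The record schema
# is an illustrative assumption, not a regulatory format.

LOG_PATH = Path("eval_runs.jsonl")

def log_eval_run(model_id: str, dataset: str, metrics: dict, notes: str = "") -> None:
    """Append one evaluation record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,          # which model/version was evaluated
        "dataset": dataset,            # evaluation data and its version
        "metrics": metrics,            # accuracy, bias measures, etc.
        "notes": notes,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(record) + "\n")

log_eval_run(
    model_id="credit-scorer-v3.2",
    dataset="holdout-2026q1",
    metrics={"auc": 0.87, "demographic_parity_gap": 0.04},
    notes="Quarterly bias re-test ahead of the August 2026 deadline.",
)
```

The design point is less the file format than the habit: if every validation run leaves a timestamped record, assembling a conformity-assessment dossier becomes a query rather than an archaeology project.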
Establish a legal basis for training data. The EU AI Act's copyright requirements, combined with ongoing litigation from content creators and publishers against AI companies (the New York Times v. OpenAI lawsuit and numerous others), mean that "we scraped the internet" is not a stable legal foundation. Understand what data you're training on, under what license, and whether your use is defensible under fair use, licensing agreements, or statutory exemptions.
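A concrete practice here is attaching license and legal-basis metadata to each training source at ingestion time, so the question "what are we trained on, and under what terms?" has a recorded answer. The fields and example entries below are assumptions for illustration.

```python
from dataclasses import dataclass

# Sketch of per-source provenance metadata attached at ingestion time,
# so the legal basis for each training corpus is recorded up front.
# Fields and example entries are illustrative assumptions.

@dataclass(frozen=True)
class DataSource:
    name: str
    license: str               # e.g. "CC-BY-4.0", "commercial-license", "unknown"
    legal_basis: str           # e.g. "license agreement", "TDM exception", "fair use claim"
    tdm_opt_out_checked: bool  # was an EU TDM reservation/opt-out checked?

sources = [
    DataSource("partner-news-archive", "commercial-license", "license agreement", True),
    DataSource("open-code-corpus", "MIT/Apache-2.0 mix", "license terms", True),
    DataSource("legacy-web-crawl-2021", "unknown", "fair use claim (contested)", False),
]

# Flag anything without a documented, checked legal basis for review.
for s in sources:
    if s.license == "unknown" or not s.tdm_opt_out_checked:
        print(f"REVIEW NEEDED: {s.name} ({s.license}; basis: {s.legal_basis})")
```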
Plan for transparency obligations. Multiple jurisdictions now require disclosure when users are interacting with AI, and this requirement will expand. Design your AI applications to surface this information clearly and contextually, not in fine print.
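In practice, this can start as simply as carrying a disclosure field alongside every AI-generated payload, so each client surface has the information available to display contextually. The response shape below is an illustrative assumption, not a mandated format.

```python
from dataclasses import dataclass, asdict
import json

# Sketch of carrying an AI-interaction disclosure with every generated
# response, so client UIs can surface it contextually rather than in
# fine print. The response shape is an illustrative assumption.

@dataclass
class AssistantResponse:
    text: str
    ai_generated: bool = True
    disclosure: str = "This response was generated by an AI system."
    model_id: str = "assistant-v1"   # hypothetical model identifier

resp = AssistantResponse(text="Your claim was pre-approved pending document review.")
print(json.dumps(asdict(resp), indent=2))
```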
Do not treat compliance as a one-time project. AI governance frameworks are evolving, model capabilities are evolving, and your compliance posture needs to evolve with them. Build ongoing regulatory monitoring into your AI governance function.
The regulatory landscape for AI is genuinely complex and still developing. The organizations navigating it best are those treating compliance not as a legal tax on innovation, but as a design constraint that produces better, more trustworthy products. The companies that will win in regulated AI markets are those that have made reliability, transparency, and accountability core product values — not those looking for the minimum viable compliance posture.