Requirements for governance and for general-purpose model transparency and notification began applying in August 2025. Most of the AI Act, including obligations for “high-risk” AI systems, becomes generally applicable in August 2026. Obligations for high-risk AI systems that serve as safety components in regulated products become applicable in August 2027.

By introducing a tiered, risk-based regime, the law bans certain AI practices outright, layers strict obligations on high-risk systems, and establishes unprecedented oversight for general-purpose AI and foundation models, especially those with systemic impact. Its reach is global: any company placing AI on the EU market must comply, and its penalties are steep.

For businesses and their counsel, compliance isn’t optional. It’s now the price of entry into one of the world’s largest markets, and it will influence how innovation is documented, safeguarded, and patented.

It’s important to understand the AI Act’s most consequential provisions for patent strategy, particularly how regulatory documentation duties intersect with data provenance, inventorship disputes, trade secret versus patent trade-offs, claim drafting under new compliance constraints, and geo-strategic filing decisions.

The AI Act distinguishes between four key categories of AI systems:

Minimal-risk systems (AI-enabled video games, spam filters)

Limited-risk systems (chatbots interacting with users)

High-risk systems (AI in health care, education, employment, infrastructure, biometric identification)

General-purpose and foundation models

While minimal- and limited-risk systems carry only modest obligations, the regulation of high-risk systems and general-purpose/foundation models forms the core of the AI Act and presents the most significant patent implications.

Before entering the market, high-risk systems must clear a conformity assessment and provide detailed technical documentation that includes:

System description and intended purpose

System architecture, algorithms, and datasets used

Risk management, testing, and validation protocols

Transparency and human oversight measures

Properly managed, these records can help distinguish human inventive contributions from AI-assisted outputs, providing valuable support for defending inventorship and reinforcing patent validity in a contested landscape.

Mandated documentation and dataset summaries may also reduce the viability of trade secret protection. Both the EU Trade Secrets Directive and US law require that information be kept confidential through reasonable protective measures.

By mandating disclosures, the AI Act can erode this confidentiality, making secrecy harder to sustain in practice. This shift alters the calculus for IP strategy. In some cases, patenting may become the safer path, as certain technologies can no longer be reliably safeguarded as trade secrets.
