Russia’s AI Regulation Bill: Overview and Proposed Improvements for Neural Network Governance

Russia has prepared a draft federal law titled “On the Foundations of State Regulation of Artificial Intelligence Technology Applications in the Russian Federation” (the “Bill”). This is the first attempt to establish a unified legal framework governing the development, deployment, and use of AI technologies in Russia, including large neural network models, generative AI services, and AI systems used in critical infrastructure.

The Bill addresses a wide range of issues: from definitions and regulatory principles to intellectual property, deepfake labeling, and requirements for AI models used in government systems. Despite its ambitious scope, the text contains several controversial provisions and potential conflicts with existing legislation that require revision.

Below is an analytical overview of the Bill and a set of systemic proposals for regulating neural networks, with particular attention to key problematic provisions, including Article 13 and Article 17(2).


1. General Assessment of the Bill

The Bill is a timely and ambitious initiative. It lays the foundation for AI regulation at the federal level, sets the direction for future secondary legislation, and represents Russia's first attempt at a comprehensive governance model for AI.

Key Strengths of the Bill

First, the Bill introduces a structured legal terminology framework, defining artificial intelligence, AI models, AI services, and the main actors involved in AI-related activities.

Second, it establishes a clear chain of participants in the AI market:
model developer → system operator → AI service owner → user.
This structure provides the basis for allocating responsibilities and liability among market participants.

Third, the Bill guarantees citizen rights, including:

  • the right to be informed when AI is used,
  • the right to refuse autonomous AI decisions (in cases determined by the Government),
  • the right to challenge AI-driven decisions out of court,
  • the right to compensation for damages caused by improper AI use.

Fourth, the Bill adopts a risk-based regulatory approach (Article 5) and introduces a “trusted AI models” regime for government systems and critical information infrastructure (Article 8).

Fifth, it contains a progressive rule allowing text and data mining (TDM) from protected works for AI training when lawful access exists. This could significantly reduce legal barriers for AI model development.

Sixth, the Bill provides support measures for computing infrastructure (Article 20), including special regulation for data centers and supercomputers, reflecting the real technological needs of the industry.


2. Main Risks and Weaknesses of the Bill

Despite its clear strengths, the current version contains provisions that may lead to legal uncertainty and slow down industry development.

2.1. Overly Broad Definition of AI

The definition of AI in Article 3(1) is excessively broad. It may cover not only general-purpose neural networks but also basic machine learning functions such as spam filters or photo enhancement.

As a result, legal obligations designed for high-risk AI could formally apply to millions of devices and everyday software tools, making compliance unrealistic.


2.2. Unrealistic Criteria for “Sovereign AI Models” (Article 7)

Article 7 introduces the categories of “sovereign” and “national” AI models and requires that training be conducted on datasets formed exclusively in Russia by Russian entities.

In practice, foundation models (LLMs) rely on global knowledge bases, including scientific, technical, and cultural materials. Therefore, such territorial restrictions could prevent even leading Russian LLMs from being recognized as “sovereign” if they rely on international corpora.


2.3. Unclear Liability Standard (Article 11)

The wording “knew or should have known” about the possibility of unlawful output effectively imposes a constructive-knowledge standard, creating a presumption of liability for developers and service owners.

This approach is particularly problematic for generative AI, where outputs are probabilistic and cannot always be fully predicted, even with filtering mechanisms. The provision increases litigation risks and creates regulatory pressure on innovation.
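For context on why prediction is impossible in principle: generative models sample each output token from a probability distribution, so identical inputs can yield different outputs across runs. A toy illustration (not tied to any provision of the Bill; the scores and token names below are invented):

```python
# Toy illustration: sampling from a softmax distribution means the same
# input can produce different outputs on different runs.
import numpy as np

rng = np.random.default_rng()
tokens = ["safe", "borderline", "unlawful"]  # invented candidate outputs
logits = np.array([2.0, 1.5, 0.5])           # invented model scores

def sample(temperature: float = 1.0) -> str:
    """Draw one output token from the temperature-scaled softmax."""
    p = np.exp(logits / temperature)
    p /= p.sum()
    return rng.choice(tokens, p=p)

print([sample() for _ in range(5)])  # varies between runs
```

Even aggressive filtering only lowers the probability of unlawful output; it cannot reduce it to zero, which is why a pure foreseeability standard fits generative systems poorly.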


2.4. Conflict with the Russian Civil Code on Intellectual Property (Article 13)

Article 13 recognizes IP protection for works created by automated systems if they meet originality requirements.

However, Article 1228 of the Russian Civil Code defines the author exclusively as a natural person whose creative labor produced the work. This creates a legal contradiction: a work may be “protectable” but lack a legally recognized author, meaning exclusive rights may not arise. As a result, contractual transfers of rights could be legally questionable.


2.5. A Loophole in Deepfake Labeling Rules (Article 12)

Article 12(5) allows users to waive human-readable labeling of synthetic content through contractual terms.

While the waiver must be “specific, informed, conscious, and unambiguous,” in practice it may be reduced to ticking a checkbox. This creates a legal pathway for large-scale distribution of unlabeled deepfakes and undermines trust in digital content.


2.6. “Rubber Clause” in Article 17(2)

Article 17(2) states that cross-border AI technologies may be restricted or prohibited in cases established by Russian legislation, without providing any criteria.

This effectively allows the restriction of any foreign AI technology without transparent legal grounds, creating uncertainty for business and international cooperation.


3. Systemic Proposals for Neural Network Regulation

3.1. Refining the AI Definition and Excluding Basic ML Functions

A more practical approach would be to introduce an autonomy criterion, similar to the EU AI Act:

“AI is a set of technological solutions capable of imitating human cognitive functions, including autonomous decision-making or content generation without direct human involvement…”

Additionally, the law should exclude from its scope:

  • embedded ML functions of general purpose (autocorrect, noise reduction, spam filtering),
  • systems used solely for scientific research,
  • open-source AI models used for non-commercial purposes.

Without such exclusions, the Bill risks becoming technically unenforceable.


3.2. Revising Sovereign Model Requirements (Article 7)

The territorial dataset requirement should be replaced by a control-based approach:

“datasets must be managed by Russian legal entities or Russian citizens (storage, filtering, governance), regardless of their original source.”

The law should explicitly allow foreign data sources provided that:

  • final filtering and validation occur in Russia,
  • the model is fine-tuned with priority given to Russian cultural, linguistic, and legal context.

3.3. Strengthening Citizen Rights: Opt-Out and the Right to Explanation

The right to refuse AI-driven decisions should not be left entirely to Government decrees. The law should itself define areas where non-AI alternatives must be available:

  • public services,
  • healthcare (diagnosis and treatment),
  • education (assessment and recommendations),
  • judicial and administrative decisions affecting rights.

For other sectors, the Bill should require:

  • disclosure that AI is being used,
  • the right to receive an explanation of the key factors influencing an automated decision (a minimal illustration follows below).
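As a purely illustrative sketch of what an explanation of “key factors” could look like in practice (the Bill does not prescribe a method; the model, feature names, and data below are invented), a service operator could rank the inputs that contributed most to a linear scoring decision:

```python
# Hypothetical sketch: surfacing the key factors behind an automated decision.
# Assumes a simple linear scoring model; feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "credit_history_years", "open_debts"]
X = np.array([[50, 10, 1], [20, 2, 4], [70, 15, 0], [15, 1, 5]], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

def explain(applicant: np.ndarray, top_k: int = 3) -> list[tuple[str, float]]:
    """Rank features by their signed contribution (coefficient * value)."""
    contributions = model.coef_[0] * applicant
    order = np.argsort(-np.abs(contributions))[:top_k]
    return [(features[i], round(float(contributions[i]), 3)) for i in order]

print(explain(np.array([30.0, 5.0, 2.0])))
```

Coefficient-times-value attribution is only meaningful for linear models; more complex systems would need model-appropriate attribution methods, which is an argument for the law to fix the disclosure goal rather than a specific technique.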

3.4. Reforming Liability Rules (Article 11)

The phrase “should have known” should be removed and replaced with a causation-based standard:

“liability arises only if the party violated its obligations under Article 10 and such violation is directly linked to the resulting harm.”

The vague term “exhaustive measures” should also be removed. Instead, a safe harbour mechanism should be introduced: the Government approves a list of minimum compliance measures which, if implemented, protect the developer/operator from liability.

Additionally, mandatory liability insurance for high-risk AI operators (finance, healthcare, transport) should be considered.


4. Article 12: Synthetic Content Labeling

The contractual waiver of human-readable labeling should be prohibited for audiovisual content (video, audio, photorealistic images).

For content intended for technical or internal use, a contractual waiver may remain possible, but machine-readable labeling (metadata) should always be mandatory, as sketched below.
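By way of illustration only (the Bill does not prescribe a labeling format, and the key names below are invented), a generation service could embed a machine-readable label in image metadata at save time; industry efforts such as C2PA pursue the same goal with signed provenance:

```python
# Illustrative sketch: embedding and reading a machine-readable "synthetic
# content" label in PNG metadata with Pillow. The key/value scheme is
# hypothetical; no format is mandated by the Bill.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src: str, dst: str, generator: str) -> None:
    meta = PngInfo()
    meta.add_text("ai-generated", "true")   # hypothetical label key
    meta.add_text("generator", generator)
    with Image.open(src) as img:
        img.save(dst, pnginfo=meta)

def read_label(path: str) -> dict:
    with Image.open(path) as img:
        return dict(img.text)               # PNG text chunks exposed by Pillow

label_as_synthetic("output.png", "output_labeled.png", "example-model-v1")
print(read_label("output_labeled.png"))
```

Plain metadata is trivial to strip, which is a further reason the human-readable label should remain non-waivable for audiovisual content.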


5. Article 13: Intellectual Property (A Critical Area)

5.1. Conflict with the Civil Code

The optimal approach would be to amend Part IV of the Russian Civil Code by introducing a legal category similar to neighboring rights, vesting rights in the person who organizes and ensures the creation of a result using AI.

A compromise approach within the Bill would be to recognize copyrightability only when a human makes a substantial creative contribution (unique prompt design, selection, editing, compilation).

The law should also clarify that service agreement provisions on ownership apply only to rights legally recognized under Russian law.


5.2. Risks in the TDM Exception (Article 13(5))

The phrase “available for analysis” may be interpreted as allowing training on any published content without permission.

To address this, the Bill should:

  • introduce an opt-out mechanism through machine-readable metadata,
  • impose an obligation on developers to check and respect such restrictions (a minimal check is sketched after this list),
  • establish a fair remuneration mechanism for rightsholders in commercial model training,
  • require disclosure of key dataset sources,
  • clarify that the exception does not apply when outputs reproduce protected content in substantial volume.
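A minimal sketch of what the proposed “check and respect” obligation could look like for a dataset crawler, assuming a robots.txt-style opt-out signal (the user-agent name is invented; the actual machine-readable convention, e.g. TDM reservation metadata, would be set by secondary legislation):

```python
# Minimal sketch: honoring a machine-readable training opt-out before
# collecting a page for an AI training dataset. Uses the stdlib robots.txt
# parser; the crawler user-agent name is hypothetical.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

TRAINING_AGENT = "ExampleTrainingBot"  # hypothetical training-crawler identity

def may_use_for_training(url: str) -> bool:
    parts = urlparse(url)
    rp = RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()
    except OSError:
        return False  # fail closed: if the opt-out cannot be checked, skip
    return rp.can_fetch(TRAINING_AGENT, url)

url = "https://example.com/article"
print("collect" if may_use_for_training(url) else "opt-out in effect; skip")
```

Whether a developer must fail closed when the opt-out signal is unreachable, as in this sketch, is exactly the kind of detail the Bill should settle.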

6. Article 17(2): The Need for Clear Criteria

To avoid arbitrary restrictions, the Bill should include an explicit list of grounds for prohibiting cross-border AI technologies, such as:

  • national security threats,
  • violation of data localization requirements,
  • non-compliance with trusted model requirements for critical infrastructure,
  • systematic generation of illegal content without effective filtering mechanisms.

Without such criteria, the clause remains overly broad and unpredictable.


7. Major Gaps: Issues the Bill Should Cover

7.1. Personal Data and AI Training

The Bill does not adequately address how AI training interacts with personal data protections under Russian Federal Law No. 152-FZ. It should regulate:

  • lawful grounds for processing personal data during AI training,
  • the limits of the “right to be forgotten” as applied to already-trained models,
  • data protection impact assessments (DPIA) for high-risk AI systems.

7.2. Open-Source Models and API Integration

The Bill should separate liability between:

  • the developer of a base open-source model,
  • the entity conducting fine-tuning,
  • the service operator deploying the model.

It should also establish clear responsibility rules for API-based model usage.


7.3. Regulatory Sandboxes

The Bill should include provisions on experimental legal regimes (regulatory sandboxes) for high-risk AI sectors, allowing innovation under controlled legal conditions.


7.4. Institutional Regulator Design

The Bill introduces trusted model registries and expert bodies, but does not define a unified regulator or clearly allocate authority between the Ministry of Digital Development, FSB, FSTEC, Roskomnadzor, and other agencies.

Such distribution must be explicitly stated to avoid regulatory conflicts and duplication.


8. Conclusion: Key Priorities for Revision

The Bill has the potential to become the foundation of AI regulation in Russia. However, without amendments it may become either purely declarative or a barrier to AI industry growth.

The key revision priorities include:

  • narrowing the AI definition and excluding basic ML functions,
  • replacing territorial dataset criteria with control-based criteria (Article 7),
  • reforming liability standards and introducing a safe harbour framework (Article 11),
  • resolving the intellectual property conflict with the Civil Code (Article 13),
  • closing the labeling loophole for deepfake content (Article 12),
  • clarifying cross-border AI restriction grounds (Article 17),
  • adding provisions on personal data, open-source models, and API usage,
  • introducing regulatory sandboxes,
  • defining institutional roles of regulators.

If properly revised, the Bill could create a workable legal environment balancing citizen protection, technological sovereignty, and innovation. Without revision, it risks becoming another example of legislation that fails to reflect the technological realities of modern neural networks.