Confidentiality and Risks: How to Safely Use Neural Networks in Legal Practice

Neural networks are becoming a standard tool of legal work, but responsibility for their use lies entirely with the lawyer, not the machine.

Artificial intelligence offers lawyers unprecedented opportunities: from document analysis and case law research to drafting procedural documents. However, the flip side of these technologies involves serious legal risks—leaks of confidential information, violations of professional ethics, and even court sanctions. This article will help you understand the key dangers and implement effective security measures in your practice.

The Nature of the Risk: Why Neural Networks Are Dangerous for Lawyers

Neural networks, particularly large language models (LLMs), are not thinking entities. They do not understand the meaning of text; they function as statistical mechanisms, selecting the most probable word sequences based on training data. It is this feature that creates fundamental problems:

  • Hallucinations: Models can generate entirely fictional references to legal norms, court decisions, or facts with a high degree of confidence. There are known cases where lawyers, relying on ChatGPT, submitted documents to court containing non-existent precedents and faced disciplinary action as a result.

  • Liability: A neural network is not a legal entity. The lawyer answers to the client and the court for any document, output, or advice generated by AI and used in their work. The AI developer bears only contractual obligations to the user company.

  • Data Confidentiality: Most public neural networks (ChatGPT, DeepSeek, etc.) save and use entered data for training their models by default. By submitting case information to such a service, you are potentially disclosing it to third parties—developers, cloud platform administrators, and even government agencies.

Key Legal Risks of Using Neural Networks

Using AI in legal practice creates a set of interrelated threats. The main ones are summarized below.

  • Breach of Confidentiality and Data Leakage
    Specific manifestations: transferring personal data, trade secrets, or information covered by attorney-client privilege to public neural networks for analysis or text generation.
    Legal consequences: administrative liability under Art. 13.11 of the Russian Code of Administrative Offenses (fines of up to 6 million rubles for legal entities), website blocking by Roskomnadzor, criminal liability under Art. 137 of the Russian Criminal Code, and client lawsuits for breach of professional ethics.

  • Violation of Data Localization Requirements
    Specific manifestations: processing personal data of Russian citizens through foreign services (e.g., ChatGPT) whose servers are located abroad.
    Legal consequences: violation of Clause 5, Art. 18 of the Russian Law "On Personal Data," which requires that data of Russian citizens be stored in Russia. Fines: 100,000–200,000 rubles for officials and 1–6 million rubles for companies.

  • Professional Incompetence and AI "Hallucinations"
    Specific manifestations: uncritical trust in generated texts, citing non-existent norms or judicial acts, providing clients with legally incorrect conclusions.
    Legal consequences: violation of the principle of competent representation; reputational damage, disciplinary sanctions from bar associations, material claims from clients, and court sanctions for submitting knowingly false information.

  • Violation of Ethical Norms and Lack of Transparency
    Specific manifestations: concealing from the client the fact that AI was used in preparing legal positions or documents; billing "AI work" at a lawyer's rates.
    Legal consequences: violation of the client's right to informed consent. The American Bar Association, in Formal Opinion 512, explicitly states that a boilerplate contract clause is insufficient: informed client consent to the processing of their data by AI is required.

Practical Security Guide: How to Minimize Risks

1. Create a Clean and Controlled Work Environment

  • Isolated Browser: Use a separate browser (Chrome, Firefox) exclusively for working with neural networks. It should not have saved passwords, logged-in corporate email, banking services, or cloud storage.

  • Separate Account: Create a new email and account to be used only for registering with AI services.

2. Implement a Mandatory Data Anonymization Procedure

Before uploading any document to a neural network (even a "protected" corporate version), remove all sensitive information.

  • Automate the Process: Use NER (Named Entity Recognition) technologies or simple macros in Word ("Find and Replace").

  • What to Replace: All full names, addresses, passport data, phone numbers, document details (contract number, case number), names of counterparty companies. Replace them with abstract labels: "Client-1", "Address-A", "Counterparty-B".

  • Simple Test: If the anonymized document can be safely sent by email or uploaded to public cloud storage, its security level is sufficient for transfer to AI.
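The "Find and Replace" approach above can be sketched in a few lines of Python. This is a minimal dictionary-based pseudonymizer, not real NER: the mapping of sensitive values to abstract labels is maintained by hand, and every name and number in the example is invented for illustration.

```python
import re

def pseudonymize(text: str, mapping: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace each sensitive value with its label; return the cleaned
    text plus the reverse mapping needed to restore names afterwards."""
    reverse = {}
    for value, label in mapping.items():
        # re.escape keeps dots, slashes, etc. in values from being
        # treated as regex metacharacters
        text = re.sub(re.escape(value), label, text)
        reverse[label] = value
    return text, reverse

# Hypothetical mapping maintained by the firm for this matter
mapping = {
    "Ivan Petrov": "Client-1",
    "Contract No. 42/2024": "Document-A",
    "LLC Vector": "Counterparty-B",
}

source = "Ivan Petrov signed Contract No. 42/2024 with LLC Vector."
clean, reverse = pseudonymize(source, mapping)
print(clean)  # -> Client-1 signed Document-A with Counterparty-B.
```

Keeping the reverse mapping inside the firm lets a lawyer restore the real names in the AI output without the names ever leaving the controlled environment.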

3. Choose Professional Tools with Appropriate Certification

  • Avoid Public Chatbots: Do not use ChatGPT, DeepSeek, Gemini, and similar services for working with legal data.

  • Look for "Zero Data Retention": Choose solutions that guarantee your data is not stored and is not used for training. An example is Thomson Reuters' CoCounsel, which holds ISO/IEC 42001 certification and features a zero-retention architecture for client data.

  • Verify Jurisdiction: For working with data of Russian citizens, ensure the service provides storage and processing on Russian territory.

4. Develop an Internal Policy and Train Your Team

  • Clear Regulations: Define which tasks can be delegated to AI (initial analysis, finding practice, drafting templates) and which absolutely cannot (final legal conclusions, direct communication with the client).

  • Mandatory Verification: Any result obtained from AI must be thoroughly double-checked by a lawyer for accuracy, relevance, and timeliness. Use cross-verification: a result from one neural network can be loaded into another with a prompt to verify the relevance of citations.

  • Train in Prompt Engineering: Teach your team how to formulate queries correctly. A good prompt includes: a Role ("You are an experienced corporate lawyer"), Context, a Specific Task, Limitations ("Do not invent information"), and an Output Format.

  • Working with Clients: Include a clear clause in the client agreement about the use of AI tools, the purposes of processing their data, and confidentiality guarantees. Obtain separate informed consent.
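The cross-verification step can be partly supported by tooling. The sketch below is a hypothetical helper that merely extracts citation-like fragments from an AI answer so a lawyer can check each one against a primary source; the regular expression is illustrative and would need adapting to the citation formats your practice actually encounters.

```python
import re

# Matches two illustrative citation shapes: article references
# ("Art. 13.11") and case numbers ("Case No. A40-12345/2023").
# This does NOT prove a citation is real; it only lists candidates
# for manual verification.
CITATION_PATTERN = re.compile(
    r"(Art\.\s*\d+[\w./-]*|Case\s+No\.\s*[\w/-]+)", re.IGNORECASE
)

def citations_to_verify(ai_answer: str) -> list[str]:
    """Return every citation-like fragment found in the answer."""
    return CITATION_PATTERN.findall(ai_answer)

answer = ("Liability follows from Art. 13.11 of the Code; "
          "see also Case No. A40-12345/2023.")
print(citations_to_verify(answer))
```

A flagged list like this makes it harder for a hallucinated precedent to slip through simply because nobody noticed it was a citation at all.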
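As an illustration of the Role / Context / Specific Task / Limitations / Output Format structure, here is a hypothetical prompt builder; the function name and all sample values are invented for this sketch and are not tied to any particular AI service.

```python
def build_prompt(role: str, context: str, task: str,
                 limitations: str, output_format: str) -> str:
    """Assemble a prompt from the five fields the guideline names."""
    return "\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Limitations: {limitations}",
        f"Output format: {output_format}",
    ])

# Example values; note the context is already anonymized
prompt = build_prompt(
    role="You are an experienced corporate lawyer.",
    context="An anonymized supply contract between Client-1 and Counterparty-B.",
    task="List the clauses that allocate liability for late delivery.",
    limitations="Do not invent information; if a clause is absent, say so.",
    output_format="A numbered list with a one-sentence summary per clause.",
)
print(prompt)
```

A fixed template like this also makes prompts auditable: the firm can review what was sent to the service and confirm that only anonymized material left the controlled environment.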


Artificial intelligence in law is no longer the future but the present. However, the path to its effective and safe adoption lies not through blind trust but through conscious risk management. The main principle remains unchanged: if a tool carries an unacceptable risk of data leakage or loss of control over information, it should not be used.

A neural network is a powerful but blind tool. It can save time but cannot assume responsibility. Ultimately, it is the professionalism, critical thinking, and ethical responsibility of the lawyer that turn technological potential into a real advantage for the client.