Legal

AI Transparency

Last updated: February 24, 2026

This page explains how Lexense uses AI, what the analysis can and cannot do, and which transparency measures we apply for Users.

1. Which AI model we use

Lexense uses the GPT-5.2 model to analyze document content and user questions. The model is used as a support tool for analysis and drafting, not as a system that makes legal decisions on behalf of the User.

2. How document analysis works

Analysis is based on processing document content and question context, then generating a summary, highlighting risks, suggesting next steps, and answering user questions. The process is probabilistic: the model predicts the most likely responses based on language patterns and the provided context.

3. What AI checks during analysis

Depending on the document type, AI may identify, among other things:
- key clauses and their practical meaning,
- potential legal and operational risks,
- missing or unclear provisions,
- possible inconsistencies and information gaps,
- recommended follow-up actions (e.g., clause updates, expert consultation).

4. AI system limitations

AI may make mistakes, miss relevant context, or produce imprecise conclusions. Results may become outdated due to changes in law, market practice, or case law. AI outputs are informational and supportive. They do not constitute legal advice or a binding interpretation of law.

5. When to consult a lawyer

For complex, high-risk, or high-impact legal matters, we recommend consulting a licensed lawyer. This applies especially to litigation, high-value negotiations, documents containing sensitive data, or situations requiring an individual legal strategy.

6. Private mode (on-device processing)

Lexense provides a private mode (on-device processing), where analysis is performed locally on the User's device. In this mode, document content is not transferred to an external AI model provider. Available features may depend on device capabilities, plan, and selected workflow.

7. Risk classification and transparency obligations

Lexense classifies its current AI use case as limited risk: a B2C informational tool that does not make autonomous legal decisions for the User. Under the transparency obligations of the EU AI Act (Article 50), applicable from August 2, 2026, we provide clear information about AI usage, model behavior, and limitations. The risk classification assessment is documented and maintained in internal compliance documentation, in line with the requirement to document that assessment (Article 6(4)).

Questions about AI and privacy: support@lexense.ai.