Last Updated: March 2026 · Version: 2.0
This page explains how Versa uses AI to process your data — what models are involved, what they do with your information, and how decisions are made. This information is provided pursuant to Articles 13, 14, and 15(1)(h) GDPR.
Versa uses AI for three purposes:
When you practice a training scenario, you speak with an AI persona that responds in real time. The AI receives your speech (converted to text) and generates responses based on the scenario instructions your trainer has set up.
What the AI sees: Your speech (as text), the scenario description, the AI persona's role and instructions, and the conversation history within that session.
What the AI does NOT see: Your name, email, account details, or data from other users' sessions.
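The session-scoped context above can be pictured as a small payload. This is an illustrative sketch only: the function and field names are hypothetical, not Versa's actual API schema. The point it demonstrates is that account identifiers are never part of what the realtime model receives.

```python
# Hypothetical sketch of the context sent to a realtime provider.
# Field names are illustrative, not Versa's actual schema.

def build_realtime_context(scenario, persona, history, user_utterance_text):
    """Assemble the session-scoped context for the realtime model.

    Only session data is included; account identifiers
    (name, email, user ID) are deliberately absent.
    """
    return {
        "scenario_description": scenario,
        "persona_instructions": persona,
        "conversation_history": list(history),  # this session only
        "user_input": user_utterance_text,      # speech already converted to text
    }

context = build_realtime_context(
    scenario="Handle an upset customer requesting a refund",
    persona="You are the customer; stay frustrated but civil.",
    history=[{"role": "assistant", "text": "Hi, I want my money back."}],
    user_utterance_text="I understand, let me look into that for you.",
)
assert "name" not in context and "email" not in context
```

The assertion at the end makes the data-minimisation claim checkable: nothing outside the current session is present in the payload.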
Providers (you or your organisation select which to use):
| Provider | Location | How data is handled |
|---|---|---|
| Microsoft Azure (GPT Realtime) | EU (Sweden Central) | Data processed in EU. Not used for Microsoft's model training. |
| Google Gemini Live | EU (configurable) | Data processed in EU. Not used for Google's model training. |
| Hume AI (EVI) | USA | Audio processed for emotion-aware responses. Not stored after processing. Not used for Hume's model training. Hume derives vocal characteristics (tone, pace, pitch) but does not perform speaker identification or voiceprint extraction. |
| xAI (Grok) | USA | Text processed for conversation. Deleted within 30 days. User Content is not used for xAI's model training, though xAI may create de-identified usage derivatives. |
For US-based providers (Hume, xAI), data transfers are protected by Standard Contractual Clauses. Your organisation can choose which providers are available.
After a training session, Versa's AI reviews your performance and generates feedback. This is the core of the training experience.
How it works:
What the feedback is based on:
What the feedback is NOT based on:
The feedback model: Google Vertex AI (Gemini 2.5 Pro), processed in the EU (europe-west1). Google does not use your data to train its own models.
Important: AI feedback is educational guidance, not an automated decision. Scores and feedback are tools for learning, not final judgments. Your trainer reviews AI feedback and can correct inaccuracies. If your organisation uses feedback scores for formal assessments (grading, employment decisions), your organisation is responsible for ensuring appropriate human review and safeguards under Article 22 GDPR.
Trainers can chat with an AI assistant (Vi) to build training scenarios. Vi helps structure the scenario, define feedback criteria, and configure AI personas.
What Vi sees: The trainer's instructions and any content the trainer provides (including URLs fetched via Jina AI).
The model: Google Vertex AI (Gemini 2.5 Flash), processed in the EU (europe-west1).
Because AI feedback affects your training experience, here is a more detailed explanation of how scores are produced:
Step 1 — Input assembly. The system assembles your conversation transcript, the feedback criteria (titles and descriptions), and the scenario context (what the AI persona was simulating).
Step 2 — Criterion-by-criterion assessment. For each feedback criterion (e.g., "Active Listening", "Clear Communication"), the AI reads the relevant parts of your conversation and assesses whether the criterion was met, partially met, or not met. The AI produces a score and a text explanation for each criterion.
Step 3 — Coaching generation. Based on the criterion scores and the conversation content, the AI generates specific coaching recommendations — what you did well and what to practice.
Step 4 — Output. The scores, explanations, and coaching recommendations are presented to you and your trainer. The full conversation transcript is available alongside the feedback so you can verify the AI's reasoning.
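The four steps above can be sketched in code. This is a simplified illustration under stated assumptions: `assess_criterion` is a stand-in for the actual Gemini call, and all names and data structures are hypothetical, not Versa's real implementation.

```python
# Illustrative sketch of the four feedback steps.
# assess_criterion is a stand-in for the LLM call; a real system
# would send the transcript and criterion to the model here.

def assess_criterion(transcript, criterion):
    """Step 2 stand-in: returns (status, explanation) per criterion."""
    met = any(criterion["keyword"] in turn["text"].lower()
              for turn in transcript if turn["role"] == "user")
    return ("met" if met else "not met",
            f"Keyword check for '{criterion['title']}' (illustrative only).")

def generate_feedback(transcript, criteria, scenario_context):
    # Step 1: input assembly — transcript, criteria, scenario context
    inputs = {"transcript": transcript, "criteria": criteria,
              "scenario": scenario_context}
    # Step 2: criterion-by-criterion assessment
    results = []
    for c in inputs["criteria"]:
        status, explanation = assess_criterion(inputs["transcript"], c)
        results.append({"criterion": c["title"], "status": status,
                        "explanation": explanation})
    # Step 3: coaching generation based on the per-criterion results
    coaching = [f"Practice: {r['criterion']}" for r in results
                if r["status"] != "met"]
    # Step 4: output includes the transcript so the AI's
    # reasoning can be verified against the conversation
    return {"scores": results, "coaching": coaching,
            "transcript": inputs["transcript"]}
```

Note the design point in Step 4: the transcript travels with the feedback, so every score and coaching item can be checked against what was actually said.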
Limitations:
Access: You can view all AI-generated feedback, scores, and coaching recommendations in your session history. The underlying conversation transcript is always available.
Explanation: This page provides information about the logic involved in AI feedback. If you need further details about how a specific score was generated, contact your trainer or privacy@versa.training.
Object: You can object to the use of your data for AI model improvement at any time by contacting privacy@versa.training. Opting out does not affect your access to the platform or the quality of your training experience.
Erasure: You can delete individual sessions (including transcripts and feedback) at any time. Account deletion removes all your data within 30 days.
Human review: If AI feedback is used for a decision that significantly affects you (academic grading, employment evaluation), you have the right to request human review of that decision from your organisation.
When you provide feedback on your training experience (ratings, thumbs up/down), this feedback — along with the associated conversation context — may be used to improve Versa's AI models. This is explained in detail in our Privacy Policy (Section 5).
Key protections:
For organisations (B2B), the terms of AI data use are governed by the Data Processing Addendum (DPA) and the feedback licence in the Terms of Service; these data-use terms are negotiable.
| Provider | Role | Location | Training policy |
|---|---|---|---|
| Google (Vertex AI) | Feedback, scenario creation, realtime | EU | Does not train on customer data |
| Microsoft (Azure) | Realtime conversation | EU | Does not train on customer data |
| Hume AI | Emotion-aware realtime conversation | USA (SCCs) | Does not train on API data |
| xAI (Grok) | Realtime conversation | USA (SCCs) | Does not train on User Content; may use de-identified derivatives |
| Supabase | Storage of transcripts and feedback | EU | Does not train on customer data |
Full sub-processor list: versa.training/legal/subprocessors
Questions about AI processing? Contact privacy@versa.training