Digital shield protecting audio data during a multilingual event
AI · Privacy · RSAI · GDPR · Enterprise

Your Data Is Safe. But Maybe Not from Who You Think.

2026-04-09
Converso
6-8 min

Everyone asks us: "Does the AI use our data for training?" The answer is no. But the real risk to your event's confidentiality is somewhere nobody is looking.


For the past few months, every time we present RSAI — our simultaneous interpretation system powered by artificial intelligence — the question comes up. Sometimes whispered, sometimes stated bluntly, sometimes delivered by the company's legal department in a three-page email:

"Is what's said during our event used to train your AI?"

The question is legitimate. The answer is no. But before we explain why, we want to tell you something.

Twenty-Five Years of Trust, Then One Word

We've been managing the audio for your events since 2001. Remotely since 2017. Every word spoken at your conferences, assemblies, and European works councils passes through our servers, our app, our control rooms. In 25 years, no one ever asked us: "Do you record everything?"

Because trust was there. And it still is.

Then we added artificial intelligence to the service, largely at the request of the very clients who are now asking for guarantees. We deeply believe in the value of human interpreters, but we can't ignore the needs of an evolving market. And suddenly the question became urgent. Yet the infrastructure is the same. The contractual guarantees are the same. In fact, they're more detailed than ever: we have an AI Annex in our General Terms & Conditions that specifies exactly how we handle data.

The fear doesn't come from our service. It comes from the confusion between the AI everyone uses every day — free chatbots, online translators, voice assistants — and the enterprise AI we use. They are two different worlds. With different rules.

The Problem Isn't Free vs. Paid

This is the most important thing in this entire article, and you don't need technical jargon to understand it.

When you use a chatbot or an online translator — even a paid one — the content you enter might be used to train the model. In many consumer services, this happens by default: you need to dig into the settings, find the option, and switch it off. Who does that? Almost nobody. And even those who do — how do you verify it actually works?

The point isn't whether you pay for the service or not. The point is whether a contract guarantees what happens to your data — and whether non-training is the baseline rule, not an option buried in a submenu.

When we use AI for your event, it's different. We use enterprise APIs from international technology providers — professional interfaces with specific contracts. In these APIs, non-training is the default behaviour: there's nothing to deactivate. It's the rule, not the exception.

If you want a picture: it's like the difference between talking on speakerphone on a train — where the person next to you is hearing your financial data along with their podcast — and talking in a private meeting room, door closed, nobody listening.

The content is the same. The protection isn't.

How It Actually Works (No Jargon, We Promise)

Here's what really happens when we use AI for your event:

Converso does not develop artificial intelligence. We don't have our own model to train. Our system is an architecture that orchestrates, in real time, AI services from leading international technology providers. Each one specialises in its own area: speech recognition, translation, voice synthesis.

Since there is no "Converso model" to feed, we have no interest — neither technical nor economic — in collecting, storing, or reusing the contents of your events.

Data passes through the system; it doesn't stay inside. Audio is processed in real time and is not retained beyond the delivery of the service. Intermediate transcriptions only serve the processing in progress and are deleted when the event ends. Translated audio and subtitles are streamed to participants and are not archived.
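
For the technically curious (everyone else can skip ahead): here is a deliberately simplified sketch, in Python, of the pass-through pattern described above. Every name in it is hypothetical and the real system is far more involved; the sketch only illustrates the principle that matters, namely that there is no storage step anywhere in the loop.

```python
# Conceptual sketch only -- not Converso's actual pipeline, and every
# function name here is hypothetical. It illustrates one idea: each
# audio chunk is processed in memory, forwarded, and never stored.

from dataclasses import dataclass
from typing import Iterator


@dataclass
class AudioChunk:
    pcm: bytes        # a fraction of a second of raw floor audio
    language: str     # source language of the speaker


def speech_to_text(pcm: bytes, lang: str) -> str:
    """Stub standing in for an enterprise speech-recognition API."""
    return "<recognised text>"


def translate(text: str, target_lang: str) -> str:
    """Stub standing in for an enterprise translation API."""
    return f"<{text} in {target_lang}>"


def synthesise(text: str, lang: str) -> bytes:
    """Stub standing in for an enterprise voice-synthesis API."""
    return text.encode()


def interpret_stream(chunks: Iterator[AudioChunk], target_lang: str) -> Iterator[bytes]:
    """Orchestrate the three specialised services, retaining nothing."""
    for chunk in chunks:
        text = speech_to_text(chunk.pcm, chunk.language)
        translated = translate(text, target_lang)
        yield synthesise(translated, target_lang)
        # No database write, no content log, no training corpus:
        # chunk, text and translated simply go out of scope here.
```

The interesting part of the sketch is what's absent: the loop has nowhere to keep your words even if it wanted to.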

Our value doesn't lie in your data. It lies in making all of this work — in real time, with a technician monitoring, and with a backup plan if something goes wrong.

The Seatbelt and the Fraying Anchor

So far, so clear: you asked us for guarantees, we gave them to you, and they're solid. But here's the part of the article you might not expect.

Protecting an event's data is like a seatbelt: buckling up isn't enough. Every anchor point needs to hold. If you ask your interpretation provider for guarantees — and you're right to — but nobody checks what tools the other people handling the event's content use every day, you have a seatbelt buckled to an anchor that might not be as solid as you think.

The feeling of security is there. The actual security? Maybe not.

Let us explain with a few scenarios. These aren't proven facts — they're plausible situations that, after 25 years in this industry, we know are far more common than anyone thinks.

Five Scenarios Nobody Is Thinking About


1. A manager preparing a presentation

An executive needs to present at an international event. Short on time, they upload the slides to a chatbot to translate and improve them. The slides contain unpublished financial data, strategies, patent information.

That chatbot operates under consumer policies: content is used for training. Once ingested, it can't be recovered or deleted.


2. A design agency laying out materials

A graphic designer receives confidential content for the event's visual materials. They use AI features built into their editing software — "improve text", auto-translate — without checking where that data goes.

Many creative tools with integrated AI operate in consumer mode and send content to third-party servers. The designer probably doesn't know. The client doesn't either.


3. An interpreter preparing terminology

A professional, even under NDA, receives confidential preparatory documents. To extract a glossary or understand technical terminology, they use a chatbot.

The confidential documents end up in a consumer service that uses them for training. The interpreter is acting in good faith. The NDA offers no retroactive protection: the data has already been ingested.


4. A consultant transcribing recordings

After the event, someone needs to write a report. They upload the full recordings to a free or low-cost AI transcription service.

Several of these services declare they use recordings to train their models — after "de-identification" that the client cannot verify and that doesn't eliminate the informational content anyway.


5. An assistant translating the meeting notice

They need to send the notice for a Board meeting or European Works Council in multiple languages. They copy the text — with agendas, names, sensitive topics — into an online translator.

No guarantee of non-retention. The Board agenda becomes material processed by a consumer service.


The Pattern Is Always the Same

The mechanism is identical in all five scenarios:

  1. A person acting in good faith uses a consumer AI service for an operational task
  2. The service operates under consumer policies: data is used for training by default
  3. Nobody checks, nobody notices
  4. The risk is invisible, silent, and irreversible

None of these scenarios involve malice or negligence. They simply require that the distinction between consumer AI and enterprise AI — which is the heart of the matter — remain unknown to the vast majority of people.

And that creates a blind spot.

The Right Questions to Ask (Of Everyone)

Many of the organisations we work with already have solid structures in place: security policies, certifications, NDAs with every supplier. Some ask us to sign very detailed compliance documents — and rightly so.

But the distinction between consumer AI and enterprise AI is still poorly understood, even within highly structured organisations. It's not a matter of competence: the topic is new and changing at lightning speed. That's why we think it's useful to share a few key questions.

The point isn't free vs. paid. Even a paid service can use your data for training if the settings allow it. The right question isn't "how much does it cost?" but "what happens to my data? And where is that written in the contract?"

AI policies should cover everyone who touches the content. If your organisation already has data security policies, it's worth checking whether they also cover the use of AI tools by suppliers, consultants, and external collaborators.

Awareness training is the best protection. Most risks arise from unawareness, not from ill intent. Just explain the difference — a service with contractual guarantees vs. a consumer service without them — and people get it.

One simple question that changes everything. Not just "Are my data safe with you?" but "What tools do you use to work on our content?" It's a question few people ask, and it matters across the entire supply chain.

Our Guarantees, in Brief

For clarity, here is what we guarantee:

  - Your event's content is not used to train any AI model
  - Audio data is processed in real time and not retained beyond the delivery of the service
  - Intermediate transcriptions are deleted at the end of the event
  - Technology providers do not use enterprise API data for model training
  - Contractual guarantees are formalised in the AI Annex and General Terms & Conditions
  - Full GDPR compliance, with processing on European infrastructure

If you want to go deeper, we have a detailed technical document that explains everything. Ask your account manager or get in touch.

In Conclusion

You asked the right question. And the answer is: with us, your data is safe.

But real protection isn't a single anchor. It's a complete seatbelt where every anchor point holds. And it might be worth checking the other ones too.

If you'd like help understanding how to protect your event's content across the board — not just during interpretation, but throughout the entire supply chain — we're here. It's what we've been doing for 25 years: not just translating, but taking care of the entire event.


Converso® is a registered trademark of ABB S.r.l. — Innovators by Tradition since 2001.


This article is for informational purposes only and does not constitute legal advice or a binding contractual statement. For binding guarantees, please refer to our General Terms & Conditions and the AI Annex. For specific questions about data privacy, contact us at verso@verso.it.

Want to learn more?

Contact us for a free consultation about your next event.