AI, doppiaggese and Peppa Pig

Earlier this year, I had the pleasure of attending ITS: Localisation 2024, an exceptionally interesting and thought-provoking conference on the impact of AI on our industry. What made it so remarkable was that the speakers came from all corners of the localisation and entertainment industry: from dubbing script writers to representatives of cutting-edge technology companies, from content holders to LSPs and everything in between. This diversity certainly made the panels lively, but most importantly, it gave the audience an invaluable opportunity to see things from completely different points of view. There was so much to take in, so many conflicting interests and ethical issues to untangle, but one thing is clear: AI is here to stay, and it will inevitably be employed more and more throughout the localisation process.

A few days passed, allowing all these notions to settle, and I had a chance to reflect on what was said, particularly regarding the use of AI in dubbing – a topic that, coming from Italy, is undeniably very close to my heart. I fast-forwarded a few years and asked myself what would happen to our language when a high percentage of dubbing scripts are created with AI. As discussed at the conference, it's safe to say that human post-editing will still be required for the foreseeable future. However, it was also pointed out that working on a translation that has already been created has been shown to limit a person's creativity. Additionally, budget and time constraints will limit the amount of editing that a translator can perform on an AI-produced dubbing script. Before I continue, it's important to note that I am not here to talk about quality; in fact, I am taking for granted that by the end of the translation process the quality will be "good enough" (with the definition of "good enough" being very much project-specific).

So, what are we talking about if the quality is "good enough"? We are talking about a phenomenon that in Italy we call doppiaggese ("dubbese" in English). This term refers to a variety of the language used specifically in dubbing, characterised by a significant number of Anglicisms, lexical and syntactic calques from English, a flattening of register, style and regional variation, artificial formality, and translational stock phrases. The main reason behind these linguistic choices is very practical: lip-syncing. Each dubbed line needs to be as close as possible to the original, both in length (number of syllables) and phonetic structure (closed vs. open vowels, for instance), to avoid being left with actors moving their lips without producing any sound, or talking with their mouths closed. As imperfect as this might look under scrutiny, dubbing is how we consume audiovisual entertainment in Italy – it's entirely accepted and represents something we are extremely proud of as a country.

Recent studies have shown how this variety of Italian has “contaminated” the everyday written and spoken language. Phrases such as assolutamente sì (instead of certamente, from “absolutely”), or ci puoi scommettere! (instead of senza dubbio!, from “you can bet!”), or verbs such as spoilerare are becoming more and more common in real-life interactions. This is no surprise considering that dubbing is how we access audiovisual entertainment from a very young age. However, doppiaggese is still a human creation! It certainly sounds less natural than standard Italian; some may even argue that, over time, it can contribute to making certain words and phrases sound obsolete and fall into disuse. But it still comes from dubbing script creators and adapters, so in a way, its influence could still be seen as part of the natural evolution of the language. 

So here is the question: Will an AI version of doppiaggese influence the way we speak? Peppa Pig and Bluey will speak that language, and our children will be exposed to it (even more than we were, judging by the ever-growing amount of audiovisual material children have access to nowadays). As the sponges they are, they will learn from it. Can this still be considered part of the "natural" evolution of the language, simply because it's a result of the "natural" evolution of technology? And what can we expect from it?

A translation can be "good enough" and still not create the same effect on the target audience as the original. That was actually the topic of my MA thesis, when I had the insane idea of comparing the entire dubbing of Mel Brooks' Young Frankenstein across different languages. A translation could even be "very good" and create the same effect, yet fail to use the huge amount of lexical variation that a language can offer, based on heavily contextual situations and meanings. And we are back to my original question: Is an AI version of doppiaggese going to be better or worse than us at maintaining this?

Are children (and adults) going to be exposed to a wider or narrower variety of terminology and structures compared to what we are used to today? Will generative AI create Anglicisms, calques and neologisms in general, just like we do? Will it even need to create new words, when it could probably just change the way the lips move on the screen to achieve perfect translation and perfect lip-syncing at once? In the long run, what impact is this going to have on the evolution of a language? 

If you have any of the answers, please do feel free to spoilerare!

Written by Martina Mambriani – Head of Client Services