Lyrebird Is An AI That Can Accurately Imitate Any Voice
Natural-sounding synthesized voices have long eluded developers, even on the best AI assistants such as Microsoft's Cortana and Amazon's Alexa. But Lyrebird, a new AI startup out of Montreal, has just come on the scene with surprisingly accurate imitations of real people's voices. Now there's just the little problem of avoiding identity fraud.
A Computer That Could Run For Office
To create an accurate impersonation, Lyrebird needs to hear its subject speak for several hours. So when unveiling the technology, the programmers chose the voices of people known for being long-winded: U.S. politicians. On Lyrebird's demo page, you can hear Barack Obama, Hillary Clinton, and Donald Trump talking about just how amazing the technology is. What really sets these imitations apart is that they are not simply canned assemblages of words. Every time you run a phrase through, the voice enunciates it differently, with a different emotional weight.
The way most text-to-speech systems work is simple but very labor-intensive. Both Siri and Cortana are voiced by living, breathing humans, and the phrases they produce are cobbled together from countless hours in the recording booth. That's why they have almost exactly the same intonation every time. Lyrebird doesn't just vary its intonation organically; it can also be calibrated to speak along an emotional spectrum. So although the technology is still in its infancy, it could eventually recreate an individual's voice with remarkable accuracy.
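To see why conventional assistants sound identical every time, here is a minimal sketch of the concatenative approach described above. The clip data is hypothetical stand-in audio (short lists of samples); a real system stores studio recordings of each speech unit, but the principle is the same: the output is stitched from fixed recordings, so repeated requests produce bit-identical audio.

```python
# Hypothetical pre-recorded units from a voice actor.
# In a real concatenative system these would be audio waveforms.
RECORDED_CLIPS = {
    "hello": [0.1, 0.3, 0.2],
    "world": [0.4, 0.1, 0.5],
}

def synthesize(text):
    """Stitch pre-recorded clips together in word order."""
    samples = []
    for word in text.lower().split():
        # Same fixed clip every time -> same intonation every time.
        samples.extend(RECORDED_CLIPS[word])
    return samples

# Repeated calls are identical, unlike a generative model that
# re-enunciates the phrase differently on each run.
print(synthesize("hello world") == synthesize("hello world"))  # True
```

A generative system like Lyrebird's, by contrast, produces the waveform from a learned model of the voice rather than looking it up, which is what allows the intonation and emotional weight to change from one run to the next.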
Think Of The (Criminal) Possibilities
Obviously this technology is a tremendous triumph for Lyrebird, and one the company should be proud of. But its developers are the first to admit that it has ethical implications. It could easily be used by people with nefarious goals, but that, they say, is the point. Voice recordings are often used as evidence in court cases, for example, and Lyrebird may be the newest and best audio manipulation software out there, but it's certainly not the first. "Our technology questions the validity of such evidence as it can easily manipulate audio recordings," the developers explain. "We hope that everyone will soon be aware that such technology exists and that copying the voice of someone else is possible." The question is whether online security measures, especially those surrounding identity theft, can advance fast enough to address this issue before it becomes a serious problem. Only time will tell.