Google is launching a new AI voice synthesizer, called Cloud Text-to-Speech, that will be available to any developer or business that needs voice synthesis on tap, whether for an app, a website, or a virtual assistant. The service is powered by WaveNet, software created by Google's UK-based AI subsidiary DeepMind. The Verge explains why this is significant. First, ever since Google bought DeepMind in 2014, it has been exploring ways to turn the company's AI talent into tangible products. So far, that has meant using DeepMind's algorithms to reduce electricity costs in Google's data centers by 40 percent, along with DeepMind's forays into health care. But integrating WaveNet directly into its cloud service is arguably more significant, especially as Google tries to win cloud business away from Amazon and Microsoft by presenting its AI expertise as a differentiating factor. Second, DeepMind's AI voice synthesis technology is some of the most advanced and realistic in the business. Most voice synthesizers (including Apple's Siri) use what's called concatenative synthesis, in which a program stores individual syllables (sounds such as "ba," "sht," and "oo") and pieces them together on the fly to form words and sentences. This method has gotten pretty good over the years, but it still sounds stilted.
WaveNet, by comparison, uses machine learning to generate audio from scratch. It analyzes the waveforms in a huge database of human speech and re-creates them at a rate of 24,000 samples per second, producing voices with subtleties like lip smacks and accents. When Google first unveiled WaveNet in 2016, it was far too computationally intensive to run outside of research environments, but it has since been slimmed down significantly, showing a clear pipeline from research to product. The Verge has embedded samples in its report so readers can hear how WaveNet sounds.
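To make the concatenative approach concrete, here is a minimal sketch of the idea: stored unit waveforms are looked up and spliced end to end, with a short crossfade to soften the joins. All names and the fake sine-tone "recordings" below are illustrative assumptions, not any real synthesizer's code; production systems store actual recorded speech units and do far more sophisticated unit selection.

```python
import numpy as np

SAMPLE_RATE = 24_000  # matches the output rate the article cites for WaveNet

# Hypothetical "recorded" units: a real concatenative system would store
# waveforms of syllables captured from a voice actor. Here each unit is
# faked as a short sine tone so the example is self-contained.
def fake_unit(freq_hz, duration_s=0.1):
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t)

UNIT_BANK = {"ba": fake_unit(220), "sht": fake_unit(330), "oo": fake_unit(440)}

def synthesize(units):
    """Concatenative synthesis sketch: look up each stored unit and
    splice the waveforms together, crossfading 5 ms at each join."""
    fade = int(0.005 * SAMPLE_RATE)  # 5 ms crossfade, in samples
    out = UNIT_BANK[units[0]].copy()
    for name in units[1:]:
        nxt = UNIT_BANK[name]
        ramp = np.linspace(0.0, 1.0, fade)
        # Blend the tail of the output into the head of the next unit.
        out[-fade:] = out[-fade:] * (1 - ramp) + nxt[:fade] * ramp
        out = np.concatenate([out, nxt[fade:]])
    return out

wave = synthesize(["ba", "sht", "oo"])
```

The stilted quality the article mentions comes from exactly these joins: the units were recorded in isolation, so prosody and co-articulation across boundaries are lost no matter how smooth the crossfade is.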
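WaveNet's alternative is autoregressive: each of the 24,000 samples per second is predicted from the samples that came before it. The toy below shows only that generation loop; the "model" is a fixed two-tap linear filter chosen so the recurrence emits a sine wave, standing in for a trained network. Real WaveNet uses a deep stack of dilated causal convolutions trained on recorded speech, which this sketch does not attempt to reproduce.

```python
import numpy as np

SAMPLE_RATE = 24_000

def generate(model_coeffs, seed, n_samples):
    """Autoregressive generation sketch: produce audio one sample at a
    time, each predicted from a window of the previous samples."""
    buf = list(seed)
    for _ in range(n_samples):
        window = buf[-len(model_coeffs):]
        buf.append(float(np.dot(model_coeffs, window)))
    return np.array(buf[len(seed):])

# Stand-in "model": the recurrence x[t] = 2*cos(w)*x[t-1] - x[t-2]
# generates a pure 440 Hz sine, as if the model had learned one sound.
w = 2 * np.pi * 440 / SAMPLE_RATE
coeffs = np.array([-1.0, 2 * np.cos(w)])   # applied to [x[t-2], x[t-1]]
seed = [np.sin(0.0), np.sin(w)]            # two priming samples
audio = generate(coeffs, seed, SAMPLE_RATE)  # one second, sample by sample
```

The loop also illustrates why the 2016 version was too slow for production: every sample depends on the previous ones, so generation is inherently sequential, and the later speedups Google shipped came from restructuring exactly this bottleneck.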