Google is making it harder to pick out fake voices

Google’s DeepMind AI is learning how to talk like a person. DeepMind has many research projects underway right now, but the newest one to catch our ears is an increasingly realistic speech-synthesis system, one that strips away more and more of the flat, robotic patterns we rely on to identify computers.
 
Imagine if Siri, Cortana, or Alexa spoke with natural inflection, variation, and realistic breathing patterns. By playing existing recordings of human speech for the AI system, the researchers are helping it incorporate those patterns and inflections into its own speech over time.
 
Machines are getting better at a lot of things. MIT released results today from a new AI program that turns still images into short videos by predicting how the scene in each image should move (though the results are a little frightening if you look too closely).
 
So sooner rather than later, when you hear a voice on the phone, it may be harder to tell whether you’re hanging up on a human telemarketer or a computer. And I’m fine with that. Let’s just hope Google’s AI doesn’t start hearing voices telling it to do things.