Google’s AI smarts are available for everyone

May 21, 2018


For most people, Artificial Intelligence sounds like some far-off science fiction concept, but it is actually behind many things we encounter in our daily lives. For the past several years, Google has pursued research that reflects its commitment to making AI available for everyone. Recently, the company rebranded the whole of its Google Research division as Google AI, with the old Google Research site now redirecting to a newly expanded Google AI site. The move signals how much of Google's R&D now focuses on breaking new ground across the many facets of AI.

From computer vision to healthcare to AutoML, Google has put increasing emphasis on applying machine learning techniques to nearly everything it does. Its researchers focus on developing these systems and integrating them into Google products and platforms.

Google Duplex – when a virtual assistant sounds like a human on the phone

Google recently showed off a new digital assistant capability meant to improve your life by handling simple, tedious phone calls on your behalf. The new Google Duplex feature is designed to sound human, with enough conversational ability to schedule appointments or complete similarly routine calls. The technology is directed at narrow, well-defined tasks, such as booking certain types of appointments. The system aims to make the conversational experience as natural as possible, allowing people to speak normally, as they would to another person, without having to adapt to a machine.

Several phone conversations were shown during Google I/O 2018. In one of them, Duplex scheduled a hair salon appointment. In another, Duplex called a restaurant to book a reservation. While sounding natural, these and other examples are conversations between a fully automatic computer system and real businesses. According to Google, at the core of Duplex is a recurrent neural network (RNN) designed to cope with these challenges, built using TensorFlow Extended (TFX). To achieve high precision, researchers trained Duplex's RNN on a corpus of anonymized phone conversation data. To make the assistant sound natural, Google used a combination of a concatenative text-to-speech (TTS) engine and a synthesis TTS engine (using Tacotron and WaveNet) to control intonation depending on the circumstance.
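The pipeline described above (incoming speech is transcribed, a task-specific RNN decides what to say, and a TTS stage renders it with natural-sounding intonation) can be sketched at a very high level in Python. This is purely an illustration of the flow, not Google's implementation: every function name here (transcribe, dialogue_model, synthesize, duplex_turn) is an invented stand-in, and the canned replies are toy placeholders for the trained model's behavior.

```python
# Hypothetical, highly simplified sketch of the Duplex-style pipeline the
# article describes: hear -> decide (RNN stand-in) -> speak (TTS stand-in).
# None of these names correspond to a real Google API.

def transcribe(audio: str) -> str:
    """Stand-in for automatic speech recognition (here, audio is just text)."""
    return audio.lower()

def dialogue_model(transcript: str, task: str) -> str:
    """Stand-in for the task-specific RNN: maps the other party's words
    plus the task context to the assistant's next utterance."""
    if "what time" in transcript:
        return "How about 10 a.m. on Tuesday?"
    if "name" in transcript:
        return "The appointment is for Anna."
    return f"Hi, I'm calling to book a {task}."

def synthesize(text: str, natural_pauses: bool = True) -> str:
    """Stand-in for the TTS stage (concatenative plus neural synthesis in
    the article); optionally prepends a human-like hesitation."""
    return ("um, " + text) if natural_pauses else text

def duplex_turn(audio: str, task: str) -> str:
    """One conversational turn: transcribe, choose a reply, render speech."""
    return synthesize(dialogue_model(transcribe(audio), task))

print(duplex_turn("What time would you like?", "haircut"))
```

The real system replaces each stand-in with a learned component; the point of the sketch is only that the same three-stage loop runs once per conversational turn.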

New AI projects powering Google products

Plenty of AI functionalities are already found in Google apps we use daily, but the company is now updating many of its services with newer machine learning tools.

Google Maps is using Augmented Reality technology to help guide users to their destination. The new AR features combine Google’s existing Street View and Maps data with a live feed from the phone’s camera to overlay walking directions on top of the real world and help you figure out which way you need to go.

Gmail's new Smart Compose is now available as an experimental feature. It uses AI to predict and suggest what you may write next, speeding up the writing process. Google says this feature will save time on repetitive writing and reduce the risk of grammatical mistakes.

Google’s digital assistant will get six new voices, including one based on that of singer John Legend, later this year. The company is also unveiling ways to let you issue multiple commands without having to say “Hey Google” each time. The voices aim to sound more natural and will include pauses that convey meaning.  

Google is redesigning the News feature to present five stories you need to know, plus others that it thinks will be most relevant to you. It combines elements found in Google's digital magazine app, Newsstand, as well as YouTube, and introduces new features like "newscasts" and "full coverage" to help people get a summary or a more holistic view of a news story.

Google Photos will add more AI-powered fixes, including colorization of black-and-white photos. The company is making it even easier to fix photos with a new version of the Google Photos app that will suggest quick fixes and other tweaks – like rotations, brightness corrections, or adding pops of color, for example – right below the photo you’re viewing.

Android P will infuse basic functions with AI smarts. To conserve energy, battery usage will now adapt to how you use your apps. A "shush" mode will automatically turn on "Do Not Disturb" when the phone is placed face down on a table.

Artificial Intelligence is now at the core of all Google products, as the company works to make its apps ever more efficient and productive for users.