How Artificial Intelligence Solves Everyday Problems

By Centific Editors

Technology becomes mainstream when it solves everyday problems. That is how artificial intelligence (AI) is gaining a foothold in society: by improving healthcare with smarter data that helps physicians treat patients, improving driver safety with smart sensors in cars, and so on. A case in point is Google, which is improving the world with AI, sometimes in ways that people don't even notice.

Google Rolls Out Improvements in AI

At its annual developer conference, Google I/O, the company shared some compelling examples of how it is applying AI to make everyday life better in the digital world.

Making Search Better

Google is the world's largest search engine, and thanks to AI, the company is improving how we find the information we need. For example, Google has made its popular wayfinding application, Google Maps, more intuitive and immersive.

Thanks to advances in computer vision that allow Google to fuse billions of Street View and aerial images, Maps is becoming a richer digital model of the world. With Google's new immersive view, searchers can experience what a neighborhood, landmark, restaurant, or popular venue is like, and even feel as if they are right there, before they ever set foot inside.

As Google noted,

Say you’re planning a trip to London and want to figure out the best sights to see and places to eat. With a quick search, you can virtually soar over Westminster to see the neighborhood and stunning architecture of places, like Big Ben, up close. With Google Maps’ helpful information layered on top, you can use the time slider to check out what the area looks like at different times of day and in various weather conditions, and see where the busy spots are. Looking for a spot for lunch? Glide down to street level to explore nearby restaurants and see helpful information, like live busyness and nearby traffic. You can even look inside them to quickly get a feel for the vibe of the place before you book your reservation.

This kind of immersion can also make people more comfortable and safer. For instance, a traveler visiting an unfamiliar neighborhood can evaluate the surroundings more thoroughly before deciding whether to check out a new locale.

Google is also using AI-powered wayfinding to support sustainability. The company recently launched eco-friendly routing in the U.S. and Canada, which lets users see and choose the most fuel-efficient route when looking for driving directions (helping people save money on gas, too). Since then, people have used it to travel 86 billion miles, saving an estimated half a million metric tons of carbon emissions, the equivalent of taking 100,000 cars off the road. Google says it is on track to double this amount as eco-friendly routing expands to more regions, such as Europe.

To recommend eco-friendly routes, Google must process a mountain of data. With insights from the U.S. Department of Energy's National Renewable Energy Lab, Google has built a new routing model that optimizes for lower fuel consumption based on factors like road incline and traffic congestion. This is only possible with AI that processes countless variables faster and more accurately than a human being could.
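
Google hasn't published the details of its routing model, but the underlying technique, cost-based routing, is easy to sketch. In the minimal Python example below, every coefficient and road attribute is a made-up assumption; the point is simply that once each road segment carries an estimated fuel cost instead of a distance, an ordinary shortest-path search returns the most fuel-efficient route.

```python
import heapq

def fuel_cost(distance_km, incline_pct, congestion):
    """Toy fuel estimate in liters: baseline burn plus penalties for
    climbing and stop-and-go traffic. All coefficients are invented."""
    base = 0.07 * distance_km                             # ~7 L/100 km on a flat, free-flowing road
    incline_penalty = 0.02 * max(incline_pct, 0) * distance_km
    congestion_penalty = 0.03 * congestion * distance_km  # congestion in [0, 1]
    return base + incline_penalty + congestion_penalty

def most_fuel_efficient_route(graph, start, goal):
    """Plain Dijkstra over fuel cost. `graph` maps a node to a list of
    (neighbor, distance_km, incline_pct, congestion) tuples."""
    queue = [(0.0, start, [start])]
    best = {start: 0.0}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        for nbr, dist, incline, cong in graph.get(node, []):
            new_cost = cost + fuel_cost(dist, incline, cong)
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(queue, (new_cost, nbr, path + [nbr]))
    return None

# Tiny example network: the flat detour through B beats the steep,
# congested direct road from A to C, even though it is longer.
roads = {
    "A": [("B", 5.0, 0.0, 0.1), ("C", 3.0, 8.0, 0.8)],
    "B": [("C", 4.0, 0.0, 0.1)],
}
print(most_fuel_efficient_route(roads, "A", "C"))  # (0.657, ['A', 'B', 'C'])
```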

Improving the Virtual Workplace

As people embrace virtual working, Google has responded by improving Workspace, its technology for collaborating remotely. Google Workspace is a collection of cloud computing, productivity, and collaboration tools and products developed and marketed by Google. It launched in 2006 as Google Apps for Your Domain, was rebranded as G Suite in 2016, and was rebranded again in 2020 as Google Workspace.

Google is improving Workspace with AI in many ways, including:

  • Portrait restore uses Google AI technology to improve video quality, so that even if someone joins a Google Meet call from a dimly lit room, with an old webcam, or over a poor Wi-Fi connection, their video is automatically enhanced.
  • Portrait light uses machine learning to simulate studio-quality lighting in a user's video feed. The user can even adjust the lighting position and brightness.
  • Live sharing syncs content that's being shared in a Google Meet call and lets participants control the media. Whether a user is in an office or at home, sharing the content or viewing it, everyone sees and hears what's going on at the same time.
  • Automated summaries, built on advances in natural language processing, were recently introduced in Google Docs. In the coming months, Google is extending built-in summaries to Spaces, giving users a helpful digest of conversations so they can quickly catch up on what they've missed (see the sketch after this list for a sense of how automated summarization works).

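Google hasn't disclosed which model powers these summaries, but the general technique, abstractive summarization, is widely available in open-source form. The sketch below assumes the Hugging Face transformers library and the publicly available distilbart-cnn-12-6 checkpoint; the conversation and variable names are invented for illustration.

```python
# Illustrative only: this is not Google's model, just a stand-in that
# demonstrates the same technique of condensing a conversation to a digest.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

conversation = (
    "Priya: The launch slipped to Thursday because the staging tests failed. "
    "Marco: I re-ran them this morning and everything passed. "
    "Priya: Great, then let's tell the client we're back on for Thursday."
)

# max_length and min_length bound the summary's length in tokens, not the input's.
digest = summarizer(conversation, max_length=40, min_length=10)
print(digest[0]["summary_text"])
```
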
These may not seem like dramatic, game-changing breakthroughs. But they matter at a time when virtual meetings are often the primary way workers connect with each other in a post-pandemic world. Google has published a post that shares more detail.

Making Google Assistant More Conversational

Google Assistant is an increasingly popular voice assistant that helps people perform tasks such as finding recipes, checking the weather, and searching online. It powers Google products such as the company's smart speakers. But speaking commands to Google Assistant is not always as intuitive as it could be: asking it to retrieve information requires a person to say, "Hey Google," or touch a screen embedded in a smart speaker.

Google is changing that.

For instance, with a new feature called Look and Talk, a person can simply look at the screen on a Google device and ask for what they need. All someone has to do is glance at their screen and ask for information ("Find me a plumber") without the "Hey Google" prompt. There's a lot going on behind the scenes for Google to recognize whether a user is making eye contact with their device rather than just giving it a casual glance. Google says it takes six machine learning models to process more than 100 signals from both the camera and microphone, such as proximity, head orientation, gaze direction, lip movement, context awareness, and intent classification, all in real time.
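
Those six models and 100-plus signals are far beyond a blog post, but the core idea of fusing several weak signals into one decision can be sketched in a few lines. Everything in the toy Python below (the signal names, weights, and threshold) is invented for illustration; a real system would learn the fusion from data rather than hand-code it.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    proximity: float         # 0..1, how close the user is
    head_orientation: float  # 0..1, how squarely the head faces the device
    gaze: float              # 0..1, confidence the eyes are on the screen
    lip_movement: float      # 0..1, confidence the user is speaking

def is_addressing_device(s: Signals, threshold: float = 0.75) -> bool:
    # A hand-weighted sum stands in for the learned fusion model. A casual
    # glance (eyes pass over the screen, no speech) should score below the
    # threshold; a sustained look while speaking should clear it.
    score = (0.15 * s.proximity
             + 0.25 * s.head_orientation
             + 0.35 * s.gaze
             + 0.25 * s.lip_movement)
    return score >= threshold

casual_glance = Signals(proximity=0.9, head_orientation=0.6, gaze=0.7, lip_movement=0.0)
direct_request = Signals(proximity=0.9, head_orientation=0.9, gaze=0.95, lip_movement=0.9)
print(is_addressing_device(casual_glance))   # False (score ~0.53)
print(is_addressing_device(direct_request))  # True  (score ~0.92)
```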

Google says it is also making it possible for Google Assistant to account for natural pauses, inflections, and casual utterances that characterize the way people talk in real life. As Google pointed out, “In everyday conversation, we all naturally say ‘um,’ correct ourselves and pause occasionally to find the right words. But others can still understand us, because people are active listeners and can react to conversational cues in under 200 milliseconds. We believe your Google Assistant should be able to listen and understand you just as well.”

So, Google is building new, more powerful speech and language models that can understand the nuances of human speech — like when someone is pausing, but not finished speaking. Google is approaching the fluidity of real-time conversation with the Tensor chip, which is custom-engineered to handle on-device machine learning tasks rapidly. Google Assistant will soon be able to better understand the imperfections of human speech without getting tripped up — including the pauses, “umms” and interruptions — making someone’s interactions feel much closer to a natural conversation.
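
Google's actual speech models are not public, but the behavior described above, treating a pause differently depending on whether the words so far sound finished, maps onto a classic problem called endpointing. The Python sketch below is a deliberately crude stand-in (the filler-word list and timeouts are assumptions) that shows why a trailing "um" should buy the speaker more time before the assistant responds.

```python
# Words that suggest the speaker is mid-thought rather than finished.
HESITATION_ENDINGS = ("um", "uh", "and", "but", "so", "like")

def utterance_finished(partial_transcript: str, silence_ms: int) -> bool:
    """Decide whether the user has finished speaking, given the words
    recognized so far and how long they have been silent."""
    words = partial_transcript.lower().rstrip(".,!?").split()
    if not words:
        return False
    if words[-1] in HESITATION_ENDINGS:
        # Likely mid-thought: wait much longer before cutting the user off.
        return silence_ms > 2000
    return silence_ms > 700  # a normal pause after a complete-sounding phrase

print(utterance_finished("play the new album by um", silence_ms=900))       # False
print(utterance_finished("play the new album by Beyonce", silence_ms=900))  # True
```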

These are significant improvements. When talking with a voice assistant feels less obtrusive and more natural, we don't even think about the interaction we're having. In other words, the device becomes an almost unconscious part of our lives, which makes us more comfortable having such devices in our homes. By contrast, virtual reality devices still rely on clunky user interfaces, which hampers virtual reality's uptake.

Making AI More Inclusive

AI needs to work for everyone to realize its potential, not just for the privileged few who can afford smart speakers. That's why Google has been making AI more inclusive, for example by teaching online search to recognize a broader range of skin tones. In a recent blog post, we discussed these developments in more detail. Inclusiveness remains a challenge, but it is also an opportunity for AI to solve more problems for people of all backgrounds.

Contact Centific

At Centific, we're applying AI every day to help businesses become more effective, such as making businesses more cybersecure, improving drone delivery, expanding a skills-based learning program, and more. Learn more about our work here. And contact us to get started.