What’s New with Voice Assistant Devices? A Q&A with MediaTek’s Mark Odani

Nov 5, 2019 - 9:30 AM - Exec Talk

More than 3.25 billion voice assistants are in use today, according to Juniper Research. Consumers increasingly rely on the voice assistants in their smartphones and on an array of voice-assisted, connected smart home devices. The world will tell you that voice assistants make you more productive, whether it’s setting an appointment reminder or checking tomorrow’s weather forecast. But the truth is, voice is simply the most natural user interface for getting things done. Just say it. No opening an app, tapping through a menu and scrolling to find the right setting. Nothing is quite as effortless as telling a device to do your bidding.

As many consumers are now finding out, MediaTek has long led the voice assistant revolution. We are the world’s No. 1 chipmaker for voice assistant devices (VAD), and our chipsets power the most popular voice assistant products on the market, including the Amazon Echo Dot, the Fire TV Stick 4K and devices from multiple OEMs that run Google Assistant and Alibaba’s voice assistant.

Mark Odani, AVP of Sales and Business Development at MediaTek, took a few minutes to sit down and discuss the voice assistant market, the role voice technology and MediaTek play in this industry, and what to expect in the coming years.


What does MediaTek bring to the voice technology space?

Today MediaTek powers a wide variety of devices with voice assistants, from smart speakers and smart TVs to makeup mirrors, robot vacuums, wine coolers and many other connected appliances. Working with the top brands has given us a wealth of insight into consumers’ demands and what the next generation of devices requires. What is unique about MediaTek in this space is how we have designed solutions that deliver a variety of features while remaining extremely power efficient – a must for smart devices. We’re now seeing a surge of interest in smart home products that integrate not just voice assistants but also touch displays, giving consumers even more ways to interact with their devices.

Our advanced Edge AI technology lets our partners push the envelope even further. With our power-efficient chipsets, partners can integrate AI features into small smart devices without resorting to a large battery. And because AI information is processed locally, these devices can support some AI-augmented voice features even when they’re disconnected from the Internet. All of this translates into more options for our partners to design products that further enrich the lives of consumers.

The MT8516, our current high-volume shipping product, is ideal for a wide range of voice assistant devices and audio applications. The integrated hardware and software solution features a quad-core Arm Cortex-A35 application processor operating at up to 1.3 GHz to process user inputs faster. Additionally, the next-generation MT8518 AI voice SoC offers a big breakthrough in battery life, with 10x longer standby time and 2x longer playback time compared to previous-generation solutions. The chip’s low-power design makes it ideal for battery-operated devices like portable speakers. We also want to make sure consumers get the best sound possible, so the MT8518 includes PowerAQ, our audio tuning tool, which many brands use to achieve superior audio quality.

To help accelerate the integration of the Alexa Voice Service in connected devices, we recently announced our MT8516 2-mic development kit. As consumers bring a variety of devices into their homes, it’s important that these gadgets work together seamlessly. With Amazon multi-room music (MRM) technology, you can stream your favorite tunes over multiple Alexa-equipped devices at once. Additionally, MediaTek’s far-field algorithms run on the application processor itself, eliminating the need for a separate digital signal processor (DSP), which reduces costs and further speeds up the design process.


There’s a lot of talk in the industry about Edge AI. What is Edge AI and how does it impact the user experience?

Edge AI means that AI functions are processed locally on a device rather than being sent over the Internet to the cloud. Edge AI has some big advantages over cloud or remote processing. Doing some of the processing on the device itself is faster, giving consumers the information they want instantly. And because the data never leaves the device, processing is much more secure than sending it to the cloud, so people’s privacy is better protected.

When voice assistants were first introduced, some devices processed only a limited set of words – “wake words” – at the edge. As the technology has advanced, voice assistants can now process far more information at the edge and even predict consumer behaviors to interact with users more seamlessly. For example, if you tell your smart speaker to turn off the lights at night, it might suggest turning on your home alarm.


How will voice technology continue to evolve over the next five years?

Over the next few years, we expect to see voice assistants integrated into a wider array of devices in the home and beyond. For example, we’ll see microwaves, clocks, dishwashers and other appliances that can execute commands at the sound of your voice. Essentially, any task that can be done more easily with a voice command – whether it’s flipping a light switch or starting a lawn mower – will see voice technology added to it. We’ll also see better integration between different types of devices, so new categories of smart devices will work with existing ones. This will be particularly important for consumers who use devices built on competing voice assistant platforms. In short, you’ll be able to buy devices from different brands and expect them all to communicate with one another.

The automotive space is another growth area for voice computing. In the next few years we’ll see more voice assistants integrated into cars, so drivers can more easily manage entertainment, comfort – heating, cooling, seat adjustments – and navigation while keeping their hands on the steering wheel. As vehicles become more autonomous, we can see a point where users might control their entire car and driving experience with the sound of their voice alone.