There’s a lot of excitement about all the new artificial intelligence technology hitting the market — like Amazon’s Echo and Google’s Home smart speakers, among many other products. However, there’s also growing concern over how these devices may invade our privacy.
These AI devices are designed for use throughout our homes — and despite the companies’ assurances to the contrary, some people worry that they could be listening to every word we utter. Granted, they have to listen when we want them to give us answers or carry out tasks, but what if they are listening in at other times as well?
To what extent are they invading our privacy? Even given a worst-case eavesdropping scenario, that might depend on who you are and what you consider an invasion of privacy. For some, having a conversation while a smart speaker is listening might be no different from having a discussion in a public place.
However, most people talk about private matters in private. If an AI device is sitting right next to us and possibly listening when we don’t know it, then the space is no longer private. Worse is the possibility that the device may be recording everything — the companies’ assurances notwithstanding — which would make it impossible for us to distance ourselves from our utterances.
Having a conversation next to an AI device is different from having one in public. In public, there is seldom anyone on the periphery who cares about what you are saying, and it’s highly unlikely anyone would be secretly recording every word you uttered for potential future use.
Devices like Amazon’s Alexa-powered Echo and Google’s Assistant-powered Home might be capable of hearing, recording and forever storing every word we utter, even though that’s not how they’re actually supposed to function. Still, how can we be sure that whatever we say won’t come back to hurt us in the future?
Can We Trust the Companies?
An early reviewer’s discovery of a hardware flaw that affected a small number of Google Home Mini devices — those handed out at Google events prior to its general release — recently brought this issue into the spotlight.
Google quickly disabled the functionality that allowed the inadvertent spying to take place and removed all recorded information from its servers. It disabled the functionality permanently, so there would be no question of it malfunctioning on any of the units shipped to consumers.
However, the incident raised huge privacy alarms, with people like Kurt Knutsson, the CyberGuy, telling Fox News viewers that the Google Home Mini records everything, 24/7. If that were true, it obviously would not be OK with many people. He also raised concerns about Amazon's Echo Look, an Echo device with a camera, which is meant to take pictures of people modeling their clothes so they can decide what to buy or wear.
The problem, according to the CyberGuy, is that the Echo Look likely would be placed in the user’s bedroom. I don’t know about you, but for me this is as creepy as it gets. What if the Echo Look should malfunction? Do you really want pictures of yourself in various states of dress or undress stored on Amazon’s servers?
How can we trust these companies to protect our privacy when so many can’t even protect their own data? All of this innovation seems to cross a lot of lines. AI is great in some ways, but is it really worth sacrificing your privacy? It could have an impact on you for the rest of your life, so give serious thought to it.
But Wait, There’s More
Amazon's Echo and Google's Home are not the only privacy risk points when it comes to AI. There is now technology with the potential to record every stroke on a computer keyboard. Companies have the capability to save every email, text message and voice mail ever sent.
It’s not just computers that pose privacy risks. Every wireless phone does too. Cable TV and pay-TV services can record our viewing habits. Some car navigation systems can keep a record of every place we drive to and where we stop.
Our privacy is being chipped away, bit by bit, with every new technology and every new device. Sometimes it may seem that a machine is watching every step we take, every day, and that we no longer have any privacy.
AI is the future, but we must protect ourselves, our security and our privacy as it becomes an increasing factor in our lives. The responsibility to guard consumer privacy ultimately lies with the tech industry, but the pressure to make that happen must come from the user community. It’s up to us to start making some noise in order to protect our privacy.