We have a right to know they’re a bot!


We are quickly entering a new age of computing: the age of Artificial Intelligence (AI), text “bots” and Conversational Interactive Voice Response (CIVR) on a massive scale. 

You will be (and likely already have been) emailed by them, texted by them, chatted with over social media (SM), and spoken to on the phone.

They’re getting extremely sophisticated and exquisitely good at imitating speech patterns, inserting “uh,” “umm,” and other placeholders to sound more human.

You will soon be fooled by one. 

“Onstage at I/O 2018, Google showed off a jaw-dropping new capability of Google Assistant: in the not too distant future, it’s going to make phone calls on your behalf. CEO Sundar Pichai played back a phone call recording that he said was placed by the Assistant to a hair salon. The voice sounded incredibly natural; the person on the other end had no idea they were talking to a digital AI helper. Google Assistant even dropped in a super casual “mmhmmm” early in the conversation.” - The Verge (emphasis added)

Listen to the audio here (skip to 1:12 for just the conversation): https://youtu.be/D5VN56jQMWM

AI will often come in the form of “chat bots” - automated, helpful bits of script that run your everyday mundane tasks and spit back information. Think of the “phone tree” at many businesses, but actually helpful and pleasant instead of infuriating.

This new age of computing will be amazing and the assistance helpful. However, as I am sure you are aware, AI with believable CIVR and bots on SM also has the potential to be very dangerous (think Terminator/2001/AI/Matrix), and has even been called the human race’s “biggest existential threat” by Elon Musk (from The Guardian: https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat).

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

In the future we will interact with them daily for assistance with nearly everything, yet AI remains extraordinarily dangerous.

Therefore, now is the time to lay down some ground rules.

Here is a start: at the very least, we have a right to know whether we are speaking to a human or a machine! We also have a right to know what data is being collected, by whom, and for what purpose.

Let’s make that the law.

The issue:
AI/CIVR/bots are becoming convincing enough that you might not know you’re speaking with one.

The question:
Do we have a right to know when we are interacting with a bot rather than a human? And if so, should we create laws requiring notice?

I think the clear answer is yes.

Statutes should require bots to introduce themselves as bots and to answer truthfully whenever asked:

Are you a bot?
Who do you work for?
What data are you gathering?
How and where is this data going to be used or sold?

So sign this petition and call your legislators. We need these ground rules enacted immediately.