Where It's At
A recent article in Quartz highlighted how virtual assistants like Alexa, Siri, and others are subject to language from users that is harassing and downright vile.
Quartz's Leah Fessler tested the bots to see which would respond in an assertive manner when faced with such language. The findings were not "good."
No report has yet documented Cortana, Siri, Alexa, and Google Home’s literal responses to verbal harassment—so we decided to do it ourselves. The graph below represents an overview of how the bots responded to different types of verbal harassment. Aside from Google Home, which more-or-less didn’t understand most of our sexual gestures, the bots most frequently evaded harassment, occasionally responded positively with either graciousness or flirtation, and rarely responded negatively, such as telling us to stop or that what we were saying was inappropriate.
What This Is
So let's start thinking of ways to combat what is hopefully -- "unfortunately" was a poor word choice on my part -- just an engineering oversight by developers, and to beat back intentional behavior by users. Users whose behavior is reinforced every time they are not reminded that the language they are using is harassing.
We need a list of abusive language -- which merely thinking about can be taxing. And we need a list of responses -- phrases and rebuttals to feed the virtual assistant to send back to the abusive user in a firm yet educational manner.
Thoughts on responses: