The other day my smart speaker piped up ... thinking I had said the trigger word when I'd done nothing of the sort. Reflexively I snapped, "Shut up, Alexa." At the time I was chatting with my daughter, Layla (who graduated from UCF with a political science degree, a psych minor, and a criminal-profiling certificate, and has one foot in law school), and I joked about the AI remembering who was nice to it and who was mean. She pointed out that criminals who go on to do much worse things often start with lower-risk deviant behavior (think the proverbial serial killer who starts by hurting animals). Getting comfortable with things that would otherwise be considered rude, offensive, or entirely unacceptable may be a ticking time bomb that starts seeping into other aspects of people's lives … rehearse anger on a safe target today, aim it at a person tomorrow.
It got me curious, and I went to EmergentMind to see what I could find (you can see the results here):
https://www.emergentmind.com/research/b3948298b09438505ce89165
The paper found a significant association: users who were abusive towards the robots were also abusive far more often in their general tweeting (M = .15 frequency of abuse) than users who were non-abusive towards the robots (M = .03). There was also a significant main effect of user type on the frequency of dehumanizing content in users' broader Twitter communications.
And while Emergent Mind's ultimate conclusion is that it doesn't have access to enough research to draw one (a very responsible answer; it's focused on technology and computer-science research), design choices have the potential to add fuel. Some companies lean into friendly avatars and playful banter … others warn against heavy anthropomorphism and push for clear human-tool boundaries. Most AI products muddle through somewhere in the middle without really thinking about it, so users don't know whether to say please or just bark commands.
What if AI Agents had Boundaries?
What if every assistant came with a light social contract … 'That was rude, please apologise before we continue.' No sermon, just a pause. Music or answers resume once respect returns. It might annoy a few users, yet a quick apology feels like a small price if it keeps casual abuse from becoming a habit. I think it's on us, the people building this technology, to build it responsibly, in a way that adds to the world rather than detracts from it.
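For the curious, here's a rough sketch of what that pause-until-apology contract could look like in code. Everything in it is made up for illustration: the rudeness and apology checks are naive keyword lists standing in for a real toxicity classifier, and the "handle the request" line is a stub for whatever the assistant would actually do.

```python
# A minimal sketch of the "light social contract" idea: pause on rudeness,
# resume normal service once an apology arrives. Hypothetical names throughout;
# a real assistant would use a proper toxicity model, not keyword lists.

RUDE_PHRASES = {"shut up", "stupid", "idiot"}
APOLOGY_PHRASES = {"sorry", "apologize", "apologise", "my bad"}


def is_rude(text: str) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for phrase in RUDE_PHRASES)


def is_apology(text: str) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for phrase in APOLOGY_PHRASES)


def handle_request(text: str, state: dict) -> str:
    """Hold the conversation until respect returns; no sermon, just a pause."""
    if state.get("awaiting_apology"):
        if is_apology(text):
            state["awaiting_apology"] = False
            return "Thanks. Where were we?"
        return "That was rude. Please apologise before we continue."
    if is_rude(text):
        state["awaiting_apology"] = True
        return "That was rude. Please apologise before we continue."
    return f"Sure, handling: {text}"  # stand-in for the assistant's real behavior


if __name__ == "__main__":
    state = {}
    for msg in ["play some jazz", "shut up, Alexa", "turn it up", "sorry", "play some jazz"]:
        print(f"user: {msg}")
        print(f"assistant: {handle_request(msg, state)}")
```

The point of the design isn't to lecture anyone; the assistant simply holds the request until the user offers an apology, then carries on as if nothing happened.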