From “How to Make a Bot That Isn’t Racist,” by Sarah Jeong in Motherboard (3/25/16):
“Most of my bots don’t interact with humans directly,” said Kazemi. “I actually take great care to make my bots seem as inhuman and alien as possible. If a very simple bot that doesn’t seem very human says something really bad — I still take responsibility for that — but it doesn’t hurt as much to the person on the receiving end as it would if it were a humanoid robot of some kind.”
So what does it mean to build bots ethically?
The basic takeaway is that botmakers should be thinking through the full range of possible outputs, and all the ways others can misuse their creations.
“You really have to sit down and think through the consequences,” said Kazemi. “It should go to the core of your design.”
For something like TayandYou, said Kazemi, the creators should have “just run a million iterations of it one day and read as many of them as you can. Just skim and find the stuff that you don’t like and go back and try and design it out of it.”
“It boils down to respecting that you’re in a social space, that you’re in a commons,” said Dubbin. “People talk and relate to each other and are humans to each other on Twitter so it’s worth respecting that space and not trampling all over it to spray your art on people.”
For thricedotted, TayandYou failed from the start. “You absolutely do NOT let an algorithm mindlessly devour a whole bunch of data that you haven’t vetted even a little bit,” they said. “It blows my mind, because surely they’ve been working on this for a while, surely they’ve been working with Twitter data, surely they knew this shit existed. And yet they put in absolutely no safeguards against it?!”
According to Dubbin, TayandYou’s racist devolution felt like “an unethical event” to the botmaking community. After all, it’s not like the makers are in this just to make bots that clear the incredibly low threshold of being not-racist.