Regular service is resuming after a few weeks between issues. April was essentially workshop-and-talk month, but things are calming down now. Of the nine events, six touched on bots in some shape or form, which is not surprising given how popular the topic is at the moment. Most were to audiences with only a passing familiarity with bots, which led to some good discussions about the ethics, desirability, and accessibility of the technology, without the encumbrance of zero-sum arguments.
One thing that surprised most people was the extent to which their own behavior, speech, and other activity provide the basis for how bots interact with them (really, Google is a global nest of bots, which we have fed for 17 years). The idea of things like Markov bots, which take past conversation (for example) as training data, explodes the notion that bots are something grown in a lab somewhere else. One way or another, we tell them what to say: explicitly so, in cases where bots’ designers want them to seem more naturalistic and familiar to us, more “human”. Some, like Facebook, are feeding them children’s books (Is that an approved curriculum? Do they get to read “The Handmaid’s Tale” as well as “Peter Rabbit”? Is there an ethics board? Do I get to have a say in which socio-cultural norms the bots I will interact with are fed?) Some tell us we have a responsibility to teach the bots around us.
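To make the Markov idea concrete: here is a minimal sketch of such a bot, a few lines of Python that learn only which word follows which in whatever “conversation” they are fed (the corpus string below is an invented example, not real training data). Everything the bot produces is a recombination of our own words, which is the point.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain from `start`, picking a random successor each step."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break  # dead end: this word never had a follower in the corpus
        word = rng.choice(successors)
        out.append(word)
    return " ".join(out)

# A toy "past conversation" to train on (invented for illustration).
corpus = "we tell them what to say and they tell us what we said"
chain = build_chain(corpus)
print(generate(chain, "we", length=8, seed=1))
```

Note that the output can only ever contain words from the corpus, in orders the corpus made plausible: the bot has no knowledge of its own, just our statistics.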
So is the public, for now, a charm school for bots? Do we teach them how to behave (as if we knew) or just let them mine us for chains of legible language? Or is it really assisted living, where we attend to their occupational-therapy needs? Yet we don’t teach them morality, only decision trees. Even complex decisions derived by AI, however quickly or incomprehensibly made, aren’t truly, independently moral in nature; they are only a reflection of the parameters we’ve set and the data we’ve introduced.
It’s interesting to speculate about bots and AI as “other,” but the public discussion is, in a way, getting ahead of itself, projecting a lot of the fear of what we would create if we could. For now, we’re simply talking to ourselves through bots, like a complicated form of “telephone.” Maybe that’s what we’re worried about.