You Are a Charm School for Bots

May 2 · Issue #20
A periodic look into research threads on critical futures, strategy, post-normal innovation, providing a look over the shoulder of the team at Changeist. Each issue includes brief analysis, links, updates, and occasional auditory hallucinations.

Regular service is resuming after a few weeks between issues. April was essentially workshop and talk month, but things are calming down now. Of the nine talk-workshop events, six touched on bots in some shape or form—not surprising, as the topic is quite popular at the moment. Most were to audiences who had only a passing familiarity with the concept of bots, which led to some good discussions about the ethics, desirability, and accessibility of the technology without the encumbrance of zero-sum arguments.
One thing that surprised most people was the extent to which their own behavior, speech and other activity provide the basis for how bots interact with them (really, Google is a global nest of bots, which we have fed for 17 years). The idea of things like Markov bots, taking past conversation (for example) as training data, explodes the notion that bots are something that grow in a lab somewhere else. One way or another, we tell them what to say—explicitly in instances where bots’ designers want them to seem more naturalistic and familiar to us, more “human.” Some, like Facebook, are feeding them children’s books. (Is that an approved curriculum? Do they get to read “The Handmaid’s Tale” as well as “Peter Rabbit”? Is there an ethics board? Do I get to have a say in which socio-cultural norms the bots I will interact with are fed?) Some tell us we have a responsibility to teach the bots around us.
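To make the Markov-bot idea concrete, here is a minimal sketch (our own illustration, not any particular bot’s code): a word-level Markov chain simply records which words follow which in past conversation, then replays statistically plausible chains of our own language back at us.

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words seen to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain, sampling a plausible next word at each step."""
    key = seed or random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(random.choice(followers))
    return " ".join(out)
```

Feed it a transcript of your own chats and it will produce passable, if drifting, imitations of you—which is the point: the bot knows nothing but what we said.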
So is the public, for now, a charm school for bots? Do we teach them how to behave (like we know) or just let them mine us for chains of legible language? Or is it really assisted living—where we carry out their needs for occupational therapy? Yet we don’t teach them morality, only decision trees. Even complex decisions derived by AI, however quickly or incomprehensibly made, aren’t truly, independently moral in nature, only a reflection of the parameters we’ve set, and data we’ve introduced.
It’s interesting to speculate about bots and AI as “other,” but the public discussion is, in a way, getting ahead of itself, projecting a lot of the fear of what we would create if we could. For now, we’re simply talking to ourselves through bots, like a complicated form of “telephone.” Maybe that’s what we’re worried about.

How to Future
Just a quick promo: we’re taking the wraps off How to Future, a new project of ours to create straightforward workshops, tools and texts to answer the question we hear often: “Can you suggest a good intro to how to use futures in my work?” We’re starting off with a 1-Day Workshop, which can be designed for groups of 12–30 people as a fast-moving intro to applied futures. We can work with a single organization, or as an intro for mixed teams or a few dozen brave individuals.
Workshops can be used to tackle the future of a particular market or topic, or used more generically to familiarize groups with vocabulary, basic tools and methods to move from a fuzzy tangle of signals about the future to creative scenarios and exploratory prototypes. 
If this sounds interesting, contact Susan Cox-Smith for more information and to discuss organizing a workshop. Check the blog or follow the spanking-new Twitter account to stay up to date on How to Future as it unfolds.
On The Agenda
mediafutureweek on Twitter: "@changeist Scott Smith talking about bots among us."
Scott Smith | Video | CCCB LAB
Theorizing the Web 2016
3 Books Weekly #8
Charm School for Bots
Why you can’t teach human values to artificial intelligence.
Chatbots: Why should we be nice to them? - Technology & Science - CBC News
Datasets Over Algorithms
Human eyes assist drones, teach machines to see
The Hidden Dangers of AI for Queer and Trans People by Alyx Baldwin | Model View Culture
The Network
The Chair Game – Live at the V&A | Smithery
What house is this? And who is that woman?
Fungal products won't win prizes for glamour but will be greener | New Scientist
/End Chat
As always, if this is not of interest, feel free to unsubscribe. If you think a friend or colleague would benefit from what we share, please pass this on or recommend.
Follow Changeist on Twitter, inject yourself on Medium, log us on the Web, toggle the Instagram, or render us an email.
Another happily empty box of Artefact Cards.
Changeist / A-Lab: Lab 101 / Overhoeksplein 2 / 1031 KS Amsterdam / The Netherlands