You Are a Charm School for Bots

By Changeist • Issue #20
Regular service is resuming after a few weeks between issues. April was essentially workshop and talk month, but things are calming down now. Of the nine talk-workshop events, six touched on bots in some shape or form, which isn't surprising given how popular the topic is at the moment. Most were for audiences with only a passing familiarity with bots, which led to some good discussions about the ethics, desirability, and accessibility of the technology without the encumbrance of zero-sum arguments.
One thing that surprised most people was the extent to which their own behavior, speech and other activity provide the basis for how bots interact with them (really, Google is a global nest of bots, which we have fed for 17 years). The idea of things like Markov bots, taking past conversation (for example) as training data, explodes the notion that bots are something grown in a lab somewhere else. One way or another, we tell them what to say, explicitly so in instances where bots' designers want them to seem more naturalistic and familiar to us, more "human". Some, like Facebook, are feeding them children's books. (Is that an approved curriculum? Do they get to read "The Handmaid's Tale" as well as "Peter Rabbit"? Is there an ethics board? Do I get to have a say in which socio-cultural norms the bots I will interact with are fed?) Some tell us we have a responsibility to teach the bots around us.
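To make the point concrete: a Markov bot of the kind mentioned above can be sketched in a few lines. It simply counts which word follows which in past conversation, then "speaks" by sampling from those counts. This is a minimal illustration, not any particular bot's implementation, and the toy corpus is a hypothetical stand-in for real conversation data.

```python
import random
from collections import defaultdict

def train(corpus):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def babble(chain, start, length=10):
    """Walk the chain from a starting word, sampling a follower at each step."""
    word, output = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: this word was never followed by anything
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A toy "past conversation" as training data (hypothetical).
corpus = "we teach the bots and the bots teach us what we say"
chain = train(corpus)
print(babble(chain, "we"))
```

Everything such a bot can say is a remix of what it was fed, which is exactly why the question of who feeds it, and what, matters.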
So is the public, for now, a charm school for bots? Do we teach them how to behave (as if we know) or just let them mine us for chains of legible language? Or is it really assisted living, where we attend to their need for occupational therapy? Yet we don't teach them morality, only decision trees. Even complex decisions derived by AI, however quickly or incomprehensibly made, aren't truly, independently moral in nature, only a reflection of the parameters we've set and the data we've introduced.
It’s interesting to speculate about bots and AI as “other,” but the public discussion is, in a way, getting ahead of itself, projecting a lot of the fear of what we would create if we could. For now, we’re simply talking to ourselves through bots, like a complicated form of “telephone.” Maybe that’s what we’re worried about.

How to Future
Just a quick promo: we're taking the wraps off How to Future, a new project of ours to create straightforward workshops, tools and texts to answer the question we hear often: "Can you suggest a good intro on how to use futures in my work?" We're starting off with a 1-Day Workshop, which can be designed for groups of 12 to 30 people as a fast-moving intro to applied futures. We can work with a single organization, or run it as an intro for mixed teams or a few dozen brave individuals.
Workshops can be used to tackle the future of a particular market or topic, or used more generically to familiarize groups with vocabulary, basic tools and methods to move from a fuzzy tangle of signals about the future to creative scenarios and exploratory prototypes. 
If this sounds interesting, contact Susan Cox-Smith for more information and to discuss organizing a workshop. Check the blog or follow our spanking-new Twitter account to stay up to date on How to Future as it unfolds.
On The Agenda
For the fourth year running, the nice people at Media Future Week invited me to talk, and then spend time with student teams hearing about their projects and giving feedback. A summary and video of the talk are here.
While we were in Barcelona running a workshop on exhibition futures with CCCB, folks from the Lab interviewed me about futures, culture, climate change, innovation, and why I look so serious. 
Natalie flew to New York to attend and speak at Theorizing the Web again this year. Her talk, on Means Well Technology, is second in the linked clips above.
Nat also took time to give Matt Webb’s new project, Machine Supply, her book picks. Among her picks were a Bradbury and a Shirley Jackson, so she’s still in the club. :)
Charm School for Bots
The faulty logic behind trying to recreate human intelligence like-for-like extends to training machines in "human" values.
Do we have a responsibility to do right by code?
One researcher says the thing slowing down progress in machine learning isn’t the algorithms, but the training data. 
Via Stephanie Rieger (whose newsletter, Twill, I endorse), another form of what Nicolas Nova (whose newsletter, Lagniappe, I also endorse) has written about as hétéromatisation, or intentionally combined human-machine processes.
The Hidden Dangers of AI for Queer and Trans People by Alyx Baldwin | Model View Culture
This link came via Lydia Nicholas, and points to ways that, in reflecting current culture, training for AIs can further encode biases.
The Network
John Willshire threatened the Chair Game in our Barcelona workshop recently, and then he went and did it—at the V&A of course. Have a look.
Madeline Ashby was called in on an interesting job a while back: to create a visual narrative to tell the story of an equally interesting house.
Fungal products won't win prizes for glamour but will be greener | New Scientist
Sjef van Gaalen visited the Fungal Futures exhibition in Utrecht recently, and lived to tell the tale.
/End Chat
As always, if this is not of interest, feel free to unsubscribe. If you think a friend or colleague would benefit from what we share, please pass this on or recommend.
Follow Changeist on Twitter, inject yourself on Medium, log us on the Web, toggle the Instagram, or render us an email.
Another happily empty box of Artefact Cards.