
Last night I was invited to the BBC Blue Room Artificial Intelligence and Society event at the Radio Theatre at Broadcasting House. I met the Blue Room team through a teaching gig at Warwick University. I wanted to attend partly because I am interested in the influence digital technology is having on us right now, particularly in the context of mindfulness and well-being (see Why McLuhan Was Right: http://www.creativesemiotics.co.uk/blog/2017/05/), partly because I'm writing a novel set in the future with some strong AI characters, and partly, I admit, because I'd love to do more work applying my thinking in this area.

It was a most stimulating evening, with a number of speakers, at least a couple of whom mentioned Nick Bostrom, if only in passing. On reading Bostrom's dense but captivating book Superintelligence a few years ago [short version: http://www.nickbostrom.com/views/superintelligence.pdf], I was struck by what a terrifying thing a superintelligent agent could be – and yet how hard to imagine at all.

One programming principle that might be given to a hypothetical superintelligent agent is Coherent Extrapolated Volition, which essentially means that a reasoning agent could be licensed to infer what its human makers would want it to do, based on extrapolations from its other instructions. Yudkowsky (2004) defines it thus: "In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted." But Bostrom argues that such a superintelligent agent, given an objective to maximise resources and left to its own devices, could in a worst-case scenario reduce the human species to paperclips, because its inferences might transgress the tacit rules that no human would ever think to question. I was reminded that the most apparently intelligent acts can end up in stupid outcomes when they lack common sense. https://wiki.lesswrong.com/wiki/Paperclip_maximizer
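For the technically curious, here is a toy sketch of why a literal-minded maximiser goes wrong. It is entirely my own illustration (invented names and numbers, not Bostrom's model): the objective function below counts only paperclips, so the tacit human constraint "use the scrap metal, leave everything else alone" is simply invisible to it.

```python
# A deliberately naive "paperclip maximiser": the objective counts
# paperclips and nothing else, so tacit human constraints are invisible.
# All names and numbers are illustrative assumptions, not Bostrom's model.

RESOURCES = {
    "scrap_metal": 1_000,         # what we meant the agent to use
    "office_furniture": 200,      # what we tacitly assumed it would spare
    "infrastructure": 50_000,
    "humans": 7_500_000_000,      # the worst-case scenario
}

def paperclips_from(units: int) -> int:
    """One unit of any resource becomes one paperclip (toy physics)."""
    return units

def maximise_paperclips(resources: dict) -> int:
    # The agent's entire value system: more paperclips is strictly better.
    # Nothing in this objective says "but only from scrap metal".
    total = 0
    for name, units in resources.items():
        total += paperclips_from(units)
        resources[name] = 0  # every reachable resource is consumed
    return total

print(maximise_paperclips(RESOURCES))  # maximal clips, no world left
```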

Einstein is reputed to have said: "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." I remember one of those Transport for London signs outside what I think was Angel station – they have a wittier class of aphorism – which read: "Never mind about Artificial Intelligence, first we need to deal with Natural Stupidity". We know that at present robots can do very narrow tasks but are brittle when things do not go quite as expected. Perfectly programmed robots can't get up when they fall over. A bit like a certain Prime Minister when she loses track of her internal autocue. #Maybot.


Ali Shah, Head of Emerging Technology, talked about how diverse the world of AI is: it spans all the way from the ghoulish superintelligent sovereign to the most simple and banal phone app. The challenge, he said, is to make it universal and inclusive for the UK population – for example, for his first-generation septuagenarian grandmother. He demonstrated the interpretive power of AIs when it comes to recognising images: he drew an angel for Google Draw and it recognised it. He then introduced the garbage-in, garbage-out problem of humans training AIs badly, with a thought experiment about angels being correlated with triangles. This made me think of cognitive schemas and our preference for prototypes, and how that might play out to have ethical implications. He left the audience with the question of what happens when we stultify an AI by feeding it the wrong information or correlations; my reading of his question was: exactly how will that change the nature of truth and reference in the future? A parallel to the collective delusion of Filter Bubbles? This was explored in greater detail later by Lillian Edwards, the penultimate speaker, an Internet lawyer from Strathclyde University.
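A toy illustration of that garbage-in, garbage-out point (my own sketch in Python, not Ali Shah's demo): if every "angel" in the training data happens to contain a triangle, a naive learner concludes that triangles are what angels are, and confidently mislabels any triangle it meets.

```python
from collections import Counter

# Toy training set: each drawing is a set of shape features plus a label.
# Because every "angel" example happens to include a triangle, that spurious
# correlation is baked into whatever the learner picks up.
training = [
    ({"triangle", "wings"}, "angel"),
    ({"triangle", "halo"}, "angel"),
    ({"triangle", "gown"}, "angel"),
    ({"circle", "spokes"}, "bicycle"),
    ({"circle", "frame"}, "bicycle"),
]

def feature_label_counts(data):
    """Count how often each feature co-occurs with each label."""
    counts = Counter()
    for features, label in data:
        for f in features:
            counts[(f, label)] += 1
    return counts

def predict(features, data):
    """Pick the label whose training features overlap most with the drawing."""
    counts = feature_label_counts(data)
    labels = {label for _, label in data}
    scores = {lbl: sum(counts[(f, lbl)] for f in features) for lbl in labels}
    return max(scores, key=scores.get)

# A plain triangle, nothing angelic about it, is classified as an angel:
print(predict({"triangle"}, training))  # -> "angel"
```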


Matthew Postgate, Chief Technology and Product Officer for the BBC, said he saw AI revolutionising the future of Public Service Broadcasting and becoming 'the new electricity'. He talked about the BBC's legacy of pioneering technology, including radio and TV, and said the BBC had a unique role to play in helping to pioneer AI because it was in a position to probe the direction AI is taking from a more neutral standpoint, being neither purely commercial nor purely governmental – citing the recent Horizon programme exploring understandable anxieties around driverless cars. http://www.bbc.co.uk/programmes/b08wwnwk He argued that the ideal AI would incorporate the values of independence, accountability, impartiality and transparency in order to be a universal, not a privatised, good. This very much reminded me of something I'd seen earlier this year, the 23 beneficial AI principles endorsed by Elon Musk and Stephen Hawking: http://uk.businessinsider.com/stephen-hawking-elon-musk-backed-asimolar-ai-principles-for-artificial-intelligence-2017-2


Professor Peter Donnelly, Chair of the Royal Society Working Group on Machine Learning (a branch of AI), explained the difference between Artificial Intelligence and Machine Learning: the latter is a way of training a machine to become the former. The advances in AI in recent years have made it much more likely to affect us in the years to come, and those advances have been driven by larger data sets, better algorithms, and more powerful computers to run those algorithms through the data sets. The scariness of AI was once more brought to mind by the idea that there are 10 to the power of 80 atoms in the known universe but over 10 to the power of 170 potential positions on a Go board, and that an AI, through its pattern recognition ability, has now become able to beat the World Champion of Go, not only outthinking him but doing so in a way traditional Go experts haven't seen before. https://royalsociety.org/news/2017/04/machine-learning-requires-careful-stewardship-says-royal-society/
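The arithmetic behind that comparison is easy to check with a back-of-the-envelope sketch (mine, not Donnelly's): a 19x19 Go board has 361 points, each of which can be empty, black or white, giving 3^361 raw configurations – on the order of 10^172. The number of strictly legal positions is a little smaller, but still around 10^170.

```python
import math

# A 19x19 Go board has 361 intersections; each can be empty, black or white.
points = 19 * 19
raw_positions = 3 ** points  # upper bound; legality rules trim this slightly

# Express the count as a power of ten for comparison with ~10^80 atoms.
print(f"3^{points} is about 10^{math.log10(raw_positions):.0f}")  # ~10^172
print(raw_positions > 10 ** 80)  # True: vastly more positions than atoms
```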


Matt McNeill, Head of Google Cloud Platform, gave a speech looking at how AI is being used now, showing for example how Google Translate can be combined with Google image recognition to identify what signs say through one's smartphone when abroad. I liked the way he talked about his fascination with the first BBC Micro as a way of humanising technology and pre-empting any 'Google is evil' connotations attached to his talk. He talked about neural networks, how they are being trained for image recognition, and the TensorFlow machine learning system.
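For readers wondering what "training a neural network for image recognition" actually looks like in TensorFlow, here is a minimal sketch – my own illustration on the standard MNIST handwritten-digit set, not anything McNeill showed. Real image recognition systems are far larger and use convolutional architectures.

```python
# Minimal TensorFlow/Keras sketch: train a small neural network to
# recognise handwritten digits (MNIST). Illustrative only.
import tensorflow as tf

# Load 28x28 greyscale digit images, scaled to the 0-1 range.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # image -> 784 numbers
    tf.keras.layers.Dense(128, activation="relu"),    # learned features
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per digit
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))  # roughly 97-98% on unseen digits
```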


Ali Parsa, CEO of Babylon Health, gave an energetic demonstration of the powers of AI in his Babylon app, and of how it could save the NHS millions by acting as a patient triage system, diagnosing all sorts of worrisome conditions by asking questions that lead down a certain diagnostic path. The same app could then let you have a one-to-one with your doctor in half the time, because all the questions and answers had already been covered. This was impressive, all done on an app on his smartphone in real time, with a female avatar who was tenacious in getting answers and vigilant about avoidance of taking meds! Ali started with an anecdote about a frog which would become a prince and grant any wish if the princess would only kiss it; she deliberately refused to do so, because the novelty of having a talking frog in her bag was too good an opportunity to miss. The point being that users, not inventors, ultimately determine the value of tech through usage. I noticed how all the speakers started with a humanising anecdote – I am guessing because of the subject matter. https://www.babylonhealth.com/ This need to 'humanise' AI for us was brought into even sharper relief by the final speaker, Dr. Stephen Cave.
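A triage system of this kind is, at its simplest, a decision tree of questions. Here is a toy Python sketch of that general shape – my guess at the structure only; Babylon's real engine is proprietary and far more sophisticated – in which each answer routes the patient down a different diagnostic path.

```python
# Toy symptom-triage decision tree: each node asks a yes/no question and
# routes to another question or to an outcome. The questions and outcomes
# are invented for illustration - not Babylon's actual engine or advice.

TREE = {
    "start": ("Do you have chest pain?", "chest", "fever"),
    "chest": ("Does the pain spread to your arm or jaw?",
              "OUTCOME: Call 999 immediately.",
              "OUTCOME: Book an urgent GP appointment."),
    "fever": ("Do you have a temperature above 38C?",
              "OUTCOME: Rest, fluids, and see a pharmacist if it persists.",
              "OUTCOME: Self-care; no urgent action needed."),
}

def triage(answers):
    """Walk the tree using a dict of question-id -> True/False answers."""
    node = "start"
    while not node.startswith("OUTCOME"):
        question, if_yes, if_no = TREE[node]
        node = if_yes if answers[node] else if_no
    return node

# A patient with chest pain radiating to the arm:
print(triage({"start": True, "chest": True}))
# -> OUTCOME: Call 999 immediately.
```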


The last two speeches were the most interesting from the ethical and cultural perspective. Lillian Edwards, Professor of Internet Law at Strathclyde University, gave an edifying and entertaining speech about the legal pitfalls of AI. She drew a really useful contrast between the hysterical fears of the Terminator view of AI in the public imagination – robots impersonating humans, replacing humans when we need human presence, being abused or abusing us, or ignoring us to death – and the more real but more insidious issues, like data bias in systems we rely on to determine outcomes, for instance in the justice system. She cited the ProPublica report on the COMPAS system, which profiles defendants by predicted recidivism and ends up biased against African Americans. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing This, for her, raises the question of how algorithms could be made transparent, and how that runs counter to the desire of law enforcement and commercial entities to keep their core 'intel inside' secret if patented. When is it in the public interest to look under the hood of an AI to determine whether its values and procedures are impartial, fair and equitable? She also raised questions, as with the Facebook emotional contagion experiment, about the ability of what I interpreted to be so-called siren servers (as they are described in Jaron Lanier's book Who Owns the Future) to use information asymmetry to get even better at predicting how their users will behave. I particularly liked Lillian's talk because she also outlined some plausible ways to counter these dangers: more transparent algorithms (the EU's 'right to an explanation' law), mandatory humans in the loop, and external auditing for AI systems. And she referred the audience to some Principles for Roboticists: https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
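The kind of external audit she described can start very simply: compare a model's error rates across groups. Here is a minimal sketch of the false-positive-rate check at the heart of the COMPAS story, with numbers invented by me for illustration – not ProPublica's actual data.

```python
# Minimal fairness audit: compare false positive rates across groups.
# Records are invented for illustration; ProPublica's finding was that the
# real COMPAS scores showed a similar disparity at scale.

# (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", True, True),
    ("A", False, False), ("B", True, True), ("B", False, False),
    ("B", False, False), ("B", True, False),
]

def false_positive_rate(group):
    """Of those who did NOT reoffend, how many were flagged high risk?"""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    print(group, false_positive_rate(group))
# Group A's non-reoffenders are flagged far more often than group B's:
# the same headline accuracy can hide very different error burdens.
```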


Then lastly Dr. Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence – great job title indeed! – introduced us to our cultural resistances around AI and why we find these machines so strange. He identified four main dichotomies that, as he sees it, mediate how we see AI:

  1. Ease vs Obsolescence – we want robots to do our work for us, but we don’t want to be put out of a job. Both the left and the right have seized on the Robotic Revolution as a potentially positive game changer.
  2. Dominance vs Subjugation – we want more power over our environment, but fear being usurped by the machines – ‘silicon silverbacks’, as he called them – which of course is the plotline of numerous science fiction films.
  3. Gratification vs Alienation – we want easy gratification – the perfect companion, whether carer or concubine – but we don’t want to feel the alienation encapsulated by the uncanny valley phenomenon of a slightly golem-like humanoid, or the intimacy violation experienced by Joaquin Phoenix’s character at the end of the film Her.
  4. Immortality vs Inhumanity – we want transcendence from the pain of being human, which has traditionally been peddled by religions of various kinds. But we don’t want to become inhuman, and to lose our sense of what it means to be human in the process.

As a philosopher, Cave was the speaker most personally relevant to my interest in AI from a spiritual point of view. What I liked was that he told a human story (Descartes' touching attempt to resurrect his dead daughter through a primitive automaton) and showed how our fears of tech are not new but atavistic. I think I can summarise his position on our fears by saying that artificial intelligence fuses the 'capriciousness' of the ancient gods with the relentlessness of the machine. He talked about AI as being both an intellectual challenge and a challenge of imagination. He cautioned us against the obvious knee-jerk reactions – perpetuating stereotypes – but at the same time acknowledged that these are tough problems we need to take seriously. This is all good grist for my thinking around a novel about sexuality, spirituality and different notions of human transcendence, set in 2045 in Tokyo. The timbre of the moral character of an independent and conscious superintelligent AI, around a crucial plot point, is something I've thought a bit about. How about a working principle baked into all AI that nudges us towards kindness, compassion and mindfulness in our interactions, whether through digital prostheses, cognitive enhancement, extended central nervous systems or no? I believe that conversations around spiritual technology (the two are always seen as in opposition, but need not be) and questions about the wisdom 'of' and the wisdom 'in' artificial intelligences and their imaginative possibilities are still being had only in science fiction, and need to become part of public discourse. To paraphrase Albert Einstein, "the true sign of intelligence is not knowledge but imagination". Playing Devil's Advocate, I'd ask whether what we need isn't a Super Intelligence, but a Super Wisdom.