Questioning humanity and teetering on the edge of an existential crisis are not often anticipated outcomes of a seminar, and yet here we are. DeepMind’s MH Tessler and Jonathan Godwin’s masterful presentation and demonstration of human-like dialogue with state-of-the-art AI models left 10-15% of the attendees wondering why they did so poorly at distinguishing the humans from the machines (including yours truly).
The presentation was effectively introduced with a parallel to the 1860 Huxley-Wilberforce debate on evolution by natural selection (reductively, distinguishing between humans and apes), which also took place at Oxford’s Museum of Natural History. It began with a thought experiment on thinking about thinking.
The presenters explained Alan Turing’s method for judging whether a machine demonstrates intelligent behavior: if a human judge cannot differentiate between human-like and machine-like behavior, the machine may be considered ‘thinking’. This is particularly relevant now, with rapid advances in AI language models. The presenters then explained how language models learn. Predicting words from their place in a sentence (collocational meaning), exploring linguistic phenomena through vector computations, and tracking semantic changes in words over time all help in predicting coherent and meaningful strings of words.
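The ‘vector computation’ idea can be sketched with a toy example. The tiny three-dimensional embeddings below are hand-made for illustration (real models learn vectors with hundreds of dimensions from text); the classic analogy they demonstrate is that arithmetic on word vectors can capture relationships like king − man + woman ≈ queen.

```python
# Toy illustration of vector computations over word embeddings.
# The 3-dimensional vectors here are invented for the example,
# not learned from data as a real language model's would be.
EMBEDDINGS = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def nearest(vec, vocab):
    """Return the word whose embedding is closest (squared Euclidean) to vec."""
    def dist(word):
        return sum((a - b) ** 2 for a, b in zip(vec, vocab[word]))
    return min(vocab, key=dist)

# king - man + woman lands nearest to queen in this toy space
target = add(sub(EMBEDDINGS["king"], EMBEDDINGS["man"]), EMBEDDINGS["woman"])
print(nearest(target, EMBEDDINGS))  # → queen
```

In learned embeddings the same arithmetic works because words used in similar contexts end up with similar vectors, which is the collocational idea the presenters described.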
Traditional models of text prediction have been replaced by neural networks that do not suffer from limited context, redundancy, bland word choices, or unwieldy combinatorial explosions. Transformer neural networks are trained on text from across the internet, with exposure to a variety of sources ranging from scientific outputs to Reddit (oh the horror). They can learn from examples and have an attentional mechanism that allows for the ‘emergence’ of skills that previous models lacked. This allows them to write video games from a description, work towards protein folding, and develop image sequences. The seminar participants were treated to an image of an astronaut riding a unicorn through space, created pixel by pixel.
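The attentional mechanism mentioned above can be sketched in miniature. The plain-Python function below is an illustrative implementation of scaled dot-product attention, the core operation inside transformers; the function names and toy vectors are this write-up's own, not the presenters' code.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors.

    Each query scores every key; the scores become weights; the output
    for that query is the weighted average of the value vectors.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# A query aligned with the first key attends mostly to the first value
out = attention([[1.0, 0.0]],
                [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]])
```

Because every query can weigh every key, each position in a sentence can draw on every other position at once, which is what lets transformers handle the long-range context that older models struggled with.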
Despite these remarkable advances, language models have a number of limitations: learning from text alone sidesteps the additional benefits of learning in multiple settings, which impacts social reasoning, consistency, and the ability to differentiate between fact and fiction. This means that language models reflect, and potentially amplify, the often biased historical and social discourse they are trained on. It is vitally important to recognize that while language models have come a long way and can perform useful functions, they are still a starting point for exploration and application.
We were then invited to try to differentiate between human and machine responses to questions (a combination of pre-set and audience-suggested). While our rate of correctly identifying the human improved, a distressing proportion of us still chose incorrectly. The demonstration transitioned to dinner and discussion in the shadow of the dinosaurs, guided by prompts around the ethics of language models, the free will of AIs, and the importance of knowing whether the entity one is conversing with is human or otherwise.
The conversations examined dense issues of privacy, transparency, context-dependence, intentionality of purpose, empathy, and consent. Rapporteurs from each table reflected on the difficulties of defining free will, wondering how, if humans do not have free will, it could be imparted to things we design. Others considered thorny issues of consent and dating AIs, especially if AIs were to break up with their human partners. The more practical-minded, who were not questioning their humanity, thought about nationalizing AI companies, the efficiencies of training, and the purposes for which AIs are deployed.
The future of AI is exactly where humanity chooses to take it. How inclusive we make ‘humanity’ is another question entirely.
Saher Hasnain is a Research Fellow at Reuben College, within the Environmental Change theme. She is a researcher at the Food Systems Transformation Programme with the University of Oxford's Environmental Change Institute.