Could an Artificial Intelligence (AI) be created from your digital footprint?
Earlier this month Microsoft received approval for a patent it filed back in 2017 for speculative technology invented by Dustin Abramson and Joseph Johnson Jr.
The inventors proposed using our ubiquitous public digital footprint to create a chatbot powered by Artificial Intelligence algorithms. Every social media post or piece of online content we’ve published would form the basis for a chatbot supposedly inspired by the personality traits we’ve displayed.
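To make the idea concrete, here is a minimal, purely illustrative sketch of what "mining" a digital footprint for personality signals might look like. It is a toy word-frequency profile, not anything resembling the patented technology (which a real system would approach by training or fine-tuning a language model on the corpus); the sample posts and function names are invented for illustration.

```python
from collections import Counter
import re

def build_persona(posts):
    """Toy persona profile: the author's most frequent words and
    average post length. Purely illustrative -- a real system would
    fine-tune a language model on the corpus instead."""
    words = []
    for post in posts:
        words.extend(re.findall(r"[a-z']+", post.lower()))
    counts = Counter(words)
    return {
        "favorite_words": [w for w, _ in counts.most_common(3)],
        "avg_post_length": sum(len(p.split()) for p in posts) / len(posts),
    }

# Hypothetical sample posts standing in for a public social media feed
posts = [
    "Honestly, tea beats coffee every single time",
    "Honestly cannot believe the match last night",
    "Tea first, then emails. Honestly the only way to start a day",
]
profile = build_persona(posts)
```

Even this crude profile hints at why volume matters: the more someone has posted, the more distinctive their verbal tics become, and the more raw material a bot has to imitate.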
On the one hand, given the prevalence of shorthand memes or snappy rejoinders, a chatbot might struggle to have much to work with. On the other, most virtual chatbots to date lack any sign of personality or much apparent intelligence, so I guess the concept has potential to humanise a usually bleak customer service experience.
The trickier ethical point is that the tech could be used without the permission of the human source. Because it relies on public postings, anyone, living or dead, with an extensive public social media presence could be targeted to create the personality of the ‘bot.
Imagine a company offering service from our “royal” or “presidential” chatbot (the latter could be a very interesting experience, couldn’t it?). Or inviting you to chat with our team of “pop princesses” or get “celebrity service”. Or imagine discovering you were chatting with a bot that uncannily echoed the speech patterns, language choices, and humour of your recently deceased partner.
The patent once again reflects the foresight of science fiction authors. If you’re a fan of Black Mirror, you may remember “Be Right Back”, a 2013 episode in which the main character interacts with an AI created from her dead boyfriend’s digital footprint.
Of course, there’s a long road to travel from a speculative patent to the real world. Tim O’Brien, Microsoft’s General Manager of AI Programs, has already confirmed via Twitter that the company has no immediate plans to progress the tech, describing it as disturbing. He noted the patent was applied for before Microsoft’s current AI ethics review practices were in place.