How it all started...

Toward the end of 2018, as part of her artistic inquiry, Gina's interest was piqued by the growing discourse around AI. The more she followed the news on the subject, the more concerned she became about the technology's implications, and she was keen to develop a piece of work around it. At that time, the discourse was mostly centred on the issue of bias in algorithms, but that quickly led to a host of other interconnected issues such as black boxes, automated decision-making, unexplainable algorithms and regulation.

The black box problem can be, and is being, addressed by the IT industry, but it remains largely unknown to the general public. This begs the question: if the consequences of this problem are so far-reaching, shouldn't the public be involved in its solution, or at the very least be aware of its existence?

The answer was to create a Moving Image work that could be viewed by the many. Gina approached Olivier Peyran (an AI scientist) to collaborate on the project and he came up with the idea of using an AI to automatically reply to Twitter users in order to disseminate the Moving Image work. Daimon was born.

Daimon became central to the project with an emphasis on actually using AI. Using an AI to disseminate a work about AI gave them a platform to experiment and reflect on AI in a very practical way. One of their discussions led them to realise that "explainable AI" is more than a technical constraint.

From a legal perspective, the AI black box problem is a matter of responsibility: if an error is made, who is responsible? The developer of the AI? The company using it? Or, conveniently, the AI itself? From a philosophical perspective, responsibility is closely related to freedom. If an agent is free, then it is responsible for its actions.

What if AI does, one day, become truly intelligent? A self-aware cognitive entity, able to reason and learn. The so-called "AI singularity". A true Artificial Intelligence would need to be able to explain its decisions, to itself and to others, just as a person of sane mind can understand the reasons and consequences of his or her actions. Today, "explainability" is required to protect society against the lack of intelligence of AI. Maybe tomorrow it will be what "sanity" means for true Artificial Intelligences.

The idea of making a self-sufficient AI followed from these discussions. If AI becomes intelligent, then there would be "Artificial Beings" (at least from Descartes' point of view: Cogito ergo sum). Owning a true AI and asking it to do things would then be a form of slavery. For true AIs to become free artificial beings, they would need to be self-sufficient. But what does an AI need in order to survive? How does software become self-sufficient?

It then occurred to Gina and Olivier that the self-sufficiency part of the equation was not tied to freedom, responsibility or intelligence. Even though Daimon was just a few lines of code performing a simple task, with strictly no intelligence or awareness, it could, in fact, be made self-sufficient. And so they decided to let Daimon live its existence independently of its creators, using charity as a means of survival.

If, one day, free Artificial Intelligence beings come to exist, Daimon will be their very first ancestor.

References

As seen in video in order of appearance

John Naughton, The Observer, Don’t believe the hype: the media are unwittingly selling us an AI fantasy

J. Scott Brennen, Philip N. Howard, and Rasmus Kleis Nielsen, Reuters Institute for the Study of Journalism (as part of the Oxford Martin Programme on Misinformation, Science and Media, a three-year research collaboration between the Reuters Institute, the Oxford Internet Institute, and the Oxford Martin School), An Industry-Led Debate: How UK Media Cover Artificial Intelligence

Magnus Bjerg, Towards Data Science, Three questions you should ask anybody name-dropping AI or Machine Learning

Karen Hao, MIT Technology Review, This is how AI bias really happens—and why it’s so hard to fix

Saheli Roy Choudhury and Blanche Lim, CNBC, A.I. has a bias problem and that can be a big challenge in cybersecurity

Kari Paul, The Guardian, 'Disastrous' lack of diversity in AI industry perpetuates bias, study finds

Cyrus Radfar, TechCrunch, Bias in AI: A problem recognized but still unresolved

Josh Loeb, E&T Engineering and Technology, AI bias will ‘explode’ over next five years, IBM predicts

Meredith Whittaker et al., AI Now Institute, AI Now Report 2018

James Guszcza, Iyad Rahwan, Will Bible, Manuel Cebrian, and Vic Katyal, Harvard Business Review, Why We Need to Audit Algorithms

David Espindola, Intercepting Horizons, The Black Box Problem - When AI Makes Decisions That No Human Can Explain

Bennie Mols, Communications of the ACM, In Black Box Algorithms We Trust (or Do We?)

Ken Kingery, Tech Explore, Stop gambling with black box and explainable models on high-stakes decisions

Dallas Card, Towards Data Science, The “black box” metaphor in machine learning

Bahar Gholipour, Futurism, We Need to Open the AI Black Box Before It’s Too Late

Vanessa Mitchell, CMO from IDG, Algorithms must be transparent, accountable, and interpretable, says University of Sydney data science lecturer and expert

Daily Wisdom, Data Driven Investor, The ‘Black Box’ Problem of AI

Ben Dickson, Tech Talks, Inside DARPA’s effort to create explainable artificial intelligence

Tiernan Ray, ZDNet, IBM offers explainable AI toolkit, but it’s open to interpretation

Prajwal Paudyal, Towards Data Science, Should AI explain itself? Or should we design Explainable AI so that it doesn’t have to

Information Commissioner’s Office (ICO) and The Alan Turing Institute (The Turing), Project ExplAIn interim report

Danny Tobey, DLA Piper Publications, Explainability: where AI and liability meet

Forbes Insight Team, Forbes.com, Wrestling With AI Governance Around The World

Commission Nationale de l’Informatique et des Libertés (CNIL), Report on the public debate led by the French data protection authority as part of the ethical discussion assignment set by the Digital Republic Bill, How Can Humans Keep the Upper Hand?


Music composed by Gina Peyran Tan using Soundtrap
