AI: Ethics, Philosophy and Spirituality
This weekend Jac and I drove all the way across Germany, from Paris, and parked up on the banks of the Danube in Austria to cover the Ars Electronica Festival, the festival for Art, Technology and Society in Linz. Over a thousand artists, scientists, techies, hackers, activists and entrepreneurs are in town to exhibit and perform around this year's theme of Artificial Intelligence. The festival examines our relationship with robots, our hopes and fears around AI and what it will mean for us - but it also plays on AI as the Andere Ich, the Other I - and probes what it means to be human, our strengths and our weaknesses. As we build AI it inherits our flaws too, but it may also be a key to transcending aspects of our nature.

Artificial Intelligence shares our prejudices, argues Dr. Joanna J. Bryson, as it was fashioned from data which has our inherent human biases built in. Dr. Bryson has just written a paper about implicit associations, which reveal our biases based on gender and race, for instance. We might not think we're sexist, but when asked to associate men, women, maths and reading, for example, tests show thousands of years of cultural conditioning are deeply ingrained. This skews the data analysis AI uses, because through repeated association with other words the prejudices become embedded. Learning is not magic, Joanna argues; it's just that humans are good at sharing the knowledge they've acquired (culture by another name), and AI is built on the same knowledge base. However, humans don't have to act upon their prejudices: someone may hold deeply racist associations without acting in a racist way, and Joanna argues we must build AI in such a way that it doesn't stretch to explicit association by acting upon these stereotypes either.
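Joanna's point about repeated association can be made concrete. In word-embedding models, every word is a point in a vector space, and a gender-maths-reading bias of the kind her paper measures shows up as a gap in cosine similarity between target words and attribute words. Here is a minimal sketch of that style of measurement - the four toy vectors are invented for illustration, not real embeddings from her study:

```python
import math

# Invented 2-d "embeddings" in which the first axis happens to align
# with "man"/"maths" and the second with "woman"/"reading".
vectors = {
    "man":     [0.9, 0.1],
    "woman":   [0.1, 0.9],
    "maths":   [0.8, 0.2],
    "reading": [0.2, 0.8],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    mean_sim = lambda attrs: sum(cosine(vectors[word], vectors[a]) for a in attrs) / len(attrs)
    return mean_sim(attrs_a) - mean_sim(attrs_b)

# A positive gap means "man" sits closer to "maths" than "woman" does -
# a bias the model absorbed purely from how the vectors were laid out.
bias = association("man", ["maths"], ["reading"]) - association("woman", ["maths"], ["reading"])
print(round(bias, 3))
```

Nothing in the code "decides" to be sexist; the skew is entirely inherited from the geometry of the data, which is the sense in which learning here is culture by another name.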

In the longer term, Dr. Bryson believes this deeper understanding of human behaviour - this new level of self-reflection and the ability to accurately predict our own behaviour - will be as big a shift as the Copernican or the Darwinian one, and may be rejected in the same way. Joanna believes this heightened understanding could help us find ways to reduce conflict and live sustainably. However, she says, "Knowing fully well what an individual person is likely to do in a particular situation is obviously a very, very great power." Bad applications of this power, she says, include being bullied and manipulated by big data companies and politicians, where customers can be deliberately addicted to products or services and elections can be skewed.

If AI does fail us there's little comfort in the current EU legislation, argues Dr. Sandra Wachter, who points out this is not some faraway future world: algorithms and robotics are already used in the criminal justice system (judges use algorithms to help decide if a person should go to jail), in predictive policing (which predicts areas where criminal behaviour is likely) and in our health system (robots are used in surgical procedures), and soon we will have autonomous cars driving us around. However, Sandra thinks much of the information around AI applications is opaque. For instance, she believes that under EU legislation, if we are refused a job we have the right to ask for an explanation only of the system - i.e. the data gathered about you in the decision-making process and how that was used - but not the rationale of individual decisions, even though it's been widely reported that such a right exists. Furthermore, the right only applies where the process was entirely automated, and that's generally not the case - or a case can be manufactured so that a human is involved at some point. If your self-driving car crashes, if you get sent to prison erroneously or if a robotic surgical procedure is performed incorrectly upon your body, wouldn't you want to know precisely why?

And Dr. Wachter asks: is automation in the criminal justice system ethical? Currently a judge may struggle with a decision to jail someone - is it easier to put them behind bars if it can be argued an algorithm said so, removing the human struggle? While AI itself isn't prejudiced, it learns from biased data produced by humans with human prejudices. Dr. Wachter's stance is that it's important to keep humans in the loop as our use of AI unfolds, and that we maintain transparency.

For Buddhist monk Zenbo Hidaka, an expert on AI, artificial intelligence will surpass the subjectivity of human beings, and like Dr. Joanna Bryson he believes that when we contemplate AI our first contemplation is of the culture which gave birth to it. "Technology inherently tends towards universality. If it were really universal, it will be free from the indigenousness of culture," he explains. While for Joanna understanding AI deepens our understanding of human nature and behaviour, for Zenbo Hidaka this deeper understanding of ourselves may lead us to a transcendent existence: "To know myself as I am".

From a spiritual perspective, there are many levels of consciousness and the dialogue around natural and artificial intelligence and what it means for us to be human occurs on several levels simultaneously. Zenbo Hidaka says, "It may be difficult to definitely define singularity, but I believe it is time to deepen self-awareness by facing transcendental existence. The fact that AI goes beyond human ability may be a threat in the sense that it exceeds our control, but it may also be rediscovery of the sanctity once played by religion."

"What is intelligence?" He asks, for he believes it is not deeply understood. The monk explains there are two kinds of intelligence in Buddhism and thought and cognition are not the same. Thought is associated with wisdom and philosophy and is passive, this type of intelligence is received. Cognition is associated with knowledge and the sciences and is an active inquiry. So, he asks, can AI help us integrate both types of intelligence? Zenbo Hidaka believes it can be so.