The recent resurgence of AI presents a Janus-faced image of a broad but indeterminate set of practices carried out under the rubric of AI. One face represents a miracle cure for many social and economic issues of our times – from world poverty to disease, drought, and economic disparity – while the other projects AI as the culprit behind the same social ills and many more. The first image encourages the introduction of AI systems into ever more aspects of our lives, while the second holds AI responsible for a myriad of global crises, ranging from the decline of liberal democracy to the loss of faith in scientific institutions. Neither image is accurate or realistic, because AI systems can neither cause these crises nor cure them on their own. Rather, they are expressions of existing problems, which can be either mitigated or exacerbated through the introduction of AI systems. The recent global pandemic brought this reality to light, exposing deeply entrenched fractures in our societies. AI systems have been inserted into these cleavages on the basis of either misguided trust in their abilities or extrinsic agendas, often reinforcing and amplifying the gaps. But they didn’t have to. The reason they do is that AI is currently in a state of paradigm crisis in the Kuhnian sense of the term. To resolve this crisis and to put AI to positive and productive use in dealing with contemporary issues, a change of paradigm is needed. That this is the case can be demonstrated in almost any area of AI – from face recognition to language translation, and from corporate recruiting to public services.
The common denominator in all of this is that technology and its capabilities are put at the center of AI research, driven by a mindset that can best be described as a “challenge-me-if-you-dare” paradigm. While that might have worked for a while, it is becoming increasingly clear that it no longer does. Following decades of ups and downs, with false starts and hyped hopes, that paradigm has run its course, leading to a growing crisis in AI. To avert the crisis, we have to change the way we think about AI in deep ways – we need a paradigm shift, in other words. To shift the paradigm, we propose to reformulate the field’s central question as “What are the conditions of possibility for computers to do X?”
But we have to go further and explain how the paradigm crisis of AI gets entangled with broader social crises. We have to identify, in other words, the mechanisms that enable the insertion of AI into the preexisting fractures of the social system, as well as the conditions of possibility that give rise to such mechanisms. Many commentators have recently noted the flaws and fallibilities of AI systems, often framed in terms of the “ethics” of system design or the imposition of constraints on application domains such as the military. Such measures, while valuable, are inadequate for explaining the underlying social, material, and cultural conditions of possibility for the emergence of these flaws and biases. We need to uncover and explain the conditions of possibility that have given rise to crisis both in AI and in the broader social arena.
We approach this question from multiple angles.
This is what we seek to do in the AI Crisis Group. We are a group of researchers and practitioners from around the globe, with various backgrounds (AI, human-computer interaction, informatics, law, political science, science and technology studies, etc.) and different degrees of engagement with AI. We invite anyone with an interest in these topics – activists, artists, intellectuals, policy makers, researchers, and members of the public – to join us in our conversations and in our efforts to understand and address these complex issues. If you are interested, you can reach us at hekbia@iu.edu.