When approaching the concept of artificial intelligence (AI), many of us share intuitive notions of artificiality, for example a computer that implements an algorithm or an autonomous robot. When it comes to intelligence, however, we face a multitude of definitions, some based on conceptual categorizations, others on empirical (e.g. psychometric) criteria. These differing concepts of intelligence, whether applied to humans or to artificial systems, make it very difficult to identify commonalities between human intelligence and AI, and equally difficult to clearly demarcate the differences between these intelligences.
In this lunch lecture series by the FRIAS Research Focus Responsible Artificial Intelligence, researchers from various academic disciplines—philosophy, psychology, computer science and others—will analyze the concept of intelligence. By following the lecture series, attendees will gain a multidisciplinary understanding of human and artificial intelligence.
State of the Art of Artificial Intelligence Research
Prof. Dr. Joschka Boedecker, Neurobotics Lab, Institute for Informatics, University of Freiburg
From its inception at the Dartmouth workshop in 1956 to current proclamations that Artificial Intelligence (AI) is "the new electricity" that will transform countless business sectors around the world, the field of AI research has come a long way. In this short overview, I will retrace its eventful history, highlight some of the biggest successes in the field, sketch the current state of research, preview developments that can be anticipated, and mention some of the many opportunities where AI technologies may be brought to bear on extremely difficult problems, as well as challenges that lie ahead.
State of the Art of Human Intelligence Research
Prof. Dr. Evelyn Ferstl, Cognitive Science & Gender Studies, University of Freiburg
Everyone seems to know what intelligence is. When trying to define the concept, however, it is rather difficult to pinpoint exactly what it is that makes people smart: the knowledge someone has in their specialty area? Their encyclopedic, general knowledge about any topic? Or the ability to learn and to adjust to new circumstances? In this lecture, I will give a short overview of psychological research on intelligence, including definitions, theoretical accounts and empirical findings.
Can and should we build AIs with social intelligence?
Prof. Dr. Johanna Seibt, School of Culture and Society, Aarhus University
One of the current goals in AI and robotics is to create artificial social intelligence. I will discuss in which respects this is a reasonable and responsible pursuit. Taking the perspective of 'robophilosophy', a new area of experimental and interdisciplinary philosophical research, I argue that certain forms of artificial social intelligence are ethically problematic while others are not. For this purpose, I introduce a descriptive framework (the Ontology of Asymmetric Social Interactions) in terms of which the capacities of an artificial 'social' agent can be characterized precisely enough to interface with the technological, scientific, and philosophical idioms relevant to the debate about responsible AI.
Dr. Marcello Ienca, Research Fellow at the Health Ethics & Policy Lab, ETH Zurich
In the history of planet Earth, humans and other animals have had a monopoly on intelligence. Today, a number of processes typically considered constitutive of intelligence can be executed not only by animals but also by machines. These include the ability to learn from experience, to solve problems, to adapt effectively to the environment, to engage in some form of reasoning, and to process large volumes of complex information, including visual information and natural language. Consequently, natural intelligence—the intelligence of biological organisms—is no longer the only form of intelligence on the planet; it is increasingly accompanied by artificial intelligence (AI): the intelligence of artificial systems. Furthermore, these two "kinds of intelligence" are undergoing a process of hybridisation. Advances in neuroengineering and machine learning are enabling the development of increasingly reliable brain-machine interfaces (BMIs) that can record the neuroelectrical correlates of human intelligence and further analyse them via artificial intelligence. This presentation will examine the similarities and differences between natural and artificial intelligence and discuss the promises and challenges associated with their integration. In addition, it will explore the ethical implications of recent advances in neuroscience, AI and neurotechnology, and make some normative suggestions about how to integrate the two intelligences in a manner that preserves fundamental rights and social equality.
Dr. Philipp Kellmeyer
The propensity to take "mental shortcuts" (also known as heuristics) in judgement and decision-making is an inherent feature of human cognition and serves important adaptive purposes in everyday life and human-human interaction. If these heuristics, however, produce systematic errors in our decision-making, they are called biases, which, if accumulated over time, can produce substantial distortions of knowledge and behavior. Most artificial intelligence (AI) systems today are based on human-derived knowledge structures (ontologies) and/or annotated (big) data used for deep learning with artificial neural networks. Human cognitive biases may therefore be reproduced, inflated and disseminated by AI systems, which could lead to a perpetuation of social injustice and discrimination rooted in human biases, e.g. with respect to ethnicity, gender and other social markers. In the talk, we will discuss this overlapping realm of human and artificial biases and ways of mitigating their negative social effects.
Prof. Dr. Mark Coeckelbergh, Department of Philosophy, University of Vienna
AI ethics is much debated inside and outside academia, and often attracts exotic ideas and fears about superintelligence. This talk moves away from this polarization and from science fiction fantasies, focusing instead on concrete ethical issues raised by AI and related data science. The talk places special emphasis on issues of responsibility and knowledge, and gives an overview of some challenges for AI policy in the light of these problems.
Prof. Dr. Marco Ragni, Department of Computer Science, University of Freiburg
In the foreseeable future, humans will rely more and more on highly developed AI systems to process and share information. But how do AI systems actually need to present information to avoid misprocessing and misunderstanding of that information by humans? Systems need to be able to "understand" the human reasoning process, rather than merely applying normative laws of "correct reasoning" such as classical logic or probability theory and expecting that human reasoners will, in general, employ them. Whenever humans reason about information, their derived conclusions can deviate strongly from normative theories. This is, however, not caused by simple errors of attention or motivation, but depends on the underlying mental representation. Despite progress in cognitive psychological theories, a descriptive theory of human reasoning, both in general and for individual reasoners, is still missing. But what are the characteristics of human reasoning? How do we need to change psychological experimental research to better understand human reasoning, and how well do state-of-the-art cognitive systems predict an individual reasoner? Implications for human reasoning and cognitive systems are discussed.