My academic training is in philosophy. Before my current position I was a postdoctoral fellow at the McCoy Family Center for Ethics in Society at Stanford University, where I spent part of my time at Apple working on the ethics of machine learning and autonomous systems.
I work on issues in group agency, applied ethics, democracy and technology.
Currently, together with a great team, I am editing the forthcoming Oxford Handbook on the Governance of AI.
I work in moral and political philosophy, applied ethics, and philosophy of action.
My first research project investigates the nature of human and collective agency and how it relates to moral responsibility. In a second research project, I am interested in the ethics of emerging technologies, such as self-driving cars and autonomous weapons systems, and in the philosophical foundations of data science.
Moreover, whenever I can, I write on applied political philosophy. I have examined what states owe to refugees and how the digital economy affects global justice.
I recently spoke with Communications of the ACM about my work on self-driving cars.
This paper is on the problem of profligate omissions. The problem is that counterfactual definitions of causation identify as a cause anything that could have prevented an effect but did not actually occur, which is a highly counterintuitive result. Many solutions to the problem of profligate omissions appeal to normative, epistemic, pragmatic, or metaphysical considerations. These existing solutions are in some sense substantive. In contrast to such substantive answers, this paper puts forward a technical proposal. I propose to weaken the centering condition of the semantics used to evaluate counterfactuals. This makes it possible to distinguish between proximate and distant possibilities, and it requires a greater-than-singleton set of proximate possibilities relative to which the truth of conditionals is evaluated. This proposal captures an abstraction shared by many of the existing solutions: depending on how the distance ordering underlying the weak centering condition is constructed and interpreted, some of these existing solutions can be recovered.
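To fix ideas, here is a rough sketch of the centering condition in Lewis-style ordering semantics (my notation and formulation, which may differ from the paper's):

```latex
% A counterfactual A []-> C is true at w iff C holds at all
% closest A-worlds according to the similarity ordering <=_w.
\[
  w \Vdash A \mathbin{\Box\!\!\to} C
  \iff
  \min\nolimits_{\leq_w} \llbracket A \rrbracket \subseteq \llbracket C \rrbracket
\]
% Strong centering: w is the unique world closest to itself.
\[
  \min\nolimits_{\leq_w} W = \{w\}
\]
% Weak centering: w is among the closest worlds, but the set of
% proximate worlds may contain more than one element.
\[
  w \in \min\nolimits_{\leq_w} W
\]
```

Moving from the strong to the weak condition is what permits a set of proximate possibilities larger than a singleton.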
co-authored with Matthew M. Young, Justin Bullock, and Kyoung-Cheol Kim
Artificial intelligence (AI) offers challenges and benefits to the public sector. We present an ethical framework to analyze the effects of AI in public organizations, guide empirical and theoretical research in public administration, and inform practitioner deliberation and decision-making on AI adoption. We put forward six propositions on how the use of AI by public organizations may facilitate or prevent unnecessary harm. The framework builds on the theory of administrative evil and contributes to it in two ways. First, we interpret the theory of administrative evil through the lens of agency theory. We examine how the mechanisms stipulated by the former relate to the underlying mechanisms of the latter. Specifically, we highlight how mechanisms of administrative evil can be analyzed as information problems in the form of adverse selection and moral hazard. Second, we describe possible causal pathways of the theory of administrative evil and associate each with a level of analysis: individual (micro), organizational (meso), and cultural (macro). We then develop both descriptive and normative propositions on AI’s potential to increase or decrease the risk of administrative evil. The article hence contributes an institutional and public administration lens to the growing literature on AI safety and value alignment.
The disappearing agent problem is an argument in the metaphysics of agency. Proponents of the agent-causal approach argue that the rival event-causal approach fails to account for the fact that an agent is active. This paper examines an analogy between this disappearing agent problem and the exclusion problem in the metaphysics of mind. I develop the analogy between these two problems and survey existing solutions. I suggest that some solutions that have received significant attention in response to the exclusion problem have seen considerably less attention in response to the disappearing agent problem. For example, one solution to the exclusion problem is to reject the exclusion assumption. Analogously, one solution to the disappearing agent problem could be to deny the claim that the agent-causal approach and the event-causal approach are mutually exclusive. Similarly, proportionality theories of causation, a solution to the exclusion problem, can be transferred to the disappearing agent problem. After establishing the plausibility of the analogy between the two problems, I examine how this latter solution in particular can be transferred from the one problem to the other.
A central dispute in social ontology concerns the existence of group minds and actions. I argue that some authors in this dispute rely on rival views of existence without sufficiently acknowledging this divergence. I proceed in three steps in arguing for this claim. First, I define the phenomenon as an implicit higher-order disagreement by drawing on an analysis of verbal disputes. Second, I distinguish two theories of existence – the theory-commitments view and the truthmaker view – in both their eliminativist and their constructivist variants. Third, I examine individual contributions to the dispute about the existence of group minds and actions to argue that these contributions have an implicit higher-order disagreement. This paper serves two purposes. First, it is a study to apply recent advances in meta-ontology. Second, it contributes to the debate on social ontology by illustrating how meta-ontology matters for social ontology.
The ongoing debate on the ethics of self-driving cars typically focuses on two approaches to answering such questions: moral philosophy and social science. I argue that these two approaches are both lacking. We should neither deduce answers from individual moral theories nor should we expect social science to give us complete answers. To supplement these approaches, we should turn to political philosophy. The issues we face are collective decisions that we make together rather than individual decisions we make in light of what we each have reason to value. Political philosophy adds three basic concerns to our conceptual toolkit: reasonable pluralism, human agency, and legitimacy. These three concerns have so far been largely overlooked in the debate on the ethics of self-driving cars.
Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take steps towards vindicating the more general idea that superiors can be morally responsible in virtue of being in command.
The asylum system faces problems on two fronts. States undermine it with populist politics, and migrants use it to satisfy their migration preferences. To address these problems, asylum services should be commodified. States should be able to pay other states to provide determination and protection elsewhere. In this article, I aim to identify a way of implementing this idea that is both feasible and desirable. First, I sketch a policy proposal for a commodification of asylum services. Then, I argue that this policy proposal is not only compatible with the right to asylum, but also supported by moral considerations. Despite some undesirable moral features, a market in asylum facilitates the provision of asylum to those who need it.
Related publications: This proposal also made it into the book Wenn ich mir etwas wünschen dürfte (Steidl 2017), published on the occasion of the German general elections, and into a discussion in the Change My View subreddit here.
co-authored with Holly Lawford-Smith
Punishing groups raises a difficult question, namely, how their punishment can be justified at all. Some have argued that punishing groups is morally problematic because of the effects that the punishment entails for their members. In this paper we argue against this view. We distinguish the question of internal justice – how punishment-effects are distributed – from the question of external justice – whether the punishment is justified. We argue that issues of internal justice do not in general undermine the permissibility of punishment. We also defend the permissibility of what some call “random punishment.” We argue that, for some kinds of collectives, there is no general obligation to internally distribute the punishment-effects equally or in proportion to individual contribution.
Trolley cases are widely considered central to the ethics of autonomous vehicles. I caution against this by identifying four problems. (1) Trolley cases, given technical limitations, rest on assumptions that are in tension with one another. Furthermore, (2) trolley cases illuminate only a limited range of ethical issues insofar as they cohere with a certain design framework. Furthermore, (3) trolley cases seem to demand a moral answer when a political answer is called for. Finally, (4) trolley cases might be epistemically problematic in several ways. To put forward a positive proposal, I illustrate how ethical challenges arise from mundane driving situations. I argue that mundane situations are relevant because of the specificity they require and the scale they exhibit. I then illustrate some of the ethical challenges arising from optimizing for safety, balancing safety with other values such as mobility, and adjusting to incentives of legal frameworks.
Related publications: “The everyday ethical challenges of self-driving cars,” The Conversation, syndicated in The Boston Globe, and others.
This paper develops a taxonomy of kinds of actions that can be seen in group agency, human–machine interactions, and virtual realities. These kinds of actions are special in that they are not embodied in the ordinary sense. I begin by analysing the notion of embodiment into three separate assumptions that together comprise what I call the Embodiment View. Although this view may find support in paradigmatic cases of agency, I suggest that each of its assumptions can be relaxed. With each assumption that is given up, a different kind of disembodied action becomes available. The taxonomy gives a systematic overview and suggests that disembodied actions have the same theoretical relevance as the actions of any ordinarily embodied human.
This paper is about the status of collective actions. According to one view, collective actions metaphysically reduce to individual actions because sentences about collective actions are merely a shorthand for sentences about individual actions. I reconstruct an argument for this view and show via counterexamples that it is not sound. The argument relies on a paraphrase procedure to unpack alleged shorthand sentences about collective actions into sentences about individual actions. I argue that the best paraphrase procedure that has been put forward so far fails to produce adequate results.
Related publications: The paper prompted a discussion note, which you can find here.
co-authored with Jason McKenzie Alexander and Chris Thompson
This paper examines two questions about scientists' search for knowledge. First, which search strategies generate discoveries effectively? Second, is it advantageous to diversify search strategies? We argue pace Weisberg and Muldoon (2009) that, on the first question, a search strategy that deliberately seeks novel research approaches need not be optimal. On the second question, we argue that they have not shown that epistemic reasons exist for the division of cognitive labor, and we identify the errors that led to their conclusions. Furthermore, we generalize the epistemic landscape model, showing that one should be skeptical about the benefits of social learning in epistemically complex environments.
Additional material: The model used for this article is written in NetLogo. The source code of our model is available here. It involves a swarm strategy, which draws on the model by Couzin et al. (2005) and the Boids model. You can find a simple simulation that I wrote to study the behaviour of this model here.
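The NetLogo source is linked above. As a language-neutral sketch of the basic idea behind search on an epistemic landscape, here is a minimal Python illustration of a single hill-climbing agent on a toy two-peak significance function (the landscape, peak locations, and step rule are my own assumptions, not the paper's model):

```python
def epistemic_significance(x, y):
    """Toy landscape: two peaks of different height and width."""
    high_peak = max(0.0, 1.0 - ((x - 20) ** 2 + (y - 20) ** 2) / 200.0)
    low_peak = max(0.0, 0.6 - ((x - 70) ** 2 + (y - 70) ** 2) / 400.0)
    return high_peak + low_peak

def hill_climb(start, steps=200):
    """Repeatedly move to the best neighbouring patch; stop at a local peak."""
    x, y = start
    for _ in range(steps):
        neighbours = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        best = max(neighbours, key=lambda p: epistemic_significance(*p))
        if epistemic_significance(*best) <= epistemic_significance(x, y):
            break  # no neighbour is strictly better: a local peak
        x, y = best
    return (x, y), epistemic_significance(x, y)

# An agent starting near the high peak climbs to it.
peak, value = hill_climb((15, 15))
```

Starting at (15, 15), the agent reaches the peak at (20, 20) with significance 1.0; an agent starting near (70, 70) would instead get stuck on the lower peak, which is the kind of path dependence that makes the choice of search strategy matter.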
This chapter evaluates the global digital economy from the perspective of political, socio-economic, and intergenerational justice. I argue that the digital economy poses a problem of political injustice in the form of, broadly, illegitimate power.
The paper begins by arguing that "digital economy" should be defined as infrastructure that is provided or accessed online. This has several advantages. First, the economic analysis of the digital economy becomes a special case of the economics of infrastructure. Second, seeing the digital economy as a matter of infrastructure allows us to set aside ethical concerns that are orthogonal. I argue that the digital economy is much broader than the data economy and its related concerns about privacy. Third, seeing the digital economy as digital infrastructure brings into clearer view the relevance of justice. Not only does the digital economy, on this analysis, have global reach; it also facilitates the production of goods across all aspects of life.
I argue that the digital economy raises no distinctive concerns from the perspective of socio-economic or intergenerational justice. Instead, I argue that the crucial problem is one of political justice. The digital economy poses four problems: (a) an abridgment of state power, (b) a degradation of economic opportunities and political relations, (c) support of authoritarian politics, and (d) leverage of international dominance. Each phenomenon is backed up by a political-economic analysis and illustrated with examples.
This chapter reviews and evaluates different ways in which digital technologies may affect democracy. Specifically, the chapter develops a framework to evaluate democratic practices that is rooted in the tradition of deliberative democracy. The chapter then applies this framework to evaluate proposals of how technology may improve democracy. The chapter distinguishes three families of proposals depending on the depth of the change that they effect. Mere changes, such as automatic fact checking on social media, augment existing practices. Moderate reforms, such as apps that enable and reward participation in local government, facilitate new practices. Radical revisions, such as using artificial intelligence to replace parliaments, are constitutive of new practices often replacing existing ones. This chapter then concentrates on three radical revisions — Wiki democracy, avatar democracy, and data democracy — and identifies meaningful benefits in the first and deep problems in the latter two proposals.
We are responsible for some things but not for others. In this thesis, I investigate what it takes for an entity to be responsible for something. This question has two components: agents and actions. I argue for a permissive view about agents. Entities such as groups or artificially intelligent systems may be agents in the sense required for responsibility. With respect to actions, I argue for a causal view. The relation in virtue of which agents are responsible for actions is a causal one. I claim that responsibility requires causation and I develop a causal account of agency. This account is particularly apt for addressing the relationship between agency and moral responsibility and sheds light on the causal foundations of moral responsibility.
Courses at LSE were taught as a teaching assistant; all other courses were taught as primary instructor.