Tuesday, May 24, 2022

Techno-Optimism: An Analysis, An Evaluation and A Modest Defence



Here's a new paper. This one was a bit of a labour of love. It is an analysis of what it means to be a techno-optimist and how one might defend a techno-optimistic stance. It is due out in Philosophy and Technology. I'll post the official version when it is available. For now, I've posted links to the final prepublication draft.


Title: Techno-optimism: an analysis, an evaluation and a modest defence

Links: Official; Philpapers; Researchgate

Abstract: What is techno-optimism and how can it be defended? Although techno-optimist views are widely espoused and critiqued, there have been few attempts to systematically analyse what it means to be a techno-optimist and how one might defend this view. This paper attempts to address this oversight by providing a comprehensive analysis and evaluation of techno-optimism. It is argued that techno-optimism is a pluralistic stance that comes in weak and strong forms. These vary along a number of key dimensions but each shares the view that technology plays a key role in ensuring that the good prevails over the bad. Whatever its strength, to defend this stance, one must flesh out an argument with four key premises. Each of these premises is highly controversial and can be subjected to a number of critiques. The paper discusses five such critiques in detail (the values critique, the treadmill critique, the sustainability critique, the irrationality critique and the insufficiency critique). The paper also considers possible responses from the techno-optimist. Finally, it is concluded that although strong forms of techno-optimism are not intellectually defensible, a modest, agency-based version of techno-optimism may be defensible.

 

The paper puts forward the following as an 'ameliorative' definition of techno-optimism:


Techno-optimism = A stance (set of beliefs, commitments, desires, intentions etc) that maintains that technology (broadly defined) plays a key role in ensuring that the good prevails over the bad.

 

I am a lot more precise about this in the paper, going on to argue that techno-optimism comes in a variety of different forms which vary along a number of dimensions. One of those dimensions is whether the techno-optimist is presentist or futurist in their outlook: i.e. whether they think technology makes things good right now and/or will do so in the future.

One of the key features of the paper is the argument template I map out for any defender of techno-optimism. In short, I claim that in order to defend a techno-optimistic stance one must defend an argument with five key premises:


  • (1) If (a) the good probably does or probably will prevail over the bad, and (b) technology probably plays a key role in ensuring this, then techno-optimism is the correct stance. 
  • (2) The probable current and/or future facts are F1…Fn [Facts Premise]
  • (3) The agreed-upon value criteria for determining whether the good prevails over the bad are V1…Vn [Value Premise]
  • (4) The good probably prevails over the bad, given F1…Fn evaluated in light of V1…Vn [Evaluation Premise]
  • (5) Technology probably plays a key role in ensuring that (4) is true [Technology Premise]. 
  • (6) Therefore, techno-optimism is the correct stance.

Different techno-optimists will flesh out these premises in different ways, particularly premises (2) - (4), which are the centrepiece of the argument.

Another key feature of the paper is a thorough review of some of the leading objections to techno-optimism and the possible replies that a techno-optimist could make. The table below summarises these objections and replies.




Obviously, I would encourage people to read the whole paper for a fuller picture.


Wednesday, May 18, 2022

Darwin's Logical Argument for Natural Selection


One of the things I occasionally like to do is to re-read books that had an early influence on my thinking. It is an instructive exercise. Sometimes, when you read a book early in life you are easily impressed by its ideas and arguments. Oftentimes, this is because so many of them are new to you. They have, as a result, an outsized influence on your worldview. When you re-read them, you often find them less compelling. You will have learned so much in the intervening years that the ideas and arguments start to seem obvious and stale.

There are some exceptions to this trend. One example of this, for me at any rate, is Daniel Dennett’s book Darwin’s Dangerous Idea. I first read it in my late teens. I loved it at the time. I was new to debates about Darwinism, its scientific basis, and its philosophical implications. I lapped up everything Dennett had to say. Re-reading it now, I still find it compelling. To be clear, a lot of it is not as impressive as I thought at the time. For example, I used to like Dennett’s somewhat imperious and bitchy style of writing -- so critical and dismissive of his peers -- but I don’t like that so much anymore. Nevertheless, I was pleased to find that the book is still full of interesting metaphors and thought experiments: universal acid, skyhooks and cranes, the Library of Mendel, the Two-Bitser machine and so on. All of these get you to think about the world in a new way and many of them still resonate to this day.

That’s a long introduction — a mini-book review of sorts — to what is going to be a very simple post that doesn’t really have anything to do with Dennett’s book.

One of the things I re-read in Dennett’s book was the summary passage from Darwin’s Origin of Species in which Darwin sets out the logical argument for evolution by natural selection. Typical of a lot of writing — particularly 19th-century writing — Darwin expresses the argument in a convoluted style. Here it is in all its original glory:


If during the long course of ages and under varying conditions of life, organic beings vary at all in the several parts of their organisation, and I think this cannot be disputed; if there be, owing to the high geometrical powers of increase of each species, at some age, season, or year, a severe struggle for life, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of existence, causing an infinite diversity in structure, constitution, and habits, to be advantageous to them, I think it would be a most extraordinary fact if no variation ever had occurred useful to each being's own welfare, in the same way as so many variations have occurred useful to man. But if variations useful to any organic being do occur, assuredly individuals thus characterised will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance they will tend to produce offspring similarly characterised. This principle of preservation, I have called, for the sake of brevity, Natural Selection. 
(Darwin, Origin of Species, 1st Edition, pg 127)

 

Regular readers of this blog will know that one of my hobbies is to extract logical arguments from long prosaic summaries. Indeed, it is an exercise I often set for students in my classes. Reading through this passage, it seemed obvious to me that there is a much more straightforward and logically compelling way of expressing Darwin’s argument. I thought it might be interesting to show how to do this.

The first thing to note — which Dennett does in his book — is that the passage contains a series of ‘if…then…’ statements (or conditional statements). As every first-year philosophy student knows, ‘if…then…’ statements are the building blocks of simple deductive arguments, such as:


(1) If X, then Y

(2) X

(3) Therefore, Y

 

Darwin’s argument consists of a chain of two “if…then…” arguments that build to his conclusion in favour of natural selection. Admittedly, some of the ‘if…then…’ statements that make up those two arguments are complex, and contain asides that are distracting, but it’s easy to see them in the text.

The first one is actually a double conditional statement contained in the first sentence. Here it is with the key bits highlighted:


If during the long course of ages and under varying conditions of life, organic beings vary at all in the several parts of their organisation, and I think this cannot be disputed; if there be, owing to the high geometrical powers of increase of each species, at some age, season, or year, a severe struggle for life, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of existence, causing an infinite diversity in structure, constitution, and habits, to be advantageous to them, I think it would be a most extraordinary fact if no variation ever had occurred useful to each being's own welfare, in the same way as so many variations have occurred useful to man.

 

To put this a bit more simply:


  • (1) If there is variation in organic beings, and if there is a severe struggle for life, then there must be some variations that are useful to surviving that struggle.

I have changed the bit after the ‘then’ in order to capture the essence of what Darwin is trying to say. If I had my druthers I would amend it even further to match modern terminology (e.g. “variations will be fitness enhancing”). The asides in the text are the claims that both of the conditions (variation and struggle) are met in reality. So the first part of Darwin’s argument, with the logical inferences filled in, works like this:


  • (1) If there is variation in organic beings, and if there is a severe struggle for life, then there must be some variations that are useful to surviving that struggle.
  • (2) There is variation in organic beings.
  • (3) There is a severe struggle for life.
  • (4) Therefore, there must be some variations that are useful to surviving that struggle (from 1, 2 and 3).


This brings us to the second part of Darwin’s argument, which occurs in the next two sentences of the quoted passage. Here they are with the relevant bits highlighted:


But if variations useful to any organic being do occur, assuredly individuals thus characterised will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance they will tend to produce offspring similarly characterised. This principle of preservation, I have called, for the sake of brevity, Natural Selection.

 

Okay, I highlighted a lot of that section because it is slightly less convoluted than the first sentence. But there is still a lot going on here. Tidying it up, here is what we get:


  • (5) If some variations are useful to surviving the struggle, and if there is a strong principle of inheritance, then useful variations will be preserved.
  • (6) There is a strong principle of inheritance (i.e. offspring are likely to resemble their parents) [implied not stated in the quoted passage]
  • (7) Therefore, useful variations will be preserved (from 4, 5 and 6).


And the preservation of useful variations is simply what Darwin calls ‘natural selection’.

In full, then, Darwin’s logical argument for natural selection, taken from the quoted passage, looks like this:


  • (1) If there is variation in organic beings, and if there is a severe struggle for life, then there must be some variations that are useful to surviving that struggle.
  • (2) There is variation in organic beings.
  • (3) There is a severe struggle for life.
  • (4) Therefore, there must be some variations that are useful to surviving that struggle (from 1, 2 and 3).
  • (5) If some variations are useful to surviving the struggle, and if there is a strong principle of inheritance, then useful variations will be preserved.
  • (6) There is a strong principle of inheritance (i.e. offspring are likely to resemble their parents) [implied not stated in the quoted passage]
  • (7) Therefore, useful variations will be preserved (from 4, 5 and 6).
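
For readers who like the logic made fully explicit, the reconstruction can be checked mechanically. Here is a minimal formal sketch (written in Lean purely for illustration; the proposition letters are my own labels, not Darwin's) showing that the conclusion follows from the premises by conjoining them and applying modus ponens twice:

```lean
-- A minimal sketch. V = organic beings vary; S = there is a severe struggle
-- for life; U = some variations are useful in that struggle; H = there is a
-- strong principle of inheritance; P = useful variations are preserved.
theorem natural_selection (V S U H P : Prop)
    (p1 : V ∧ S → U)    -- premise (1)
    (p2 : V)            -- premise (2)
    (p3 : S)            -- premise (3)
    (p5 : U ∧ H → P)    -- premise (5)
    (p6 : H)            -- premise (6)
    : P :=              -- conclusion (7)
  p5 ⟨p1 ⟨p2, p3⟩, p6⟩  -- conjoin the premises and apply the two conditionals
```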




There is a lot of detail packed into this argument. I have called it the ‘logical argument’ since no empirical evidence is adduced in the quoted passage in support of the key empirical claims (2, 3 and 6). The rest of the Origin of Species provides a lot of evidence in support of those claims. Darwin meticulously documents variation and inheritance in species and gives many examples of the struggle for life. Since Darwin’s time, the field of evolutionary biology has provided reams and reams of evidence in support of those claims, identifying, in much greater detail, the mechanisms of inheritance. In fact, one of Darwin's famous blindspots was the mechanism of inheritance: he knew it happened but did not know how it worked because he knew nothing about genetics. The amassing of evidence since the time of Darwin is one reason why the argument still holds up to this day.

If I were to make one amendment to the argument it would be to insist that the first premise include the phrase ‘if there is [a lot of] variation…’. Why? Because it seems obvious to me that if organisms vary in only one or two ways, there will not be enough variation for variants useful in the wide diversity of struggles for existence to arise. Fortunately, we know that there is a lot of variation in reality, so this amendment is easily made.

Anyway, that's all I wanted to say in this post. I hope this logical reconstruction of Darwin's argument is of interest to some people.

Monday, April 25, 2022

Criticisms and Developments of Ethical Behaviourism




A few years ago, I developed a position I called 'ethical behaviourism' and applied it to debates about the moral status of artificial beings. Roughly, ethical behaviourism is a moral equivalent of the Turing test for artificial intelligence. It states that if an entity looks and acts like another entity with moral status, then you should act as if it has that status. More strongly, it states that the best evidence we have for knowing that another entity has moral status is behavioural. No other form of evidence (mechanical, ontological, historical) trumps the behavioural evidence.

My longest defence of this theory comes from my original article "Welcoming Robots into the Moral Community: A Defence of Ethical Behaviourism" (official; open access), but, in many ways, I prefer the subsequent defence that I wrote up for a lecture in 2019 (available here). The latter article clarifies certain points from my original article and responds to additional objections.

I have never claimed that ethical behaviourism is particularly original or insightful. Very similar positions have been developed and defended by others in the past. Nevertheless, for whatever reason, it has piqued the curiosity of other researchers. The original paper has been cited nearly 80 times, though most of those citations are 'by the way'. More significantly, there are now several interesting and substantive critiques and developments of it available in the literature. I thought it would be worthwhile linking to some of the more significant ones here. I link to open access versions wherever possible.

If you know of other substantive engagements with the theory, please let me know.


  • "The ethics of interaction with neurorobotic agents: a case study with BabyX" by Knott, Sagar and Takac - This is possibly the most interesting paper engaging with the idea of ethical behaviourism. It is a case study of an actual artificial agent/entity. Ultimately, the authors argue that my theory does not account for the experience of people interacting with this agent, and suggest that artificial agents that mimic certain biological mechanisms are more likely to warrant the ascription of moral patiency.

  • 'Is it time for rights for robots? Moral status in artificial entities' by Vincent Müller - A critique of all proponents of moral status for robots that includes a somewhat ill-tempered critique of my theory. Müller admits he is offering a 'nasty reconstruction' (something akin to a 'reductio ad absurdum') of his opponents' views. I think he misrepresents my theory on certain key points. I have corresponded with him about it, but I won't list my objections here. 

  • 'Social Good Versus Robot Well-Being: On the Principle of Procreative Beneficence and Robot Gendering' by Ryan Blake Jackson and Tom Williams - One of the throwaway claims I made in my original paper on ethical behaviourism was that, if the theory is correct, robot designers may have 'procreative' duties toward robots. Specifically, they may be obliged to follow the principle of procreative beneficence (make the best robots it is possible to make). The authors of this paper take up, and ultimately dismiss, this idea. Unlike Müller's paper, this one is a good-natured critique of my views.


  • 'How Could We Know When a Robot was a Moral Patient?' by Henry Shevlin - A useful assessment of the different criteria we could use to determine the moral patiency of a robot. Broadly sympathetic to my position but suggests that it needs to be modified to include cognitive equivalency and not just behavioural equivalency.



Another honourable mention here would be my blog post on ethical behaviourism in human-robot relationships. It summarises the core theory and applies it to a novel context.


Friday, April 8, 2022

How Can Algorithms Be Biased?


Image from Marco Verch, via Flickr

The claim that AI systems are biased is common. Perhaps the classic example is the COMPAS algorithm used to predict recidivism risk amongst prisoners. According to a widely-discussed study published in 2016, this algorithm was biased against black prisoners, giving them more false positive ‘high risk’ scores than white prisoners. And this is just one example of a biased system. There are many more that could be mentioned, from facial recognition systems that do a poor job recognising people with darker skin tones, to employment algorithms that seem to favour male candidates over female ones.

But what does it mean to say that an AI or algorithmic system is biased? Unfortunately, there is some disagreement and confusion about this in the literature. People use the term ‘bias’ to mean different things. Most notably, some people use it in a value-neutral, non-moralised sense whereas others use it in a morally-loaded pejorative sense. This can lead to a lot of talking at cross purposes. People also use the term to describe different types or sources of bias. A lot would be gained if we could disambiguate these different types.

So that’s what I will try to do in the remainder of this article. I will start by distinguishing between moral and non-moralised definitions of ‘bias’. I will then discuss three distinct causes of bias in AI systems, as well as how bias can arise at different stages in the developmental pipeline for AI systems. Nothing I say here is particularly original. I draw heavily from the conceptual clarifications already provided by others. All I hope is that I can cover this important topic in a succinct and clear way.


1. Moralised and Non-Moralised Forms of Bias

One of the biggest sources of confusion in the debate about algorithmic bias is the fact that people use the term ‘bias’ in moralised and non-moralised ways. You will, for example, hear people say that algorithms are ‘inherently biased’ and that ‘we want them to be biased’. This is true, in a sense, but it then creates problems when people start to criticise the phenomenon of algorithmic bias in no uncertain terms. To avoid this confusion, it is crucial to distinguish between the moralised and non-moralised senses of ‘bias’.

Let’s start with the non-moralised sense. All algorithms are designed with a purpose in mind. Google’s PageRank algorithm is intended to sort webpages into a ranking that respects their usefulness or relevance to a search user. Similarly, the route-planning algorithm on Google Maps (or similar mapping services) tries to select the quickest and most convenient route between A and B. In each case, there is a set of possible outcomes among which the algorithm can select, but it favours a particular subset of outcomes because those match better with some goal or value (usefulness, speed etc).

In this sense it is true to say that most, if not all, algorithms are inherently biased. They are designed to be. They are designed to produce useful outputs and this requires that they select carefully from a set of possible outputs. This means they must be biased in favour of certain outcomes. But there is nothing necessarily morally problematic about this inherent bias. Indeed, in some cases it is morally desirable (though this depends on the purpose of the algorithm). When people talk about algorithms being biased in this sense (of favouring certain outputs over others), they are referring to bias in the non-moralised or neutral sense.
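
To make the non-moralised sense concrete, here is a toy sketch. The routes and times are invented; the only point is that an algorithm can be 'biased' purely in the sense of systematically favouring outputs that match its design goal:

```python
# Toy illustration of 'bias' in the non-moralised sense: a route-planner
# that systematically favours faster routes. All data are invented.
routes = [
    {"name": "motorway", "minutes": 42},
    {"name": "coast road", "minutes": 65},
    {"name": "city centre", "minutes": 55},
]

def pick_route(options, value=lambda r: r["minutes"]):
    # Favour the outcome that best matches the design goal (here: speed).
    return min(options, key=value)

print(pick_route(routes))  # selects the motorway route: biased toward speed, by design
```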

How does this contrast with the moralised sense of ‘bias’? Well, although all algorithms favour certain kinds of output, sometimes they will systematically favour outputs that have an unfair impact on certain people or populations. A hiring algorithm that systematically favours male over female candidates would be an example of this. It reflects and reproduces gender-based inequality. If people refer to such a system as being ‘biased’ they are using the term in a moralised sense: to criticise its moral impact and, perhaps, to blame and shame those that created it. They are saying that this is a morally problematic algorithm and we need to do something about it.


There appear to be two conditions that must be met in order for an algorithm to count as biased in this second, moralised, sense:


Systematic output: The algorithm must systematically (consistently and predictably) favour one population/group over another, even if this effect is only indirect.
Unfair effect: The net effect of the algorithm must be to treat populations/groups differently for morally arbitrary or illegitimate reasons.


These conditions are relatively straightforward but some clarifications may be in order. The systematicity condition is there in order to rule out algorithms that might be biased, on occasions, for purely accidental reasons, but not on a repeated basis. Furthermore, when I say that the systematic effect on one population may be ‘indirect’ what I mean is that the algorithm itself may not be overtly or obviously biased against a certain population, but nevertheless affects them disproportionately. For example, a hiring algorithm that focused on years spent in formal education might seem perfectly legitimate on its face, with no obvious bias against certain populations, but its practical effect might be rather different. It could be that certain ethnic minorities spend less time in formal education (for a variety of reasons) and hence the hiring algorithm disproportionately disfavours them.
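
A small simulation can make the point about indirect effects vivid. The numbers below are entirely made up; the only thing the sketch is meant to show is that a screening rule which never mentions group membership can still filter groups at very different rates when the feature it relies on is unevenly distributed:

```python
# Hypothetical sketch of indirect bias: a facially neutral screening rule
# ('at least 16 years of formal education') applied to two groups whose
# education distributions differ. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

def pass_rate(mean_years, n=100_000, cutoff=16):
    years = rng.normal(mean_years, 2.0, n)   # education in this (hypothetical) group
    return (years >= cutoff).mean()          # share the rule screens 'in'

print(pass_rate(16.5))  # group with longer average education -> higher pass rate
print(pass_rate(14.5))  # group with shorter average education -> much lower pass rate
```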

The unfairness condition is crucial but tricky. As noted, some forms of favourable or unfavourable treatment might be morally justified. A recidivism risk score that accurately identifies those at a higher risk of repeat offending would treat a sub-population of prisoners unfavourably, but this could be morally justified. Other forms of unfavourable treatment don’t seem justified, hence the furore about the recidivism risk score that treats black prisoners differently from white prisoners. These are relatively uncontroversial cases. The problem is that there are sometimes grounds for reasonable disagreement as to whether certain forms of favourable treatment are morally justified or not. This moral disagreement will always affect debates about algorithmic bias. This is unavoidable and, in some cases, can be welcomed: we need to be flexible in understanding the boundaries of our moral concepts in order to allow for moral progress and reform. Nevertheless, it is worth being aware of it whenever you enter a conversation about algorithmic bias. We might not always agree whether a certain algorithm is biased in the moralised sense.

One last point before we move on. In the moralised sense, a biased algorithm is harmful. It breaches a moral norm, results in unfair treatment, and possibly violates rights. But there are many forms of morally harmful AI that would not count as biased in the relevant sense. For instance, an algorithmic system for piloting an airplane might result in crashes in certain weather conditions (perhaps in a systematic way), but this would not count as a biased algorithm. Why not? Because, presumably, the crashes would affect all populations (passengers) equally. It is only when there is some unfair treatment of populations/groups that there is bias in the moralised sense.

In other words, it is important that the term ‘bias’ does not do too much heavy lifting in debates about the ethics of AI.


2. Three Causes of Algorithmic Bias

How exactly does bias arise in algorithmic systems? Before I answer that allow me to indulge in a brief divagation.

One of the interesting things that you learn when writing about the ethics of technology is how little of it is new. Many of the basic categories and terms of debate have been set in stone for some time. This is true for much of philosophy of course, but those of us working in ‘cutting edge’ areas such as the ethics of AI sometimes like to kid ourselves that we are doing truly innovative and original work in applied ethics. This is rarely the case.

This point was driven home to me when I read Batya Friedman and Helen Nissenbaum’s paper ‘Bias in Computer Systems’. The paper was published in 1996 — a lifetime ago in technology terms — and yet it is still remarkably relevant. In it, they argue that there are three distinct causes of bias in ‘computer systems’ (a term which covers algorithmic and AI systems too). They are:


Preexisting bias: The computer system takes on a bias that already exists in society or social institutions, or in the attitudes, beliefs and practices of the people creating it. This can be for explicit, conscious reasons or due to the operation of more implicit factors.
Technical bias: The computer system is technically constrained in some way and this results in some biased output or effect. This can arise from hardware constraints (e.g. the way in which algorithmic recommendations have to be displayed to human users on screens with limited space) or problems in software design (e.g. how you translate fuzzy human values and goals into precise quantifiable targets).
Emergent bias: Once the system is deployed, something happens that gives rise to a biased effect, either because of new knowledge or changed context of use (e.g. a system used in a culture with a very different set of values).

 

Friedman and Nissenbaum give several examples of such biases in action. They discuss, for example, an airline ticket booking system that was used by US travel agents in the 1980s. The system was found to be biased in favour of US airlines because it preferred connecting flights from the same carrier. On the face of it, this wasn’t an obviously problematic preference (since there was some convenience from the passenger’s perspective) but in practice it was biased because few non-US airlines offered internal US flights. Similarly, because of how the flights were displayed on screen, travel agents would almost always favour flights displayed on the first page of results (a common bias among human users). These would both be examples of technical bias (physical constraints and design choices).


3. The Bias Pipeline

Friedman and Nissenbaum’s framework is certainly still useful, but we might worry that it is a little crude. For example, their category of ‘preexisting bias’ seems to cover a wide range of different underlying causes of bias. Can we do better? Is there a more precise way to think about the causes of bias?

One possibility is the framework offered by Sina Fazelpour and David Danks in their article ‘Algorithmic Bias: Senses, Sources, Solutions’. This is a more recent entry into the literature, published in 2021, and focuses in particular on the problems that might arise from the construction and deployment of machine learning algorithms. This makes sense since a lot of the attention has shifted away from ‘computer systems’ to ‘machine learning’ and ‘AI’ (though, to be clear, I’m not sure how much of this is justified).

Fazelpour and Danks suggest that instead of thinking about general causal categories, we think instead about the process of developing, designing and deploying an algorithmic system. They call this the ‘pipeline’. It starts with the decision to use an algorithmic system to assist (or replace) a human decision-maker. At this stage you have to specify the problem that you want the system to solve. Once that is done, you have to design the system, translating abstract human values and goals into precise, quantifiable terms that a machine can work with. This phase is typically divided into two separate processes: (i) data collection and processing and (ii) modelling and validation. Then, finally, you have to deploy the system in the real world, where it starts to interact with human users and institutions.

Bias can arise at each stage in the process. To make their analytical framework more concrete they use a case study to illustrate the possible forms of bias: the construction of a ‘student success’ algorithm for use in higher education. The algorithm uses data from past students to predict the likely success of future students on various programs. Consider all the ways in which bias could enter into the pipeline for such an algorithm:


Problem specification: You have to decide what counts as ‘student success’ — i.e. what is it that you are trying to predict. If you focus on grades in the first year of a programme, you might find that this is biased against first generation students or students from minority backgrounds who might have a harder time adjusting to the demands of higher education (but might do well once they have settled down). You also face the problem that any precise quantifiable target for student success is likely to be an imperfect proxy measure for the variable you really care about. Picking one such target is likely to have unanticipated effects that may systematically disadvantage one population.
Data collection: The dataset which you rely on to train and validate your model might be biased in various ways. If the majority of previous students came from a certain ethnic group, or social class, your model is unlikely to be a good fit for those outside those groups. In other words, if there is some preexisting bias built into the dataset, this is likely to be reflected in the resultant algorithm. This is possibly the most widely discussed cause of algorithmic bias as it stems from the classic ‘garbage in, garbage out’ problem.
Modelling and Validation: When you test and validate your algorithm you have to choose some performance criterion against which to validate it, i.e. something that you are going to optimise or minimise. For example, you might want to maximise predictive success (how many students are accurately predicted to go on to do well) or minimise false positive/negative errors (how many students are falsely predicted to do well/badly). The choice of performance criterion can result in a biased outcome (see the sketch after this list). Indeed, this problem is at the heart of the infamous debate about the COMPAS algorithm that I mentioned at the start of this article: the designers tried to optimise predictive success and this resulted in the disparity in false positive errors.
Deployment: Once the algorithm is deployed there could be some misalignment between the user’s values and those embodied in the algorithm, or they could use it in an unexpected way, or in a context that is not a good match for the algorithm. This can result in biased outcomes. For example, imagine using an algorithm that was designed and validated in a small, elite liberal arts college in the US, in a large public university in Europe (or China). Or imagine that the algorithmic prediction is used in combination with other factors by a human decision-making committee. It is quite possible that the humans will rely more heavily on the algorithm when it confirms their pre-existing biases and will ignore it when it does not.
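
To illustrate the modelling and validation point, here is a small simulation with invented numbers (loosely inspired by the COMPAS controversy rather than drawn from it). It shows that if two groups have different base rates for the predicted outcome, a single risk threshold chosen with overall predictive accuracy in mind can produce very different false positive rates across the groups:

```python
# Invented-numbers sketch: one 'accuracy-oriented' risk threshold applied to
# two groups with different base rates yields unequal false positive rates.
import numpy as np

rng = np.random.default_rng(0)

def group_fpr(alpha, beta, n=200_000, threshold=0.5):
    risk = rng.beta(alpha, beta, n)     # distribution of true risk in the group
    outcome = rng.random(n) < risk      # whether the person actually reoffends
    flagged = risk >= threshold         # flagged 'high risk' by the threshold
    return flagged[~outcome].mean()     # false positive rate: flagged but did not reoffend

print(group_fpr(3, 3))  # higher-base-rate group -> noticeably higher FPR
print(group_fpr(2, 4))  # lower-base-rate group  -> lower FPR
```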


 

These are just some examples. Many more could be given. The important point, drawn from both Friedman and Nissenbaum’s framework and the one suggested by Fazelpour and Danks, is that there can be many different (and compounding) causes of bias in algorithmic systems. It is important to be sensitive to these different causes if we want to craft effective solutions. For instance, a lot of energy has been expended in recent times on developing technical solutions to the problem of bias. These are valuable, but not always: they may not be targeted at the right cause. If the problem comes from how the algorithm is used by humans, in a particular decision-making context, then all the technical wizardry may be for naught.

Tuesday, April 5, 2022

97 - The Perils of Predictive Policing (& Automated Decision-Making)



One particularly important social institution is the police force, who are increasingly using technological tools to help efficiently and effectively deploy policing resources. I’ve covered criticisms of these tools in the past, but in this episode, my guest Daniel Susser has some novel perspectives to share on this topic, as well as some broader reflections on how humans can relate to machines in social decision-making. This one was a lot of fun and covered a lot of ground.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Relevant Links

Tuesday, March 29, 2022

AI and the Future of the Work Ethic

That's the title of a talk I delivered to the IEET/UMass project on the future of work. You can watch it above. I look at the history of technological displacement in work and argue that, even if widespread technological unemployment does not happen, automating technologies will make work less valuable for most workers.

I also wrote a short article summarising the key arguments from the talk for the Institute of Arts and Ideas. You can read it here. (Unfortunately, this article, like most on the IAI website, seems to be periodically paywalled; if you are interested in reading the full text, contact me and I will send it to you).



Friday, March 11, 2022

Tragic Choices and the Virtue of Techno-Responsibility Gaps (New Paper)



I have a new paper coming out in the journal Philosophy and Technology. It's about responsibility gaps and why, on some occasions, they are a good thing and we shouldn't always try to plug them. More specifically, it argues that one of the benefits of autonomous machines is that they enable a reduced-cost form of moral delegation. More details below.


Title: Tragic Choices and the Virtue of Techno-Responsibility Gaps

Links: Official; Philpapers; Researchgate

Abstract: There is a concern that the widespread deployment of autonomous machines will open up a number of 'responsibility gaps' throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on 'plugging' or 'dissolving' the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace certain kinds of responsibility gap. The argument is based on the idea that human morality is often tragic. We frequently confront situations in which competing moral considerations pull in different directions and it is impossible to perfectly balance these considerations. This heightens the burden of responsibility associated with our choices. We cope with the tragedy of moral choice in different ways. Sometimes we delude ourselves into thinking the choices we make were not tragic (illusionism); sometimes we delegate the tragic choice to others (delegation); sometimes we make the choice ourselves and bear the psychological consequences (responsibilisation). Each of these strategies has its benefits and costs. One potential advantage of autonomous machines is that they enable a reduced cost form of delegation. However, we only gain the advantage of this reduced cost if we accept that some techno-responsibility gaps are virtuous.