Saturday, October 20, 2018

Episode #47 - Eubanks on Automating Inequality



 In this episode I talk to Virginia Eubanks. Virginia is an Associate Professor of Political Science at the University at Albany, SUNY. She is the author of several books, including Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor and Digital Dead End: Fighting for Social Justice in the Information Age. Her writing about technology and social justice has appeared in The American Prospect, The Nation, Harper’s and Wired. She has worked for two decades in community technology and economic justice movements. We talk about the history of poverty management in the US and how it is now being infiltrated and affected by tools for algorithmic governance.

 You can download the episode here or listen below. You can also subscribe to the show on iTunes or Stitcher (the RSS feed is here).



Show Notes

  • 0:00 - Introduction
  • 1:39 - The future is unevenly distributed but not in the way you might think
  • 7:05 - Virginia's personal encounter with the tools for automating inequality
  • 12:33 - Automated helplessness?
  • 14:11 - The history of poverty management: denial and moralisation
  • 22:40 - Technology doesn't disrupt our ideology of poverty; it amplifies it
  • 24:16 - The problem of poverty myths: it's not just something that happens to other people
  • 28:23 - The Indiana Case Study: Automating the system for claiming benefits
  • 33:15 - The problem of automated defaults in the Indiana Case
  • 37:32 - What happened in the end?
  • 41:38 - The L.A. Case Study: A "match.com" for the homeless
  • 45:40 - The Allegheny County Case Study: Managing At-Risk Children
  • 52:46 - Doing the right things but still getting it wrong?
  • 58:44 - The need to design an automated system that addresses institutional bias
  • 1:07:45 - The problem of technological solutions in search of a problem
  • 1:10:46 - The key features of the digital poorhouse
 

Relevant Links


 

Sunday, October 14, 2018

Robots and the Expanding Moral Circle




(I appear in this video from 15:49-25:51) 


[The following is, roughly, the text of a speech I delivered to the Trinity College, Dublin Historical Society on the 10th October 2018 (which you can watch in the video above from 15:49 - 25:51). It was for a debate on the topic of AI personhood. The proposition that was up for debate was “That this House would recognize AI as legal persons”. I was supposed to speak in favour of the proposition but, as you’ll see below, I don’t quite do that, though I do argue for something not too far removed from this idea. I find that formal debates present an interesting challenge. They are hardly the best means of getting at the truth, but there is, I think, some value to distilling your arguments on a particular topic down into a short speech. It means you have to focus on what is most relevant to your case and skip some of the nuance and waffle that is common in academic talks. This is my way of saying that what you are about to read is hardly the most careful and sophisticated defence of my views on AI moral personhood, but it has the virtue of brevity.]


(1) Not going to talk about legal personhood
In every debate in which I have participated, I have disagreed with the proposition. Tonight is no different. Unfortunately, I am not going to argue that we should recognize AI as legal persons. I don’t think that is an interesting question for at least three reasons. First, legal personhood is a social construct that can be manipulated and reshaped by us if we choose: it is not something with independent moral content or weight. Second, and this may shock you, it may already be the case that AIs can be recognized as legal persons. Shawn Bayern (a law professor at Florida State University in the US) has argued that there are loopholes in US corporate law that allow for an AI to legally control a limited liability company. If he is right, then since LLCs are legal persons, AIs can also be legal persons, at least in the US, and this could transfer to the EU due to mutual recognition provisions. Third, whether or not this is a good idea – the recognition of AIs as legal persons – depends on something else. Specifically, I think it depends on whether AIs/robots (I will talk about both) have a moral status that deserves legal recognition and protection. That’s what I want to consider.


(2) The Ethical Behaviourist Approach
Now, I am not going to argue that AIs/robots currently have moral status. I am just going to argue that they very plausibly could in the not too distant future. The reason for this is that I am an ethical behaviourist. I believe that all claims about the moral status of a particular entity (e.g. human being, animal) depend on inferences we make from external behaviours and representations made by that entity. In debates about moral status people will talk about things like sentience, the capacity to feel pain, the capacity to have interests, to be a continuing subject of conscious experience, and so on, as if these properties are what matter to judgments of moral status. I don’t disagree with any of that: I think they are what matters. I just think all judgments about the existence of those properties depend on inferences we make from behavioural states.

This posture of ethical behaviourism leads me to endorse a ‘performative equivalency’ standard when it comes to making judgments about moral status. According to this standard, if a robot/AI is performatively equivalent to another entity to whom we afford moral status, then the robot/AI must be afforded the same moral status. This can then translate into legal recognition and protection. I think it is possible (likely?) that robots/AI will meet this PE-standard in the near future, and, if they do, they should be granted moral status.


(3) An initial defence of Ethical Behaviourism
Why should we embrace this performative equivalency standard? I think this is ultimately a view that is best defended in the negative, but there are three initial reasons I would offer:

The first is the Kantian Reason: we cannot know the thing-in-itself; we can only ever know it through its external representations. We do not have direct epistemic access to someone’s conscious experiences of this world (which are central to judgments of moral status); we only have access to their behaviours and performances. It follows from this that the PE standard is the only one we can apply in moral affairs.

The second reason is common sense: we all know this to be true in our day-to-day lives. It’s obvious that we do not know what is going on in someone else’s head and so must make judgments about how they experience the world through their external representations to us. In other words, we are all, already, following the PE standard in our day-to-day moral decision-making.

The third reason is that this chimes pretty well with scientific practice: psychologists who make inferences as to what is going on in a person’s mind do so through behavioural measures; and neuroscientists validate correlations between brain states and mental states through behavioural measures. I’m just advocating the same approach when it comes to ascriptions of moral status.


(4) Objections and Replies
So that’s the initial defence of my position. If you are like the other people with whom I have shared this view you will think it is completely ridiculous. So let me soften the blow by responding to some common objections:


Objection 1: Robots/AIs aren’t made out of the right biological stuff (or don’t have the right biological form) and this is what matters to ascriptions of moral status, not performative equivalency (I sometimes call this the ‘ontology matters’ or ‘matter matters’ objection).

Response: Now, I happen to think this view is ridiculous as it amounts to an irrational form of biological mysterianism, but I would actually be willing to concede something to it just for the sake of argument. I would be willing to concede that being made of the right biological stuff is a sufficient condition for moral status, but that it is not a necessary one. In other words, if you have a human being or animal that doesn’t have a sophisticated behavioural repertoire you might be within your rights to grant it moral status on the grounds of biological constitution alone; it just doesn’t follow from this that it would be right to deny moral status to a robot that does have a sophisticated behavioural repertoire because it isn’t made of the right stuff. They are both sufficient conditions for moral status.

Objection 2: Robots/AIs have different origins to human beings/animals. They have been programmed and designed into existence whereas we have evolved and developed. This undermines any inferences we might make from behaviour to moral status. To slightly paraphrase the philosopher Michael Hauskeller: “[A]s long as we have an alternative explanation for why [the robot/AI] behaves that way (namely, that it has been designed and programmed to do so), we have no good reason to believe that its actions are expressive of anything [morally significant] at all” (Hauskeller 2017)

Response: I find it hard to accept this view because I find it hard to accept that different origins matter more than behaviour in moral judgments of others. Indeed, I think this is a view with a deeply problematic history: it’s effectively the basis for all forms of racism and minority exclusion: that you are judged by racial and ethnic origin, not actual behaviour. Most importantly, however, it’s not clear that there are strong ‘in principle’ differences in origin between humans and AIs of the sort that Hauskeller and others suppose. Evolution is a kind of behavioural programming (and is often explained in these terms by scientists). So you could argue that humans are programmed as well as AIs. Also, with the advent of genetic engineering and other forms of human enhancement the lines between humans and machines in terms of origin are likely to blur even more in the future. So this objection will become less sustainable.

Objection 3: Robots/AI will be owned and controlled by humans; this means they shouldn’t be granted moral status.

Response: I hesitate to include this objection but it is something that Joanna Bryson – one of the main critics of AI moral status – made much of in her earlier work (she may have distanced herself from it since). My response is simple: the direction of moral justification is all wrong here. The mere fact that we might own and control robots/AI does not mean we should deny them moral status. We used to allow humans to own and control other humans. That doesn’t mean it was the right thing to do. Ownership and control are social facts that should be grounded in sound moral judgments, not the other way around.

Objection 4: If performative equivalency is the standard of moral status, then manufacturers of robots/AI are going to engage in various forms of deception or manipulation to get us to think they deserve moral status when they really don’t.

Response: I’m not convinced that the commercial motivations for doing this are that strong, but set that to the side. This is, probably, the main concern that people have about my view. I have three responses to it: (i) I don’t think people really know what they mean by ‘deception/manipulation’ in this context – if a robot consistently (and the emphasis is on consistently) behaves in a way that is equivalent to other entities to whom we afford moral status then there is no deception/manipulation (those concepts have no moral purchase unless cashed out in terms of behavioural inconsistencies); (ii) if you are worried about this, then a lot of the worry can be avoided by setting the ‘performative equivalency’ standard relatively high, i.e. erring on the side of false negatives rather than false positives when it comes to expanding the moral circle (though this strategy does have its own risks); and (iii) deception and manipulation are rampant in human-to-human relationships but this doesn’t mean that we deny humans moral status – why should we take a different approach with robots?




(5) Conclusion
Let me wrap up by making two final points. First, I want to emphasise that I am not making any claims about what the specific performative equivalency test for robots/AI should be – that’s something that needs to be determined. All I am saying is that if there is performative equivalency, then there should be a recognition of moral status. Second, my position does have serious implications for the designers of robots/AI. It means that their decisions to create such entities have a moral dimension that they may not fully appreciate and may like to disown. This might be one reason why there is such resistance to the idea. But we shouldn’t allow them to shirk responsibility if, as I believe, performative equivalency is the correct moral standard to apply in these cases. Okay, that’s it from me. Thank you for your attention.








Friday, October 12, 2018

The Automation of Policing: Challenges and Opportunities


[These are some general reflections on the future of automation in policing. They are based on a workshop I gave at the ACJRD (Association for Criminal Justice Research and Development) annual conference in Dublin on the 4th October 2018. I took it that the purpose of the workshop was to generate discussion. As a result, the claims made below are not robustly defended. They are intended to be provocative and programmatic.]

This conference is all about data and how it can be used to improve the operation of the criminal justice system. This focus is understandable. We are, as many commentators have observed, living through a ‘data revolution’ in which we are generating and collecting more data than ever before. It makes sense that we would want to put all this data to good use in the prevention and prosecution of crime.

But the collection and generation of data is only part of the revolution that is currently taking place. The data revolution, when combined with advances in artificial intelligence and robotics, enables the automation of functions traditionally performed by human beings. Police forces can be expected to make use of the resulting automating technologies. From predictive policing, to automated speed cameras, to bomb disposal robots, we already see a shift away from human-centric policing systems to ones in which human police officers must partner with, or be replaced by, machines.

What does this mean for the future of policing? Will police officers undergo significant technological displacement, just as workers in other industries have? Will the advent of smart, adaptable security robots change how we think about the enforcement of the law? I want to propose some answers to these questions. I will divide my remarks into three main sections. I will start by setting out a framework for thinking about the automation of policing. I will then ask and propose answers to two questions: (i) what does the rise of automation mean for police officers (i.e. the humans currently at work in the policing system)? and (ii) what does it mean for the policing system as a whole?


1. A Framework for Thinking about the Automation of Policing
Every society has rules and standards. Some, but not all, of these rules are legal in nature. And some, but not all, of these legal rules concern what we call ‘crimes’. Crimes are the rules to which we attach the most social and public importance. Somebody who fails to comply with such rules will open themselves up to public prosecution and condemnation. Nevertheless, it is important to bear in mind that crimes are just one subset of the many rules and standards we try to uphold. What’s more, the boundaries of the ‘criminal’ are fluid — new crimes are identified and old crimes are declassified on a semi-regular basis. This fluid boundary is important when we consider the impact of automation on policing (more on this later).

When trying to get people to comply with social rules, there are two main strategies we can adopt. We can ‘detect and enforce’ or we can ‘predict and prevent’. If we detect and enforce, we will try to discover breaches of the rules after the fact and then impose some sanction or punishment on the person who breached them (the ‘offender’). This punishment can be levied for any number of reasons (retribution, compensation, rehabilitation etc), but a major one — and one that is central to the stability of the system — is to deter others from doing the same thing. If we predict and prevent, we will try to anticipate potential breaches of the rules and then plan interventions that minimise or eliminate the likelihood of the breach taking place.

I’ve tried to illustrate all this in the diagram below.



This diagram is important to the present discussion because it helps to clarify what we mean when we talk about the automation of policing. Police officers are the people we task with ensuring compliance with our most cherished social rules and standards (crimes) and most police forces around the world follow both ‘predict and prevent’ as well as ‘detect and enforce’ strategies. So when we talk about the automation of policing we could be talking about the automation of one (or all) of these functions. In what follows I’ll be considering the impact of automation on all of them.

(Note: I appreciate that there is more to the criminal justice system than this framework lets on. There is also the post-enforcement management of offenders (through prison and probation) as well as other post-release and early-intervention systems, which may properly be seen as part of the policing function. There is much complexity here that gets obscured when we talk, quite generally, about the ‘automation of policing’. I can’t be sensitive to every dimension of complexity in this analysis. This is just a first step.)


2. The Automation of Police Officers
Let’s turn then to the first major question: what effect will the rise of automating technologies have on police officers? There is a lot of excitement nowadays about automating technologies. Police forces around the world are making use of data analytics systems (‘predictive policing’) to help them predict and prevent crime in the most efficient way possible. Various forms of automated surveillance and enforcement are also commonplace through the use of speed cameras and red light cameras. There are also more ‘showy’ or obvious forms of automation on display, though they are slightly less common. There are no robocops just yet, but many police forces make use of bomb disposal robots, and some are experimenting with fully-automated patrol bots. The most striking example of this is probably the Dubai police force, which has rolled out security bots and drone surveillance at tourist spots. There are also some private security bots, such as those made by Knightscope Robotics in California, which could be used by police forces.

If we assume that similar and more advanced automating technologies are going to come on-stream in the future, obvious questions arise for those who currently make their living within the police force. Do they need to start looking elsewhere for employment? Will they, ultimately, be replaced by robots and other automating technologies? Or will it still make sense for the children of 2050 to dream of being police officers when they grow up?

To answer that question I think it is important to make a distinction, one that is frequently made by economists looking at automation, between a ‘job’ and a ‘task’. There is the job of being a police officer. This is the socially-defined role to which we assign the linguistic label ‘police officer’. This is, more or less, arbitrarily defined by grouping together different tasks (patrolling, investigating, form-filling, data analysis and so on) and assigning them to that role. It is these tasks that really matter. They are what police officers actually do and how they justify their roles. In modern policing, there is a large number of relevant tasks, some of which are further sub-divided and sub-grouped according to the division and rank of the individual police officer. Furthermore, some tasks that are clearly essential to modern policing (IT security, data analysis, community service) are sometimes assigned new role labels and not included within the traditional class of police officer. This illustrates the arbitrariness of the socially defined role.

This leads to an important conclusion: When we think about the automation of police officers, it is important not to focus on the job per se (since that is arbitrarily defined) but rather on the tasks that make up that job. It is these tasks, rather than the job, that are going to be subject to the forces of automation. Routine forms of data analysis, surveillance, form-filling and patrolling are easily automatable. If they are automated, this does not mean that the job of being a police officer will disappear. It is more likely that the job will be redefined to include or prioritise other tasks (e.g. in-person community engagement and creative problem-solving).

This leads me to another important point. I’ve been speaking somewhat loosely about the possibility of automating the tasks that make up the role of being a police officer. There are, in fact, different types of task relationships between humans and automating technologies that are obscured when you talk about things in this way. There are three kinds of relationship that I think are worth distinguishing between:

Tool Relationships: These arise when humans simply use technology as a tool to perform their job-related tasks more efficiently. Tools do not replace humans; they simply enable those humans to perform their tasks more effectively. Some automating technologies, e.g. bomb disposal robots, are tools. They are teleoperated by human controllers. The humans are still essential to the performance of the task.
Partnership Relationships: These arise when the machines are capable of performing some elements of a task by themselves (autonomously) but they still partner with humans in their performance. I think many predictive policing systems are of this form. They can perform certain kinds of data analysis autonomously, but they are still heavily reliant on humans for inputting, interpreting and acting upon that data.
Usurpation Relationships: These arise when the machines are capable of performing the whole task by themselves and do not require any human assistance or input. I think some of the new security bots, as well as certain automated surveillance and enforcement technologies are of this type. They can fully replace human task performers, even if those humans retain some supervisory role.


All three relationship types pose different risks for humans working within the policing system. Tool relationships don’t threaten mass technological unemployment, but they do threaten to change the skills and incentives faced by police officers. Instead of being physically dextrous and worrying about their own safety, police officers just have to be good at controlling machines that are put in harm’s way. Something similar is true for partnership relationships, although those systems may threaten at least some displacement and unemployment. Usurpation relationships, of course, promise to be the most disruptive and the most likely to threaten unemployment for humans. Even if we need some human commanders and supervisors for the usurpers, we probably need fewer of them relative to those who are usurped.

So what’s the bottom line then? Will human police officers be displaced by automating technologies? I make three predictions, presented in order of likelihood:


  • (a) Future (human) police officers will require different skills and training as a result of automation: they will have to develop skills that are complementary to those of the machines, not in competition with them. This is a standard prediction in debates about automation and should not be controversial (indeed, the desire for machine-complementary skills in policing is already obvious).

  • (b) This could lead to significant polarisation, inequality, and redefinition of what it means to be a police officer. This, again, tracks with what happens in other industries. Some people are well-poised to benefit from the rise of the machines: they have skills that are in short supply and they can leverage the efficiencies of the technologies to their own advantage. They will be in high demand and will attract high wages. Others will be less well-poised to benefit from the rise of the machines and will be pushed into forms of work that are less skilled and less respected. This could lead to some redefinition of the role of being a ‘police officer’ and some dissatisfaction within the ranks.

  • (c) There might be significant technological unemployment of police officers. In other words, there may be many fewer humans working in the police forces of the future than at present. This is the prediction about which I am least confident. Police officers, unlike many other workers, are usually well-unionised and so are probably more able to resist technological unemployment than other workers. I also suspect there is some public desire for human faces in policing. Nevertheless, mass unemployment of police officers is still conceivable. It may also happen by stealth (e.g. existing human workers are allowed to retire and are not replaced, and roles are gradually redefined to phase out humans).



3. The Automation of Policing as a Whole
So much for the individual police officers, what about the system of policing as a whole? Let’s go back to the framework I developed earlier on. As mentioned, automating technologies could be (and are being) used to perform both the ‘detect and enforce’ and the ‘predict and prevent’ functions. What I want to suggest now is that, although this is true, it’s possible (and perhaps even likely) that automating technologies will encourage a shift away from ‘detect and enforce’ modes of policing to ‘predict and prevent’ modes. Indeed, automating technologies may encourage a ‘prevent-only’ model of policing. Furthermore, even when automated systems are used to perform the detect and enforce functions, they are likely to do so in a different way.

Neither of these suggestions is unique to me. Regulatory theorists have long observed that technology often encourages a shift from ‘detect and enforce’ to ‘predict and prevent’ methods of ensuring normative compliance. Roger Brownsword, for example, talks about the shift from rule-enforcement to ‘technological management’ in regulatory systems. He gives the example of a golf course that is having trouble with its members driving their golf carts over a flowerbed. To stop them doing this, the management committee first create a rule that assigns penalties to those who drive their golf carts over the flowerbed. They then put up a surveillance camera to help them detect breaches of this rule. This helps a little bit, but then a new technology comes on the scene that enables, through GPS-tracking, the remote surveillance and disabling of any golf carts that come close to the flowerbed. Since that system is so much more effective — it renders non-compliance with the rule impossible — the committee adopt that instead. They have moved from traditional rule-enforcement to technological management.
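To make Brownsword’s idea of ‘technological management’ a little more concrete, here is a minimal sketch (in Python) of the kind of logic his GPS-enabled golf carts might embody. The coordinates, radius and class names are all invented for illustration; the point is simply that the rule is upheld by disabling the regulated behaviour in advance rather than by detecting and punishing breaches after the fact.

```python
import math

# Hypothetical geofence around the flowerbed (centre + radius in metres).
FLOWERBED_CENTRE = (53.3498, -6.2603)   # made-up coordinates
EXCLUSION_RADIUS_M = 20.0

def distance_m(pos_a, pos_b):
    """Rough planar distance in metres between two (lat, lon) points."""
    lat_scale = 111_320  # metres per degree of latitude (approx.)
    lon_scale = 111_320 * math.cos(math.radians(pos_a[0]))
    return math.hypot((pos_a[0] - pos_b[0]) * lat_scale,
                      (pos_a[1] - pos_b[1]) * lon_scale)

class GolfCart:
    def __init__(self):
        self.motor_enabled = True

    def report_position(self, position):
        """Called whenever the cart's GPS unit reports a new position."""
        # Non-compliance is made impossible: the cart is simply switched off.
        self.motor_enabled = distance_m(position, FLOWERBED_CENTRE) >= EXCLUSION_RADIUS_M

cart = GolfCart()
cart.report_position((53.3499, -6.2603))  # a position near the flowerbed
print(cart.motor_enabled)                 # False: the rule cannot be broken
```

There is no camera, no penalty and no enforcement officer in this sketch: the normative work is done entirely by the architecture of the system.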

Elizabeth Joh argues that something similar is likely to happen in the ‘smart cities’ of the future. Instead of using technology simply to detect and enforce breaches of the law, it will be used to prevent non-compliance with the rules. The architecture of the city will ‘hardcode’ in the preferred normative values and will work to disable or deactivate anyone who tries to reject those values. Furthermore, constant surveillance and monitoring of the population will enable future police forces to locate those who pose a threat to the system and quickly defuse that threat. This may lead to an expansion of the set of rules that the policing system tries to uphold to include relatively minor infractions that disturb the public peace. Joh thinks that this might lead to a ‘disneyfication’ of future policing. She bases this judgment on a famous study by Shearing and Stenning on security practices in Disney theme parks. She thinks this provides a model for policing in the smart city:

“The company anticipates and prevents possibilities for disorder through constant instructions to visitors, physical barriers that both guide and limit visitors’ movements, and through “omnipresent” employees who detect and correct the smallest errors (Shearing & Stenning 1985: 301). None of the costumed characters nor the many signs, barriers, lanes, and gardens feel coercive to visitors. Yet through constant monitoring, prevention, and correction embedded policing is part of the experience…”

(Joh 2018)

It’s exactly this kind of policing that is enabled by automating technologies.

This is not to say that detection and enforcement will play no part in the future of policing. There will always be some role for that, but it might take a very different form. Instead of the strong arm of the law we might have the soft hand of the administrator. Instead of being sent to jail and physically coerced when we fail to comply with the law, we might be nudged or administratively sanctioned. The Chinese Social Credit system, much reported on and much maligned in the West, provides one possible glimmer of this future. Through mass surveillance and monitoring, compliance with rules can be easily rewarded or punished through a social scoring system. Your score determines your ease of access to social services and opportunities. We already have isolated and technologically enabled versions of this in Western democracies — e.g. penalty points systems for driving licences and credit rating scores in finance — the Chinese system simply takes these to their logical (and technological) extreme.
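To illustrate the general mechanism being described here — compliance rewarded or punished through a score that gates access to services — here is a deliberately crude sketch. It is not a description of the actual Chinese system or of any real penalty-points regime; every event name, adjustment and threshold is invented.

```python
# Toy compliance-scoring scheme, purely for illustration.
ADJUSTMENTS = {
    "paid_fine_on_time": +5,
    "speeding_offence": -10,
    "missed_court_date": -20,
}

class ComplianceScore:
    def __init__(self, score=100):
        self.score = score

    def record(self, event):
        """Adjust the score whenever a monitored behaviour is logged."""
        self.score += ADJUSTMENTS.get(event, 0)

    def can_access(self, threshold):
        """Access to a service is granted only above a threshold."""
        return self.score >= threshold

person = ComplianceScore()
person.record("speeding_offence")
person.record("missed_court_date")
print(person.score)           # 70
print(person.can_access(80))  # False: a soft, administrative sanction
```

The ‘punishment’ here is not imprisonment or physical coercion but a quiet narrowing of opportunities, which is exactly the soft, administrative character of enforcement described above.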

There is one other possible future for policing that is worth mentioning. It could be that the rise of automating technologies will encourage a shift away from public models of policing to privatised models. This is already happening to some extent. Many automated systems used by modern police forces are owned by private companies and hence lead to a public-private model of policing (with many attendant complexities when it comes to the administration of the law). But this trend may continue to push towards wholly private forms of automated policing. I cannot currently afford my own team of private security guards, but I might be able to afford my own team of private security bots (or at least rent them from an Uber-like company).

The end result may be that keeping the peace is no longer a public responsibility discharged by the police, but a private responsibility discharged by each of us.


4. Conclusion: Ethico-Legal Concerns
Let me conclude by briefly commenting on the ethical and legal concerns that could result from the automation of policing. There are five worth mentioning here, each of which arises in other cases of automation too:

Freedom to Fail: The shift from rule-enforcement to technological management seems to undermine human autonomy and moral agency. Instead of being given the opportunity to exercise their free will and agency, humans are constrained and programmed into compliance. They lose their freedom to fail. Should we be concerned?
Responsibility Gaps: As we insert autonomous machines into the policing system, questions arise as to who is responsible for the misdeeds of these machines. Responsibility gaps open up that must be filled.
Transparency and Accountability: Related to the problem of responsibility gaps, automating technologies are often opaque or unclear in their operations. How can we ensure sufficient transparency? Who will police the automated police?
Biased Data —> Biased Outcomes: Most modern-day automating technologies are trained on large datasets. If the information within these datasets is biased or prejudiced, this often leads to the automating technologies being biased or prejudiced too. Concerns about this have already arisen in relation to predictive policing and algorithmic sentencing (see the sketch after this list). How can we stop this from happening?
The Value of Inefficiency: One of the alleged virtues of automating technologies is their efficient and unfailing enforcement of, and compliance with, rules. But is this really a good thing? As Woodrow Hartzog and his colleagues have pointed out, it could be that we don’t want our social rules to be efficiently enforced. Imagine if you were punished every time you broke the speed limit. Would that be a good thing? Given how frequently we all tend to break the speed limit, and how desirable this is on occasion, it may be that efficient enforcement is overkill. In other words, it could be that there is some value to inefficiency that is lost when we shift to automating technologies. How can we preserve valuable forms of inefficiency?
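The ‘biased data → biased outcomes’ worry can be made vivid with a toy simulation (this is the sketch referred to in the list above). All of the numbers are invented: two districts are given identical underlying offence rates, but the historical record is skewed because one district was patrolled more heavily in the past. Because patrols are allocated in proportion to recorded incidents, the initial skew is reproduced indefinitely rather than corrected.

```python
import random

random.seed(1)

# Two districts with the SAME underlying offence rate (hypothetical numbers).
TRUE_OFFENCE_RATE = {"district_a": 0.3, "district_b": 0.3}

# The historical record is skewed: district A was patrolled more in the past.
recorded = {"district_a": 60, "district_b": 20}

TOTAL_PATROLS = 100

for year in range(5):
    total = sum(recorded.values())
    # Patrols are allocated in proportion to recorded incidents (the 'prediction').
    patrols = {d: round(TOTAL_PATROLS * recorded[d] / total) for d in recorded}
    # An offence can only be recorded where a patrol happens to be.
    for d in recorded:
        recorded[d] += sum(
            random.random() < TRUE_OFFENCE_RATE[d] for _ in range(patrols[d])
        )
    print(year, patrols)

# District A keeps absorbing roughly three times the patrols even though,
# by construction, the two districts are identical: the bias in the data
# is faithfully reproduced in the 'predictions'.
```

Nothing in this sketch is malicious; the skew is simply inherited from the data and then laundered through an apparently neutral allocation rule.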


I think each of these issues is worthy of more detailed consideration. I just want to close by noting how similar they are to the issues raised in other debates about automation. This is one of the main insights I derived from preparing this talk. Although we certainly should talk about the consequences of automation in specific domains (finance, driving, policing, military weapons, medicine, law etc.), it is also worth developing more general theoretical models that can both explain and prescribe answers to the questions we have about automation.




Friday, October 5, 2018

Episode #46 - Minerva on the Ethics of Cryonics


In this episode I talk to Francesca Minerva. Francesca is a postdoctoral fellow at the University of Ghent. Her research focuses on applied philosophy, specifically lookism, conscientious objection, abortion, academic freedom, and cryonics. She has published many articles on these topics in some of the leading academic journals in ethics and philosophy, including the Journal of Medical Ethics, Bioethics, the Cambridge Quarterly of Healthcare Ethics and the Hastings Center Report. We talk about life, death and the wisdom and ethics of cryonics.

You can download the episode here or listen below. You can also subscribe on iTunes or Stitcher (the RSS feed is here).

Show Notes:

  • 0:00 - Introduction
  • 1:34 - What is cryonics anyway?
  • 6:54 - The tricky logistics of cryonics: you need to die in the right way
  • 10:30 - Is cryonics too weird/absurd to take seriously? Analogies with IVF and frozen embryos
  • 16:04 - The opportunity cost of cryonics
  • 18:18 - Is death bad? Why?
  • 22:51 - Is life worth living at all? Is it better never to have been born?
  • 24:44 - What happens when life is no longer worth living? The attraction of cryothanasia
  • 30:28 - Should we want to live forever? Existential tiredness and existential boredom
  • 37:20 - Is immortality irrelevant to the debate about cryonics?
  • 41:42 - Even if cryonics is good for me might it be the unethical choice?
  • 45:00 (ish) - Egalitarianism and the distribution of life years
  • 49:39 - Would future generations want to revive us?
  • 52:34 - Would we feel out of place in the distant future?

Relevant Links

 

Thursday, October 4, 2018

The Philosophy of Space Exploration (Index)




I have written quite a few pieces about the philosophy and ethics of space exploration over the past 12 months. I am very interested in the idea that space exploration can represent a long-term utopian project for humanity. I expect this series will continue to grow so I thought it might be useful to collect them together in one place. Enjoy!








I have also recorded some podcasts that touch on the philosophy and ethics of space exploration:








Saturday, September 29, 2018

Artificial Intelligence and the Constitutions of the Future




[Note: This is the text of a talk I delivered to the AI and Legal Disruption (AI-LeD) workshop in Copenhagen University on the 28th September 2018. As I said at the time, it is intended to be a ‘thinkpiece’ as opposed to a well-developed argument or manifesto. It’s an idea that I’ve been toying with for a while but have not put down on paper before. This is my first attempt. It makes a certain amount of sense to me and I find it useful, but I’m intrigued to see whether others do as well. I got some great feedback on the ideas in the paper at the AI-LeD workshop. I have not incorporated that feedback into this text, but will do so in future iterations of the framework I set out below.]

What effect will artificial intelligence have on our moral and legal order? There are a number of different ‘levels’ at which you can think about this question.

(1) Granular Level: You can focus on specific problems that arise from specific uses of AI: responsibility gaps with self-driving cars; opacity in credit scoring systems; bias in sentencing algorithms and so on. The use of AI in each of these domains has a potentially ‘disruptive’ effect and it is important to think about the challenges and opportunities that arise. We may need to adopt new legal norms to address these particular problems.
(2) Existential Level: You can focus on grand, futuristic challenges posed by the advent of smarter-than-human AI. Will we all be turned into paperclips? Will we fuse with machines and realise the singularitarian dreams of Ray Kurzweil? These are significant questions and they encourage us to reflect on our deep future and place within the cosmos. Regulatory systems may be needed to manage the risks that arise at this existential level (though they may also be futile).
(3) Constitutional Level: You can focus on how advances in AI might change our foundational legal-normative order. Constitutions enshrine our basic rights and values, and develop political structures that protect and manage these foundational values. AI could lead to a re-prioritising or re-structuring of our attitude to basic rights and values and this could require a new constitutional order for the future. What might that look like?

Lots of work has been done at the granular and existential levels. In this paper, I want to make the case for more work to be done at the constitutional level. I think this is the most important level and the one that has been neglected to date. I’ll make this case in three main phases. First, I’ll explain in more detail what I mean by the ‘constitutional level’ and what I mean by ‘artificial intelligence’. Second, I’ll explain why I think AI could have disruptive effects at the constitutional level. Third, I’ll map out my own vision of our constitutional future. I’ll identify three ‘ideal type’ constitutions, each associated with a different kind of intelligence, and argue that the constitutions of the future will emerge from our exploration of the possibility space established by these ideal types. I’ll conclude by considering where I think things should go from here.



1. What is the constitutional level of analysis?
Constitutions do several different things and they take different forms. Some might argue that this variability in constitutional form and effect makes it impossible to talk about the ‘constitutional level’ of analysis in a unitary way. I disagree. I think that there is a ‘core’ or ‘essence’ to the idea of a constitution that makes it useful to do so.

Constitutions do two main things. First, they enshrine and protect fundamental values. What values does a particular country, state, legal order hold dear? In liberal democratic orders, these values usually relate to individual rights and democratic governance (e.g. right to life, right to property, freedom of speech and association, freedom from unwarranted search and seizure, right to a fair trial, right to vote etc.). In other orders, different values can be enshrined. For example, although this is less and less true, the Irish constitution had a distinctively ‘Catholic’ flavour to its fundamental values when originally passed, recognising the ‘special place’ of the Catholic Church in the original text, banning divorce and (later) abortion, outlawing blasphemy, and placing special emphasis on the ‘Family’ and its role in society. It still had many liberal democratic rights, of course, which also illustrates how constitutions can blend together different value systems.

Second, constitutions establish institutions of governance. They set out the general form and overall function of the state. Who will rule? Who will pass the laws? Who will protect the rule of law? Who has the right to create and enforce new policies? And so on. These institutions of governance will typically be required to protect the fundamental values that are enshrined in the constitution, but they will also have the capacity for dynamic adaptation — to ensure that the constitutional order can grow and respond to new societal challenges. In this regard, one of the crucial things that constitutions do, as Adrian Vermeule has argued in his book The Constitution of Risk, is help to manage ‘political risk’, i.e. the risk of bad governance. If well designed, a constitution should minimise the chances of a particular government or ruler destroying the value structure of the constitutional system. That, of course, is easier said than done. Ultimately, it is power and the way in which it is exercised that determines this. Constitutions enable power as well as limit it, and can, for that reason, be abused by charismatic leaders.

The constitutional level of analysis, then, is the level of analysis that concerns itself with: (i) the foundational values of a particular legal order and (ii) the institutions of governance within that order. It is distinct from the granular level of analysis because it deals with general, meta-level concerns about social order and institutions of governance. The granular level deals with particular domains of activity and what happens within them. HLA Hart’s distinction between primary and secondary legal rules might be a useful guide here, for people who know it. It is also distinct from the existential level of analysis (at least as I understand it) because that deals, almost exclusively, with extinction style threats to humanity as a whole. That said, there is more affinity between the constitutional level of analysis and some of the issues raised in the ‘existential risk’ literature around AI. So what I am arguing in this paper could be taken as a plea to reframe or recategorise parts of that discussion.

It is my contention that AI could have significant and under-appreciated effects at the constitutional level. To make the case for this, it would help if I gave a clearer sense of what I mean by ‘artificial intelligence’. I don’t have anything remarkable to say about this. I follow Russell and Norvig in defining AI in terms of goal-directed, problem-solving behaviour. In other words, an AI is any program or system that acts so as to achieve some goal state. The actions taken will usually involve some flexibility and, dare I say it, ‘creativity’, insofar as there often isn’t a single best pathway to the goal in all contexts. The system would also, ideally, be able to learn and adapt in order to count as an AI (though I don’t necessarily insist on this as I favour a broad definition of AI). AI, so defined, can come in specialised, narrow forms, i.e. it may only be able to solve one particular set of problems in a constrained set of environments. These are the forms that most contemporary AI systems take. The hope of many designers is that these systems will eventually take more generalised forms and be able to solve problems across a number of domains. There are some impressive developments on this front, particularly from companies like DeepMind that have developed an AI that learns how to solve problems in different contexts without any help from its human programmers. But, still, the developments are at an early stage.
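For readers who prefer code to definitions, here is a minimal sketch of ‘goal-directed, problem-solving behaviour’ in roughly this sense: an agent that is handed a goal state and searches for some sequence of actions that reaches it, rather than following a single hard-coded path. It is a deliberately trivial grid-world example, not a claim about how any real AI system is built.

```python
from collections import deque

# A trivial grid world: the agent can move up/down/left/right and wants
# to reach a goal cell. The 'intelligence' here is just uninformed search,
# but it illustrates the bare definition: act so as to achieve a goal state.

GRID_SIZE = 5
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def plan(start, goal):
    """Breadth-first search for a sequence of actions from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, (dx, dy) in ACTIONS.items():
            nxt = (state[0] + dx, state[1] + dy)
            if 0 <= nxt[0] < GRID_SIZE and 0 <= nxt[1] < GRID_SIZE and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None  # no route to the goal exists

print(plan((0, 0), (3, 2)))  # e.g. ['right', 'right', 'right', 'down', 'down']
```

A learning system would, in addition, improve its plans from experience; but even this bare version displays the goal-directedness that the definition picks out.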

It is generally agreed that we are now living through some kind of revolution in AI, with rapid progress occurring on multiple fronts, particularly in image recognition, natural language processing, predictive analytics, and robotics. Most of these developments are made possible through a combination of big data and machine learning. Some people are sceptical as to whether the current progress is sustainable. AI has gone through at least two major ‘winters’ in the past when there seemed to be little improvement in the technology. Could we be on the cusp of another winter? I have no particular view on this. The only things that matter from my perspective are that (a) the developments that have taken place in the past decade or so will continue to filter out and find new use cases and (b) there are likely to be future advances in this technology, even if they occur in fits and starts.


2. The Relationship Between AI and Constitutional Order
So what kinds of effects could AI have at the constitutional level? Obviously enough, it could affect either the institutions of governance that we use to allocate and exercise power, or it could affect the foundational values that we seek to enshrine and protect. Both are critical and important, but I’m going to focus primarily on the second.

The reason for this is that there has already been a considerable amount of discussion about the first type of effect, even though it is not always expressed in these terms. The burgeoning literature on algorithmic governance — to which I have been a minor contributor — is testament to this. Much of that literature is concerned with particular applications of predictive analytics and data-mining in bureaucratic and institutional governance, for example in the allocation of welfare payments or in sentencing and release decisions in the criminal justice system. As such it can seem to be concerned with the granular level. But there have been some contributions to the literature that concern themselves more generally with the nature of algorithmic power and how different algorithmic governance tools can be knitted together to create an overarching governance structure for society (what I have called an ‘algocracy’, following the work of the sociologist A. Aneesh). There is also growing appreciation for the fact that the combination of these tools can subvert (or reinforce) our ideologically preferred mode of governance. This conversation is perhaps most advanced among blockchain enthusiasts, several of whom dream of creating ‘distributed autonomous organisations’ that function as ‘AI Leviathans’ for enforcing a preferred (usually libertarian) system of governance.

These discussions of algorithmic governance typically assume that our foundational values remain fixed and non-negotiable. AI governance tools are perceived either as threats to these values or ways in which to protect them. What I think is ignored, or at least not fully appreciated, is the way in which AI could alter our foundational values. So that’s where I want to focus my analysis for the remainder of this paper. I accept that there may be different ways of going about this analytical task, but I’m going to adopt a particular approach that I think is both useful and illuminating. I don’t expect it to be the last word on the topic; but I do think it is a starting point.

My approach works from two observations. The first is that values change. This might strike some of you as terribly banal, but it is important. The values that someone like me (an educated, relatively prosperous male living in a liberal democratic state) holds dear are historically contingent. They have been handed down to me through centuries of philosophical thought, political change, and economic development. I might think they are the best values to have; and I might think that I can defend this view through rational argument; but I still have to accept that they are not the only possible values that a person could have. A cursory look at other cultures and at human history makes this obvious. Indeed, even within the liberal democratic states in which I feel most comfortable there are important differences in how societies prioritise and emphasise values. It’s a cliché, but it does seem fair to say that the US values economic freedom and individual prosperity more than many European states, which place a greater emphasis on solidarity and equality. So there are many different possible ways of structuring our approach to foundational values, even if we agree on what they are.

Owen Flanagan’s book The Geography of Morality: Varieties of Moral Possibility sets out what I believe is the best way to think about this issue. Following the work of moral psychologists like Jonathan Haidt, Flanagan argues that there is a common, evolved ‘root’ to human value systems. This root centres on different moral dimensions like care/harm, fairness/reciprocity, loyalty, authority/respect, and purity/sanctity (this is just Haidt’s theory; Flanagan’s theory is a bit more complex as it tries to fuse Haidt’s theory with non-Western approaches). We can turn the dial up or down on these different dimensions, resulting in many possible combinations. So from this root, we can ‘grow’ many different value systems, some of which can seem radically opposed to one another, but all of which trace their origins back to a common root. The value systems that do develop can ‘collide’ with one another, and they can grow and develop themselves. This can lead to some values falling out of favour and being replaced by others, or to values moving up and down a hierarchy. Again, to use the example of my home country of Ireland, I think we have seen over the past 20 years or so a noticeable falling out of favour of traditional Catholic values, particularly those associated with sexual morality and the family. These have been replaced by more liberal values, which were always present to some extent, but are now in the ascendancy. Sometimes these changes in values can be gradual and peaceful. Other times they can be more abrupt and violent. There can be moral revolutions, moral colonisations or moral cross-fertilisations. Acknowledging the fact that values change does not mean that we have to become crude ‘anything goes’ moral relativists; it just means that we have to acknowledge historical reality and to, perhaps, accept that the moral ‘possibility space’ is wider than we initially thought. If it helps, you can distinguish between factual/descriptive values and actual moral values if you are worried about being overly relativistic.

The second observation is that technology is one of the things that can affect how values change. Again, this is hardly a revelatory statement. It’s what one finds in Marx and many other sociologists. The material base of society can affect its superstructure of values. The relationship does not have to be unidirectional or linear. The claim is not that values have no impact on technology. Far from it. There is a complex feedback loop between the two. Nevertheless, change in technology, broadly understood, can and will affect the kinds of values we hold dear.

There are many theories that try to examine how this happens. My own favourite (which seems to be reasonably well-evidenced) is the one developed by Ian Morris in his book Foragers, Farmers and Fossil Fuels. In that book, Morris argues that the technology of energy capture used by different societies affects their value systems. In foraging societies, the technology of energy capture is extremely basic: they rely on human muscle and brain power to extract energy from an environment that is largely beyond their control. Humans form small bands that move about from place to place. Some people within these bands (usually women) specialise in foraging (i.e. collecting nuts and fruits) and others (usually men) specialise in hunting animals. Foraging societies tend to be quite egalitarian. They have a limited and somewhat precarious capacity to extract food and other resources from their environments and so they usually share when the going is good. They are also tolerant of using some violence to solve social disputes and to compete with rival groups for territory and resources. They display some gender inequality in social roles, but they tend to be less restrictive of female sexuality than farming societies. Consequently, they can be said to value in-group loyalty, (relative) social equality, and bravery in combat.

Farming societies are quite different. They capture significantly more energy than foraging societies by controlling their environments, by intervening in the evolutionary development of plants and animals, and by fencing off land and dividing it up into estates that can be handed down over the generations. Prior to mechanisation, farming societies relied heavily on manual labour (often slavery) to be effective. This led to considerable social stratification and wealth inequality, but less overall violence. Farming societies couldn’t survive if people used violence to settle disputes. There was more focus on orderly dispute resolution, though the institutions of governance could be quite violent. Furthermore, there was much greater gender inequality in farming societies as women took on specific roles in the home and as the desire to transfer property through family lines placed an emphasis on female sexual purity. This affected their foundational values.

Finally, fossil fuel societies capture enormous amounts of energy through the combustion and exploitation of fossil fuels (and later nuclear and renewable energy sources). This enabled greater social complexity, urbanisation, mechanisation, electrification and digitisation. It became possible to sustain very large populations in relatively small spaces, and to facilitate more specialisation and mobility in society. As a result, fossil fuel societies tend to be more egalitarian than farming societies, particularly when it comes to political and gender equality, though less so when it comes to wealth inequality. They also tend to be very intolerant of violence, particularly within a defined group/state.

This is just a very quick sketch of Morris’s theory. I’m not elaborating the mechanisms of value change that he talks about in his book. I use it for illustrative purposes only: to show how one kind of technological change (energy capture) might affect a society’s value structure. Morris is clear in his work that the boundaries between the different kinds of society are not clearcut. Modern fossil fuel societies often carry remnants of the value structure of their farming ancestry (and the shift from farming isn’t complete in many places). Furthermore, Morris speculates that advances in information technology could have a dramatic impact on our societal values over the next 100 years or so. This is something that Yuval Noah Harari talks about in his work too, though he has the annoying habit of calling value systems ‘religions’. In Homo Deus he talks about how new technologically influenced religions of ‘transhumanism’ and ‘dataism’ are starting to impact on our foundational values. Both of these ‘religions’ have some connection to developments in AI. We already have some tangible illustrations of the changes that may be underway. The value of privacy, despite the best efforts of activists and lawmakers, is arguably on the decline. When faced with a choice, people seem very willing to submit themselves to mass digital surveillance in order to avail of free and convenient digital services. I suspect this continues to be true despite the introduction of the new GDPR in Europe. Certainly, I have found myself willing to consent to digital surveillance in its aftermath in exchange for the efficiency of digital media. It is this kind of technologically-influenced change that I am interested in here, and although I am inspired by the work of Morris and (to a lesser extent) Harari, I want to present my own model for thinking about it.


3. The Intelligence Triangle and the Constitutions of the Future
My model is built from two key ideas. The first is the notion of an ideal type constitution. Human society is complex. We frequently use simplifying labels to make sense of it all. We assign people to general identity groups (Irish, English, Catholic, Muslim, Black, White etc) even though we know that the experiences of any two individuals plucked from those identity groups are likely to differ. We also classify societies under general labels (Capitalist, Democratic, Monarchical, Socialist etc) even though we know that they have their individual quirks and variations. Max Weber argued that we need to make use of ‘ideal types’ in social theory in order to bring order to chaos. In doing so, we must be fully cognisant of the fact that the ideal types do not necessarily correspond to social reality.

Morris makes use of ideal types in his analysis of the differences between foraging, farming and fossil fuel societies. He knows that there is probably no actual historical society that corresponds to his model of a foraging society. But that’s not the point of the model. The point is to abstract from the value systems we observe in actual foraging societies and use them to construct a hypothetical, idealised model of a foraging society’s value system. It’s like a Platonic form — a smoothed out, non-material ‘idea’ of something we observe in the real world — but without the Platonic assumption that the form is more real than what we find in the world. I’ll be making use of ideal types in my analysis of how AI can affect the constitutional order.

This brings me to the second idea. The key motivation for my model is that one of the main determinants of our foundational values is the form of intelligence that is prioritised in society. Intelligence is the basic resource and capacity of human beings. It's what makes other forms of technological change possible. For example, the technology of energy capture that features heavily in Morris's model is itself dependent on how we make use of intelligence. There are three basic forms that intelligence can take: (i) individual, (ii) collective and (iii) artificial. For each kind of intelligence there is a corresponding ideal type constitution, i.e. a system of values that protects, encourages and reinforces that particular mode of intelligence. But since these are ideal types, not actual realities, it makes most sense to think about the kinds of value system we actually see in the world as the product of tradeoffs or compromises between these different modes of intelligence. Much of human history has involved a tradeoff between individual and collective intelligence. It's only more recently that 'artificial' forms of intelligence have been added to the mix. What was once a tug-of-war between the individual and the collective has now become a three-way battle* between the individual, the collective and the artificial. That's why I think AI has the potential to be so disruptive of our foundational values: it adds something genuinely new to the mix of intelligences that determines our foundational values.

That's my model in a nutshell. I appreciate that it requires greater elaboration and defence. Let me start by translating it into a picture. They say a picture is worth a thousand words, so hopefully this will help people understand how I think about this issue. Below, I've drawn a triangle. Each vertex of the triangle is occupied by one of the ideal types of society that I mentioned: the society that prioritises individual intelligence, the society that prioritises collective intelligence, and the one that prioritises artificial intelligence. Actual societies can be defined by their location within this triangle. For example, a society located midway along the line joining the individual intelligence society to the collective intelligence society would balance the norms and values of both. A society located at the midpoint of the triangle as a whole would balance the norms and values of all three. And so on.**
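To make the geometry of the triangle a little more concrete, here is a minimal sketch in Python. This is purely my own illustration, not anything drawn from Morris: a society's position is represented as three weights, one per ideal type, that sum to one, and the example societies and their numbers are entirely hypothetical.

    # A purely illustrative sketch of the 'intelligence triangle': a society is
    # a point in the triangle, represented by three weights that sum to one.
    from dataclasses import dataclass

    @dataclass
    class Society:
        name: str
        individual: float  # weight on the individual-intelligence constitution
        collective: float  # weight on the collective-intelligence constitution
        artificial: float  # weight on the artificial-intelligence constitution

        def __post_init__(self):
            # The weights are coordinates within the triangle, so they must sum to 1.
            total = self.individual + self.collective + self.artificial
            if abs(total - 1.0) > 1e-9:
                raise ValueError(f"{self.name}: weights must sum to 1, got {total}")

    # Hypothetical positions, for illustration only:
    edge_midpoint = Society("individual/collective balance", 0.5, 0.5, 0.0)
    centroid = Society("balance of all three", 1/3, 1/3, 1/3)
    machine_managed = Society("machine-mediated abundance", 0.05, 0.10, 0.85)

The point of the sketch is simply that any actual society can be located somewhere inside the triangle, and that movement within the triangle (e.g. a growing weight on the 'artificial' vertex) is what I mean by a shift in the constitutional order.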



But, of course, the value of this picture depends on what we understand by its contents. What is individual intelligence and what would a society that prioritises individual intelligence look like? These are the most important questions. Let me provide a brief sketch of each type of intelligence and its associated ideal type of society in turn. I need to apologise in advance that these sketches will be crude and incomplete. As I have said before, my goal is not to provide the last word on the topic but rather to present a way of thinking about the issue that might be useful.

Individual Intelligence: This, obviously enough, is the intelligence associated with individual human beings, i.e. their capacity to use mental models and tools to solve problems and achieve goals in the world around them. In its idealised form, individual intelligence is set off from collective and artificial intelligence. In other words, the idealised form of individual intelligence is self-reliant and self-determining. The associated ideal type of constitution will consequently place an emphasis on individual rights, responsibilities and rewards. It will ensure that individuals are protected from interference; that they can benefit from the fruits of their labour; that their capacities are developed to their full potential; and that they are responsible for their own fate. In essence, it will be a strongly libertarian constitutional order.

Collective Intelligence: This is associated with groups of human beings, and arises from their ability to coordinate and cooperate in order to solve problems and achieve goals. Examples might include a group of hunters coordinating an attack on a deer or bison, or a group of scientists working in a lab trying to develop a medicinal drug. According to the evolutionary anthropologist Joseph Henrich, this kind of group coordination and cooperation, particularly when it is packaged in easy-to-remember routines and traditions, is the 'secret' to humanity's success. Despite this, the systematic empirical study of collective intelligence — why some groups are more effective at problem solving than others — is a relatively recent development, albeit an inquiry that is growing in popularity (see, for example, Geoff Mulgan's book Big Mind). The idealised form of collective intelligence sees the individual as just a cog in a collective mind. And the associated ideal type of constitution is one that emphasises group solidarity and cohesion, collective benefit, common ownership, and possibly equality of power and wealth (though equality is, arguably, more of an individualistic value and so cohesion might be the overriding value). In essence, it will be a strongly communistic/socialistic constitutional order.

I pause here to repeat the message from earlier: I doubt that any human society has ever come close to instantiating either of these ideal types. I don’t believe that there was some primordial libertarian state of nature in which individual intelligence flourished. On the contrary, I suspect that humans have always been social creatures and that the celebration of individual intelligence came much later on in human development. Nevertheless, I also suspect that there has always been a compromise and back-and-forth between the two poles.

Artificial Intelligence: This is obviously the kind of intelligence associated with computer-programmed machines. It mixes and copies elements from individual and collective intelligence (since humans did create it), but it is also based on some of its own tricks. The important thing is that it is non-human in nature. It functions in forms and at speeds that are distinct from us. It is used initially as a tool (or set of tools) for human benefit: a way of lightening or sharing our cognitive burden. It may, however, take on a life of its own and will perhaps one day pursue agendas and purposes that are not conducive to our well-being. The idealised form of AI is one that is independent from human intelligence, i.e. does not depend on human intelligence to assist in its problem solving abilities. The associated ideal type of constitution is, consequently, one in which human intelligence is devalued; in which machines do all the work; and in which we are treated as their moral patients (beneficiaries of their successes). Think about the future of automated leisure and idleness that is depicted in a movie like WALL-E or something similar. Instead of focusing on individual self-reliance and group cohesion, the artificially intelligent constitution will be one that prioritises pleasure, recreation, game-playing, idleness, and machine-mediated abundance (of material resources and phenomenological experiences).



Or, at least, that is how I envision it. I admit that my sketch of this ideal type of constitution is deeply anthropocentric: it assumes that humans will still be the primary moral subjects and beneficiaries of the artificially intelligent constitutional order. You could challenge this and argue that a truly artificially intelligent constitutional order would be one in which machines are the primary moral subjects. I'm not going to go there in this paper, though I'm more than happy to consider it. I'm sticking with the idea of humans being the primary moral subjects because I think that is more technically feasible, at least in the short to medium term. I also think that this idea gels well with the model I've developed. It paints an interesting picture of the arc of human history: Human society once thrived on a combination of individual and collective intelligence. Using this combination of intelligences we built a modern, industrially complex society. Eventually the combination of these intelligences allowed us to create a technology that rendered our intelligence obsolescent and managed our social order on our behalf. Ironically, this changed how we prioritised certain fundamental values.


4. Planning for the Constitutions of the Future

I know there are problems with the model I've developed. It's overly simplistic; it assumes that there is only one determinant of fundamental values; it seems to ignore moral issues that currently animate our political and social lives (e.g. identity politics). Still, I find myself attracted to it. I think it is important to think about the 'constitutional' impact of AI, and to have a model that appreciates the contingency and changeability of the foundational values that make up our present constitutional order. And I think this model captures something of the truth, whilst also providing a starting point from which a more complex sketch of the 'constitutions of the future' can be developed. The constitutional orders that we currently live inside do not represent the 'end of history'. They can and will change. The way in which we leverage the different forms of intelligence will have a big impact on this. Just as we nowadays clash with rival value systems from different cultures and ethnic groups, so too will we soon clash with the value systems of the future. The 'triangular' model I've developed defines the (or rather 'a') 'possibility space' in which this conflict takes place.

I want to close by suggesting some ways in which this model could be (and, if it has any merit, should be) developed:


  • A more detailed sketch of the foundational values associated with the different ideal types should be provided.

  • The link between the identified foundational values and different mechanisms of governance should be developed. Some of the links are obvious enough already (e.g. a constitutional order based on individual intelligence will require some meaningful individual involvement in social governance; one based on collective intelligence will require mechanisms for collective cooperation and coordination and so on), but there are probably unappreciated links that need to be explored, particularly with the AI constitution.

  • An understanding of how other technological developments might fit into this ‘triangular’ model is needed. I already have some thoughts on this front. I think that there are some technologies (e.g. technologies of human enhancement) that push us towards an idealised form of the individual intelligence constitution, and others (e.g. network technologies and some ‘cyborg’ technologies) that push us towards an idealised form of the collective intelligence constitution. But, again, more work needs to be done on this.

  • A normative defence of the different extremes, as well as the importance of balancing between the extremes, is needed so that we have some sense of what is at stake as we navigate through the possibility space. Obviously, there is much relevant work already done on this so, to some extent, it’s just a question of plugging that into the model, but there is probably new work to be done too.

  • Finally, a methodology for fruitfully exploring the possibility space needs to be developed. So much of the work done on futurism and AI tends to be the product of individual (occasionally co-authored) speculation. Some of this is very provocative and illuminating, but surely we can hope for something more? I appreciate the irony of this but I think we should see how ‘collective intelligence’ methods could be used to enable interdisciplinary groups to collaborate on this topic. Perhaps we could have a series of ‘constitutional conventions’ in which such groups actually draft and debate the possible constitutions of the future?



* This term may not be the best. It’s probably too emotive and conflictual. If you prefer, you could substitute in ‘conversation’ or ‘negotiation’.

** This ‘triangular’ graphing of ideal types is not unique to me. Morris uses a similar diagram in his discussion of farming societies, pointing out that his model of a farming society is, in fact, an abstraction from three other types.




Tuesday, September 18, 2018

Episode #45 - Vallor on Virtue Ethics and Technology



 In this episode I talk to Shannon Vallor. Shannon is the Regis and Diane McKenna Professor in the Department of Philosophy at Santa Clara University, where her research addresses the ethical implications of emerging science and technology, especially AI, robotics and new media. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network. She has served as President of the Society for Philosophy and Technology, sits on the Board of Directors of the Foundation for Responsible Robotics, and is a member of the IEEE Standards Association's Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. We talk about the problem of techno-social opacity and the value of virtue ethics in an era of rapid technological change.

 You can download the episode here or listen below. You can also subscribe to the podcast on iTunes or Stitcher (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 1:39 - How students encouraged Shannon to write Technology and the Virtues
  • 6:30 - The problem of acute techno-moral opacity
  • 12:34 - Is this just the problem of morality in a time of accelerating change?
  • 17:16 - Why can't we use abstract moral principles to guide us in a time of rapid technological change? What's wrong with utilitarianism or Kantianism?
  • 23:40 - Making the case for technologically-sensitive virtue ethics
  • 27:27 - The analogy with education: teaching critical thinking skills vs providing students with information
  • 31:19 - Aren't most virtue ethical traditions too antiquated? Aren't they rooted in outdated historical contexts?
  • 37:54 - Doesn't virtue ethics assume a relatively fixed human nature? What if human nature is one of the things that is changed by technology?
  • 42:34 - Case study on Social Media: Defending Mark Zuckerberg
  • 46:54 - The Dark Side of Social Media
  • 52:48 - Are we trapped in an immoral equilibrium? How can we escape?
  • 57:17 - What would the virtuous person do right now? Would he/she delete Facebook?
  • 1:00:23 - Can we use technology to solve problems created by technology? Will this help to cultivate the virtues?
  • 1:05:00 - The virtue of self-regard and the problem of narcissism in a digital age
 

Relevant Links

  • Shannon's Twitter profile