Thursday, May 26, 2016

New paper - Should we use commitment contracts to regulate student use of cognitive enhancement drugs?




I have a new paper coming out in the journal Bioethics. It's about the philosophy of education and student use of cognitive enhancement drugs. It argues that universities might be justified in regulating their students' use of enhancement drugs, but only in a very mild, non-compulsory way, and it suggests that a system of voluntary commitment contracts might be an interesting way of doing this. The details are below.


Title: Should we use commitment contracts to regulate student use of cognitive enhancement drugs?
Journal: Bioethics
Links: Philpapers; Academia; Official
Abstract: Are universities justified in trying to regulate student use of cognitive enhancing drugs? In this paper I argue that they can be, but that the most appropriate kind of regulatory intervention is likely to be voluntary in nature. To be precise, I argue that universities could justifiably adopt a commitment contract system of regulation wherein students are encouraged to voluntarily commit to not using cognitive enhancing drugs (or to using them in a specific way). If they are found to breach that commitment, they should be penalised by, for example, forfeiting a number of marks on their assessments. To defend this model of regulation, I adopt a recently-proposed evaluative framework for determining the appropriateness of enhancement in specific domains of activity, and I focus on particular existing types of cognitive enhancement drugs, not hypothetical or potential forms. In this way, my argument is tailored to the specific features of university education, and common patterns of usage among students. It is not concerned with the general ethical propriety of using cognitive enhancing drugs. 

The Life Cycle of Prescriptive (Legal) Theories

Caspar David Friedrich - The Stages of Life


Legal officials have to make decisions. Take the judge as an example. He or she is confronted with legal disputes every day: some involving private law (e.g. breach of contract), some involving purported criminal acts (e.g. alleged murder), some involving the infringement of constitutional rights (e.g. limitations on the right to free speech). When confronted with these disputes the judge must decide whose case will prevail, which interests to prioritise, and what can and cannot be done as a matter of law. Oftentimes these disputes involve contentious matters of political morality. For example, the judge may be asked: can the legislature ban controversial forms of speech on the grounds that they offend the interests of minority groups, or does the right to free speech trump any such offensiveness?

Judges might try to decide these cases by directly engaging with the moral and political issues they raise. But oftentimes they are reluctant to do so. There is a worry that the judge is not politically empowered to use such criteria in making decisions. They simply apply the law, whatever it is. It is for others, usually directly elected assemblies, to weigh the values inherent in these controversial political matters. And what’s true for the judge is true for other legal officials. Bureaucrats and regulators are also granted decision-making authority and there are occasions on which this authority brings them face to face with controversial questions of political morality. They too are often reluctant to directly engage with these matters as they feel it is contrary to their political-legal role.

Worries about the legitimacy of such decision-making authority have often led legal scholars to propose apolitical prescriptive legal theories. These are theories that propose decision-making procedures shorn of any concern for the controversial political content at the heart of legal disputes, allowing legal officials to make their decisions in an objective, neutral fashion. Or so, at least, these theories often claim; anyone who has read up on them will know that they often fail to be objective and neutral.

Indeed, there is a common life-cycle to many prescriptive legal theories. They start off strong, purporting to provide an apolitical solution to the legal official’s problem, only to become attenuated and weakened over time. They then either persist in the attenuated form or die off. This life-cycle is articulated in a recent paper by David Pozen and Jeremy Kessler. I want to describe their proposed model of the life-cycle in this post. I do so because I think it is an interesting idea, and because once you know about it you will start to spot the pattern elsewhere. I’ll give an example of one prominent contemporary debate in applied ethics that shares this pattern at the end of this post.


1. The General Idea
Pozen and Kessler’s life-cycle consists of six major stages. They don’t give these stages names, but I will since I like giving names to things in order to make them more memorable:

T1 - Birth: A decision procedure is introduced that purports to allow legal officials to resolve highly politicised legal conflicts in a way that does not appeal directly to the political values at stake in those conflicts. In other words, a decision procedure is introduced that depoliticises a decision-making function.
T2 - Critique: The proposed decision procedure is batted about for a while and critics start to spot flaws in it. Some of these are quite academic and technical, some of them are more value-laden. The most common, in legal contexts, is to point out how the procedure fails to yield the ‘right’ decision on some matter that is subject to universal (or near-universal) approval.
T3 - Response: Proponents of the theory respond to the critiques by modifying the decision procedure in such a way as to avoid the technical and political objections.
T4 - Iteration: This process of critique and response cycles back and forth for some period of time. At each stage the theory adapts to accommodate a critique by incorporating commitments or assumptions that bring it back closer to the original highly politicised conflict.
T5 - Maturity: The theory reaches a point where it becomes so adulterated and attenuated that it essentially starts to reflect the ‘conflict-ridden field it had promised to transcend’. In other words, we arrive back at the same state we were in at around the time of the theory’s birth. At this point in time one of two things will happen:
T6(a) - Death: The theory falls out of favour and (possibly) something new is proposed in its stead.
T6(b) - Persistence: The theory persists, albeit in a highly adulterated and attenuated form. There are several reasons why this may happen (discussed in Pozen and Kessler’s paper); the main one is simply that the language and structure of the theory has certain side benefits for those who continue to couch their arguments in its terms.

Pozen and Kessler's Life Cycle of Legal Theories



The net result is that prescriptive theories tend to ‘work themselves impure’ over time. This model will probably seem a little abstract right now. Pozen and Kessler illustrate it with several examples in their paper, and I’ll go through one of them below. Before that, however, I want to note a couple of things. First, as the authors themselves point out, there is nothing particularly novel about this model. Similar life cycle models have been proposed in other fields. A notable example would be the model of scientific theories proposed in Thomas Kuhn’s famous book The Structure of Scientific Revolutions. Kuhn argued that scientific theories are originally proposed to explain some set of observations. Over time, new observations are made that seem to conflict with the theory. The theory is forced to accommodate these observations by adding auxiliary hypotheses or sub-theories to account for the anomalies. This results in some adulteration and attenuation of the theory, until eventually there is some ‘paradigm shift’ to a new theory.

Nevertheless, there is something unique about prescriptive legal theories that makes them particularly susceptible to the life cycle proposed by Pozen and Kessler. Apolitical prescriptive theories tell legal officials how they ought to resolve controversial moral-political debates. But they do so by encouraging them to avoid direct engagement with the values that are at stake. The problem is that those values are what ultimately matter and they consequently have a way of re-surfacing over time. There is a sense then in which the theories can never really do what they purport to do (these are my words, not Pozen and Kessler’s): they are always forced to encompass the moral contestation that sparked their formulation. To be clear, this is not true for all prescriptive legal theories — some are more honest and upfront about their attempt to accommodate core political values — but it is true for those that go down the apolitical route.


2. The Life-Cycle of Constitutional Originalism
One of the examples used in Pozen and Kessler’s article is that of originalist theories of constitutional interpretation. I’ll set out this example here because it is the one I am most familiar with and provides a very clear illustration of the life cycle (for those who don’t know, I’ve written two academic articles that are critical of the more philosophical versions of this theory). There are some very detailed and interesting histories of originalism out there. The gist of that history is as follows.

The US Supreme Court under Earl Warren (and, in the early years, under Warren Burger) was renowned for making a series of progressive and significant constitutional decisions. Some of these were widely celebrated (e.g. Brown vs Board of Education about the desegregation of schools) while others were more hotly contested (e.g. Roe v Wade on the right to abortion). The more hotly contested decisions provoked a backlash among conservative lawyers and legal scholars. They felt that decisions like Roe v Wade involved judges stepping beyond their constitutional authority and making judgments of political morality that were the proper preserve of the legislature or executive.

This backlash led to originalism. In its initial form, originalism promised to provide judges with a simple decision procedure that allowed them to reach determinate outcomes in controversial cases without implicating controversial political values. It thus prevented them from overstepping their constitutional authority. The decision procedure required them to interpret the provisions of the constitution in accordance with their originally intended meaning. That is to say, the meaning that the drafters and ratifiers of those constitutional provisions would have intended them to have. This would reduce the judicial task to one of factual and historical analysis; not normative or moral theorising.

In its original (!) form, originalism was simple and (to a certain mindset) appealing. It soon ran into difficulties. Critics pointed out that there was not always good evidence for the intentions of the original framers and ratifiers; and that the whole concept of a single original intent was philosophically and factually problematic. What’s more, critics argued that if you followed the originalist decision procedure to the hilt, you would have to overturn widely-accepted precedents like Brown v Board of Education. The challenge was to modify the theory so as to accommodate these critiques and enable consistency with widely-accepted precedents.

This led to several cycles of modification and elaboration. Originalists dropped their commitment to intent and switched instead to the originally understood public meaning. They acknowledged that certain provisions within the constitution might be vague or ambiguous and hence that there was room for moral or political creativity when it came to applying those provisions. They also started to draw distinctions between the normative and semantic versions of the theory, and between the interpretive and constructive tasks of the judge. Taking this more sophisticated theoretical structure onboard, scholars engaged in more detailed historical inquiries that allowed them to account for decisions like Brown v Board of Education. Indeed, so modified and elaborated did the theory become that one prominent liberal living constitutionalist (Jack Balkin) argued that it was possible to reconcile originalism and living constitutional theories of interpretation. The consequence was that originalism became so weak a theory that virtually anyone could embrace it and apply it in a way that accommodates different political values. We got back to where we started.

And yet, as Pozen and Kessler note, originalism is one of those theories that seems to persist in its attenuated state rather than dying off. They argue that this is because the language and structure of the theory has side benefits for those who endorse it. In particular, with its complex structure and refined reasoning, it may tend to ‘enhance the power and prestige of lawyers as a privileged expert class, while raising barriers to entry for nonlegal actors’ (Pozen and Kessler 2016, 51).


3. Conclusion - Is Effective Altruism Working itself Impure?
I don’t have too much to say in response to this. I haven’t collected systematic evidence on the life cycle of all prescriptive legal theories, but the model proposed by Pozen and Kessler seems intuitively right to me. Furthermore, I think I see it in operation in other fields. One example which springs to mind is the ongoing debate about effective altruism (EA). I’ve been writing a series of posts about this theory so it is at the forefront of my thinking at the moment.

As noted in that series, when it originally burst onto the scene, EA seemed to provide an attractive, rational and evidentially robust procedure for making decisions about charitable donation. This is a hotly contested field, with many different causes competing for our attention, often seeming to be equally worthy of our money. EA promised to cut through some of the noise. It adopted simple, appealing metrics of effectiveness, highlighted underappreciated causes, and allowed its followers to feel good about their charitable decision-making by convincing them that by prioritising certain charities they were doing the most good with their limited resources. But critics have started to identify flaws in this initially appealing theory. They argue that it ignores important moral goods, prioritises biased or incomplete metrics of effectiveness, and is not quite as rational or effective as its proponents would have you believe. Fans of EA have typically responded by trying to accommodate some of these criticisms, and by expanding the range of metrics and considerations that can go into the assessment of charitable donations.

We are at the early stages in this process of critique and response so it’s not entirely clear where things will end up. But I suspect it may end up following the life cycle outlined by Pozen and Kessler. In other words, I think that as the theory of EA grows to encompass the anomalies and omissions highlighted by its critics, it may become so attenuated as to leave us largely where we started. Where once EA provided clear guidance on which charities to support, it will eventually end up endorsing many, mutually inconsistent ones. It will thus fail to provide the clarity and simplicity it once provided. Where things will go from there is anyone’s guess. Will EA die out, or will the language and structure of EA have side benefits that enable its persistence?

Of course, this is all somewhat speculative but it will be interesting to see whether EA does indeed follow this life cycle.

Wednesday, May 25, 2016

Is Effective Altruism Methodologically Biased?


The roundabout playpump - A flawed intervention?


(Part One; Part Two)

After a long hiatus, I am finally going to complete my series of posts about Iason Gabriel’s article ‘Effective Altruism and its Critics’ (changed from the original title ‘What's wrong with effective altruism?’). I’m pleased to say that once I finish the series I am also going to post a response by Iason himself which follows up on some of the arguments in his paper. Let me start today, however, by recapping some of the material from previous entries and setting the stage for this one.

Gabriel’s article takes a critical look at the leading objections to effective altruism (EA). EA, for present purposes, is defined as the practice of trying to do the most good you can through charitable donations. In typical EA arguments, this practice brings with it a number of key commitments, three of which figure prominently: (i) welfarism, i.e. EAs think you should try to improve individual well-being; (ii) consequentialism, i.e. EAs tend to favour consequentialist approaches to ethics and (iii) evidentialism, i.e. EAs look to policy interventions with a robust evidential base.

Gabriel considers three main objections to this form of EA. The first is that it is unjust; the second that it is methodologically biased; and the third that it is not as effective as its proponents claim. I’ve looked at the first of these objections already. Today, I look at the second. That objection breaks down into three main sub-types of objection. I’ll discuss each of these in turn.

[Reader's note: I am basing this series on the original pre-published version of Gabriel's article because that's what I used when I originally structured this series and presented the taxonomy of objections. There have been some changes to the wording and framing of the critiques discussed below but, as best I can tell, it covers the same ground.]


1. Is EA too measurement focused and reductionist?
The first methodological critique highlights the evidential bias of the EA philosophy. The critique manifests itself in a couple of different ways. One of them is a variant on the classic ‘what gets measured gets managed’ concern. EAs place a premium on improving outcomes that are susceptible to quantification and measurement. This causes them to downplay other, less measurable and quantifiable outcomes, that might be equally morally worthy. To put the objection more formally:

  • (1) EAs emphasise moral goals that are readily measurable and quantifiable.
  • (2) There are many important moral goals that are not so readily measurable and quantifiable.
  • (3) Therefore, EAs tend to ignore important moral goals.

Unlike the previous round of objections, the concern here is not that EAs fail to recognise other important moral goods. Rather, the concern is that their evidentialist methodology biases them away from these other moral goods. To give an example, there might be some value that is intrinsic to political processes that respect and honour human rights. At the same time, it might be very difficult to measure and quantify those outcomes. Contrariwise, there might be some value to individual health and well-being that is relatively easier to measure and quantify. When it comes to deciding between policies, this will cause EAs to prefer policies that emphasise the latter moral goal to the former, even though they acknowledge the value of the former.

This can have two particularly negative consequences. The first is simply that proponents of EA become absorbed in assessing the relative merits of interventions that target measurable and quantifiable outcomes and forget to consider the less measurable and quantifiable ones. The other is that EAs become accustomed to standards of proof that are unreasonable in many domains. For instance, EAs love randomised controlled trials (RCTs), but RCTs are often only appropriate for small-scale changes where it is possible to have control groups and to precisely measure outcomes. They are often not appropriate for larger country-wide or international reforms. Does this mean we should abandon these initiatives? Or does it mean that EAs need to moderate their standards of proof? That’s an issue that needs to be resolved.

Another, more specific version of the measurement objection worries that EAs tend to be reductionists when it comes to assessing the value of different interventions. One example of this is the tendency for EAs to rely on the DALY measure (Disability-Adjusted Life Year) when assessing interventions. The DALY measure allows us to make indirect inferences about a person’s subjective well-being and to compare different people according to this metric. This makes it a very attractive measurement system for EAs. The fear is that overreliance on it reduces everything to a comparison of subjective well-being.
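For readers unfamiliar with the metric, the DALY arithmetic in its simplest, undiscounted form can be sketched as follows (all of the numbers here are hypothetical, chosen purely for illustration):

```python
# Undiscounted DALY formula: DALY = YLL + YLD, where
#   YLL (years of life lost) = deaths * remaining life expectancy at age of death
#   YLD (years lived with disability) = cases * disability weight * average duration
def daly(deaths, life_expectancy_at_death, cases, disability_weight, duration_years):
    yll = deaths * life_expectancy_at_death
    yld = cases * disability_weight * duration_years
    return yll + yld

# Hypothetical disease burden: 10 deaths with 30 years of remaining life
# expectancy each, plus 200 non-fatal cases with a disability weight of 0.2
# lasting 5 years on average.
burden = daly(10, 30, 200, 0.2, 5)  # 300 + 200 = 500 DALYs
```

An intervention is then ranked by dollars spent per DALY averted, which is precisely what makes the metric so congenial to EA-style comparisons across very different kinds of intervention — and what fuels the reductionist worry.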

How can EAs respond to these objections? Gabriel identifies a number of possibilities, some of which are already happening. One example is that GiveWell — possibly the leading charity evaluator — has moved away from overreliance on the DALY measure and instead favours interventions that are supported by multiple lines of independent analysis. Gabriel thinks that EAs should also be more upfront about the bounded nature of the information they provide. They could do this by concluding that some intervention is ‘unprovable’ rather than ‘unproven’. He also thinks that they should engage more with other potential metrics, such as the Multidimensional Poverty Index, which evaluates outcomes in non-welfarist terms.


2. Is EA too individualistic?
The second version of the methodological critique argues that EA is overly individualistic in its focus. That is to say, it prioritises interventions that improve individual well-being and either ignores or downplays those that improve collective or community-based goods. Enhancing and empowering local communities is often a goal for NGOs, and it is also something favoured by certain schools of political morality. But because EAs are so resolutely welfarist in their outlook, they tend to value communities in purely instrumental ways, i.e. as vehicles for improving individual outcomes. This is similar to the reductionist critique given above (and, indeed, in the final version of the article Gabriel merges them together).

To put the objection in quasi-formal terms:

  • (4) EAs emphasise moral goods that accrue to the individual (i.e. that enhance individual well-being etc).
  • (5) There are important moral goods that accrue to the community.
  • (6) Therefore, EAs ignore an important set of moral goods.

The objection is defended and elaborated along similar lines to the previous one. Gabriel uses a thought experiment to highlight its practical consequences:

Medicine: Suppose it is known that condom distribution is more effective in minimizing the harm caused by HIV/AIDS than the provision of anti-retroviral drugs (ARVs). This is because ARVs only help those who already have the disease while condoms can prevent people from contracting it. You are faced with the choice of funding two different programs. The first allocates all the money to condom distribution. The second allocates 90% to condom distribution and 10% to ARVs. Which do you choose?

Gabriel argues that if the evidence does indeed support the view that condom distribution is more effective than the provision of ARVs, then EAs will tend to favour the first program. It is, after all, the one that does the most good for the money provided. The problem is that this does not sit easily with most people: the idea of leaving those with the disease untreated seems wrong. Gabriel suggests that this might have something to do with the value of hope to communities. People want to live in a society that will care for them if they are sick, even if this is not the most cost-effective approach. They want to have the hope that they will be looked after. Furthermore, hope may be an important resource for communities undergoing hardship, one that enables them to take collective action on problems that cannot be addressed at the individual level. You get more buy-in at the community level if people have some sense of hope.
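To see why a pure cost-effectiveness calculation will always favour the first program on any numbers fitting the thought experiment's stipulation, consider a toy sketch (every figure below is invented purely for illustration):

```python
# Toy cost-effectiveness comparison for the Medicine thought experiment.
# Hypothetical effectiveness rates: condoms avert 10 units of harm per $1,000
# spent; ARVs avert 2 units of harm per $1,000 spent (the stipulation is only
# that condoms are more effective per dollar).
BUDGET = 100_000            # total funds in dollars (hypothetical)
CONDOM_EFFECT = 10 / 1_000  # harm units averted per dollar (hypothetical)
ARV_EFFECT = 2 / 1_000      # harm units averted per dollar (hypothetical)

def harm_averted(condom_share):
    """Harm averted when condom_share of the budget goes to condoms, rest to ARVs."""
    return (BUDGET * condom_share * CONDOM_EFFECT
            + BUDGET * (1 - condom_share) * ARV_EFFECT)

program_one = harm_averted(1.0)   # 100% condoms: 1000 units averted
program_two = harm_averted(0.9)   # 90/10 split: 900 + 20 = 920 units averted
```

So long as the per-dollar effectiveness of condoms exceeds that of ARVs, every dollar diverted to ARVs lowers the total, and the pure maximiser picks program one. Gabriel's point is that this calculation leaves out a real good — the hope of being cared for — that the 90/10 split protects.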

The upshot of this, for Gabriel, is that EAs shouldn’t move so quickly from claims about the cost-effectiveness of policies at the individual level to claims about the overall value or desirability of a policy.


3. Is EA too instrumentalistic?
The final methodological critique holds that EAs are overly instrumental in their evaluation of policies. That is to say, they compare interventions based on the outcomes they achieve and not on the procedures they use to achieve those outcomes. This creates a problematic bias in their recommendations. Procedures that are inclusive and democratic in nature are often slower and messier than more non-inclusive and technocratic procedures. Consequently, EAs tend to favour technocratic interventions. This causes them to downplay or ignore important procedural values.

  • (7) EAs assess interventions in instrumental terms (i.e. how efficiently they achieve the desired outcome) and often ignore or downplay the values attached to the procedures that lead to those outcomes.
  • (8) There are intrinsically valuable procedures (i.e. democratic and inclusive procedures) that may be less efficient than other technocratic and non-inclusive procedures.
  • (9) Therefore, EAs tend to favour technocratic and non-inclusive procedures for achieving their desired outcomes.

Gabriel again uses a thought experiment to support the argument:

Participation: Some villages need help developing a water and sanitation system to combat the spread of waterborne parasites. You can fund one of two projects that help them in this regard. The first will hire a group of contractors to build the system - something they have done successfully in the past. The second will work with members of the community and help them build and develop the system themselves. This has also worked in the past, but because villagers are not experts in this area of construction, the systems tend to be less functional.

The complaint is that EAs would naturally choose the first project because it is the most effective. But the second project might have numerous advantages that go unappreciated by the standard EA methodology. It values the agency and autonomy of the villagers; it allows them to build capacity and understanding; and it can assist with the acceptability and perceived legitimacy of the intervention.

This objection applies at larger, national scales too. There are concerns about large-scale philanthropic projects that subvert democratic processes in favour of technocratic solutions, and thereby worsen the governance problems in certain developing nations.

Gabriel thinks that EAs need to be more sensitive to this problem. They need to appreciate the importance of popular control over social outcomes and the value of strong, democratic decision-making procedures. It strikes me, however, that many EAs are already sensitive to this problem. Indeed, Will MacAskill’s book Doing Good Better opens with a lengthy critique of the ‘Playpump’. This was a device that helped villagers pump water through a child’s roundabout; the idea was that water could be pumped and children could play at the same time. The pump was a failure for several reasons, one of which (highlighted by MacAskill) is that nobody really consulted the villagers who were being given these things. Now perhaps MacAskill thinks that non-consultation was a problem purely because it led the inventors and promoters of the playpump to favour an ineffective intervention, but there is still some sensitivity to the value of more inclusive procedures on display.


4. Conclusion
As you can see, each of these criticisms is a variation on the same basic theme: EAs prioritise certain ways of assessing the value of charitable interventions and this causes them to ignore or downplay something of importance. The response to each criticism takes the same form: either EAs argue that it is right to downplay or ignore those things, or they must try to expand their metrics and methodologies to include them.

Monday, May 23, 2016

Vacancy - Research Assistant on the Algocracy and Transhumanism Project





I'm hiring a research assistant as part of my Algocracy and Transhumanism project. It's a short-term contract (5 months only) and available from July onwards. The candidate would have to be able to relocate to Galway for the period. Details below. Please share this with anyone you think might be interested.

Algocracy and the Transhumanist Project, IRC New Horizons NUI Galway
Whitaker Institute, NUI Galway
Ref. No. NUIG 067-16
Applications are invited from suitably qualified candidates for a full time, fixed term position as a Research Assistant with the Algocracy and Transhumanism Project at the Whitaker Institute, National University of Ireland, Galway. This position is funded by the Irish Research Council and is available for a five month period from July 2016.
The project critically evaluates the interaction between humans and artificially intelligent, algorithm-based systems of governance. It focuses on the role of algorithms in public decision-making processes and the increased integration between humans and technology. It examines how technology creates new governance structures and new governance subjects and the effect this has on core political values such as liberty and equality. Further information about the project can be found on the project webpage http://algocracy.wordpress.com
Job Description: The post holder will perform a variety of duties associated with the project. They will participate in the research, preparation and editing of interviews with leading experts in the areas of algorithmic governance and human enhancement. They will prepare literature reviews. They will review and edit manuscripts for publication. They will assist in the organisation of research seminars and one major workshop. They will contribute to the project webpage and provide general assistance in disseminating project results. The post holder will report to Dr John Danaher.
Qualifications: Candidates should have completed a degree in a relevant field of study. Given the broad, interdisciplinary nature of the project, this includes (but is not limited to) law, philosophy, politics, sociology, psychology and information systems. Ideally, the candidate will have some experience in analytical and philosophical modes of research. Candidates should have a strong academic record and good IT skills. Ideal candidates will be professional, highly motivated, able to work effectively in a team environment, creative, and enthusiastic about research. Strong analytical, writing, and organisational abilities are important prerequisites. Support/training will be provided to a successful candidate interested in furthering their own academic/research career.
Salary: €21,850 per annum, pro rata for this five-month contract. Start date: July 2016.
NB: Work permit restrictions apply to this category of post.
Further information on research and working at NUI Galway is available at http://www.nuigalway.ie/our-research/. Further information on the Whitaker Institute is available at www.whitakerinstitute.ie
Informal enquiries concerning the post may be made to Dr John Danaher – john.danaher@nuigalway.ie
To Apply: Applications, including a covering letter, CV, and the contact details of three referees, should be sent via e-mail (in Word or PDF only) to Gwen Ryan gwen.ryan@nuigalway.ie
Please state reference number NUIG 067-16 in the subject line of your e-mail application.
Closing date for receipt of applications is 5.00 pm on Wednesday, 15th June 2016.
National University of Ireland, Galway is an equal opportunities employer.

Friday, May 13, 2016

Episode #3 - Sven Nyholm on Love Enhancement, Deep Brain Stimulation and the Ethics of Self Driving Cars


This is the third episode in the Algocracy and Transhumanism project podcast. In this episode I talk to Sven Nyholm, who is an Assistant Professor of Philosophy at the Eindhoven University of Technology. Sven has a background in Kantian philosophy and currently does a lot of work on the ethics of technology. We have a wide-ranging conversation, circling around three main themes: (i) how technology changes what we value (using the specific example of love enhancement technologies); (ii) how technology might affect the true self (using the example of deep brain stimulation technologies) and (iii) how to design ethical decision-making algorithms (using the example of self-driving cars).

The work discussed in this podcast on deep brain stimulation and the design of ethical algorithms is being undertaken by Sven in collaboration with two co-authors: Elizabeth O'Neill (in the case of DBS) and Jilles Smids (in the case of self-driving cars). Unfortunately we neglected to mention this during our conversation. I have provided links to their work above and below.

Anyway, you can download the podcast here, listen below or subscribe on Stitcher or iTunes.




Show Notes


0:00 - 1:30 - Introduction to Sven

1:30 - 7:30 - The idea of love enhancement

7:30 - 10:30 - Objections to love enhancement

10:30 - 12:30 - The medicalisation objection to love enhancement

12:30 - 21:10 - Medicalisation as an evaluative category mistake

21:10 - 24:00 - Can you favour love enhancement and still value love in the right way?

24:00 - 28:10 - Evaluative category mistakes in other debates about technology

28:10 - 30:50 - The use of deep brain stimulation (DBS) technology

30:50 - 35:20 - Reported effects of DBS on personal identity

35:20 - 41:20 - Narrative Identity vs True Self in debates about DBS

41:20 - 46:25 - Is the true self an expression of values? Can DBS help in its expression?

46:25 - 50:30 - Use of DBS to treat patients with Anorexia Nervosa

50:30 - 55:20 - Ethical algorithms in the design of self-driving cars

55:20 - 1:02:40 - Is the trolley problem a useful starting point?

1:02:40 - 1:06:30 - The importance of legal and moral responsibility in the design of ethical algorithms

1:06:30 - 1:09:00 - The importance of uncertainty and risk in the design of ethical algorithms

1:09:00 - end - Should moral uncertainty be factored into the design?  


Links

  • Jilles Smids (Sven's Co-author on ethical algorithms for self-driving cars)

Wednesday, May 11, 2016

New Paper - Robots, Law and the Retribution Gap




Apologies for the dearth of posts lately, I'll be back to more regular blogging soon enough. To fill the gap, here's a new paper I have coming out in the journal Ethics and Information Technology. In case you are interested, the idea for this paper originated in this blogpost from late 2014. I was somewhat ignorant of the literature back then; I know more now.

Title: Robots, Law and the Retribution Gap
Journal: Ethics and Information Technology
Links: Philpapers; Academia; Official
Abstract: We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises from a mismatch between the human desire for retribution and the absence of appropriate subjects of retributive blame. I argue for the potential existence of this gap in an era of increased robotisation; suggest that it is much harder to plug this gap than it is to plug those thus far explored in the literature; and then highlight three important social implications of this gap.

Tuesday, May 3, 2016

Episode #2: James Hughes on the Transhumanist Political Project



This is the second episode in the Algocracy and Transhumanism project podcast. In this episode I interview Dr. James Hughes, executive director of the Institute for Ethics and Emerging Technologies and current Associate Provost for Institutional Research, Assessment and Planning at the University of Massachusetts Boston. James is a leading figure in both transhumanist thought and political activism. He is the author of Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future. I spoke to James about the origins of the transhumanist project, the political values currently motivating transhumanist activists, and some more esoteric and philosophical ideas associated with transhumanism. You can download the podcast here. You can listen below. You can also subscribe on Stitcher and iTunes.




Show Notes

0:00 - 1:00 - Introduction to James  
1:00 - 11:00 - The History of Transhumanist Thought (Religious and Mythical Origins) 
11:00 - 17:00 - Transhumanism and the Enlightenment Project  
17:00 - 25:30 - Transhumanism and Disability Rights Movement  
25:30 - 34:30 - The Political Values for Hiveminds and Cyborgs  
34:30 - 41:00 - The Dark Side of Transhumanist Politics  
41:00 - 43:00 - Technological Unemployment and Technoprogressivism  
43:00 - 51:00 - Building Better Citizens through Human Enhancement  
51:00 - 1:01:55 - The Threat of Algocracy?  
1:01:55 - 1:07:55 - Internal and External Moral Enhancement   

Links