Tuesday, October 24, 2017

Podcast Interview - Singularity Bros #114 on Robot Sex


Logo from the Singularity Bros Podcast


As part of the major publicity drive that I am putting together for the book Robot Sex: Social and Ethical Implications, I just appeared on the Singularity Bros Podcast. We had a wide-ranging and philosophically rich discussion about the ethics of sexual relationships with robots. You should check it out here.

And remember: if you want to buy the book, it is just a click away.




Sunday, October 22, 2017

Freedom and the Unravelling Problem in Quantified Work


A Machinist at the Tabor Company where Frederick Taylor (founder of 'scientific management') consulted.


[This is a text version of a short talk I delivered at a conference on ’Quantified Work’. It was hosted by Dr Phoebe Moore at Middlesex University on the 13th October 2017 and was based around her book ‘The Quantified Self in Precarity’.]

Surveillance has always been a feature of the industrial workplace. With the rise of industrialism came the rise of scientific management. Managers of manufacturing plants came to view the production process as a machine, not just as something that involved the use of machines. The human workers were simply parts of that machine. Careful study of the organisation and distribution of the machine parts could enable a more efficient production process. To this end, early pioneers in scientific management (such as Frederick Taylor and Lillian and Frank Gilbreth) invented novel methods for surveilling how their workers spent their time.

Nowadays, the scale and specificity of our surveillance techniques have changed. Our digitised workplaces enable far more information to be collected about our movements and behaviour, particularly when wearable smart-tech is factored into the mix. The management philosophy underlying the workplace has also changed. Where Taylor and the Gilbreths saw the goal of scientific management as creating a more consistent and efficient machine, we now embrace a workplace philosophy in which the ability to rapidly adapt to a changing world is paramount (the so-called ‘agile’ workplace). Acceleration and disruption are now the aim of the game. Workers must be equipped with the tools to enable them to navigate an uncertain world. What’s more, work now never ends — it follows us home on our laptops and phones — and we are constantly pressured to be available to work, while maintaining overall health and well-being. Employers are attuned to this and have instituted various corporate wellness programmes aimed at enhancing employee health and well-being, while raising productivity. The temptation to use ‘quantified self’ technology to track and nudge employee behaviour is, thus, increasing.

These are the themes addressed in Phoebe’s book, and I think they prompt the following question, one that I will seek to answer in this talk:

Question: Does the rise of ‘quantified self’ surveillance threaten our freedom in some new or unique way?

In other words, do these new forms of workplace surveillance constitute something genuinely new or unprecedented in the world of work, or are they really just more of the same? I consider two answers to that question.


Answer 1: No, because work always, necessarily, undermines our freedom
The first answer is the sceptical one. The notion that work and freedom are mutually inconsistent is a long-standing one in left-wing circles. Slavery is the epitome of unfreedom. Work, it is sometimes claimed, is a form of ‘waged’ or ‘economic’ slavery. You are not technically owned by your employer (after all, you could be self-employed, as many of us now are in the ‘gig’ economy), but you are effectively compelled to work out of economic necessity. Even in countries with generous social welfare provision, access to that provision is usually tied to the ability and willingness to work. There is, consequently, no way to escape the world of work.

I’ve covered arguments of this sort previously on my blog. My favourite comes from the work of Julia Maskivker. The essence of her argument is this:

(1) A phenomenon undermines our freedom if: (a) it limits our ability to choose how to make use of our time; (b) it limits our ability to be the authors of our own lives; and/or (c) it involves exploitative/coercive offers.
(2) Work, in modern society, (a) limits our ability to choose how to make use of our time; (b) limits our ability to be the authors of our own lives; and (c) involves an exploitative/coercive offer.
(3) Therefore, work undermines our freedom.

Now, I’m not going to defend this argument here. I did that on a previous occasion. Suffice to say, I find its premises plausible, with something reasonable to be said in defence of each. I’m not defending it because my present goal is not to consider whether work does, in fact, always undermine our freedom, but rather to consider what the consequences of accepting this view are for the debate about quantified work practices.

You could argue that if you accept it, then there is nothing really interesting to be said about the freedom-affecting potential of quantified work. If work always undermines our freedom, then quantified work practices are just more in a long line of freedom-undermining practices. They do not threaten something new or unique.

I am sympathetic to this claim, but I want to resist it. I want to argue that even if you think freedom is necessarily undermined by work, there is the possibility of something new and different being threatened by quantified work practices. This is for three reasons. First, even if the traditional employer-employee relationship undermines freedom, there is usually some possibility of escape from its freedom-undermining character in the shape of down time or leisure time. Quantified work might pose a unique threat if it encourages and facilitates more surveillance in that down time. Second, quantified work might threaten something new if its use is largely self-directed, rather than other-directed. In other words, if it is imposed from the bottom up, by workers themselves, and not from the top down, by employers. Finally, quantified work might threaten something new simply due to the scale and ubiquity of the available surveillance technology.

As it happens, I think there are some reasons to think that each of these three things might be true.


Answer 2: Yes, due to the unravelling problem
The second answer maintains that there is something new and different in the modern world of quantified work. Specifically, it claims that quantified work practices pose a unique threat to our freedom because they hasten the transition to a signalling economy, which in turn leads to the unravelling problem. I take this argument from the work of Scott Peppet.

A ‘signalling’ economy is to be differentiated from a ‘sorting’ economy. The difference has to do with how information is acquired by different economic actors. Information is important when making decisions about what to buy and who to employ. If you are buying a used car, you want to know whether or not it is a ‘lemon’. If you are buying health insurance, the insurer will want to know if you have any pre-existing conditions. If you are looking for a job, your prospective employer will want to know whether you have the capacity to do it well. Accurate, high-quality information enables more rational planning, although it sometimes comes at the expense of those whose informational disclosures rule them out of the market for certain goods and services. In a ‘sorting’ economy, the burden is on the employer to screen potential employees for the information they deem relevant to the job. In a ‘signalling’ economy, the burden is on the employee to signal accurate information to the employer.

With the decline in long-term employment, and the corresponding rise in short-term, contract-based work, there has been a remarkable shift away from a sorting economy to a signalling economy. We are now encouraged to voluntarily disclose information to our employers in order to demonstrate our employability. Doing so is attractive because it might yield better working conditions or pay. The problem is that what initially appears to be a voluntary set of disclosures ends up being a forced/compelled disclosure. This is due to the unravelling problem.

The problem is best explained by way of an example. Imagine a group of people selling crates of oranges on the export market. Each crate carries a maximum of 100 oranges, but the crates are carefully sealed so that a purchaser cannot see how many oranges are inside. What’s more, the purchaser doesn’t want to open a crate prior to transport because doing so would cause the oranges to go bad. But, of course, the purchaser can easily verify the total number of oranges after transport by simply opening the crate and counting them. Now suppose you are one of the people selling the crates of oranges. Will you disclose to the purchaser the total number of oranges in your crate? You might think that you shouldn’t because, if you are selling fewer than the others, disclosure would put you at a disadvantage on the market. But a little bit of game theory tells us that we should expect the sellers to disclose the number of oranges in their crates. Why so? Well, if you had 100 oranges in your crate, you would be incentivised to disclose this to any potential purchaser. Doing so makes you an attractive seller. Correspondingly, if you had 99 oranges in your crate, and all the sellers with 100 oranges had disclosed this to the purchasers, you should disclose this information too. If you don’t, there is a danger that a potential purchaser will lump you in with anyone selling 0-98 oranges. In other words, because those with the maximum number of oranges in their crates are sharing this information, purchasers will tend to assume the worst about anyone not sharing the number of oranges in their crate. But once you have disclosed the fact that you have 99 oranges in your crate, the same logic will apply to the person with 98 oranges, and so on all the way down to the seller with 1 orange in their crate.
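To make the incentive dynamics concrete, here is a minimal simulation sketch in Python. The function name is my own, and the pricing rule is an illustrative assumption: purchasers are taken to value a silent crate at the average count of the sellers who have not yet disclosed (a standard simplification; a worst-case inference unravels even faster).

    def unravel(crate_counts):
        """Let sellers disclose whenever their true count beats the
        purchasers' inference about silent crates; repeat to a fixed point."""
        silent = set(crate_counts)
        rounds = 0
        while silent:
            # Purchasers assume a silent crate holds the average of the silent pool.
            inferred = sum(silent) / len(silent)
            # Any seller whose true count beats that inference gains by disclosing.
            disclosers = {n for n in silent if n > inferred}
            if not disclosers:
                break  # no remaining seller gains by disclosing
            silent -= disclosers
            rounds += 1
        return rounds, sorted(silent)

    rounds, still_silent = unravel(range(1, 101))  # crates holding 1 to 100 oranges
    print(f"Disclosure cascaded over {rounds} rounds; still silent: {still_silent}")
    # Prints: Disclosure cascaded over 7 rounds; still silent: [1]

On this toy model, disclosure cascades downwards round by round until only the worst seller stays silent, and by that point silence is itself fully informative: every purchaser can infer that the undisclosed crate holds a single orange.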

This is informational unravelling in practice. The seller with only 1 orange in their crate would much rather not disclose this fact to the purchasers, but they are ultimately compelled to do so by the incentives in operation on the market. The claim I am making here — and that Peppet makes in his paper — is that unravelling is also likely to happen on the employment market. The more valuable information we have about ourselves, the more we are incentivised to disclose this to our employers in order to maintain our employability. Those with the best information will do so voluntarily and willingly, but ultimately everybody will be forced to do so in an effort to differentiate themselves from other, potentially ‘inferior’, employees.

This could have a pretty dramatic effect on our freedom. If quantified self technologies enable more and more valuable information to be tracked and disclosed, there will be more and more unravelling, which will in turn lead to more and more forced disclosures. This could result in something quite different from the old world of workplace surveillance, partly because it is being driven from the bottom up, i.e. workers do it themselves in order to secure some perceived advantage. There are laws in place that prevent employers from seeking certain information about their employees (e.g. information about health conditions), but those laws usually only cover cases where the employer demands the information. Where the information is being supplied, seemingly willingly, by masses of gig workers looking to increase their employability, the situation is rather different. This could be compounded by the fact that the types of information that are desirable in the new, agile workplace will go beyond simple productivity metrics into information about general health and well-being. New and more robust legal protections may be required to redress this problem of seemingly voluntary disclosure.

I’ll close on a more positive note. Even though I think the unravelling problem is worth taking seriously, the argument I have presented is premised on the assumption that the information derived from quantified self technologies is in fact valuable. This may not be the case. It may turn out that accurately signalling something like the number of hours you slept last night, the number of calories you consumed yesterday, or the number of steps you have taken is not particularly useful to employers. In that case, the scale of the unravelling problem might be mitigated. But we should still be cautious. There is a distinction to be drawn between information that is genuinely valuable (i.e. has some positive link to economic productivity) and information that is simply perceived to be valuable (i.e. thought to be of value by potential employers). Unfortunately, the latter is what really counts, not the former. I see this all the time in my own job. Universities are interested in lots of different metrics for gauging the success of their employees (papers published, number of citations, research funding received, number of social media engagements, number of paper downloads, etc.). Many of these metrics are of dubious value. But that doesn’t matter. They are perceived as having some value, and so academic staff are encouraged to disclose more and more of them.





Saturday, October 14, 2017

Some things you wanted to know about robot sex* (but were afraid to ask)




BOOK LAUNCH - BUY NOW!

I am pleased to announce that Robot Sex: Social and Ethical Implications (MIT Press, 2017), edited by myself and Neil McArthur, is now available for purchase. You can buy the hardcopy/ebook via Amazon in the US. You can buy the ebook in the UK as well, but the hardcopy might take another few weeks to arrive. I've never sold anything before via this blog. That all changes today. Now that I actually have something to sell, I'm going to turn into the most annoying, desperate, cringeworthy and slightly pathetic salesman you could possibly imagine...

...Hopefully not. But I would really appreciate it if people could (a) purchase a copy of the book, and/or (b) recommend it to others, and/or (c) review it and generally spread the word. Academic books are often outrageously expensive, but this one lies at the more reasonable end of the spectrum ($40 in the US and £32 in the UK). I appreciate that it is still expensive, though. To whet your appetite, here's a short article I put together with Neil McArthur that covers some of the themes in the book.

----------------------------------------------------------------

Sex robots are coming. Basic models exist today, and as robotics technologies advance in general, we can expect to see similar advances in sex robotics in particular.

None of this should be surprising. Technology and sex have always gone hand-in-hand. But this latest development in the technology of sex seems to arouse considerable public interest and concern. Many people have questions that they want answered, and as the editors of a new academic book on the topic, we are willing to oblige. We present here, for your delectation, *some* of the things you might have wanted to know about robot sex, but were afraid to ask.


1. What is a sex robot?
A ‘robot’ is an embodied artificial agent. A sex robot is a robot that is designed or used for the purpose of sexual stimulation. One of us (Danaher) has argued that sex robots will have three additional properties: (a) human-like appearance, (b) human-like movement and behaviour, and (c) some artificial intelligence. Each of these properties comes in degrees. The current crop of sex robots, such as the Harmony model developed by Abyss Creations, possesses them to a limited extent. Future sex robots will be more sophisticated. You could dispute this proposed definition, particularly its fixation on human-likeness, but we suggest that it captures the kind of technology that people are interested in when they talk about ‘sex robots’.


2. Can you really have sex with a robot?
In a recent skit, the comedian Richard Herring suggested that the use of sex robots would be nothing more than an elaborate form of masturbation. This is not an uncommon view and it raises the perennial question: what does it mean to ‘have sex’? Historically, humans have adopted anatomically precise definitions of sexual practice: two persons cannot be said to have ‘had sex’ with one another until one of them has inserted his penis into the other’s vagina. Nowadays we have moved away from this heteronormative, anatomically-obsessive definition, not least because it doesn’t capture what same-sex couples mean when they use the expression ‘have sex’. In their contribution to our book, Mark Migotti and Nicole Wyatt favour a definition that centres on ‘shared sexual agency’: two beings can be said to ‘have sex’ with one another when they intentionally coordinate their actions to a sexual end. This means that we can only have sex with robots when they are capable of intentionally coordinating their actions with us. Until then it might really just be an elaborate form of masturbation -- emphasis on the 'elaborate'.


3. Can you love a robot?
Sex and love don’t have to go together, but they often do. Some people might be unsatisfied with a purely sexual relationship with a robot and want to develop a deeper attachment. Indeed, some people have already formed very close attachments to robots. Consider, for example, the elaborate funerals that US soldiers have performed for their fallen robot comrades. Or the marriages that some people claim to have with their sex dolls. But can these close attachments ever amount to ‘love’? Again, the answer to this is not straightforward. There are many different accounts of what it takes to enter into a loving relationship with another being. Romantic love is often assumed to require some degree of reciprocity and mutuality, i.e. it’s not enough for you to love the other person, they have to love you back. Furthermore, romantic love is often held to require free will or autonomy: it’s not enough for the other person to love you back, they have to freely choose you as their romantic partner. The big concern with robots is that they wouldn’t meet these mutuality and autonomy conditions, effectively being pre-programmed, unconscious, sex slaves. It may be possible to overcome these barriers, but it would require significant advances in technology.


4. Should we use child sex robots to treat paedophilia?
Robot sex undoubtedly has its darker side. The darkest of all is the prospect of child sex robots that cater to those with paedophiliac tendencies. In July 2014, in a statement that he probably now regrets, the roboticist Ronald Arkin suggested that we could use child sexbots to treat paedophilia in the same way that methadone is used to treat heroin addiction. After all, if the sexbot is just an artificial entity (with no self-consciousness or awareness) then it cannot be harmed by anything that is done to it, and if used in the right clinical setting, this might provide a safe outlet for the expression of paedophiliac tendencies, and thereby reduce the harm done to real children. ‘Might’ does not imply ‘will’, however, and unless we have strong evidence for the therapeutic benefits of this approach, the philosopher Litska Strikwerda suggests that there is more to be said against the idea than in its favour. Allowing for such robots could seriously corrupt our sexual beliefs and practices, with no obvious benefits for children.


5. Will sex robots lead to the collapse of civilisation?
The TV series Futurama has a firm answer to this. In the season 3 episode, ‘I Dated a Robot’, we are told that entering into sexual relationships with robots will lead to the collapse of civilisation because everything we value in society — art, literature, music, science, sports and so on — is made possible by the desire for sex. If robots can give us ‘sex on demand’ this motivation will fade away. The Futurama-fear is definitely overstated. Unlike Freud, we doubt that the motivations for all that is good in the world ultimately reduce to the desire for sex. Nevertheless, there are legitimate concerns one can have about the development of sex robots, in particular the ‘mental model’ of sexual relationships that they represent and reinforce. Others have voiced these concerns, highlighting the inequality inherent in a sexual relationship with a robot and how that may spill over into our interactions with one another. At the same time, there are potential upsides to sex robots that are overlooked. One of us (McArthur) argues in the book that sex robots could distribute sexual experiences more widely and lead to more harmonious relationships by correcting for imbalances in sex drive between human partners. Similarly, our colleague Marina Adshade argues that sex robots could improve the institution of marriage by making it less about sex and more about love.

This is all speculative, of course. The technology is still in its infancy but the benefits and harms need to be thought through right now. We recommend viewing its future development as a social experiment, one that should be monitored and reviewed on an ongoing basis. If you want to learn more about the topic, you should of course buy the book.


~ Full Table of Contents ~



I. Introducing Robot Sex
1. 'Should we be thinking about robot sex?' by John Danaher 
2. 'On the very idea of sex with robots?' by Mark Migotti and Nicole Wyatt

II. Defending Robot Sex
3. 'The case for sex robots' by Neil McArthur 
4. 'Should we campaign against sex robots?' by John Danaher, Brian Earp and Anders Sandberg 
5. 'Sexual rights, disability and sex robots' by Ezio di Nucci

III. Challenging Robot Sex
6. 'Religious perspectives on sex with robots' by Noreen Herzfeld 
7. 'The Symbolic-Consequences argument in the sex robot debate' by John Danaher 
8. 'Legal and moral implications of child sex robots' by Litska Strikwerda

IV. The Robot's Perspective
9. 'Is it good for them? Ethical concern for the sexbots' by Steve Petersen 
10. 'Was it good for you too? New natural law theory and the paradox of sex robots' by Joshua Goldstein

V. The Possibility of Robot Love
11. 'Automatic sweethearts for transhumanists' by Michael Hauskeller
12. 'From sex robots to love robots: Is mutual love with a robot possible?' by Sven Nyholm and Lily Eva Frank

VI. The Future of Robot Sex
13. 'Intimacy, Bonding, and Sex Robots: Examining Empirical Results and Exploring Ethical Ramifications' by Matthias Scheutz and Thomas Arnold
14. 'Deus sex machina: Loving robot sex workers and the allure of an insincere kiss' by Julie Carpenter
15. 'Sex robot induced social change: An economic perspective' by Marina Adshade









Sunday, October 1, 2017

Episode #30 - Bartholomew on Adcreep and the Case Against Modern Marketing


In this episode I am joined by Mark Bartholomew. Mark is a Professor at the University of Buffalo School of Law. He writes and teaches in the areas of intellectual property and law and technology, with an emphasis on copyright, trademarks, advertising regulation, and online privacy. His book Adcreep: The Case Against Modern Marketing was recently published by Stanford University Press. We talk about the main ideas and arguments from this book.

You can download the episode here or listen below. You can also subscribe on iTunes and Stitcher (RSS is here).


Show Notes

  • 0:00 - Introduction
  • 0:55 - The crisis of attention
  • 2:05 - Two types of Adcreep
  • 3:33 - The history of advertising and its regulation
  • 9:26 - Does the history tell a clear story?
  • 12:16 - Differences between Europe and the US
  • 13:48 - How public and private spaces have been colonised by marketing
  • 16:58 - The internet as an advertising medium
  • 19:30 - Why have we tolerated Adcreep?
  • 25:32 - The corrupting effect of Adcreep on politics
  • 32:10 - Does advertising shape our identity?
  • 36:39 - Is advertising's effect on identity worse than that of other external forces?
  • 40:31 - The modern technology of advertising
  • 45:44 - A digital panopticon that hides in plain sight
  • 48:22 - Neuromarketing: hype or reality?
  • 55:26 - Are we now selling ourselves all the time?
  • 1:04:52 - What can we do to redress adcreep?
 

Relevant Links