Monday, October 27, 2014

Yes means Yes: The Case for an Affirmative Consent Standard in Sexual Offences



Rape is non-consensual sexual intercourse (at least, it is everywhere that doesn’t still cling to a “force” requirement). In the typical rape case, consent is relevant in two respects. First, it is relevant when proving that the actus reus (“guilty act”) took place: the complainant/victim’s lack of consent is deemed to be a crucial element of the offence. Second, it is relevant when proving mens rea (“guilty mind”): the defendant’s lack of a reasonable belief in consent is critical to legal blameworthiness.

But how do we know when consent is present or absent? How do we determine if the defendant lacked a reasonable belief? For a long time, supporters of rape law reform rallied around the “no means no” standard. According to this, if a complainant said “no” to a sexual act (or otherwise signalled non-consent), then this should be taken at face value. It should be taken to mean that they did not consent to the act and that a defendant could not make the case for a reasonable belief in consent.

At first glance this seems like an attractive standard, but problems emerge in practice. Consequently, many now advocate for a “yes means yes” or affirmative standard of consent. In this post, I want to look at the argument in favour of such a standard. In doing so, I draw upon Nicholas Little’s article “From no means no to only yes means yes: The rational results of an affirmative consent standard in rape law”, which appeared in the Vanderbilt Law Review back in 2005, and was recently recommended over on the Feminist Philosophers blog. Like many US law review articles, Little’s piece is, I think, best described as “unfocussed” (British spelling). It seems to ramble over many areas of current and past legal policy and practice, sometimes losing sight of the central issue. Nevertheless, I think it does contain the kernel of a good argument in favour of an affirmative standard. My goal in this post is to extract that kernel.

I do so in three steps. First, I discuss the epistemic problem at the core of sexual interactions. Second, I explain how the “yes means yes” standard would work. And third, I rebut a range of objections to such a standard. With the exception of the first of these steps, everything I say is based heavily on Little’s original piece.

This post is somewhat timely. Although US universities have adopted affirmative consent standards for their students in the past, one has just recently been adopted across Californian colleges and universities. Nevertheless, I am not overly concerned with those reforms in this piece. The discussion is focussed more on the criminal law and on the general philosophical and ethical issues.


1. Sexual Consent and the Common Knowledge Problem
At its core, sexual consent is an attitudinal thing. It is a willingness and desire on the part of the participants to engage in some sexual act. The difficulty with this attitudinal account is that it makes consent a subjective phenomenon, something that resides in the minds of individual actors. This makes it vulnerable to a classic philosophical problem: how can we really know what (if anything) another person is thinking?

The simple answer, of course, is to ask them. Although I do not have direct access to your thoughts, I do have indirect access to them. I can ask you what you are thinking and you can use “signals” — objectively meaningful codes and symbols — to reveal your thoughts to me. Sometimes these signals are verbal — “I am really hungry right now” — and sometimes they are non-verbal — e.g. pointing to some food and rubbing your belly. Signals of this sort can only work if both parties know what they mean. Thus, in order for rubbing your belly to successfully signal hunger to me, I need to know what it means and you need to know that I know what it means and so on ad infinitum (i.e. the meaning of the signal needs to be common or at least shared knowledge between the two of us).
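As an aside, this regress has a standard formalisation in epistemic logic. The notation below is my own gloss, not Little’s: writing K_A(p) for “A knows that p”, common knowledge of a signal’s meaning m between A and B amounts to an infinite hierarchy of knowledge claims.

```latex
% Common knowledge of a signal's meaning m, between parties A and B:
K_A(m) \wedge K_B(m) \wedge K_A K_B(m) \wedge K_B K_A(m) \wedge K_A K_B K_A(m) \wedge \dots
% i.e. each knows the meaning, each knows the other knows it, and so on.
```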

The need for common knowledge presents a difficulty. Many signals are arbitrary in nature. The three letters “D-O-G” mean dog in the English language, but there is nothing special about those three letters. Different letters signal the same thing in different languages. Indeed it is even worse than that. The same signals can mean different things in different contexts; and “private” languages — wherein a signal takes on a particular meaning known only to a narrow group — can emerge in some cases. We navigate through these difficulties on a daily basis, often by asking for clarifications when a signal’s meaning is opaque. But sometimes we are reluctant to do this because we are afraid to look stupid or admit to uncertainty.

This creates a particular problem in the sexual domain. Given that sexual interactions can be a source of both great joy and great suffering, their participants need to tread carefully. They need to ensure that each person consents to each part of the interaction. To ensure this, they need to know what the other party is thinking: what their attitudes toward the interaction are. This means that they need to have signals that clearly and unambiguously indicate a willingness to proceed.

One might think that a “no means no” standard would help in this regard. After all, the word “no” (or a non-verbal equivalent such as physical resistance) certainly looks like a clear and unambiguous signal of non-consent. But for a variety of reasons this is not the case. There are many myths surrounding sexual behaviour. Women are perceived as “slutty” or “promiscuous” if they are too forthcoming in their sexual desires; men are sometimes led to believe that a “no” really means a “yes” (or, at least, a “try again”); and people often over-interpret the meaning of non-verbal signals (clothing, friendliness etc.).

These myths can have a direct impact on rape trials. Juries are often willing to acquit a defendant on the basis that he (and rape is a gendered crime in most jurisdictions) reasonably inferred consent from some non-verbal signal, or because he reasonably believed that “no” meant “yes”. A good example of this can be seen in Finch and Munro’s 2006 mock-jury study of the English law. This study found that several jurors were willing to acquit on the basis that a friendly demeanour and inviting someone back to one’s bedroom (etc.) could ground a reasonable belief in consent.

And there is another problem with the “no means no” standard: it places the onus on the victim (typically a woman) to provide the signals. It is as if the default position is one of consent, which can only be rebutted by a clear and unambiguous signal to the contrary. This is problematic because the victim is often socially or physically “weaker” than the defendant and so fears the consequences of signalling non-consent. They may also buy into some of the prevailing myths of sexuality themselves, believing that they should remain silent in order to maintain social decorum.


2. Moving toward a “yes means yes” standard
Can we do better? Can we use the law to improve our socio-sexual morality? Although the law doesn’t hold sway over all facets of human behaviour, and oftentimes follows it instead of shaping it, it may be possible for the law to have some causal influence on our sexual behaviour. By having a legal standard that combats the existing myths, and elevates consent to its rightful place, we may be able to correct for some of the flaws in the current system.

One way of doing this would be to adopt an affirmative (“yes means yes”) standard of consent. In his article, Little describes the proposal like this:

An affirmative consent standard requires that, for sex to be considered consensual, it must have been consented to by the woman in advance. In short, if the instigator of a sexual interaction wishes to do anything, he or she must inquire whether his or her partner wishes that to be done, and must receive freely given consent to continue.
(Little, 2005 p. 1345)

I think this effectively captures the gist of the idea, but it has at least one problem. As Little himself notes later on, one of the virtues of the “yes means yes” proposal is that it can help us to take a more egalitarian view of sexual interactions. Instead of there being a (male) “instigator” and a (female) recipient, there are two (or maybe more!) co-conspirators, both taking an equal role in planning and shaping the future direction of their sexual interaction. Thus, I would prefer to banish talk of “instigators” from the proposal.

Affirmative Standard of Consent: In order for any particular sexual interaction to be deemed morally (or legally) permissible, the participants to that activity must have freely, positively and unambiguously signalled their willingness to proceed with the interaction; the mere absence of objection will not suffice.

Admittedly, this may be too idealistic. In a legal trial the focus will still have to be on one of the participants (the defendant) and what he reasonably believed about consent to the interaction. So there will always be some lingering asymmetry in how we view the scenario. Nevertheless, I think a move towards the “co-conspirator” model would be beneficial and the affirmative consent standard may at least nudge us in the right direction.

The standard would seem to have at least two further benefits. First, it would directly oppose some of the traditional (and I would submit harmful) myths about sexual behaviour: it would undermine the credibility of the “no sometimes means yes” viewpoint, and it would lure us out of the dangerous belief that people (particularly women) should not (and do not) give voice to their true sexual desires. Second, it would shift some (but not all) of the burden of proof. Instead of the onus being on the victim/complainant to signal a desire to stop, the onus would be on the defendant to seek a signal to begin and to continue.

That said, the proposal is certainly not a panacea. This is something Little is keen to emphasise in his article. Many rape trials adopt a “he-said-she-said” format. The sexual intercourse is not denied, but the parties have very different interpretations or recollections of what happened. An affirmative consent standard can do nothing to avoid the epistemic problems associated with this trial format: it will still come down to a question of whose account is more credible. All that the affirmative standard can do is eliminate certain lines of argument from the defendant’s arsenal. He can no longer argue things like “I thought she was consenting because she said nothing” or “I thought she was consenting because she was so friendly to me earlier in the evening”.


3. Objections and Replies
Some people object to the introduction of an affirmative standard. It is important to address their objections. An initial one — not discussed in Little’s article — might be that the argument is unnecessary because an affirmative consent standard has already been incorporated into the law. This clearly isn’t true in every jurisdiction but I’m thinking here of the English legal position, as set down in the Sexual Offences Act 2003. In that act, a reasonable belief in consent is characterised in the following manner:

Section 1(2): Whether a belief is reasonable is to be determined having regard to all the circumstances, including any steps A [the Defendant] has taken to ascertain whether B [the Complainant] consents.

One might think that the phrase “including any steps A has taken to ascertain whether B consents” is a nod in the direction of affirmative consent, and in a way it is. It does suggest that a defendant has to seek affirmative signals of consent. But it is nothing more than a nod. It doesn’t oblige the defendant to seek such signals, and it includes the modifying phrase “having regard to all the circumstances”. This implies (and this implication seems to be borne out in practice) that there are circumstances in which A need not take any such steps. A proper affirmative consent standard would raise the bar higher than this.

Leaving that to the side, what are the other objections one might have to an affirmative consent standard? There are five, and each is addressed at some length in Little’s article (note he doesn’t number or order them in quite the same way).

First, one could argue that adopting an affirmative consent standard represents a dangerous shift in criminal legal policy. The notion that the prosecution must prove their case against the defendant beyond all reasonable doubt is a longstanding one. This is for good reason: the penalties associated with a criminal offence are high and we need to guard against the risk of false imprisonment. By shifting some of the burden of proof onto a defendant, an affirmative consent standard may increase that risk.

This may be true, but there are some counterbalancing considerations. For starters, the burden will never be shifted in full: there are other elements to a rape or sexual assault charge that will need to be proved by the prosecution. The risk of false imprisonment also needs to be balanced against the current risk of false exonerations. Furthermore, there are already areas of the criminal law in which part of the burden of proof is shifted onto the defendant. Indeed, in the English law on consent to sexual activities, there are certain contexts in which consent is conclusively or evidentially presumed to be absent (not to mention the existence of statutory rape laws which eliminate a consent requirement). A full-blooded affirmative consent standard goes further than these exceptions, to be sure, but still represents a point along a continuum, not a radical break from existing practice. In any event, the standard of proof required from the defendant could be relatively low (e.g. he may need to prove it on the balance of probabilities or something even lower). Picking the right standard could help to balance the risks of false incarceration and false exoneration.

Second, and related to the first objection, there is the worry that an affirmative consent standard may lead to a rise in false accusations. You can imagine the argument: with the burden shifted away from them, it would become much easier for a complainant to bring a false accusation to bear on an innocent man. False accusations are, no doubt, real and have historically had a racial component to them (in the US at least), but the risks are probably exaggerated. As Little notes:

[F]alse accusations of rape are no more prevalent than false accusations of other types of major crime. Indeed, when such false accusations do occur, they tend to be made by young women, and are dealt with rapidly and efficiently by the police. 
(Little, 2005, 1357 - footnotes omitted)

Little goes on to provide some further context for these claims, as well as responses to criticisms of them. I'm not well-positioned to evaluate this factual issue. In any case, the risk of a false accusation would seem to be greatly diminished if the participants to a sexual interaction have an open and frank “conversation” (verbal or non-verbal) about what they desire and what they are willing to do. It is those types of conversation that the affirmative consent standard tries to encourage, and it is the absence of such conversations that increases the risk of committing a rape. Finally, the risk of false accusation must be balanced against the risk of under-reporting rape. As Little and others have noted in the past, the risk of under-reporting seems greater at present.

Third, there is the concern that seeking affirmative consent in a sexual encounter is somehow awkward and inappropriate, or that it “kills the mood”. There are several things that can be said in response to this. In the first place, one can note that what is deemed “awkward” or “inappropriate” is culturally contingent: a legal standard demanding affirmative consent may make it much less awkward and inappropriate. In addition to this, there is the fact that there are forms of human sexuality that already adopt an affirmative consent standard. Little gives the example of S&M, in which the norm (admittedly not always respected) is to set pre-determined limits on what the participants are willing to do, and to use safe words to facilitate the withdrawal of consent at any time. This doesn't “kill the mood”, and the rationale behind it is interesting. People seem to think that the risk of physical harm from S&M warrants extra caution, but then why shouldn’t the harms of non-consensual sex always warrant such caution? Another point is that affirmative consent standards are not that unusual in other areas of the law. For example, if I want to borrow your car I typically need to seek your affirmative consent, otherwise I may be guilty of theft (actually: the law on theft and consent is complicated); or if I want to perform surgery on you I need to seek your affirmative consent. Why should we treat sexuality differently when it is so important to many of our lives? Finally, it is likely that having an open and frank conversation will improve the sexual experience, rather than detract from it. By being open, the participants can better ensure that the interaction is to their mutual advantage. Any awkwardness will dissipate in time.

A fourth objection to the affirmative consent standard would highlight its potential harms to women. A defender of the existing model could argue that a “no means no” standard allows women to have the best of both worlds. As noted above, prevailing cultural beliefs tend to punish women who are too forthcoming or open in expressing their sexual desires. A “no means no” standard might be thought to allow them to maintain some level of decorum whilst also getting what they want. I think this is dubious, at best. The problems with the “no means no” standard, and the potential harms of non-consensual sex, would seem to greatly outweigh this suggested benefit. Furthermore, the negative stereotyping of women should be combatted in other ways. In this respect, I think the Irish satirical news website Waterford Whispers News is to be commended for their article “Woman On Walk Of Shame Not Really Feeling All That Ashamed Of Anything” (note: it is satire).

There is one final objection to mention. This comes from the radical feminist school of thought. I’ll let Little explain it:

[Radical feminists] argue that society is set up such that women are constantly oppressed and subordinated and, therefore, their consent cannot be a valid expression of willingness to take part in sexual activity. Indeed, a single mother who has no source of income may "consent" to provide sexual services to a man in exchange for shelter and food for herself and her child. Such a relationship, while not consensual in the most meaningful sense, would not be considered rape under any proposed affirmative consent standards. 
(Little, 2005, p 1361)

He goes on to discuss examples of this view from the work of MacKinnon. There is a fair point to be made here. An affirmative consent standard is not going to solve all the problems of sexual inequality. Nor is it even going to solve all the problems associated with the concept of “consent”. For example, it provides no guidance in relation to deception, coercion, mistake, incapacity, and intoxication, all of which have an impact on sexual morality. But we shouldn’t expect it to do everything. It makes a step in the right direction: it tries to change social attitudes toward sexual interactions, tries to equalise the relationship between the participants, and tries to encourage a more progressive and mature approach to sexuality. It will not, by itself, eliminate rape and sexual assault.


4. Summary and Conclusion
I haven’t presented a formal argument for the affirmative consent standard in this post. Rather, using Little’s article, I have tried to identify some problems with existing approaches and some of the potential benefits of switching to the affirmative standard. To conclude it might be worth pulling together the various strands of argumentation into a more user-friendly summary.

We can start with the basic case for an affirmative consent standard (note: this is not intended to be a logically valid argument; rather it is an informal summary of the reasoning):


  • (1) The harms of non-consensual sex are great; we should do what we can to minimise those harms.

  • (2) A “no means no” standard of consent does not minimise those harms because in the typical rape case it places the onus on the woman to signal non-consent, and is often overwhelmed by prevailing cultural myths about sexual behaviour (e.g. the meaning of non-verbal signals, the belief that “no” means “yes”).

  • (3) A “yes means yes” standard would do more to minimise those harms because it would (a) try to equalise the relationship between the sexual partners (both ought to be willing co-conspirators); (b) in the typical rape case, it would put the onus on the man to seek some affirmative signal of consent; and (c) it would counteract the prevailing cultural myths by blocking any reliance on them as a defence.

  • (4) Therefore, we should introduce an affirmative consent standard.


Then we have the objections and replies:

Objection 1: An affirmative consent standard represents a dangerous shift away from the presumption of innocence by placing the burden of proof on the defendant.
Replies: The full burden need not be shifted; the risk associated with this should be weighed against the risk of false exonerations under the current system; there are already aspects of the law on sexual assault that shift some of the burden onto the defendant; and the standard of proof imposed on the defendant can be set at an appropriate level.

Objection 2: An affirmative consent standard could increase the number of false accusations.
Replies: The number of false accusations is probably low and those we know about are often dealt with quickly and efficiently by the police; the risk of false accusation would also be mitigated by having an open and frank conversation with one’s prospective sexual partner; and finally the risk of false accusations needs to be balanced against the risk of under-reporting.

Objection 3: Seeking affirmative consent would be awkward, inappropriate or mood-killing.
Replies: The law can change what is deemed awkward and inappropriate; affirmative consent standards are already the norm in some areas of human sexuality (e.g. S&M) and in other areas of the law (e.g. consent to having one’s property borrowed, consent to medical treatment); and having an open and frank conversation with one’s prospective sexual partner is likely to enhance, rather than detract from, the sexual experience.

Objection 4: A “no means no” standard is beneficial to women as it allows them to maintain the socially desired form of “decorum” whilst at the same time engaging in the kinds of sexual activities they desire; an affirmative standard would disrupt this and play into the hands of stigmatisers.
Replies: The alleged benefits of this approach are probably outweighed by its costs; and the problem of stigma can and should be combatted in other ways.

Objection 5: An affirmative standard plays into the dominant, patriarchal conception of sexual agency; given the systematic oppression and subordination suffered by women, their affirmative consent is often not a true or valid expression of their sexual desires.
Replies: An affirmative consent standard is not a panacea. It cannot correct for all societal ills, nor can it deal with all aspects of what it means to “consent” to something. It is merely a step in the right direction.


Okay, that’s all I have to say for now. I’m sure there is more nuance and detail that needs to be explored. Nevertheless, I hope this has provided a useful overview of the argument.

Friday, October 24, 2014

Procedural Due Process and the Dangers of Predictive Analytics



(Threat of Algocracy - Series Index)

I am really looking forward to Frank Pasquale’s new book The Black Box Society: The Secret Algorithms that Control Money and Information. The book looks to examine and critique the ways in which big data is being used to analyse, predict and control our behaviour. Unfortunately, it is not out until January 2015. In the meantime, I’m trying to distract myself with some of Pasquale’s previously published material.

One example of this is an article that he wrote with Danielle Keats Citron (whose work is also really interesting). The article, entitled “The Scored Society: Procedural Due Process for Automated Predictions”, looks at the recent trend for using big data to “score” various aspects of human behaviour. For example, there are now automated “scoring” systems used to rank job applicants based on their social media output, or college professors for their student-friendliness, or political activists for their likelihood of committing crimes. Is this a good thing? Citron and Pasquale argue that it is not, and suggest possible reforms to existing legal processes. In short, they argue for a more robust system of procedural due process when it comes to the use of algorithms to score human behaviour.

In this post, I want to take a look at what Citron and Pasquale have to say. I do so in three parts. First, by looking at the general problem of the “Black Box” society (something I have previously referred to as “algocracy”). Second, by discussing the specific example of a scoring system that Citron and Pasquale use to animate their policy recommendations, namely: credit risk scoring systems. And third, by critically evaluating those policy recommendations.


1. The Problem of the Black Box Society
Scoring systems are now everywhere, from Tripadvisor and Amazon reviews, to Rate my Professor and GP reviews on the NHS. Some of these scoring systems are to be welcomed. They often allow consumers and users of services to share valuable information. And they sometimes allow for a productive feedback loop between consumers and providers of services. The best systems seem to work on either a principle of equality — where everyone is allowed to input data and have their say — or a reversal of an inequality of power — e.g. where the less powerful consumer/user is allowed to push back against the more powerful producer.

But other times scoring systems take on a more sinister vibe. This usually happens when the scoring system is used by some authority (or socially powerful entity) to shape or control the behaviour of those being scored. For example, the use of scoring systems by banks and financial institutions to restrict access to credit, or by insurance companies to increase premiums. The motives behind these scoring systems are understandable: banks want to reduce the risk of a bad debt, insurance companies want enough money to cover potential payouts (and to make a healthy profit for themselves). But their implementation is more problematic.

The main reason for this has to do with their hidden and often secretive nature. Data is collected without notice; the scoring algorithm is often a trade secret; and the effect of the scores on an individual’s life is often significant. Even more concerning is the way in which humans are involved in the process. At the moment, there are still human overseers, often responsible for coding the scoring algorithms and using the scores to make decisions. But this human involvement may not last forever. As has been noted in the debate about drone warfare, there are three kinds of automated system:

Human-in-the-loop Systems: These are automated systems in which an input from a human decision-maker is necessary in order for the system to work, e.g. to programme the algorithm or to determine what the effects of the score will be.
Human-on-the-loop Systems: These are automated systems which have a human overseer or reviewer. For example, an online mortgage application system might generate a verdict of “accept” or “reject” which can then be reviewed or overturned by a human decision-maker. The automated system can technically work without human input, but can be overridden by the human decision-maker.
Human-out-of-the-loop Systems: This is a fully automated system, one which has no human input or oversight. It can collect data, generate scores, and implement decisions without any human input.
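To make the three configurations concrete, here is a minimal sketch in Python. It is purely illustrative: all the function names and thresholds are hypothetical, and the point is simply where the human sits relative to the loop.

```python
def score(application):
    """Stand-in for an automated scoring algorithm (details hypothetical)."""
    return 0.42

# Human-in-the-loop: the system cannot produce a decision without a human.
def decide_in_the_loop(application, human_decision):
    s = score(application)
    return human_decision(s)  # a human must turn the score into a verdict

# Human-on-the-loop: the system decides, but a human may review/override.
def decide_on_the_loop(application, human_override=None):
    verdict = "accept" if score(application) > 0.5 else "reject"
    return human_override if human_override is not None else verdict

# Human-out-of-the-loop: fully automated, with no oversight hook at all.
def decide_out_of_the_loop(application):
    return "accept" if score(application) > 0.5 else "reject"
```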

By gradually pushing human decision-makers off the loop, we risk creating a “black box society”. This is one in which many socially significant decisions are made by “black box AI”. That is: inputs are fed into the AI, outputs are then produced, but no one really knows what is going on inside. This would lead to an algocracy, a state of affairs in which much of our lives are governed by algorithms.


2. The Example of Credit-Scoring Systems
Citron and Pasquale have a range of policy proposals that they think can deal with the problem of the black box society. They animate these proposals by using the example of credit-scoring algorithms. These algorithms are used to assess an individual’s credit risk. They are often used by banks and credit card companies as part of their risk management strategies.

The classic example, in the U.S., is the Fair, Isaac & Co. (FICO) scoring system. A typical FICO score is a three-digit number that is supposed to represent the risk of a borrower defaulting on a loan. FICO scores range from 300 to 850, with higher scores representing lower risk. They are routinely used in consumer lending decisions in the U.S. That said, there are other credit scoring systems, many of which became famous during the 2008 financial crisis (largely because they were so ineffective at predicting actual risk).

These scoring systems are problematic for three reasons, each of which is highlighted by Citron and Pasquale:

Opacity Problem: The precise methods of calculation are trade secrets. FICO have released some of the details of their algorithm, but not all of them. This means that the people who are being scored don’t know exactly which aspects of their personal data are being mined, and how those bits of data are being analysed and weighted. This has led to a lot of misinformation, with books and websites trying to inform consumers about what they can do to get a better score. 
Arbitrariness Problem: The scores produced by the various agencies seem to be arbitrary, i.e. to lack a consistent, reasoned basis. Citron and Pasquale give two examples of this. One is a study done on 500,000 customer files across three different ratings agencies. This showed that 29% of customers had ratings that differed by more than 50 points across the different agencies, even though all three claim to assess the risk of default. A second example is the fact that seemingly responsible behaviour can actually reduce your score. For example, seeking more accurate information about one’s mortgage — which sounds like something that a responsible borrower would do — lowers one’s score.
Discrimination Problem: The scoring systems can be loaded with biasing factors, either because of the explicit biases of the users, or the implicit biases attached to certain datasets. This means that they can often have a disproportionate impact on racial and ethnic minorities (cf Tal Zarsky’s argument). One example of this is the way in which Allstate Insurance (in the U.S.) used credit scores. They were alleged to have used the information provided by credit scores in a discriminatory way against 5 million Hispanic and African-American customers. This resulted in litigation that was eventually settled out of court.

When you couple these three problems with the ineffectiveness of credit rating systems in the lead-up to the financial crisis, you get a sense of the dangers of the black box society. Something should be done to mitigate these dangers.


3. Can we solve the problem?
To that end, Citron and Pasquale recommend that we think of the scoring process in terms of four distinct stages: (i) the data gathering stage, in which personal data is gathered; (ii) the score-generating stage, in which an actual score is produced from the data; (iii) the dissemination stage, in which the score is shared with decision-makers; and (iv) the usage stage, in which the score is actually used in order to make some decision. These four stages are illustrated in the diagram below.
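For what it’s worth, the four stages can also be pictured as a simple software pipeline. The sketch below is my own illustration, not Citron and Pasquale’s; every data field, weighting and threshold in it is made up.

```python
def gather(person_id):
    """(i) Data-gathering: personal data is collected (fields hypothetical)."""
    return {"payment_history": ["on_time", "on_time", "late"],
            "credit_utilisation": 0.3}

def generate_score(data):
    """(ii) Score-generating: the data is reduced to a number in the
    300-850 band. Real weightings are trade secrets; this one is invented."""
    return int(300 + 550 * (1 - data["credit_utilisation"]))

def disseminate(score, recipients):
    """(iii) Dissemination: the score is shared with decision-makers."""
    for recipient in recipients:
        print(f"sending score {score} to {recipient}")

def use(score, threshold=650):
    """(iv) Usage: a decision is actually made on the basis of the score."""
    return "approve" if score >= threshold else "refuse"

# End-to-end: gather -> score -> disseminate -> use.
data = gather("applicant-001")
s = generate_score(data)
disseminate(s, ["bank", "insurer"])
print(use(s))  # 685 >= 650, so "approve"
```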



Once we are clear about the process, we can begin to think about the remedies. Citron and Pasquale suggest that new procedural safeguards and regulations be installed at each stage. I’ll provide a brief overview here.

First, at the data-gathering stage, people should be entitled to know which data are being gathered; and they should be entitled to challenge or correct the data if they believe it is wrong. The data-gatherers should not be allowed to hide behind confidentiality agreements or other legal facades to block access to this information. There is nothing spectacular in this recommendation. Freedom of information and freedom of access to personal information is now common in many areas of law (particularly when data are gathered by governments).

Second, at the score-generating stage, the source code for the algorithms being used should be made public. The processes used by the scorers should be inspectable and reviewable by both regulators and the people affected by the process. Sometimes, there may be merit to the protection of trade secrets, but we need to switch the default away from secrecy to openness.

Third, at the dissemination stage, we run into some tricky issues. Some recent U.S. decisions have suggested that the dissemination of such information cannot be blocked on the grounds that doing so would compromise free speech. Be that as it may, Citron and Pasquale argue that everyone should have a right to know how and when their information is being disseminated to others. This right to know wouldn’t compromise the right to free speech. Indeed, transparency of this sort actually facilitates freedom of speech.

Fourth, at the usage stage, Citron and Pasquale argue for a system of licensing and auditing whenever the data are used in important areas (e.g. in making employment decisions). This means that the scoring system would have to be licensed for use in the particular area and would be subjected to regular auditing in order to ensure quality control. Think of the model of health and safety licensing and inspection for restaurants and you’ve got the basic idea.

These four recommendations are illustrated in the diagram below.





4. Criticisms and Reflections
Citron and Pasquale go on to provide a detailed example of how a licensing and auditing system might work in the area of credit-scoring algorithms. I won’t go into those details here as the example is very US-centric (focusing on particular regulatory authorities in the US and their current powers and resources). Instead, I want to offer some general reflections and mild criticisms of their proposals.

In general, I favour their policy recommendations. I agree that there are significant problems associated with the black box society (as well as potential benefits) and we should work hard to minimise these problems. The procedural safeguards and regulatory frameworks proposed by the authors could very well assist in doing this. Though, as the authors themselves note, the window of opportunity for reforming this area may not open any time soon. Still, it is important to have policy proposals ready-to-go when it does.


Furthermore, I agree with the authors when they reject certain criticisms of transparency and openness. A standard objection is that transparency will allow people to “game the system”, i.e. generate good ratings when they actually present a risk. This may happen, but its importance is limited by two factors. First, if it does happen, it may just indicate that the scoring system is flawed and needs to be improved: reliable and accurate systems are generally more difficult to game. Transparency may facilitate the necessary improvements in the scoring system by allowing competitors to innovate and learn from past mistakes. Second, the costs associated with people “gaming the system” need to be considered in light of the costs of the present system. The current system did little to prevent the financial crisis in 2008, and its secrecy has an impact on procedural fairness and individual lives. Is the “gaming” worry sufficient to outweigh those costs?

Nevertheless, I have two concerns about the authors’ proposal. One is simply that it may be too idealistic. We are already drowning in information and besieged by intrusions into our personal data. Adding a series of procedural safeguards and rights to review data-gathering systems might do little to prevent the slide toward the algocratic society. People may not exercise their rights or may not care about the (possibly deleterious) ways in which their personal data are being used. In addition to this, and perhaps more subtly, I worry that proposals of this sort do little to empower the individuals affected by algocratic systems. Instead, they seem to empower epistemic elites and technocrats who have the time and ability to understand how these systems work. They will then be tasked with helping the rest of us to understand what is going on, advising us as to how these systems may be negatively impacting on us, and policing their implementation. In other words, proposals of this sort seem to just replace one set of problems — associated with a novel technological process — with an older and more familiar set of problems — associated with powerful human elites. But maybe the devil you know is better than the devil you don’t.

I’m not sure. Anyway, that brings us to the end of this discussion. Just to reiterate, there is plenty that I agree with in Citron and Pasquale’s paper. I am just keen to consider the broader implications.

Wednesday, October 22, 2014

One Million Pageviews


According to Google stats, this blog finally crossed the 1,000,000 page view threshold yesterday. As I understand it, Google stats are not always reliable, so this is likely to be an overestimate. Nevertheless, it felt like a moment that was worth marking in some way.

A big thanks to everyone who reads on a regular basis, checks in from time-to-time, and shares my work online. It is much appreciated. If I can ever reciprocate, just let me know.


Friday, October 17, 2014

Algocracy and other Problems with Big Data (Series Index)




What kind of society are we creating? With the advent of the internet-of-things, advanced data-mining and predictive analytics, and improvements in artificial intelligence and automation, we are on the verge of creating a global "neural network": a constantly-updated, massively interconnected, control system for the world. Imagine what it will be like when every "thing" in your home, place of work, school, city, state and country is monitored or integrated into a smart device? And when all the data from that device is analysed and organised by search algorithms? And when this in turn feeds into some automated control system?

What kind of world do you see? Should we be optimistic or pessimistic? I've addressed this question in several posts over the past year. I thought it might be useful to collect the links to all those posts in one place. So that's what I'm doing here.

As you'll see, most of those posts have been concerned with the risks associated with such technologies. For instance, the threat they may pose to transparency, democratic legitimacy and traditional forms of employment. But just to be clear, I am not a technophobe -- quite the contrary in fact. I'm interested in the arguments people make about technology. I like to analyse them, break them down into their key components, and see how they stand up to close, critical scrutiny. Sometimes I end up agreeing that there are serious risks; sometimes I don't.

Anyway, I hope you enjoy reading these entries. This is a topic that continues to fascinate me and I will write about it more in the future.

(Note: I had no idea what to call this series of posts. So I just went with whatever came into my head. The title might be somewhat misleading insofar as "Big Data" isn't explicitly mentioned in all of these posts, though it does feature in many of them)


1. Rule by Algorithm? Big Data and the Threat of Algocracy
This was the post that kicked everything off. Drawing upon some work done by Evgeny Morozov, I argued that increasing reliance on algorithm-based decision-making processes may pose a threat to democratic legitimacy. I'm currently working on a longer paper that develops this argument and assesses a variety of possible solutions.

2. Big Data, Predictive Algorithms and the Virtues of Transparency (Part One, Part Two)
These two posts looked at the arguments from Tal Zarsky's paper "Transparent Predictions". Zarsky assesses arguments in favour of increased transparency in relation to data-mining and predictive analytics.

3. What's the case for sousveillance? (Part One, Part Two)
This was my attempt to carefully assess Steve Mann's case for sousveillance technologies (i.e. technologies that allow us to monitor social authorities). I suggest that some of Mann's arguments are naive, and that it is unlikely that sousveillance technologies will resolve problems of technocracy and social inequality.

4. Big Data and the Vices of Transparency
This followed up on my earlier series of posts about Tal Zarsky's "Transparent Predictions". In this one I looked at what Zarsky had to say about the vices of increased transparency.

5. Equality, Fairness and the Threat of Algocracy
I was going through a bit of a Tal Zarsky phase back in April, so this was another post assessing some of his arguments. Actually, it looked at his most interesting argument (in my opinion anyway): the claim that automated decision-making processes should be welcomed because they could reduce implicit bias.

6. Will Sex Workers be Replaced by Robots? (A Precis)
This was an overview of the arguments contained in my academic article "Sex Work, Technological Unemployment and the Basic Income Guarantee". That article looked at whether advances in robotics and artificial intelligence threaten to displace human sex workers. Although I conceded that this is possible, I argued that sex work may be one of the few areas that is resilient to technological displacement.

7. Is Modern Technology Creating a Borg-Like Society?
This post looked at a recent paper by Lipschutz and Hester entitled "We are the Borg! Human Assimilation into the Cellular Society". The paper argued that recent technological developments pushed us in the direction of a Borg-like society. I tried to clarify those arguments and then asked the important follow-up: is this something we should worry about? I identified three concerns one ought to have about the drive toward Borg-likeness.

8. Are we heading for technological unemployment? An Argument
This was my attempt to present the clearest and most powerful argument for technological unemployment. The argument drew upon the work of Andrew McAfee and Erik Brynjolfsson in The Second Machine Age. Although I admit that the argument has flaws -- as do all arguments about future trends -- I think it is sufficient to warrant serious critical reflection.

9. Sousveillance and Surveillance: What kind of future do we want?
This was a short post on surveillance technologies. It looked specifically at Steve Mann's attempt to map out four possible future societies: the univeillant society (one that rejects surveillance and embraces sousveillance); the equiveillant society (one that embraces surveillance and sousveillance); the counter-veillance society (one that rejects all types of veillance); and the McVeillance society (one that embraces surveillance but rejects sousveillance).

10. Procedural Due Process and Predictive Analytics
Big data is increasingly being used to "score" human behaviour in order to predict future risks. Legal scholars Frank Pasquale and Danielle Keats Citron critique this trend in their article "The Scored Society". I analyse their arguments and offer some mild criticisms of the policy proposals.

11. How might algorithms rule our lives? Mapping the logical space of algocracy
This post tried to formulate a method for classifying the different types of algocratic decision procedure. It did so by identifying four distinct decision-making tasks and four distinct ways in which those tasks could be distributed between humans and algorithms.

12. The Logic of Surveillance Capitalism
This post looks at Shoshana Zuboff's work on surveillance capitalism. Zuboff follows a conceptual framework set out by Google's chief economist Hal Varian and argues that we are entering a new phase of capitalism, which she calls 'surveillance capitalism'. This phase hinges on the collection and control of data and is characterised by four distinctive features. I discuss (and critique) her analysis of these four features in this post.

13. The Philosophical Importance of Algorithms
This post looks at some of Rob Kitchin's work on the importance of algorithms in modern society. First, it assesses the process of algorithm-construction and highlights two key translation problems that are inherent to that process. Second, it considers the importance of algorithms for the three main branches of philosophical inquiry.

14. How to Study Algorithms: Challenges and Methods
This is another post looking at Rob Kitchin's work. This one is quite practical in nature, focusing on the different research strategies one could adopt when studying the role of algorithms in contemporary society.

15. Understanding the Threat of Algocracy
This is a video of a talk I delivered to the Programmable City Project at Maynooth University on the Threat of Algocracy. I tried to ask and answer four questions: (i) What is algocracy? (ii) What is the threat of algocracy? (iii) Can we (or should we) resist the threat? and (iv) Can we accommodate the threat?

16. Is there Trouble with Algorithmic Decision-Making? Fairness and Efficiency-Based Objection
This is a discussion of a paper by Tal Zarsky on the trouble with algorithmic decision-making. The post tries to offer a high-level summary of the main objections to algorithmic decision-making and the potential responses to those objections.




Wednesday, October 15, 2014

The Journal Club #4: Puryear on Finitism and the Beginning of the Universe



Welcome to the fourth edition of the Philosophical Disquisitions Journal Club. Apologies for the delay with the club this month — real life got in the way — but I’m here now and ready to go. The purpose of the journal club is to facilitate discussion and debate about a recent paper in the philosophy of religion. This month’s paper is:

Puryear, Stephen “Finitism and the Beginning of the Universe” (2014) Australasian Journal of Philosophy, forthcoming.

The paper introduces a novel critique of the Kalam Cosmological argument. Or rather, a novel critique of a specific sub-component of the argument in favour of the Kalam. As you may be aware, the Kalam argument makes three key claims: (i) that the universe must have begun to exist; (ii) that anything that begins to exist must have a cause of its existence; and (iii) that in the case of the universe, the cause must be God.

There is no need to get into the intricacies of the argument today. To understand Puryear’s paper we only need to focus on the first of those three key claims. That claim is typically defended by arguing that the universe could not be infinitely old because actual infinities cannot exist in the real world. Puryear argues that this defence creates problems for proponents of the Kalam, particularly when they try to reinforce it and render it less vulnerable to objections.

Is he right? Well, that’s what is up for debate. As per usual, I’ll try to kickstart the debate by providing a brief overview of Puryear’s main arguments.


1. Why can’t there be an actual infinite?

To start, we need to consider why proponents of the Kalam think that the past cannot be an actual infinite. William Lane Craig — the foremost defender of the argument — does so by highlighting absurdities either in the concept of an actual infinite (e.g. the absurdity of Hilbert’s Hotel) or in the concept of an actual infinite being formed by successive addition (e.g. the reverse countdown argument). We will focus on the latter absurdity here.

One of the main ideas put forward by Craig is that the past is made up of a series of events (E1, E2…En). Indeed, what we refer to as “the past” is simply the set of all these events added together. But if that’s what the past is, then it cannot be an actual infinite. If you start with one event, and add another event, and another, and so on ad infinitum, then you never get an actually infinite number of events. Instead, you get a set whose number of constituents is getting ever larger, tending towards infinity, but never actually reaching it. In the literature, this is known as a “potential infinite”. And if the set of past events cannot be actually infinite, it must have had a beginning (i.e. a first event).
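One way to make this precise (my gloss, not Craig’s own notation): every stage of the process of successive addition is finite, even though the stages grow without bound.

```latex
% After n steps of successive addition, the collection formed so far is
\{E_1, E_2, \dots, E_n\}, \qquad |\{E_1, E_2, \dots, E_n\}| = n < \aleph_0
% for every natural number n: the sequence of stages tends towards infinity,
% but no stage is itself actually infinite (a potential infinite).
```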

This line of reasoning can be summarised as follows:



  • (1) If the universe did not have a beginning, then the past would consist in an infinite temporal sequence of events.
  • (2) An infinite temporal sequence of past events would be actually and not merely potentially infinite.
  • (3) It is impossible for a sequence formed by successive addition to be actually infinite.
  • (4) The temporal sequence of past events was formed by successive addition.
  • (5) Therefore, the universe had a beginning.



There are a variety of criticisms one could launch against this argument. I’ve considered some of them in the past, but for now we’re just interested in one possible response. Premise (3) is claiming that actual infinities cannot be formed by successively adding more and more elements to a set. Another way of putting it would be to say that we cannot traverse an actually infinite sequence in a stepwise fashion (i.e. go from E1 to E2 to E3 and so on until we reach an actual infinite).

The problem with this is that it falls foul of the possibility that we traverse actually infinite sequences all the time. This was a notion first introduced to us by Zeno and his famous paradoxes. A simple version of Zeno’s argument would go something like this. In order for me to get from one side of the road to another, I first have to traverse half the distance. And in order to traverse half the distance I have to first traverse a quarter of the distance. And before I do that I have to traverse an eighth of the distance. And before that a sixteenth. And so on ad infinitum. The space between me and the other side of the road is made up of an actually infinite sequence of sub-distances. Nevertheless, I can traverse it in a stepwise fashion. Where’s the problem?
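As an aside, the familiar arithmetic behind the paradox (my addition, not Puryear’s): the infinitely many sub-distances sum to a finite whole.

```latex
% Crossing a road of unit width, the sub-distances are 1/2, 1/4, 1/8, ...
\sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = 1
% An actually infinite sequence of parts, yet a finite total distance.
```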

Well, there could be several problems. One is that maybe space cannot be infinitely sub-divided, as Zeno’s paradox assumes. We’ll return to that possibility later on. Another possibility is that when it comes to segments of space and time, the whole is prior to the parts. What does this mean? Take a line drawn on a piece of paper. You could argue that the line is made up of smaller sub-units or you could argue, perhaps more plausibly, that the whole line is prior to the sub-units. In other words, that the full length of the line exists first, and then we simply sub-divide it into units thereafter. This sub-division is, however, purely conceptual in nature: it exists in thought only, not in reality. This means that the sub-division is only potentially infinite, not actually infinite. Why? Because we cannot mentally sub-divide something into an actually infinite number of sub-units, we can only add more and more sub-divisions and thereby tend towards infinity, but never reach it.

William Lane Craig has advocated this “priority of the whole” response himself, arguing that “the past” exists as an undivided whole first, and is then broken down into sub-units afterwards by our mentality. This means it could only ever consist in a potentially infinite number of sub-units. Puryear argues that in embracing this response, Craig creates problems for the Kalam as a whole. Let’s see what those problems are.


2. Why the Priority of the Whole view is Problematic
Puryear’s basic contention is this: If it is true that the whole can be prior to the parts, then it is possible that the past is simply an indefinitely extended temporal unit. In other words, it is possible that the past consists of one metaphysically indivisible whole, which is then conceptually sub-divided into temporal units (minutes, seconds, lunar cycles, whatever). Those conceptual sub-divisions would be imposed upon the metaphysical reality; they would not be actual features of that reality.

Why is this a problem? Because it would defeat one of the original assumptions underlying the Kalam. Proponents of the Kalam believe that their critics cling steadfastly to the notion of an actually infinite past sequence of events because they are committed to a beginningless past. But if the past can simply be one whole, which extends indefinitely in the reverse temporal direction, then it is possible to argue both that the universe did not begin to exist and that the past does not consist of an actually infinite number of events. This is because the sub-division of the past into events would be conceptual only, i.e. a potential infinite not an actual infinite, much like the division of a line into sub-units after it is drawn on the page.

That’s the gist of Puryear’s argument. One possible objection would be to argue that time and the events which take place in time are metaphysically distinct. In other words, to say that although the past could be one whole temporal unit, the events which take place in the past may not be. This would imply that if the past were an indefinitely extended whole, it would still need to consist of an actually infinite number of events. And if it does, then the absurdities beloved by Craig and others would still apply.

To rebut this objection, Puryear needs to argue that the priority of the whole with respect to time (PWT) entails the priority of the whole with respect to events (PWE): if the past is just one big, indefinitely extended thing upon which we impose conceptual sub-divisions, then the same is true for events. That is to say, the past can simply be viewed as one big event (one “happening”) that we conceptually sub-divide into other events. To illustrate, Puryear gives the example of a moon orbiting a planet for an indefinitely extended period of time. Clearly, in such a case, the number of past events (i.e. number of “orbits”) coincides with the number of temporal intervals (i.e. lunar years). But if the latter are purely conceptual in nature, then so too could the former be purely conceptual. This could be true for all “events”.

If this is right, then the attempt to defend the Kalam by reference to the priority of the whole view fails.


3. Conclusion and Thoughts
This could have two significant implications. It could mean that the Zeno-paradox argument is open to the critic of the Kalam once more. Thus, if the division of time into sub-units isn’t purely conceptual, then, as Wes Morriston has argued, the fact that we can specify a rule that would lead to it being divided into an actually infinite sequence of sub-units gives us some reason to think that it does consist of an actually infinite sequence of sub-units. This, again, reopens the possibility that we traverse actually infinite spaces in a stepwise fashion all the time. Alternatively, it could mean that proponents of the Kalam are forced to defend the view that time and space are quantised, i.e. that there is some minimum unit of sub-division.

Anyway, that’s a brief overview of Puryear’s article. I think it opens up an interesting avenue for debate, one that isn’t typically explored in conversations about the Kalam. Instead of plumbing the depths of our intuitions about infinity — which is never that fruitful given that infinity is such a counter-intuitive idea — it plumbs the depths of our intuitions about composition. But it also raises some questions. Is the priority of the whole view plausible? Does Puryear successfully argue for the equivalency between the past sequence of events and the past sequence of temporal intervals? Is the “quantised” view of space and time workable?

What do others think?

Monday, October 13, 2014

How can you make your writing more coherent? Four Tips




I’m currently teaching a course on research and writing. The goal of the course is to teach students how to better research, plan and write an academic essay. As a student, I tended to dislike this kind of course — usually because the advice offered was either completely banal (“write in a clear, straightforward manner”) or fussily prescriptive (“judgment should be spelled without an ‘e’ when it refers to legal judgment, but with an ‘e’ when it does not”*). Teaching such a course has changed my attitude. I’ve realised that although most of what I was taught was indeed banal and fussy, there are nevertheless some interesting things to be said about the craft of writing.

One of these is the importance of coherence in essay-writing. Incoherence is one of the biggest flaws I see in student essays. Such essays can often be made up of well-formed sentences, but nevertheless be difficult to decipher. I cannot remember the number of times I’ve waded through page after page of carefully worded prose, only to be left in the dark as to what the student was trying to say. The missing ingredient was coherence: the connective tissue that knits all that carefully worded prose together.

Although I’ve long been aware that this was the missing ingredient, I have never had much in the way of concrete advice to offer. I’m not that self-conscious about what I’m doing when I’m writing, so I’m typically unable to break the process down into a series of rules. Getting all the elements of an essay to fit together seems to come pretty naturally to me (though I’m not claiming to be a good or coherent writer). Fortunately, there are other people who can break things down into rules. Indeed, this was one of the joys of reading Steven Pinker’s recent book The Sense of Style. In one of my favourite chapters, he sets out exactly what it takes to write coherently. In this post, I want to share the four main “tips” that emerge from that chapter. In doing so, I’ll focus on their application to the kinds of academic writing that I engage in.


1. Adopt a sensible overarching structure
There are different “levels” to an academic paper. At the lowest level are the words that make up the sentences. One level up are the sentences that make up the paragraphs. Then come the paragraphs, which make up sections and subsections. And then come… you get the idea. “Coherence” is something that can be assessed at each of these levels. Before you start writing, it’s worth thinking about it at the most general level: that of the paper itself. What are you trying to say? What order should you say it in?

The answer is that you should adopt a sensible overarching structure (often referred to as an “essay plan”). Admittedly, this is pretty banal advice. But it can be rendered less banal with some concrete examples. Suppose I want to write an essay about the nature of love in Shakespeare’s plays. How should I go about it? There are a number of sensible structures I could adopt. I could just open a complete collection of Shakespeare’s work and take it play-by-play, discussing all the different forms of love that appear in each play. Alternatively, I could group the plays into their sub-genres (comedies, tragedies and histories) and explain the similarities and differences across the genres.

Another possibility would be to group the types of love into different categories (romantic love, friendship, tragic love, unrequited love etc.) and discuss how they arise in different plays. Or I could take the plays in the order in which Shakespeare wrote them and see how his thinking about love evolved over time. Some of these might be more appropriate in different contexts. The important point is that each of them is sensible: if someone read an essay with one of those structures, at no point would they feel lost or disoriented by the discussion.

Pinker gives some examples of sensible structures from his own writing. First, he talks about a time when he had to write about the vast and unruly literature on the neuroscience and genetics of language. How could he bring order to this chaos?

It dawned on me that a clearer trajectory through this morass would consist of zooming in [on the brain] from a bird’s-eye view to increasingly microscopic components. From the highest vantage point you can make out only the brain’s two big hemispheres, so I began with studies of split-brain patients and other discoveries that locate language in the left hemisphere. Zooming in on that hemisphere, one can see a big cleft dividing the temporal lobe from the rest of the brain, and the territory on the banks of that cleft repeatedly turns up as crucial for language in studies of stroke patients and brain scans of intact subjects. Moving in closer, one can distinguish various regions — Broca’s area, Wernicke’s area, and so on — and the discussion can turn to the more specific language skills, such as recognising words and parsing them into a tree, that have been tied to each area. 
(Sense of Style, p. 144)


That definitely makes sense. In fact, it sounds like an exciting tour of different brain regions. Another example he gives relates to something he wrote on different languages: English, French, Hebrew, German, Chinese, Dutch, Hungarian, and Arapesh (spoken in New Guinea). He decided to write about them from a chronological perspective, starting with the most recent language and working his way back to the oldest. This allowed readers to see how human language had changed over time.

I tend to structure my papers around the arguments I want to make. Typically, I make one general argument in each paper, which is supported by a number of premises and defended against counter-arguments and objections. I think of the argument I wish to defend as having a structure, one that can literally be mapped out and visualised using an argument-mapping technique. I then view the paper as my attempt to reveal that structure to the reader. Thinking about it in this way helps me plan the paper’s own structure. I always start with the conclusion — I don’t want to keep the reader in suspense: I want them to know where the discussion is going. This is usually followed by a section setting out the key concepts and ideas (just to make sure everyone has what they need to understand the structure of the argument). Thereafter, there are a number of different orderings available to me. Sometimes, I will start by looking at objections to my position, usually grouped by author or theme. A good example of this would be my paper “Hyperagency and the Good Life”. In it, I defended the notion that extreme forms of human enhancement might make life more meaningful. And I did so by first looking at four authors who disagreed with me. Other times, I will start with a basic defence of my own position, and follow it up with an assessment of the various counter-arguments and objections (I did that in a more recent, yet-to-be-published paper).
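To give a flavour of what I mean (a toy illustration of my own, not an excerpt from either paper), a mapped argument might be as simple as:

(1) If P, then Q. (Premise)
(2) P. (Premise)
(3) Therefore, Q. (From 1 and 2)

The paper then becomes a guided tour of this structure: an opening section that states (3) and clarifies the key terms, a section defending (1), a section defending (2), and a section dealing with objections to each.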

This probably sounds pretty dull and uninteresting — certainly when compared to Pinker’s tour of the brain — but I think it works well for academic writing, which often needs to be quite functional.


2. Make sure you introduce the reader to the topic and the point
A reader needs to know what it is you are writing about (the topic) and why (the point). Again, this seems like pretty banal and uninteresting advice, but it’s super-important, and it’s really interesting to see why. Read the following passage (taken from a study by the psychologists John Bransford and Marcia Johnson):

The procedure is actually quite simple. First you arrange things into different groups depending on their makeup. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities that is the next step, otherwise you are pretty well set. It is important not to overdo any particular endeavour. That is, it is better to do too few things at once than too many. In the short run this may not seem important, but complications from doing too many can easily arise. A mistake can be expensive as well. The manipulation of the appropriate mechanisms should be self-explanatory, and we need not dwell on it here. At first the whole procedure will seem complicated. Soon, however, it will become just another facet of life. It is difficult to foresee any end to the necessity for this task in the immediate future, but then one never can tell.

Didn’t make much sense, did it? Now read it again, only this time add the following topic sentence at the very start: “We need to talk about washing clothes”.

Isn’t it amazing how this one little sentence can transform an incoherent mess of words into something that actually makes sense? It is still not a paragon of clear writing, but it is vastly different. If that doesn’t convince you of the importance of telling the reader what you are writing about, then I’m not sure what will. The same goes for telling them why you are writing about it. In other words, telling them what it is you want them to get out of reading your paper. Do you want to educate them? Convince them of some conclusion? Illuminate some obscure area of research? Get them to do something different with their lives? It’s important that they know as soon as possible. Otherwise they won’t be able to see how everything you say fits together.

To be sure, there is some judgment to be exercised here. You don’t want to bludgeon the reader to death with topic sentences and constant reminders of where it is all going. They’ll be able to keep a certain amount of this detail in their heads as they read through. In a short piece (e.g. one that will take less than 20 minutes to read), one mention of the topic and the point will usually suffice (with the proviso that sometimes you might change topics and you’ll need to inform the reader of this). In longer pieces, you might want to add a few reminders to keep them on track. As a general guide, I find that I end up taking reminders out of what I’ve written rather than adding them in. This is because it takes longer to write than it does to read, and I often need to remind myself of the topic and the point as I write. But many of these reminders are unnecessary from the reader’s perspective.


3. Keep the focus on your protagonists
Everything you write will have one or more protagonists. The protagonist could be an actual person, or group of persons; or it could be an abstract concept or idea. Whatever the case may be, it is essential that you keep the reader’s focus on that protagonist throughout your discussion. They need to know what the protagonist is up to. If you constantly switch focus — without proper foreshadowing — you end up with something that is disjointed and incoherent.

Again, some concrete examples might help. Suppose I’m writing an essay about Charles Darwin and what he thought about evolution. In that case, Darwin — or, more precisely, his thinking — is my protagonist. I must keep the reader’s focus on what he thought throughout the essay. So I might start by talking about his days in Cambridge, what he was taught, and how this might have influenced his thinking. I would then move on to discuss his time on board the HMS Beagle, how he collected fossils throughout South America, and the importance of his observations in the Galapagos Islands. I would then talk about his return to England, his taking up residence in Down House in Kent, the slow maturation of his ideas, and the eventual publication of his work. As I write, I might occasionally switch focus. For instance, to fully understand his observations on the Galapagos Islands, I might need to take a paragraph explaining some of the unusual geographical features of those islands. Or, when I write about the eventual publication of his ideas, I might need to talk about Alfred Russel Wallace and his independent discovery of the principle of natural selection. These divagations would be perfectly acceptable; the important thing would be to bring the focus back to Darwin soon afterwards.

A more abstract example might be an essay on the concept of justice. In this case, justice itself is my protagonist. I must keep the reader focussed on its meaning, importance and implications. So I could start with a basic definition, talking about the role of justice in shaping political and social institutions. I could then divide justice up into different sub-concepts (distributive justice/corrective justice) and talk about them for a while. I might occasionally switch focus to a particular thinker and what he or she thought about justice. For example, I might talk about John Rawls and his concept of “justice as fairness”. This could involve a couple of paragraphs about Rawls as a person, how he developed his concept, and its influence on contemporary political thinking. This switch to a different protagonist would be fine, so long as it was foreshadowed (e.g. “The 20th century philosopher John Rawls had some interesting ideas about justice. Let’s talk about him for a bit”), and so long as the focus switched back to the concept itself once the discussion of Rawls reached its natural endpoint.

Keeping the protagonists front and centre in your prose is essential to coherent writing. To do it effectively, you must have some consistent way of referring to them. One of the worst mistakes you can make is to indulge in the sin of elegant variation, i.e. constantly coming up with new ways of referring to an old protagonist. For example, referring to Rawls as “the Harvard sage” or the “bespectacled justice-fetishist” or whatever. Some occasional variation is nice, but too much of it is confusing. You don’t want the reader pausing every few minutes to figure out if you are still talking about the same thing.

I have to say, elegant variation is one of the biggest flaws I see among student essays, particularly those written by better students. They are often taught that variation is the hallmark of sophisticated prose; that repetitive use of the same word evinces an underdeveloped vocabulary. This is wrong. The goal of written communication is not to impress the reader with your verbosity; it is to be understood.


4. Understand how coherence relations work
The final tip is the most technical. As David Hume noted, there are a few basic types of relationship that can exist between different ideas (resemblance, contiguity and cause-and-effect). We can call these coherence relations. When writing, it is important to use these basic types of relationship to knit your ideas together. The easiest way to do this is to use connectives: particular words or strings of words that explicitly signal which type of relationship exists.

Pinker identifies four types of coherence relation: the three Humean ones, and an additional type he calls attribution. In one of the most useful sections of his book, he goes through each of these relations, giving examples and explaining how they work. I’ll do the same now.

Let’s start with resemblance relations. The name is a little bit misleading because it doesn’t merely cover situations in which one idea resembles another; it also covers situations in which one idea differs from another, or clarifies or generalises another. Here’s a list of the most common types of resemblance relation:

Similarity: Shows how one idea is similar to another, e.g. “Darwin’s theory of evolution was like that of Alfred Russell Wallace.” A similarity relation is commonly signalled by the use of and, similarly, likewise and too.
Contrast: Shows how one idea differs from another, e.g. “Hobbes conceived of the state of nature as a war of all against all. Rousseau had a much rosier view.” A contrast relation is commonly signalled by the use of but, in contrast, on the other hand, and alternatively.
Elaboration: Describes something in a generic way first, and then in specific detail, e.g. “Justice is about fairness. It is about making sure that everybody gets an equal share of public resources.” Elaboration is commonly signalled by the use of a colon (:), that is, in other words, which is to say, also, furthermore, in addition, notice that, and which.
Exemplification: Starts with a generalisation and then gives one or more examples, e.g. “Free will is a deeply contested concept. There are as many different theories of free will as there are days of the week: agent causalist theories, event-causal libertarianist theories, compatibilist and semi-compatibilist theories, illusionist theories, hard-determinist theories and so on.” Exemplification is commonly signalled by the use of for example, for instance, such as, including and a colon (:).
Generalisation: Starts with a specific example and then gives a general rule, e.g. “There are as many different theories of free will as there are days of the week: agent causalist theories, event-causal libertarianist theories, compatibilist and semi-compatibilist theories, illusionist theories, hard-determinist theories and so on. This shows that free will is a deeply contested concept.” Generalisation is commonly signalled by in general, and more generally.
Exception - exception first: Gives an exception first and then gives the general rule, e.g. “David Hume was good-natured and witty. But philosophers are usually a sour bunch.” This is commonly signalled by however, on the other hand, and then there is.
Exception - generalisation first: Gives the generalisation first and then gives the exception, e.g. “Philosophers are usually a sour bunch. But David Hume was good-natured and witty.” This is commonly signalled by nonetheless, nevertheless, and still.

In my experience, resemblance relations are most common in academic writing. This is because academic writing typically talks about the relationships between abstract concepts and ideas, or between conclusions and premises and so on. That said, it sometimes talks about real people and real events. When it does, the other kinds of coherence relation are relevant.

Contiguity relations show how different events are related to one another in space and time. There are really only two forms this can take:

Sequence - before-and-after: Says that one thing happened and then another thing happened afterwards, e.g. “Darwin went on a five-year voyage on the HMS Beagle. He then came home and developed his theory of evolution.” This type of sequence is commonly signalled by and, before, and then.
Sequence - after-and-before: Says that one thing happened and, before that, another thing happened, e.g. “Darwin developed his theory of evolution while living in Down House in Kent. Before that, he had been on a five-year voyage on the HMS Beagle.” This type of sequence is commonly signalled by after, once, while and when.

Although both of these sequences are acceptable, human beings tend to follow things better if they are written in their natural sequence (i.e. if you describe them in the order in which they happened). That’s not to say that reverse-ordering should be avoided — sometimes it can cast an interesting light on a topic — but it should be used with discernment.

Then, we have relations of cause-and-effect. These are common in scientific and historical discussions, where you are trying to explain why things happened the way they did. There are four types of cause-and-effect relation:


Result (cause-effect): Introduces an explanatory principle or rule, then says what follows from that rule, e.g. “David Hume was living in an era of religious intolerance; that’s why he never published his Dialogues Concerning Natural Religion during his lifetime.” This type of relation is commonly signalled by and, as a result, therefore, and so.
Explanation (effect-cause): States what happened first, then introduces the explanation, e.g. “The Soviet Union collapsed in 1991. This was because of internal corruption and decay.” This type of relation is commonly signalled by because, since, and owing to.
Violated expectation (preventer-effect): Used when the cause prevents something from happening that would otherwise have happened, e.g. “Darwin would never have published his theory were it not for Huxley’s intervention.” This is commonly signalled by but, while, however, nonetheless, and yet.
Failed prevention (effect-preventer): Used when the cause fails to prevent something from happening, e.g. “Darwin published his theory, despite his concerns about the religious backlash.” This is commonly signalled by despite and even though.


This brings us to the final category of coherence relation, which has only one member:

Attribution: Used when you want to attribute an idea or action or belief (or whatever) to a particular agent or individual, e.g. “Hume thought that there was no logical connection between the fact that the sun rose yesterday, and the fact that it would rise again tomorrow.” This is commonly signalled by according to, or X stated that.



Attribution is important when one wants to distinguish between who believes what and who did what. It is particularly useful when you want to distinguish between what you, as the writer, believe and what someone else believes.

These coherence relations can be summarised as follows:

Resemblance: similarity; contrast; elaboration; exemplification; generalisation; exception (exception first); exception (generalisation first).
Contiguity: sequence (before-and-after); sequence (after-and-before).
Cause-and-effect: result (cause-effect); explanation (effect-cause); violated expectation (preventer-effect); failed prevention (effect-preventer).
Attribution: attribution.

One thing should be stated before concluding: you don’t always have to use connectives to signal the existence of a coherence relation. Indeed, too much signalling can make your writing seem awkward and laboured. You need to exercise some judgment. When is the relationship between two sentences or paragraphs clear, and when is it not? Put in the connectives whenever it seems unclear. This, incidentally, is why re-reading and re-drafting are essential to good writing. If you don’t put yourself in the shoes of the reader — or get others to play this role for you — you won’t be able to get the mix of explicit and implicit signalling right.


5. Conclusion
So that’s it. Four tips for improving the coherence of one’s writing. To briefly recap:

1. Adopt a sensible overarching structure: Make your point in a logical, easy-to-follow fashion. Adopting spatial or temporal metaphors can help you to do this, e.g. imagining your argument as something with a visible structure.
2. Introduce the reader to the topic and the point: Make sure they know what you are talking about and why you are talking about it.
3. Help the reader keep track of the protagonists: Always be mindful of the person, concept or argument you are discussing. Make sure you keep the reader focused on that person, concept or argument. Avoid elegant variation.
4. Understand how coherence relations work: Be aware of how the ideas, concepts, agents, or events you are discussing relate to one another. Make sure the reader can follow those relations, either explicitly (through connective phrases) or implicitly (by good paragraph and sentence structuring).

* This is “fussily prescriptive” because it is a pseudo-rule. From my limited research, it seems that no one knows where the “rule” came from, and it is silly to insist on it because breaking it doesn’t hinder one’s ability to communicate.

Monday, October 6, 2014

Sousveillance and Surveillance: What kind of future do we want?



Jeremy Bentham’s panopticon is the classic symbol of authoritarianism. Bentham, a revolutionary philosopher and social theorist, adapted the idea from his brother Samuel. The panopticon was a design for a prison: a single central watchtower, surrounded by a ring of cells. From the watchtower, a guard could surveil every prisoner, whilst at the same time being concealed from their view. The guard could be on duty or not. Either way, the prisoners would never know whether they were being watched. This uncertainty would keep them in check:

The building circular—A cage, glazed—a glass lantern about the Size of Ranelagh—The prisoners in their cells, occupying the circumference—The officers in the centre. By blinds and other contrivances, the inspectors concealed […] from the observation of the prisoners: hence the sentiment of a sort of omnipresence—The whole circuit reviewable with little, or if necessary without any, change of place. One station in the inspection part affording the most perfect view of every cell. 
(Bentham, Proposal for a New and Less Expensive mode of Employing and Reforming Convicts, 1798)

Bentham’s panopticon was never built, though certain real-life prisons got pretty close (e.g. the Presidio Modelo in Cuba). But many see echoes of the panopticon in the modern surveillance state. We are like the prisoners. Our world has become flooded with devices capable of recording and monitoring our personal data, and feeding them to various authorities (governments and corporations). And although nobody may care about our personal data at any given time, we can never be sure we are not being watched.

This is all pretty banal and obvious if you’ve been paying attention over the past few years. But as many futurists and technophiles point out, there is one critical difference between the panopticon and our current predicament. In the panopticon, the information flows in one direction only: from the watched to the watchers. In the modern world, the information can flow in many directions: we too can be the watchers.

But is this a good thing? Should this ability to surveil and monitor everything be embraced or resisted? Those are the questions I wish to pursue in this post. I do so by focusing, in particular, on the writings of sousveillance advocate Steve Mann. On a previous occasion, I analysed and evaluated Mann’s case for the sousveillant society. Today, I want to do something slightly less ambitious: I want to review the possible future societies that are open to us depending on our attitude toward surveillance technologies. Following Mann, I’ll identify four possibilities and briefly comment on their desirability.


1. The Surveillance-Sousveillance Distinction
The four possible futures arise from the intersection of two competing approaches to “veillance” technologies. These are the surveillant and sousveillant approaches, respectively. You may be familiar with the distinction already, or have a decent enough grasp of etymology to get the gist of it (“sur” means “from above”; “sous” means “from below”). Nevertheless, Mann offers a couple of competing sets of definitions in his work, and it’s worth talking about them both.

The first set of definitions focuses on the role of authority in the use of veillance technologies. It defines surveillance as any monitoring that is undertaken by a person or entity in some position of authority (i.e. to whom we are inclined/obliged to defer). The authority could be legal or social or personal in nature. This is the kind of monitoring undertaken by governments, intelligence agencies and big corporations like Facebook and Google (since they both have a kind of “social” authority). By way of contrast, sousveillance is any monitoring undertaken by persons and entities that are not in a position of authority. This is the kind of citizen-to-authority or peer-to-peer monitoring that is now becoming more common.

The second set of definitions shifts the focus away from “authorities” and onto activities and their participants. It defines surveillance as the monitoring of an activity by a person who is not a participant in that activity. This, once again, is the kind of monitoring undertaken by governments or businesses: they monitor protests or shopping activities without themselves engaging in those behaviours (though, of course, people employed in governments and businesses could be participants in other activities in which they themselves are surveilled). In contrast, sousveillance is the monitoring of an activity by actual participants in that activity.

I’m not sure which of these sets is preferable, though I incline toward the first. The problem with the first one is that it relies on the slightly nebulous and contested concept of an “authority”. Is a small, local shop-owner with a CCTV camera in a position of authority over the rich businessperson who buys cigarettes in his store? Or does the power disparity turn what might seem in the first instance to be a case of surveillance into one of sousveillance? Maybe this is a silly example but it does help to illustrate some of the problems involved with identifying who the authorities are.

The second set of definitions has the advantage of moving away from the concept of authority and focusing on the less controversial concepts of “activities” and their “participants”. Still, I wonder whether that advantage is outweighed by other costs. If we stuck strictly to the participant/non-participant distinction, then citizen-to-authority monitoring would seem to count as surveillance, not sousveillance. For example, protestors who record the behaviour of security forces would be surveilling them, not sousveilling them. You might think that’s fine — they’re just words after all — but I worry that it misses something of the true value of the sousveillance concept.

That’s why I tend to prefer the first set of definitions.


2. Four Types of Veillance Society
And the definitions matter because, as noted above, the surveillance-sousveillance distinction is critical to understanding the possible futures that are open to us. You have to imagine that surveillance and sousveillance represent two different dimensions along which future societies can vary. A society can have competing attitudes toward both surveillance and sousveillance. That is: it can reject both, embrace both, or embrace one and reject the other. The result is four possible futures, which can be represented by the following two-by-two matrix:

                          Embrace sousveillance    Resist sousveillance
Embrace surveillance      Equiveillance            McVeillance
Resist surveillance       Univeillance             Counterveillance




(Note: Mann adopts a slightly more complicated model in his work. He imagines this more like a coordinate plane with the coordinates representing the number of people or entities engaging in different types of veillance. There may be some value to this model, but it is needlessly complex for my purposes. Hence the more straightforward two-by-two matrix).

Let’s consider these four possible futures in more detail:


The Equiveillance Society: This is a society which embraces both surveillance and sousveillance. The authorities can watch over us with their machines of loving grace and we can watch over them with our smartphones, smartwatches and other smart devices (though there are questions to be asked about who really controls those technologies).
The Univeillance Society: This is a society which embraces sousveillance but resists surveillance. It’s not quite clear why it is called univeillance (except for the fact that it embraces one kind of veillance only, but then that would imply that a society that embraced surveillance only should have the same name, which it doesn’t). But the basic idea is that we accept all forms of peer-to-peer monitoring, but try to cut out monitoring by authorities.
The McVeillance Society: This is a society that embraces surveillance but resists sousveillance. Interestingly enough, this is happening already. There are a number of businesses that use surveillance technologies but try to prevent their customers or other ordinary citizens from using sousveillance technologies (like smartphone cameras). For example, in Ireland, the Dublin Docklands Development Authority tries to prevent photographs being taken in the streets of the little enclave of the city that it controls (if you are ever there, it seems like the streets are just part of the ordinary public highways, but in reality they are privately owned). The name “McVeillance” comes from Mann’s own experiences with McDonalds (which you can read about here).
The Counterveillance Society: This is a society that resists both types of veillance technology. Again, we see signs of this in the modern world. People try to avoid being caught by police speed cameras (and there are websites set up to assist this), having their important life events recognised by big data, or having their photographs taken on nights out.


The modern world is in a state of flux. It is only recently that surveillance and sousveillance technologies have become cheap and readily available. As a result, we are lurching between these possibilities. Still, it is worth asking: what do we want the future to look like?


3. So what future should we aim for?
The answer to that question is difficult. Each society has its appeal to different interests and values. And it is possible that we could subdivide society into different “regions”, with different possibilities prevailing in different regions. Also, it might be more worthwhile to ask which of the four is really possible. Though one can certainly imagine circumstances in which each becomes a genuine reality, that’s not saying a whole lot. Imagination is often unconstrained by probability. In considering what is desirable, we might be better off constraining ourselves to what is probable.

I suspect no one really wants to live in the McVeillant society. Even those businesses and authorities that want to stamp out sousveillance probably wouldn’t tolerate the same policy being imposed on them. But I think it is a genuine possibility and one we should guard against. I suspect some people would like to live in the univeillant society, but it’s not a real possibility. There will always be some kinds of social authority and they will always try to monitor and control behaviour. Similarly, the counterveillant society will hold appeal for some, but I’m not sure about its likelihood. There are some resistive technologies out there, but can they cope with everything? Will we all have to walk around in movable Faraday cages to block out all EM-fields?

That, of course, leaves us with the equiveillant society. I tend to think this has the best combination of desirability and probability, but I certainly wouldn’t like to give the impression that it would be a panacea. As I noted in a previous series of posts, the widespread availability of sousveillant technologies is unlikely to solve issues of power imbalance or social injustice. Still, it could help, and focusing on carefully engineering that possible future would probably be better than sleepwalking into the McVeillant society. What do you think?