The real nanny state.
Something we haven't had previously is one of those in favour of "default on" filtering coming out and properly responding to an article critiquing the entire approach. The Graun today gave space to John Carr, secretary of the Children's Charities' Coalition on Internet Safety, to reply to Laurie Penny's piece for the paper at the start of the year. Penny's opening paragraph imagined someone being phoned by their ISP and asked whether they still wanted to be able to see dripping quims, ravaged buttholes and terrorists beheading infidels (I paraphrase slightly). That isn't how the filtering is being implemented, though it is an accurate picture of how some, finding their access restricted, will be contacting their ISP's help desk. That aside, her piece was a decent summation of how we got where we are and how closely the whole approach is related to outright censorship.
Carr is of course having none of it. All the new approach is doing is helping parents, "making it a great deal easier ... to use filters if they want them". The decision is entirely theirs, not the state's. Except, by putting such pressure on ISPs to change their policies and implement "default on" filtering, the state has come remarkably close to making the decision for them in my eyes. If the decision really were entirely theirs, as adults, then surely the default should be off? Only those of age can take out a new account giving them internet access, so where's the harm in making such filters easily available but off by default? Indeed, isn't the very fact they are on unless you choose to switch them off a wonderful example of providing a false sense of security? Making parents believe the net will be safe for their children thanks to the new filters, meaning they don't have to do anything themselves to protect them, seems to this layman even more irresponsible than the situation we had before.
Still, Carr says ISPs are just catching up with the mobile firms that have had their filters turned on since 2005. What he doesn't mention is that only those with a credit card (not a debit card) can turn the filtering off online, which excludes a decent chunk of the population. Those without one who wish to have it turned off need to find their provider's local store and bring ID to prove they can look at mucky pictures on their smartphone if they so wish. It's a great example of infantilisation, but one we're apparently prepared to live with. Carr also dismisses Penny's claim that all we've heard about to do with filtering is pornography, or indeed child pornography, which the likes of Claire Perry wilfully conflated in a successful campaign for something to be done. How would a 9-year-old sleep after viewing a double chainsaw murder, he asks? Considering that such extreme violence is even harder to stumble upon than pornography without specifically searching for it, this seems a rather weak argument. A better question is how many 9-year-olds would be seeking out gore videos, the likely answer being hardly any. Carr next mentions "women-hating violence", such as three men simultaneously beating and raping a woman. Unless I've missed something, one of the few things yet to be recorded and posted online is the actual rape of a woman, so one presumes Carr is talking about a scene cut out of a film. Again, how likely is a child, especially one as young as 9, to find something of that nature without looking for it?
This distracts in any case from the main arguments against, which are that the filters should not be on by default, and that if we must have filters, they should be a good deal smarter than the ones we seem to have been stuck with. They should also be as transparent as possible: mocked as O2's short-lived filter checker was (this blog was blacklisted on the under-12 filter, along with much of the rest of the internet), it was the only example of an ISP being clear with those supposedly responsible about what was actually being blocked. Just as adults can see what ratings films and games have been given and make their own decision as to what to allow their children to see or play, why should there not be something similar to O2's checker, where you can put in a URL and see its classification? Perhaps it's because, as Cory Doctorow wrote, such lists are considered to be trade secrets. Carr says the filters will get better and errors can be easily rectified, but will they be? Mobile operators are still incredibly reticent about their blocking practices, which hardly inspires confidence that ISPs will be any different.
Carr finishes by saying that parents shouldn't "feel obliged to provide unrestricted access to all its horrors", but this obfuscates the issue. Filters and censorware have long been available; it was simply up to parents to decide for themselves whether to use them. Making filters available as they have been, just not demanding they be on by default, would fulfil Carr's argument. Instead we've reached a position where not just pornography but "extremist" material, file-sharing sites and everything in between is blacklisted by default, something you don't have to be a conspiracy nut to think is beneficial to both government and big business. This has all been achieved through the age-old method of asking "won't someone think of the children?", the default fallback argument of the censorious everywhere. Combined with the child porn angle pursued by the Mail, it's no surprise the ISPs and Google gave in, or at least gave the impression they had. What ought to be surprising is that a party which has constantly derided the "nanny state" and urges personal responsibility at every turn has been the one to do so.