Merry Blogmas!

December 19, 2014

 

Once again Christmas dinner at Thomas Coram was just about perfect!

Giles Wilson is a lovely man. I met him a few weeks ago at New Broadcasting House (my second favourite place in the world). He said cheerfully, “How’s year 26 going?”

I replied “actually, it’s been a bit tricky…”.

Which it has. But if you happen to be reading this, you are one of the people who have helped me through another tricky year.

So I’d like to thank you, and wish you and your dependents a Merry Christmas and a happy and prosperous New Year.

Never give up!


The ethics of digital #6: Facebook and death threats

December 5, 2014

GCHQ’s budget is something less than 2 billion pounds.

Exactly how much less is hard to tell for obvious reasons.

Facebook’s operating budget is 5 billion dollars.

So here are two large, well funded organisations both of which are doing things with content and user data on the Internet.

When the story broke about the intelligence committee claiming that threats to Lee Rigby had been published on Facebook six months before his murder, I posted a link about it on my Twitter stream. The first reaction, from Nic Ferrier, was:

“But surely gchq have their hooks in their so why didn’t they catch them?”

Whereas my first reaction was “why didn’t Facebook pick this up?”

Which probably tells you more about the mindset and biases of the two individuals talking than it does about the two organisations.

My remark that I thought Facebook had some responsibility for what’s said on its site triggered what young people these days call a “flame war”.

It may be that my suggestion that Facebook could moderate their users’ content more is impracticable.

But what surprised me more than anything was the defeatism of the people in the conversation.

Are we really saying that nothing could be done to try and stop this happening? That we don’t even want to try?

With all the big data and all the big brains in technology we don’t even want to try to do this better?

I’ll bet there’s someone clever in Facebook working on it right now…


The ethics of digital #5: moderation

October 24, 2014

I’ve said this before but it bears repeating.

As #gamergate continues to garner column inches and the topic of preventing abuse online gathers ever more heat, it’s worth saying that moderation as an idea and a set of tools has been around since the very first online community.

I used to be in charge of the BBC’s moderation service. In my job as host on the Internet blog I sometimes have to gently remind users to stay within the House Rules. It’s business as usual. The BBC spends money on it, the technical systems are good, we take it seriously and examples of trolling are rare and dealt with.

So why can’t other platforms or publishers do this? It can’t be money, since most of them are far better funded than the BBC.

Like most things in life, it boils down to whether you care or not. If you think free speech and having a platform where users can say anything they like is more important than people being abused, then you’re not going to be inclined to moderate properly.

It can be done, though, if you want to…


The ethics of digital: round up #4

October 10, 2014

Leigh Alexander has published a useful list of “ethical concerns in video games”.

Ideology and taste are a toxic mixture. “You like different things from me, therefore you must be bad/corrupt”. Any reasonable ethical framework has to include some sense of tolerance for others’ tastes. Abusing people because they like different things obscures real ethical problems that ought (in theory) to be easy to agree on (or at least discuss without resorting to abuse).

Kathy Sierra has written a heartbreaking history of the abuse she has suffered online, reproduced in its entirety with her permission by Wired Magazine. The much abused word “freedom” seems to be a trump card for some, a word which can excuse any other kind of bad behaviour or ethical failing.

Here’s another angle on the same subject: abuse and control online: “Everybody Watches, Nobody Sees: How Black Women Disrupt Surveillance Theory” by Sydette Harry in ModelViewCulture.

“Sousveillance”: in a world where everyone is watched by the authorities, citizens should use the same tools to watch them back and hold them to account. The trouble with sousveillance is that it implies the citizen has enough status, power and access to the tools to start with. What if you are so low down the pecking order that you are at a disadvantage before you even start?:

What we have decided to call surveillance is actually a constant interplay of various forms of monitoring that have existed and focused on black people, and specifically black women, long before cameras were around, let alone ubiquitous. Surveillance technology is a dissemination of cultural standards of monitoring. Our picture of surveillance needs to factor in not just tech developments, but the cultural standards that have bred surveillance, especially towards black culture, as part and parcel in our world.

Elahi can use the intrusion into his privacy to further his work. But if all you want to do is have space to mind your own business, handle your family issues in private, or exist without interference, sousveillance isn’t an answer… it’s a reminder of defeat. If what you want is representation as you are, what do you do when the reality is ignored for the easy win, even when it leaves you worse than before?

While I was putting this post together I came across this (again via Leigh Alexander): “Why Nerd Culture Must Die” by Pete Warden. It makes this post redundant, but I’m going to publish it anyway…

“We’re still behaving like the rebel alliance, but now we’re the Empire.”


The ethics of digital: round up #3

September 17, 2014

I’m not a gamer. Scrabble is about my limit. And I mean real Scrabble with real plastic pieces you can pick up and hold in your hand. But I do follow some people from the games world on Twitter, so I was half aware of “#gamergate”.

For a summary of #gamergate, this Forbes article is one place to start (thanks to Steve Bowbrick). Zoe Quinn who was unfortunate enough to be at the centre of it all, gives her take here: “Five Things I Learned As The Internet’s Most Hated Person”

This is an unpleasant example of what happens when a closed community gets challenged by outsiders, and then turns on them. How do we stop young men behaving like this?

Google’s consultation about the right to be forgotten continues its European tour. Here’s an interesting example of a Google takedown from the Worcester News: “Dan Roche’s plea to Google about the art he’d rather you forgot”

It’s easy to see this as frivolous. But search removes one sort of context and adds a different one. Isn’t the implication of any top Google search result “this is the most important thing about this subject right now?”. In this case that’s clearly wrong.

Lucy Bernholz focuses on “Apple’s Watch and the Ethics of Data”:

“…letting the data be used for “medical research” without specifying by whom and under what conditions doesn’t protect you in the least.”

Thanks to Martin Geddes I found this article: “The future of the internet is decentralised”. Right at the end there’s this:

Decentralization initiatives, by their very nature, do not favor any one application over another. There is no authority to dictate what should be published and what should not. The network, being autonomous, can be used for any purpose.

That can include jihadi forums, revenge and child pornography sites, or neo-Nazi propaganda. Typically with offending websites, law enforcement find out where the server running it is located and seize it by sending a legal demand to the hosting company. On a decentralized network, such actions become impossible because there is no server to target.

“It’s just not possible,” Irvine says. “Terror things, child porn—the real evil side of society could exist there. They’re going to be completely protected.”

Those campaigning for digital rights, however, think that the trade-off is worthwhile.

“The difference is that the average user will also have the ability to protect themselves from losing information or their privacy,” says Danny O’Brien, international director of the Electronic Frontier Foundation, “and there are far more average users than there are political dissidents or horrible criminals.”

I don’t think the trade-off is worthwhile. I’m also getting a strong sense of déjà vu. But it does point once again to one of the key ethical dilemmas in digital. How much freedom is too much?


“Who Owns The Future?” by Jaron Lanier

September 16, 2014

So, I tried to read Adam Smith’s “Wealth of Nations”. I got to page 160 and then gave up. I didn’t understand most of it, and what little I did understand I disagreed with.

A relief then, to turn to a different book; “Who Owns The Future” by Jaron Lanier.

I’d recommend this book to anyone interested in the current state of tech and by extension, the state of the world. It’s wise, humane, intelligent, compassionate and comprehensible.

Lanier’s central point is simple: instead of giving away our data for free to others so they can amass huge concentrations of wealth and power, why don’t they pay us for our data instead?

But the real joy of this book is the way Lanier nails every bad idea and pernicious belief coming out of Silicon Valley. Lanier is a computer scientist and a techno-optimist. He’s not an outsider just being contrary for the sake of it. It’s such a relief to hear someone on the inside critique these barmy ideas.

There are a thousand great quotes in “Who Owns The Future”; here’s just one:

There’s a romance in that future, especially for hackers… it comes up in science fiction constantly: the hacker as hero, outwitting the villain’s computer security. But what a crummy world that would be, where screwing up something online is the last chance at being human and free. A good world is where there’s meaning outside of sabotage.

Buy it and read it!



Cornish landscape

August 27, 2014

Possibly an abandoned WW2 installation.


The ethics of digital: round up #2

August 18, 2014

So are Facebook and Google publishers?

They’ve always said they’re not.

But when so much of people’s information is being curated and served up by them don’t they become something as near as makes no difference to a publisher?

And if Google and Facebook control so much of the information the public sees, then do they have any obligations to the public as well as to their advertisers?

For example, if there is a very important news event happening somewhere in the world, and their algorithms downplay it in their users’ feeds and search results, isn’t that like a newspaper relegating a front-page story to page 24?

Some thoughts from other people:

Zeynep Tufekci on Medium: “Algorithms have consequences”

David Holmes on Pando: “If Twitter implements a Facebook style algorithm you may not hear about the next Ferguson”

According to Aarti Shahani in this article for NPR, Google does have a newsroom: “In Google Newsroom, Brazil Defeat Is Not A Headline”:

If you do a Google search on the World Cup game in which Germany slaughtered Brazil 7-1, the top results will say things like “destroy,” “defeat,” and “humiliate.”

But Google itself is choosing to steer clear of negative terms. The company has created an experimental newsroom in San Francisco to monitor the World Cup, and turn popular search results into viral content. And they’ve got a clear editorial bias…

…I ask the team why they wouldn’t use a negative headline. Many headlines are negative.

“We’re also quite keen not to rub salt into the wounds,” producer Sam Clohesy says, “and a negative story about Brazil won’t necessarily get a lot of traction in social.”

Mobile marketing expert Rakesh Agrawal, CEO of reDesign mobile, says that’s just generally true. “People on social networks like Twitter and Facebook — they generally tend to share happy thoughts. If my son had an A in math today, I’m going to share that. But if my son got an F in math, that’s generally not something you’re going to see on social media.”

In old-school newsrooms, the saying goes: if it bleeds, it leads. Because this new newsroom is focused on getting content onto everyone’s smartphone, Agrawal says, editors may have another bias: to comb through the big data in search of happy thoughts.”

Reddit has asked its users to “adhere to the same standards of behaviour online that you follow in real life”.

Although there does seem to be a problem if, as in real life, you try and fast forward through the boring adverts on your catch up TV:

Until an administrator changed the advice in response to questions from the Guardian, however, one rule also encouraged users to “link to the direct version of a media file when the page it was found on doesn’t add any value.”

That practice, known as “hotlinking”, is a common complaint of artists whose work regularly appears on Reddit, since it can send thousands of users to their site without a single one seeing an image credit or advertisement. The rule now only encourages hotlinking “if the page it was found on isn’t the creator’s and doesn’t add additional information or context”.

P.S. Google are looking for public comment and evidence about the right to be forgotten…


The ethics of digital: round up

July 20, 2014

What’s the right way to behave online?

If digital behaviour is different from real world behaviour what are the new rules?

The big tech giants that dominate our lives are running around trying to find answers. When they get it wrong, it doesn’t look good.

In June Google hired an ethics adviser, Luciano Floridi. He argued in the Guardian for some “bold ideas”:

Most experts agree that current European data protection law is outdated. I see it as the expression of a time when there was a clear divide between online and offline. Today, that divide is being bridged in favour of the “onlife”, a mixture of analogue and digital, physical and virtual experiences, like driving a car following the instructions of a navigator.

The car metaphor is a dead giveaway. A driverless car? Like one of Google’s?

In July Google’s approach to implementing the EU’s Right to be Forgotten ruling became headline news courtesy of the BBC’s Robert Peston. The most interesting piece I read about this (apart from Robert’s own) was Andrew Orlowski’s new angle in The Register (“Google de-listing of BBC article ‘broke UK and Euro public interest laws'”).

Then there’s Facebook’s “Mood Manipulation” experiment. There was a lot of noise about this, but Jaron Lanier in the New York Times offered some humanity and humility.

All of us engaged in research over networks must commit to finding a way to modernize the process of informed consent.  Instead of lowering our standards to the level of unread click-through agreements, let’s raise the standards for everyone.

Duncan J Watts in “Lessons Learned from the Facebook study” said that the experiment may not have been as bad as the noise suggested but:

What we need is an ethics-review process for human-subject research designed explicitly for web-based research…

(Credit to @dianecoyle1859 for this link)

David Banks has some intelligent thoughts and practical suggestions on the ethics of wearable technology. I’d disagree with his suggestion that police forces should be banned from using Google Glass. Law enforcement might be the only place Google Glass serves a useful purpose, rather than just being an annoyance.

I couldn’t write this without mentioning Model View Culture, the online magazine set up by @Shanley. It’s a must-read: a combative critique of the values, practices and morality of Silicon Valley. I don’t agree with all of it, but it’s an essential antidote to the complacency and lack of self-awareness of too much of the tech scene.

The technology giants are relatively young (even “immature”?). When you’re an adolescent you haven’t worked out the right thing to do yet. The BBC, like many other mainstream media organisations, has been trying to answer these questions for a lot longer. David Jordan, Director of BBC Values and Standards (Disclaimer: I used to work in that team), recently outlined the BBC’s guidance on removing content online (“Should the BBC unpublish any of its content online?”). It’s rational, nuanced, sensible and mature.

Bold ideas? I’d rather have some old ones.

 


“Dad, can you lend me a fiver?” in a world without cash

June 20, 2014

“Dad, can you lend me a fiver?”

In a world with cash:

“Yes of course just let me dig through my loose change, there’s always some hanging about in the drawer”

In a world without cash:

“Oh, err, I’ll have to switch the broadband on… hang on a minute the wifi’s down, err… what about my phone… err, the Bluetooth isn’t working again I can’t sync our accounts together… err… where’s that contactless payment card,… err… it won’t let me transfer anything we must be over our limit… err… sorry… ”

“Contactless payment” assumes we all have more money than we need. Most of us are not in that happy position.

Cash is good!

Now, where did I put that tenner? I must have dropped it somewhere…
