Archive for the 'ethics' Category

Links: “today’s Internet is a shanty town next to a festering garbage dump”

October 27, 2016

1. I don’t believe the robots are coming.

Because as soon as they got outside London they wouldn’t get a decent Wi-Fi signal and would grind to a halt.

We are sold a shiny and/or scary vision of an automated future. Martin Geddes (@martingeddes) explains why, on current infrastructure, it won’t happen:

“The Internet needs a security and performance upgrade”:

“The Internet’s security model is completely unsuitable for these connected devices. The default is that anyone can route to anyone, and that all routes are always active. This is completely backwards.”

2. “The Rise of Dating App Fatigue” by Julie Beck (@juliebeck) is a fascinating study of how people have become disillusioned with the likes of Tinder.

I haven’t been dating for a long time (in fact, in those days we called it “going out”), but I thought that Julie’s analysis could apply more widely. Again we are promised a shiny world of efficiency. Often the reality is clunky and poor.

Is there a built-in conflict of interest? App makers want you to carry on using their app, so it’s not in their interest for you to find what you want (whatever that is), because you would then stop using the app:

“So if there’s a fundamental problem with dating apps, one baked into their very nature, it is this: They facilitate our culture’s worst impulses for efficiency in the arena where we most need to resist those impulses.  Research has shown that people who you aren’t necessarily attracted to at first sight, can become attractive to you over time, as you get to know them better. Evaluating someone’s fitness as a partner within the span of a single date—or a single swipe—eliminates this possibility.”

3. “Are Progressives Being Played By WikiLeaks And Julian Assange?” asks Katherine Cross.

Perhaps another sign of a current mood of increasing skepticism about the digital world.

At what point did Assange change from hero to villain?

“In the case of WikiLeaks, this includes especially egregious cases like leaking the name of a Saudi man arrested for being gay, or the names of rape victims in the Kingdom. In the wake of the failed Turkish coup, meanwhile, Assange also recklessly published troves of information on nearly every woman in the country, as well as potentially outing anti-government demonstrators and rank and file government party voters—hardly wise in the wake of a violent coup attempt. The leak was supposed to be Premier Recep Erdogan’s private emails, exposing more of his increasingly authoritarian government; instead, the dump contained nothing from Erdogan and reams of sensitive information on private Turkish citizens.”

(“You can judge a nation, and how successful it will be, based on how it treats its women and its girls.” @BarackObama )


#Facebook / links

September 25, 2016

1. An entertaining summary of the story so far on Facebook’s news feed and algorithmic biases from the PBS ideas channel

2. “Facebook ‘overestimated’ video viewing time” from BBC News

3. “Low-income families face eviction as building ‘rebrands’ for Facebook workers” from the Guardian

4. “Facebook and Israel to work to monitor posts that incite violence” from the Guardian

Gulp… so Facebook is going to get involved in policing one of the most contentious conflicts in the world… good luck with that one…

5. Interesting article from Tanya Kant which contains this paragraph:

In an in-depth qualitative study of 36 web users, upon seeing advertising for weight loss products on Facebook some female users reported that they assumed that Facebook had profiled them as overweight or fitness-oriented. In fact, these weight loss ads were delivered generically to women aged 24-30. However, because users can be unaware of the impersonal nature of some personalisation systems, such targeted ads can have a detrimental impact on how these users view themselves: to put it crudely, they must be overweight, because Facebook tells them they are.

So maybe Facebook has a lot of power… or maybe we think it has more power than it actually does…

Facebook Facebook Facebook Facebook Facebook

September 13, 2016

So Facebook can censor an iconic photograph from the Vietnam war featuring a naked child

…then uncensor it

…and apologise (privately) to a head of state

…but it can’t stop another picture of a naked girl appearing on Facebook

…and it’s being used (among other social media platforms) by inmates in American prisons to co-ordinate prison strikes

(echoes of a blog post I wrote two years ago)…

So Facebook is now a platform, a network and a media company

“Norway is a big investor in Facebook. Its $891bn sovereign wealth fund, the world’s biggest, had a stake of 0.52% in Facebook, worth $1.54bn at the start of 2016.”

So just to be clear to @Aelkus in case he’s reading…

Do I think Facebook has “undue power over information flow”?

Possibly… that’s a question for regulators…

Do I think Facebook is in danger of becoming a totalitarian control system?

Obviously not, since if it did have that kind of power, it wouldn’t be making these kinds of mistakes in public…



The ethics of digital #6: Facebook and death threats

December 5, 2014

GCHQ’s budget is something less than 2 billion pounds.

Exactly how much less is hard to tell for obvious reasons.

Facebook’s operating budget is 5 billion dollars.

So here are two large, well funded organisations both of which are doing things with content and user data on the Internet.

When the story broke that the intelligence committee claimed threats to Lee Rigby had been published on Facebook six months before his murder, I posted a link about it on my Twitter stream. The first reaction, from Nic Ferrier, was:

“But surely gchq have their hooks in their so why didn’t they catch them?”

Whereas my first reaction was “why didn’t Facebook pick this up?”

Which probably tells you more about the mindset and biases of the two individuals talking than it does about the two organisations.

My remark saying that I thought Facebook had some responsibility for what’s said on its site triggered what young people these days call a “flame war”.

It may be that my suggestion that Facebook could moderate their users’ content more is impracticable.

But what surprised me more than anything was the defeatism of the people in the conversation.

Are we really saying that nothing could be done to try and stop this happening? That we don’t even want to try?

With all the big data and all the big brains in technology we don’t even want to try to do this better?

I’ll bet there’s someone clever in Facebook working on it right now…

The ethics of digital #5: moderation

October 24, 2014

I’ve said this before but it bears repeating.

As #gamergate continues to garner column inches and the topic of preventing abuse online gathers ever more heat, it’s worth saying that moderation as an idea and a set of tools has been around since the very first online community.

I used to be in charge of the BBC’s moderation service. In my job as host on the Internet blog I sometimes have to gently remind users to stay within the House Rules. It’s business as usual. The BBC spends money on it, the technical systems are good, we take it seriously and examples of trolling are rare and dealt with.

So why can’t other platforms or publishers do this? It can’t be money, since most of them are far better funded than the BBC.

Like most things in life, it boils down to whether you care or not. If you think free speech and having a platform where users can say anything they like is more important than people being abused, then you’re not going to be inclined to moderate properly.

It can be done, though, if you want to…

The ethics of digital: round up #4

October 10, 2014

Leigh Alexander has published a useful list of “ethical concerns in video games”.

Ideology and taste are a toxic mixture. “You like different things from me, therefore you must be bad/corrupt”. Any reasonable ethical framework has to include some sense of tolerance for others’ tastes. Abusing people because they like different things obscures real ethical problems that ought (in theory) to be easy to agree on (or at least discuss without resorting to abuse).

Kathy Sierra has written a heartbreaking history of the abuse she has suffered online, reproduced in its entirety with her permission by Wired Magazine. The much abused word “freedom” seems to be a trump card for some, a word which can excuse any other kind of bad behaviour or ethical failing.

Here’s another angle on the same subject: abuse and control online: “Everybody Watches, Nobody Sees: How Black Women Disrupt Surveillance Theory” by Sydette Harry in ModelViewCulture.

“sousveillance”: in a world where everyone is watched by the authorities, citizens should use the same tools to watch the authorities back and hold them to account. The trouble with sousveillance is that it implies the citizen has enough status, power and access to the tools to start with. What if you are so low down the pecking order that you are at a disadvantage before you even start?:

What we have decided to call surveillance is actually a constant interplay of various forms of monitoring that have existed and focused on black people, and specifically black women, long before cameras were around, let alone ubiquitous. Surveillance technology is a dissemination of cultural standards of monitoring. Our picture of surveillance needs to factor in not just tech developments, but the cultural standards that have bred surveillance, especially towards black culture, as part and parcel in our world.

Elahi can use the intrusion into his privacy to further his work. But if all you want to do is have space to mind your own business, handle your family issues in private, or exist without interference, sousveillance isn’t an answer… it’s a reminder of defeat. If what you want is representation as you are, what do you do when the reality is ignored for the easy win, even when it leaves you worse than before?

While I was putting this post together I came across this (again via Leigh Alexander): “Why Nerd Culture Must Die” by Pete Warden. It makes this post redundant, but I’m going to publish it anyway…

“We’re still behaving like the rebel alliance, but now we’re the Empire.”

The ethics of digital: round up #3

September 17, 2014

I’m not a gamer. Scrabble is about my limit. And I mean real Scrabble with real plastic pieces you can pick up and hold in your hand. But I do follow some people from the games world on Twitter, so I was half aware of “#gamergate”.

For a summary of #gamergate, this Forbes article is one place to start (thanks to Steve Bowbrick). Zoe Quinn, who was unfortunate enough to be at the centre of it all, gives her take here: “Five Things I Learned As The Internet’s Most Hated Person”

This is an unpleasant example of what happens when a closed community gets challenged by outsiders, and then turns on them. How do we stop young men behaving like this?

Google’s consultation about the right to be forgotten continues its European tour. Here’s an interesting example of a Google takedown from the Worcester News: “Dan Roche’s plea to Google about the art he’d rather you forgot”

It’s easy to see this as frivolous. But search removes one sort of context and adds a different one. Isn’t the implication of any top Google search result “this is the most important thing about this subject right now”? In this case that’s clearly wrong.

Lucy Bernholz focuses on “Apple’s Watch and the Ethics of Data”:

“…letting the data be used for “medical research” without specifying by whom and under what conditions doesn’t protect you in the least.”

Thanks to Martin Geddes I found this article: “The future of the internet is decentralised”. Right at the end there’s this:

Decentralization initiatives, by their very nature, do not favor any one application over another. There is no authority to dictate what should be published and what should not. The network, being autonomous, can be used for any purpose.

That can include jihadi forums, revenge and child pornography sites, or neo-Nazi propaganda. Typically with offending websites, law enforcement find out where the server running it is located and seize it by sending a legal demand to the hosting company. On a decentralized network, such actions become impossible because there is no server to target.

“It’s just not possible,” Irvine says. “Terror things, child porn—the real evil side of society could exist there. They’re going to be completely protected.”

Those campaigning for digital rights, however, think that the trade-off is worthwhile.

“The difference is that the average user will also have the ability to protect themselves from losing information or their privacy,” says Danny O’Brien, international director of the Electronic Frontier Foundation, “and there are far more average users than there are political dissidents or horrible criminals.”

I don’t think the trade-off is worthwhile. I’m also getting a strong sense of déjà vu. But it does point once again to one of the key ethical dilemmas in digital. How much freedom is too much?

“Who Owns The Future?” by Jaron Lanier

September 16, 2014

So, I tried to read Adam Smith’s “Wealth of Nations”. I got to page 160 and then gave up. I didn’t understand most of it, and what little I did understand I disagreed with.

A relief then, to turn to a different book; “Who Owns The Future” by Jaron Lanier.

I’d recommend this book to anyone interested in the current state of tech and by extension, the state of the world. It’s wise, humane, intelligent, compassionate and comprehensible.

Lanier’s central point is simple: instead of giving away our data for free to others so they can amass huge concentrations of wealth and power, why don’t they pay us for our data instead?

But the real joy of this book is the way Lanier nails every bad idea and pernicious belief coming out of Silicon Valley. Lanier is a computer scientist and a techno-optimist. He’s not an outsider just being contrary for the sake of it. It’s such a relief to hear someone on the inside critique these barmy ideas.

There are a thousand great quotes in “Who Owns The Future”, here’s just one:

There’s a romance in that future, especially for hackers… it comes up in science fiction constantly: the hacker as hero, outwitting the villain’s computer security. But what a crummy world that would be, where screwing up something online is the last chance at being human and free. A good world is where there’s meaning outside of sabotage.

Buy it and read it!


The ethics of digital: round up #2

August 18, 2014

So are Facebook and Google publishers?

They’ve always said they’re not.

But when so much of people’s information is being curated and served up by them don’t they become something as near as makes no difference to a publisher?

And if Google and Facebook control so much of the information the public sees, then do they have any obligations to the public as well as to their advertisers?

For example, if there is a very important news event happening somewhere in the world, and their algorithms downplay it in their users’ feeds and search results, isn’t that like a newspaper relegating a front-page story to page 24?

Some thoughts from other people:

Zeynep Tufekci on Medium: “Algorithms have consequences”

David Holmes on Pando: “If Twitter implements a Facebook style algorithm you may not hear about the next Ferguson”

According to Aarti Shahani in this article for NPR, Google does have a newsroom: “In Google Newsroom, Brazil Defeat Is Not A Headline”:

If you do a Google search on the World Cup game in which Germany slaughtered Brazil 7-1, the top results will say things like “destroy,” “defeat,” and “humiliate.”

But Google itself is choosing to steer clear of negative terms. The company has created an experimental newsroom in San Francisco to monitor the World Cup, and turn popular search results into viral content. And they’ve got a clear editorial bias…

…I ask the team why they wouldn’t use a negative headline. Many headlines are negative.

“We’re also quite keen not to rub salt into the wounds,” producer Sam Clohesy says, “and a negative story about Brazil won’t necessarily get a lot of traction in social.”

Mobile marketing expert Rakesh Agrawal, CEO of reDesign mobile, says that’s just generally true. “People on social networks like Twitter and Facebook — they generally tend to share happy thoughts. If my son had an A in math today, I’m going to share that. But if my son got an F in math, that’s generally not something you’re going to see on social media.”

In old-school newsrooms, the saying goes: if it bleeds, it leads. Because this new newsroom is focused on getting content onto everyone’s smartphone, Agrawal says, editors may have another bias: to comb through the big data in search of happy thoughts.

Reddit has asked its users to “adhere to the same standards of behaviour online that you follow in real life”.

Although there does seem to be a problem if, as in real life, you try and fast-forward through the boring adverts on your catch-up TV:

Until an administrator changed the advice in response to questions from the Guardian, however, one rule also encouraged users to “link to the direct version of a media file when the page it was found on doesn’t add any value.”

That practice, known as “hotlinking”, is a common complaint of artists whose work regularly appears on Reddit, since it can send thousands of users to their site without a single one seeing an image credit or advertisement. The rule now only encourages hotlinking “if the page it was found on isn’t the creator’s and doesn’t add additional information or context”.

P.S. Google are looking for public comment and evidence about the right to be forgotten…

The ethics of digital: round up

July 20, 2014

What’s the right way to behave online?

If digital behaviour is different from real world behaviour what are the new rules?

The big tech giants that dominate our lives are running around trying to find answers. When they get it wrong, it doesn’t look good.

In June Google hired an ethics adviser, Luciano Floridi. He argued in the Guardian for some “bold ideas”:

Most experts agree that current European data protection law is outdated. I see it as the expression of a time when there was a clear divide between online and offline. Today, that divide is being bridged in favour of the “onlife”, a mixture of analogue and digital, physical and virtual experiences, like driving a car following the instructions of a navigator.

The car metaphor is a dead giveaway. A driverless car? Like one of Google’s?

In July Google’s approach to implementing the EU’s Right to be Forgotten ruling became headline news courtesy of the BBC’s Robert Peston. The most interesting piece I read about this (apart from Robert’s own) was Andrew Orlowski’s new angle in The Register (“Google de-listing of BBC article ‘broke UK and Euro public interest laws'”)

Then there’s Facebook’s “Mood Manipulation” experiment. There was a lot of noise about this, but Jaron Lanier in the New York Times offered some humanity and humility.

All of us engaged in research over networks must commit to finding a way to modernize the process of informed consent.  Instead of lowering our standards to the level of unread click-through agreements, let’s raise the standards for everyone.

Duncan J Watts in “Lessons Learned from the Facebook study” said that the experiment may not have been as bad as the noise suggested but:

What we need is an ethics-review process for human-subject research designed explicitly for web-based research…

(Credit to @dianecoyle1859 for this link)

David Banks has some intelligent thoughts and practical suggestions on the ethics of wearable technology. I’d disagree with his suggestion that police forces should be banned from using Google Glass. Law enforcement might be the only place Google Glass serves a useful purpose, rather than just being an annoyance.

I couldn’t write this without mentioning Model View Culture, the online magazine set up by @Shanley. It’s a must read: a combative critique of the values, practices and morality of Silicon Valley. I don’t agree with all of it, but it’s an essential antidote to the complacency and lack of self awareness of too much of the tech scene.

The technology giants are relatively young (even “immature”?). When you’re an adolescent you haven’t worked out the right thing to do yet. The BBC, like many other mainstream media organisations, has been trying to answer these questions for a lot longer. David Jordan, Director of BBC Values and Standards (disclaimer: I used to work in that team), recently outlined the BBC’s guidance on removing content online (“Should the BBC unpublish any of its content online?”). It’s rational, nuanced, sensible and mature.

Bold ideas? I’d rather have some old ones.