Thursday 12 November 2015

Digital Ethics

Went to an interesting session on the ethics around artificial intelligence, data mining, tracking and so on.

Lots of public discussions going on at the moment, but should we be worried?

Reasons we might be:

  • Robots will take over lots of jobs
  • AI is developing too quickly; once it gets smarter than us...
  • Power + bad people = disaster
  • Need to worry about artificial stupidity
  • Experts don't see the obvious

Or is there really no need to worry?

  • As in all other eras, new jobs will be created
  • The law of diminishing returns will kick in
  • Why would robots turn against us?
  • AI in its infancy
  • We learn and adapt

Just 6% of adults think the government can be trusted to keep our data secure

Between 66% and 75% are not confident that their activity with social media, advertisers and search engines is private and secure

70% of CIOs are worried that there is no logical place to raise these issues

There's a lot of confusion about this. On the one hand we believe in freedom and that governments shouldn't snoop. On the other hand we are concerned about safety and think the government should protect society.


We were shown what Facebook can predict about you, and with what accuracy.

Should we be more worried about the accuracy of the top categories or the inaccuracies of the lower ones?

We also saw how organisations find themselves crossing the creepy line.

Digital ethics is a system of values and moral principles for the conduct of digital interactions among people, businesses and things.

It determines what is good and what is bad, and it's all about discussion and debate. It's not about compliance.

Compliance has a role: it's the baseline of ethical behaviour, but that's all.

One level up in our motivation to do the right thing is risk. But risk should not be in charge of the digital ethics discussion.

Differentiation and competitive advantage might be gained by investing in digital ethics.

But it should all be based on our values.

Do the right thing because we feel it is the right thing to do.

A real example: workforce analytics. Someone comes to you with this proposal:

Let's pilot predictive analytics for flight risk (i.e. looking at who might be thinking of leaving the company), using text analytics to mine emails, social media analytics, monitoring of corporate computer use, and productivity indicators. We expect 60-70% accuracy, and are starting the pilot with offshore operations.

Would you approve this pilot?

An interesting discussion followed with the audience. Some points made:

  • What would you do with the results?
  • It feels wrong.
  • What about false positives?
  • It's up to employees whether they leave.
  • If it's transparent and employees know about it, you could do it.
  • It creates lazy management.
  • If you care, treat people well.

Generally people were very unhappy with the suggestion.
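
For concreteness, here's a minimal sketch of what such a flight-risk pilot might look like under the hood. Everything in it is my own illustrative assumption: the signal names, the synthetic data and the choice of logistic regression were not part of the session.

# Hypothetical flight-risk model. All signals and data below are invented
# for illustration; the session gave no implementation details.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Assumed per-employee signals of the kind the proposal listed:
# email sentiment (text analytics), job-board visits (computer monitoring)
# and a productivity trend (productivity indicators).
email_negativity = rng.uniform(0, 1, n)   # share of negative emails
job_board_visits = rng.poisson(2, n)      # monthly visits to job sites
productivity_dip = rng.normal(0, 1, n)    # decline vs. personal baseline

# Synthetic "left the company" labels, driven by the signals plus plenty
# of noise, so the model is deliberately imperfect.
logit = 1.5 * email_negativity + 0.4 * job_board_visits + 0.8 * productivity_dip - 2.0
left = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([email_negativity, job_board_visits, productivity_dip])
X_train, X_test, y_train, y_test = train_test_split(X, left, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test)):.0%}")

Even a toy version makes the audience's point concrete: at 60-70% accuracy, a sizeable share of the people flagged as flight risks will be false positives, and they are the ones who bear the consequences.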

Dataterminism: because the data is there, we can use it.

There's a well-known story about Google Street View. As the car drove round taking pictures, it also collected data on wifi signals. Google got fined in 12 countries. Their defence was that they didn't do anything with the data, and that it was publicly available.

The more open information is, the more careful you need to be with how you use it

The danger is seeing patterns which aren't there. One example: someone who loved cooking and gardening ordered scales and fertiliser from Amazon. Unwittingly, these are two items involved in making drugs, and that shaped their next set of recommendations.
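
As a toy illustration of how that kind of false pattern arises, here is a minimal co-occurrence recommender. The orders and items are made up, and this is certainly not Amazon's actual algorithm.

# Toy co-occurrence recommender. The order history is invented to show
# the failure mode, not to reproduce any real system.
from collections import Counter
from itertools import combinations

orders = [
    ["kitchen scales", "fertiliser"],              # the innocent cook/gardener
    ["kitchen scales", "fertiliser", "zip bags"],  # superficially similar orders
    ["zip bags", "grow lamp"],
    ["fertiliser", "grow lamp"],
]

# Count how often each pair of items appears in the same order.
co_occurrence = Counter()
for order in orders:
    for a, b in combinations(sorted(set(order)), 2):
        co_occurrence[(a, b)] += 1

def recommend(item, top_n=3):
    """Suggest the items most often bought alongside `item`."""
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("fertiliser"))  # e.g. ['kitchen scales', 'zip bags', 'grow lamp']

The recommender has no notion of intent: it only sees that scales and fertiliser travel together in other baskets, whatever those baskets were for.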


The concept that "the user is responsible" is being challenged. What is the definition of a user? If you use a hammer to kill someone, it's not the hammer's fault. But are we fully in control when we use technology?

As machines become smarter, we stop being users and become interactors. Who is responsible for the outcome of the interactions?

In Switzerland, an art installation had a robot randomly buying things from the Internet. Unfortunately, two of the things that got delivered were ecstasy tablets and a fake passport. Story here.

Sandra the orangutan in an Argentine zoo was granted limited human rights when animal activists took out a court case arguing that she was being held captive against her will. The first judge ruled that there could be such a thing as a non-human person, but was then overruled. Still, when will a smart machine become a non-human person, responsible for its own behaviour?


Recommendations

  • Mind unexpected consequences: there are always unintended consequences.
  • Take responsibility and monitor what's happening.
  • Be disciplined.
