
Ethics in Artificial Intelligence: Why Values for Data Matter

While ethics in AI might not seem a primary concern for your business, consider this: a whopping 50% of business processes will be fully automated by 2022, compared to around 30% today.

Most of the advantages of digital transformation are enabled by task augmentation through artificial intelligence (AI) or through AI-powered Robotic Process Automation (RPA).

Using such vast quantities of data in automated processes is already having a massive impact on business and society. Many analysts have projected that by 2020, around 70% of the data a company uses will come from Internet of Things (IoT)-enabled devices and external data streams – in other words, from external, non-transactional sources.

The Double-Edged Sword of AI and Predictive Analytics

This rising impact can be both a blessing and a concern. It is a blessing when, for example, AI and predictive analytics use big data to monitor growing conditions, helping an individual farmer make the everyday decisions that determine whether they will be able to feed their family.

Yet it can also be a real concern when biased information is applied at the outset, leading machines to make biased decisions and amplifying our human prejudices in a manner that is inherently unfair.

As Joaquim Bretcha, president of ESOMAR, says: “technology is the reflection of the values, principles, interests and biases of its creators”.

Who is Supplying Your Data: Companies Must Ask Critical Questions About Ethics in AI

Companies should ask themselves: “Is data the core asset that I monetize?” or “Is data the glue that connects the processes that have made my products or services successful?”

This is especially urgent as companies start to use third-party data sources to train their algorithms — data about which they know relatively little. Companies also need to ask themselves:

  • What is the quality of the internal and external data we’re using to train and to input algorithms?
  • What unknown and unintended biases could our data train into algorithms? How will machines know under which biases they operate if we don’t share how algorithms arrive at their answers?
  • What will the impact of this automation be on our business, people, and society? How can we detect and quickly mitigate unanticipated impacts?

In terms of accountability and ownership, creating algorithms in a ‘black box’ raises the question: how does artificial intelligence arrive at its decisions and recommendations?

This question becomes more and more important as artificial intelligence is applied in fully automated processes that judge eligibility for a clinical trial, perform resume selection, or evaluate applications for mortgages.

In the end, the answer around ethics in AI seems to boil down to transparency: applications should be able to demonstrate what data was used, who trained the AI, and how the AI came to its answers.

In many cases, this will just mean providing an “explain button” or a “best of five” answer with confidence percentages and links to the source data. This approach is already being used within medical and legal AI-powered applications.

In other cases, this could range from a simple “what’s the logic behind this mortgage application approval or rejection?” to a more in-depth “show me what happened under the hood in providing this answer.”
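To make this concrete, here is a minimal sketch in Python of what a “best of five” transparent answer could look like. It is illustrative only: the classifier output, dataset links, and function names below are hypothetical, not part of any specific product.

from dataclasses import dataclass

@dataclass
class ExplainedAnswer:
    label: str          # candidate answer
    confidence: float   # model confidence, 0.0 to 1.0
    sources: list       # links to the source data behind this answer

def explain_top_five(probabilities, provenance):
    """Rank candidate answers and attach confidence and data lineage."""
    ranked = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
    return [ExplainedAnswer(label, conf, provenance.get(label, []))
            for label, conf in ranked[:5]]

# Hypothetical model output and data lineage, for illustration only
probabilities = {"approve": 0.62, "refer to underwriter": 0.21, "reject": 0.09,
                 "request more documents": 0.05, "defer": 0.03}
provenance = {"approve": ["https://example.com/datasets/credit-history-2019"]}

for answer in explain_top_five(probabilities, provenance):
    print(f"{answer.label}: {answer.confidence:.0%} (sources: {answer.sources})")

Even this small amount of structure (a ranked answer, a confidence percentage, a pointer to the underlying data) gives a human reviewer something concrete to interrogate.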

For AI Transparency, Human Supervision is Required

Yet, that’s not enough. It will take some human supervision, aka governance, to achieve true ethics in AI. Consider the following questions when mapping out your AI: Who within our organization owns the training and supervision process? When things go haywire with unintended outcomes, who is then accountable, and who mitigates?

An interesting fact: making an AI appear more human by adding voice, emotional concepts, or even a visual avatar or face doesn’t seem to increase the trust that people have in an AI’s recommendations. However, knowing who trained the AI does! Hence, at the risk of sounding like a broken record, transparency is one of the key guiding concepts for successful ethics in AI.

Data Is an Asset, and It Must Have Values

Already, 22% of U.S. companies have attributed part of their profits to AI and advanced, AI-infused predictive analytics.

According to a recent study SAP conducted in conjunction with The Economist Intelligence Unit, organizations doing the most with machine learning have experienced 43% more growth on average than those who aren’t using AI and machine learning at all — or aren’t using them well.

One of their secrets: they treat data as an asset, the same way organizations treat inventory, fleet, and manufacturing assets.

They start with clear data governance, backed by executive ownership and accountability (for a concrete example of how this looks, here are some principles and governance models that we at SAP apply in our daily work).

So, do treat data as an asset: no matter how powerful the algorithm, poor training data will limit the effectiveness of Artificial Intelligence and Predictive Analytics.

To Govern in The Future, We Must Act Today

What’s the takeaway from this? We need to apply and own governance principles that focus on providing transparency into how Artificial Intelligence and Predictive Analytics arrive at their answers.

I will close by asking one question to ponder when thinking about how to treat data as an asset in your organization:

“How will machines know what we value if we don’t articulate (and own) what we value ourselves?”

Originally posted by Marc Teerlink, SAP, Global Vice President of Intelligent Enterprise Solutions & Artificial Intelligence
