
Ethics in Artificial Intelligence

Did you know that The New York Times, The Washington Post, Reuters, and others rely on artificial intelligence to help write stories? Not only that, artificial intelligence can hear, speak, and even smell. Researchers are developing AI models that can detect illnesses just by smelling a person's breath, and market researchers use AI tools that track human emotions.

Initially, AI was expected merely to eliminate human intervention in repetitive tasks that require little decision-making, such as customer service and data entry. But thanks to powerful computers and large data sets, it now enables far more, including strategic decision-making; self-driving cars and facial recognition tools are the most familiar examples.

It is certainly exciting to see AI elevate human lives by performing activities with far greater efficiency and effectiveness. As a result, global spending on AI systems is expected to jump from $85.3 billion in 2021 to more than $204 billion in 2025, according to the International Data Corporation (IDC) Spending Guide.

But such wide-scale adoption of a technology as life-altering as AI makes its ethical implementation all the more critical.

What is Ethics in AI?

AI ethics is the practice of developing and using artificial intelligence systems without violating moral principles. There is no single consolidated list of these principles, but they cover everything from safety to privacy to fair use.

Why Is There a Need for Ethics in Artificial Intelligence (AI)?

One of the reasons AI is appealing is that it promises to remove human subjectivity and bias from decision-making. In practice, however, we are discovering that AI systems replicate the same biases that already exist in our society.

Algorithmic decision-making isn’t objective.

For instance, Optum, a healthcare company, is being probed by New York regulators over an algorithm that is allegedly racially biased: it prioritizes white patients over Black patients for extra attention from doctors and nurses.

This happened because the algorithm uses health costs as a proxy for health needs. Since less money is spent on Black patients with the same medical conditions, the algorithm incorrectly concludes that they are healthier than equally sick white patients, and de-prioritizes them accordingly.
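To see how a proxy label can smuggle bias into an otherwise "neutral" model, here is a minimal sketch in Python on synthetic data. The groups, numbers, and the 20% spending gap are hypothetical assumptions for illustration, not Optum's actual model or figures:

```python
# Hypothetical illustration: when spending is used as a proxy for need,
# a group that receives less care at the same level of sickness gets
# systematically lower "risk" scores.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

need = rng.normal(50, 10, n)                   # true health need, identical across groups
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B (hypothetical)
spend_factor = np.where(group == 1, 0.8, 1.0)  # group B receives ~20% less spending
cost = need * spend_factor + rng.normal(0, 2, n)  # the proxy used as the risk score

# Select the top 10% "riskiest" patients by cost for extra care.
cutoff = np.quantile(cost, 0.90)
selected = cost >= cutoff

for g, name in [(0, "group A"), (1, "group B")]:
    share = selected[group == g].mean() * 100
    sick = need[(group == g) & selected].mean()
    print(f"{name}: {share:.1f}% selected, mean true need of selected = {sick:.1f}")
```

On this synthetic data, group B is selected far less often even though its true need distribution is identical, and the group B patients who do make the cut are sicker on average. That is the proxy effect in miniature.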

Similarly, Goldman Sachs is being investigated by regulators for using an AI algorithm that allegedly discriminated against women by granting men larger credit limits on their Apple Cards.

There are also concerns about the lack of transparency in AI tools and about surveillance-driven data-gathering practices.

Most of us know about the infamous incident in which Facebook granted a political firm access to the personal data of more than 50 million users. IBM was likewise sued for selling user data: its Weather Channel app allegedly extracted users' private locations deceptively and sold them to advertising and marketing companies.

Until recently, discussion of the fair and ethical implementation of these technologies was limited. As end users become aware of their effects, however, it is increasingly important for firms to get up to speed and address the problem.

Why Should Companies Use Ethical AI Models?

As the use of artificial intelligence scales, the reputational, regulatory, and legal risks associated with it grow as well. Hence, the biggest tech companies in the world are setting up teams to tackle the ethical problems that arise from collecting, managing, and using data.

They realize that failing to operationalize data ethics can expose them to multiple risks, and it can waste resources, both human and monetary. Imagine spending months building an AI algorithm, only to discover on deployment that it conflicts with ethical norms.

Yet despite the risks of AI that fails to meet ethical standards, most companies treat it as the last item on their list. There is as yet no clear protocol for building ethical AI systems. Companies need a plan to identify ethical pitfalls across multiple functions and a systematic approach to mitigating those risks.

Above all, unethical algorithms can do more harm than good to society, which makes it important to draw attention to the topic.

How to Build Ethical Artificial Intelligence Models?

There is no one-size-fits-all approach; the AI model will need to be tailored to the industry and the use case.

But to give you a broader framework:

Companies can create an AI ethical risk framework customized to their industry, after bringing the internal data governance board on board. The framework should articulate the company's ethical standards and the ethical nightmares it must avoid, along with a governance structure for enforcing them.

Once a high-level process is established, you can look more closely at the complications at the product level. One roadblock here is that product managers struggle to make outputs explainable while maintaining accuracy. They need tools to evaluate how important explainability is for a given product so they can decide the trade-off.

If explainability mainly serves to detect bias, then in an application where discrimination is not a concern, accuracy naturally outweighs explainability. But for algorithms where unbiased output matters more, accuracy can take second place.
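One way to equip product managers for that decision is a simple side-by-side report: accuracy plus a basic fairness metric for an explainable model versus a black-box one. The sketch below is a hypothetical Python illustration on synthetic data; the choice of models, the demographic parity metric, and all of the data are assumptions for the sake of example, not a prescribed method:

```python
# Compare an interpretable model against a black-box one on accuracy
# and a simple fairness metric, to inform the trade-off decision.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 20_000
X = rng.normal(size=(n, 5))
group = rng.integers(0, 2, n)   # hypothetical protected attribute, not a model feature
X[:, 0] += 0.6 * group          # a feature correlated with group membership
y = (X[:, 0] + 0.4 * X[:, 1] ** 2 + rng.normal(0, 1, n) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

def demographic_parity_gap(pred, g):
    """Absolute difference in positive-prediction rates between the groups."""
    return abs(pred[g == 0].mean() - pred[g == 1].mean())

for name, model in [("logistic (explainable)", LogisticRegression(max_iter=1000)),
                    ("boosting (black box)", GradientBoostingClassifier())]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    acc = (pred == y_te).mean()
    gap = demographic_parity_gap(pred, g_te)
    print(f"{name}: accuracy={acc:.3f}, demographic parity gap={gap:.3f}")
```

A report like this makes the trade-off concrete: if the black-box model buys only a small accuracy gain at a similar or worse fairness gap, the explainable model is usually the safer choice.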

Companies must also drive initiatives to raise awareness of the ethical risks posed by AI and encourage employees to actively identify such threats.

Wrapping Up

Operationalizing AI ethics is not easy; it requires multiple stakeholders to collaborate. But companies that crack it will see positive results in the form of mitigated risk and more efficient adoption of the technology. Above all, they will win the trust of their clients, consumers, and employees.
