As UNESCO says, AI must be for the greater interest of the people, not the other way around.  

From driverless cars to AlphaFold (a novel artificial intelligence program designed by DeepMind, a subsidiary of Alphabet, that can predict protein structures with atomic accuracy), artificial intelligence is increasingly becoming an integral part of our world. In recent years, there has been an exponential increase in the number of companies and industrial institutions that actively leverage AI as a source of durable competitive advantage. Moreover, as the name suggests, AI is the one modern, cutting-edge technology that, if developed right, can imitate many human behaviors and work in tune with human intelligence.

Many experts have already warned us about the moral dangers of letting AI make decisions for us. No matter the type and magnitude of the innovations an organization or individual comes up with, it is important to remember that ethics and responsibility play an integral part in innovation. This blog explores the emerging importance of ethics and principles in building and implementing AI in a fair and just way.

Core areas of ethical focus to build responsible AI 

There is no question in my mind that artificial intelligence needs to be regulated. The question is how best to approach this. 

Sundar Pichai, Google CEO.

As the international market value of AI is projected to increase more than 13-fold over the next 8 years, industries such as retail, healthcare, and manufacturing, along with some government institutions, need to adhere to local and international regulatory policies that ensure AI benefits humanity by and large. Below are the major elements of ethics that help AI-centric organizations mitigate bias when building AI and foster a sense of responsibility among the people who work with it.

Transparency 

Recently published AI ethics research reports that 69% of C-suite executives from top conglomerates say they understand the issues of transparency in AI engagements, up from 36% in 2019.

Should the people who use AI know how it works? The answer is yes. People should know and be aware of how AI works, especially when the decisions it takes affect their lives. Because the decision-making process of AI is highly complex and built on neural networks that can perplex even machine learning experts, building AI models and programs in accordance with universal transparency principles will positively shape how AI makes decisions with sensitive data.

Example 

Here is an example that emphasizes the importance of applying the principle of transparency when building ethical AI. When Apple unveiled its credit card in August 2019, everyone who wanted one could apply online and receive a credit limit within seconds. Though the process was simple and convenient, it turned out that women were given much lower credit limits than men, and applicants were offered no explanation of how those limits were set. From this incident, we can reasonably conclude that the company could have focused more on transparency principles when developing the model.
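To make the idea concrete, here is a minimal sketch of what decision-level transparency can look like. It assumes a hypothetical linear credit-limit model built with scikit-learn; the feature names and figures are invented for illustration, not drawn from any real system.

```python
# A minimal transparency sketch (illustrative only): expose per-feature
# contributions for one decision made by a hypothetical linear credit-limit model.
import numpy as np
from sklearn.linear_model import LinearRegression

feature_names = ["income", "credit_history_years", "existing_debt"]
X = np.array([
    [85_000, 10, 5_000],
    [60_000,  4, 12_000],
    [95_000, 15, 2_000],
    [40_000,  2, 8_000],
])
y = np.array([20_000, 8_000, 25_000, 5_000])  # previously approved limits

model = LinearRegression().fit(X, y)

applicant = np.array([70_000, 6, 9_000])
limit = model.intercept_ + model.coef_ @ applicant

# For a linear model, each feature's contribution is simply coef * value,
# which gives the applicant a human-readable reason for the decision.
print(f"Offered limit: {limit:,.0f}")
for name, coef, value in zip(feature_names, model.coef_, applicant):
    print(f"  {name}: contributes {coef * value:,.0f}")
```

With deeper models the arithmetic is no longer this simple, but the goal stays the same: every automated decision should come with an explanation a person can inspect.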

Fairness 

Unlike other technologies made simply to work for humans, AI works alongside us and our diverse interests. When a machine is empowered to make decisions based on the data and algorithms fed into it, it is the developer's responsibility to ensure there is no bias. The only reason we might want to trust AI more than ourselves is that humans are biased by nature: we all carry some bias rooted in how we were raised, our ethnicity, and other factors, which is why the blame game starts whenever humans are held responsible and accountable. To eliminate bias in business decision-making and other areas where prejudice and blatant partisanship might distort the true outcome, AI should be designed with sensitivity to a wide range of cultural norms and values.

Example  

Here is an example of why fairness matters when building ethical AI programs. In 2018, Amazon, which had been building computer programs since 2014 to review job applicants' resumes with the aim of mechanizing the search for top talent, scrapped an AI-based recruiting tool that showed bias against women. This is one of many examples of why organizations need to be very careful about what kinds of attributes and data are fed into AI.
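One practical safeguard, sketched below, is to test a model's outputs for demographic parity before it is ever deployed. This is only an illustrative check with made-up data; real fairness audits use far larger samples and several complementary metrics.

```python
# A minimal fairness-check sketch (illustrative, hypothetical data): measure
# whether the model selects candidates at similar rates across groups.
from collections import defaultdict

# (group, model_selected) pairs from a hypothetical screening run
results = [
    ("women", True), ("women", False), ("women", False), ("women", False),
    ("men", True), ("men", True), ("men", False), ("men", True),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, was_selected in results:
    total[group] += 1
    selected[group] += was_selected

rates = {g: selected[g] / total[g] for g in total}
print("Selection rates:", rates)

# A common rule of thumb: flag the model if any group's selection rate falls
# below 80% of the highest group's rate (the "four-fifths rule").
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential bias: {group} selected at {rate:.0%} vs best {best:.0%}")
```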

Accountability 

There is no profession where mistakes do not happen; looking at how some people are born with disabilities, it seems even nature is no exception. As AI sets out to disrupt the way problems are solved, it is imperative that AI be held to account, and so must the organization that owns it.

So, what happens when AI systems run amok and deliver outputs that are biased or morally unacceptable? Who is held responsible for such mistakes? Below is a real-life incident that warns us how things can go wrong when there are no ethics behind AI. In 2003, a US Patriot missile system operating in automated mode "accidentally" shot down a US Navy jet over Iraq. Military officials later acknowledged that it was not the first time this had happened, but the third "friendly" incident.
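Accountability starts with being able to reconstruct, after the fact, what a system decided and on what basis. The sketch below shows one simple pattern for that: an append-only decision log that ties every automated output to a model version and a named human operator. The file name, function, and field names are all hypothetical.

```python
# A minimal accountability sketch (illustrative): record every automated
# decision with enough context (inputs, model version, output, timestamp)
# that a human can later audit what was decided, and why.
import json
from datetime import datetime, timezone

AUDIT_LOG = "decision_audit.jsonl"  # hypothetical append-only log file

def log_decision(model_version: str, inputs: dict, output, operator: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "responsible_operator": operator,  # a named human stays accountable
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    model_version="target-classifier-1.4.2",
    inputs={"radar_signature": "unknown", "iff_response": "none"},
    output="flag_for_human_review",  # high-stakes calls escalate to a person
    operator="duty_officer_on_shift",
)
```

The design choice that matters here is the escalation path: when the stakes are high and the signal is ambiguous, the system defers to a human rather than acting alone.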

User Data Rights and Privacy 

Recent research from the Pew Research Center found that 74% of US citizens believe that being in control of their privacy and personal data is very important.

As powerful as it is sensitive, user data concerns both the people it describes and the organizations that benefit from it. When AI collects and processes vast amounts of data, it should be governed and regulated to strengthen privacy and user data rights. It is vital to locate risks and work toward resolutions before mistakes occur, because an individual's peace of mind is ruined when certain data becomes public or is misused. This raises a question: can companies augment their business operations while also prioritizing user privacy and data protection? Yes. Companies can always craft effective user data rights and privacy policies, employ strong encryption, and give users full control over how their data is used, all of which helps steer clear of potential privacy problems.
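As a small illustration of the encryption point, the sketch below encrypts a user record before it is stored, so that a leaked database alone reveals nothing. It assumes the widely used third-party cryptography package; the record contents are invented, and a real system would add proper key management.

```python
# A minimal privacy sketch (illustrative): encrypt personal data at rest.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, keep this in a secrets manager
cipher = Fernet(key)

user_record = b'{"name": "Jane Doe", "ssn": "000-00-0000"}'  # hypothetical data
token = cipher.encrypt(user_record)  # this ciphertext is what gets persisted
print("Stored ciphertext:", token[:32], b"...")

# Only services holding the key can recover the plaintext.
print("Recovered:", cipher.decrypt(token).decode())
```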

AI is only as ethical as the people who create and use it 

Though millions of people and a plethora of businesses are benefiting from AI today, there are also justifiable concerns about the implementation of AI in certain areas of our daily lives. For many people, the worry is that AI will replace their intelligence and skills. For others, it is how insensitive AI can be in certain situations. For still others, it is robots taking over the world.

A survey conducted by a reputable IT firm found that 77% of IT consumers worldwide think organizations must be held accountable for any misuse of AI.

AI is a double-edged sword that needs to be handled with care; it can be a boon and a bane at the same time. It goes without saying that anything trained or built with malicious intent can become dangerous, and AI, which is ultimately just data and algorithms, is no exception. After all, it is we humans and our data that give birth to AI, so the onus is on organizations to ensure artificial intelligence isn't doing more harm than good to humankind.