Realising Practical AI, Part Two

Although AI is often thought of as a replacement for human ability, in practice the relationship between humans and machines is reciprocal, with each providing the information the other needs in order to perform properly. As Artificial Intelligence (AI) becomes intertwined with human decision-making processes, however, it is important for humans to be able to trust the decisions made by AI. In this blog post, we discuss ethical considerations of AI, specifically accountability, transparency and responsibility. We argue that any AI system design should incorporate these principles, as they are key to its success.

Accountability is the ability of AI to provide an understandable explanation as to why it performed a certain action (e.g. why it made a certain decision). A first step towards establishing accountability in AI systems is the detection and reporting of bias in AI models. Bias is a result of the way an AI model was trained and indicates an erroneous preference (i.e. discrimination) towards a specific result or group of results. Oftentimes, bias is inadvertent, a result of the available training data. For example, consider the healthcare sector, and an AI trained to predict a disease based on given symptoms, using data from a military hospital. Using this type of data as a source will unintentionally lead the AI to discriminate against women, simply because most patients in a military hospital are men. This can have serious repercussions for the healthcare of women, who are underrepresented in the data provided. Exposing this detected bias to humans, who can, together with the AI in some cases, decide the best strategy to mitigate it, is a first step towards AI being accountable for its actions.
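The kind of bias detection described above can be sketched as a simple audit of how well each group is represented in the training data and how outcome rates differ between groups. The dataset, column names and flagging threshold below are illustrative assumptions, not taken from any particular system:

```python
from collections import Counter

def audit_group_bias(records, group_key, outcome_key, threshold=0.2):
    """Report each group's share of the data and its positive-outcome rate.

    A group is flagged when its outcome rate deviates from the overall
    rate by more than `threshold` (an illustrative cutoff, not a standard).
    """
    total = len(records)
    counts = Counter(r[group_key] for r in records)
    overall_rate = sum(r[outcome_key] for r in records) / total
    report = {}
    for group, n in counts.items():
        rate = sum(r[outcome_key] for r in records if r[group_key] == group) / n
        report[group] = {
            "share": n / total,              # representation in the data
            "outcome_rate": rate,            # per-group positive rate
            "flagged": abs(rate - overall_rate) > threshold,
        }
    return report

# Hypothetical military-hospital-style data: patients are mostly men.
data = (
    [{"sex": "M", "diagnosed": 1}] * 45 + [{"sex": "M", "diagnosed": 0}] * 45
    + [{"sex": "F", "diagnosed": 1}] * 1 + [{"sex": "F", "diagnosed": 0}] * 9
)
report = audit_group_bias(data, "sex", "diagnosed")
```

Run on this skewed sample, the audit flags the underrepresented group, which is exactly the kind of signal that can then be surfaced to a human for a mitigation decision.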

Transparency is the ability of AI to explain all aspects of its operation. Such aspects include clarity on how data is collected, who owns the data (governance), how it is processed, and how the output is generated. In addition to establishing trust, in some cases transparency is important from both a legal and a regulatory perspective. For some heavily regulated industries, such as finance, it may even be a prerequisite. On the other hand, excessive transparency may divulge decision-making information to competitors or other parties, thus opening the door to an array of potential threats: from exposing secrets such as the inner workings of AI algorithms to making the AI susceptible to security attacks. The design of a transparent AI is therefore a balancing act between exposure of operational details and protection from external threats.

Finally, responsibility refers to the actions that human AI system designers and operators themselves can take to ensure that AI systems are built ethically, securely and efficiently. The term encapsulates technological tools, processes and thought leadership (e.g. ways of thinking and working) that lead to the development of better AI systems for customers. A quick Google search for responsible AI reveals several companies that have already produced information and tools for practising responsible AI.

In conclusion, ethics are not only an important part of AI system design; they are also critical to the success of such systems when deployed in the real world. This success does not depend solely on the technological sophistication of these systems: part of it rests with their human collaborators, who in turn develop a new way of thinking and collaborating.
