Ethical Guidelines for Transparent Development and Implementation of AI - an Overview

Author: Fenna Woudstra

Abstract

Over the last few years (2016-2019), many companies, institutions, governments and ethicists have proposed their own principles for the ethical development of Artificial Intelligence (AI). Most of these guidelines include principles about fairness, accountability and transparency, but there is a wide range of interpretations and explanations of what these principles entail. This paper focuses on the principles for transparency in the development and implementation of AI. Eighteen different ethical guidelines have been analysed in order to extract the different interpretations and practical requirements of transparency. All the mentioned principles have been organised into a new framework consisting of nine main topics: environmental costs, employment, business model, user’s rights, data, algorithms, auditability, trustworthiness and openness. In this way, the paper aims to provide a comprehensive overview of the existing ethical guidelines and the practical specifications for transparent development and implementation of AI.

Introduction

Because of the growing use of Artificial Intelligence (AI) in many different fields, problems like discrimination caused by automated decision-making algorithms are becoming more pressing. The underlying problem is the often undetected presence of biases in the system, which can arise in various ways.[1]

These biases can be difficult to discover, due to the opacity of the systems.[2] According to Burrell, there are three types of opacity: 1) intentional, where the developers do not want to reveal their secrets; 2) illiterate, where people have too little knowledge of programming to understand the code and the workings of algorithms; and 3) intrinsic, where the system itself is too complex for its exact workings to be explained.[3]

The logical response to the problems caused by opacity is a greater demand for transparency. Transparency is essentially an information exchange between a subject and an object, in which the subject receives information about the performance of a system or organisation for which the object is responsible.[4] Transparency can be a means to achieve different goals. First of all, the availability of information enables subjects to make informed decisions in situations of (democratic) participation[5] or purchasing.[6] Secondly, the information enables the subject to monitor the object, and to notice and expose mistakes or even corruption.[7] Thirdly, the insights can enhance the subject’s trust in the object (or its system), provided that the right kind and amount of information is given.[8] Finally, once this trust has been established, new technologies are more likely to be accepted by the public.[9]

However, putting transparency into practice is easier said than done, as not everything can simply be made publicly available. Other important values like privacy, national security and corporate secrecy limit the possibilities of publishing all the information relevant to being transparent.[10] There is also a risk of people ‘gaming the system’ when it is publicly known how an algorithm works.[11]

In recent years (2016-2019), there has been an increasing demand for rules and ethical guidance for the development of AI, as many companies, institutions, governments and ethicists have proposed their own ethical guidelines (see the list in the Methods section, Table 1). Besides fairness and accountability, transparency is one of the most frequently mentioned principles.[12] Nevertheless, it remains unclear how transparency should be realised, because these guidelines differ in their interpretations of transparency and their requirements on how to put it into practice.

This paper aims to provide a more comprehensive framework for transparency in the development and implementation of AI. Eighteen different guidelines have been analysed for their principles on transparency. As a result, a new framework has been created that consists of nine main topics and their specifications on how to put transparency into practice. We believe this overview contributes to a better understanding of transparency and its application in the field of AI.

Methods

In this research, eighteen different ethical guidelines were analysed. They were all retrieved from a website that maintains an inventory of existing guidelines for AI.[13] This list is still growing, since new guidelines are still being written and added to the website. All guidelines available at the time of research were briefly assessed on the basis of their clarity, the amount of information on transparency, possible new insights on transparency and the origin of the guideline. Based on this preliminary assessment, the eighteen guidelines shown in Table 1 were manually selected for further research.

1. AI Now Report 2018 - AI Now Institute - December 2018

2. The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems - Amnesty International & Access Now - May 2018

3. Statement on Algorithmic Transparency and Accountability - Association for Computing Machinery - January 2017

4. AI4People’s Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations - Atomium – EISMD (AI4People) - 2018

5. Beijing AI Principles - Beijing Academy of Artificial Intelligence - May 2019

6. Digital Ethics - CIGREF - October 2018

7. Data Ethics Principles - Data Ethics Thinkdotank - December 2017

8. Artificial Intelligence Ethics and Principles - Smart Dubai - November 2018

9. European ethical Charter on the use of Artificial Intelligence in judicial systems and their environment - European Commission for the efficiency of justice (CEPEJ) - December 2018

10. Principles for Accountable Algorithms and a Social Impact Statement for Algorithms - FAT/ML - 2016

11. Ethics guidelines for trustworthy AI - High Level Expert Group on AI - April 2019

12. Everyday Ethics for AI & IBM’s Principles for Trust and Transparency - IBM - 2018

13. Declaration on Ethics and Data Protection in Artificial Intelligence - ICDPPC - October 2018

14. Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector - Monetary Authority of Singapore - November 2018

15. Understanding artificial intelligence ethics and safety - The Alan Turing Institute - 2019

16. Universal Guidelines for Artificial Intelligence - The Public Voice - October 2018

17. Top 10 principles for ethical artificial intelligence - UNI Global Union

18. Data Ethics Decision Aid (DEDA) - Utrecht University

Table 1: The eighteen guidelines that have been analysed.[13]

The selected guidelines come from all over the world (Europe, Asia and the US) and from many different kinds of organisations: governments, universities, technology companies and ethical organisations. This variety was expected to provide a sufficient amount of information on transparency for AI.

After the selection, each guideline was analysed by searching for passages about transparency and related concepts like openness and explainability. All the relevant passages were then analysed in NVivo 12, a qualitative data analysis program, in order to divide the information into different topics. All principles that stated something about the same topic were grouped together, and clarifying names for these topics were then created. To create the specifications of each topic, the corresponding principles were analysed and similar principles were merged.
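To illustrate this grouping step, the sketch below shows, in Python, how coded excerpts could be organised by topic. This is a minimal illustration only, not the actual NVivo workflow that was used; the example excerpts are paraphrased and the data structure is an assumption made for the sake of the example.

    # Illustrative sketch only (not the actual NVivo workflow): coded
    # excerpts are grouped by their assigned topic, mirroring the manual
    # analysis step. The excerpts below are paraphrased, not verbatim quotes.
    from collections import defaultdict

    coded_excerpts = [
        ("FAT/ML", "algorithms", "Explain how the algorithm was trained and tested."),
        ("HLEG", "data", "Document the data sets used to train the system."),
        ("AI Now Institute", "environmental costs", "Report the energy use of development."),
        ("IBM", "user's rights", "Inform users when they interact with an AI system."),
    ]

    # Group the excerpts by topic, as was done manually for Table 2.
    topics = defaultdict(list)
    for source, topic, excerpt in coded_excerpts:
        topics[topic].append((source, excerpt))

    for topic, entries in sorted(topics.items()):
        print(f"{topic} ({len(entries)} principle(s))")
        for source, excerpt in entries:
            print(f"  - [{source}] {excerpt}")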

Results

The organisation of all the principles into different topics made it possible to create a new framework that provides a more complete overview of transparency and its application in the field of AI. Nine different topics emerged: environmental costs, employment, business model, user’s rights, data, algorithms, auditability, trustworthiness and openness (see Table 2). Each topic is explained by multiple practical specifications that were extracted from the existing guidelines.

The first three topics concern the developing organisation and how it works. Organisations can be transparent about the energy use of the system and about the employment of the people who contributed to its development. The third topic, business model, ensures transparency about the purpose of the system and why the organisation wants to use it.

The fourth topic concerns the user’s rights. Most of the guidelines stated that people who are subjected to an AI system should be informed that they are interacting with one, and that they should be able to request explanations. Some guidelines went further, stating that people should also be made aware that they have the possibility to do so. This means that organisations should be transparent about what people can or may ask for. That is why ‘user’s rights’ has been created as a new topic in the framework of transparency.

The next three topics cover the data, the algorithms and the auditing of the system. These principles require explanations of the data used, the training and testing of the algorithm, and the rationale behind the outcomes. It is worth noting that the new framework does not use ‘explainability’ as an explicit principle of transparency, although this notion shows up in many guidelines. During the analysis, the principles about explainability initially formed one group, but on closer examination they turned out to concern many different topics (such as the data use, the algorithm or the development process) that one was expected to explain. Being transparent is inherently connected to explaining, or disclosing information about, something. The principles on explainability were therefore implicitly distributed over all the different topics.

The last two topics are ‘trustworthiness’ and ‘openness’. These topics include, respectively, principles on transparency about the reliability of the system, through disclosing information about its accuracy and limitations, and principles on transparency through publishing the actual code, data, test results or software, where this is possible and safe to do.

The order of the topics reflects the whole process of developing, using and reviewing AI systems, starting with the less technical topics about the environment, the business and the users, followed by the technical topics about the data, the algorithms and the performance of the system.


Table 2: Framework for transparent development and implementation of AI.

Discussion

The existing guidelines offer many interesting principles, but they also cause confusion through their varying interpretations and requirements for transparency. This paper has presented a new framework based on the existing ethical guidelines, in order to provide an overview that contributes to the understanding of transparency and its application in the development and implementation of AI.

Unexpectedly, the principles of transparency address not only algorithmic transparency or the explainability of the AI system, but also the transparency of the company itself. This is most notable in the first two topics, ‘environmental costs’ and ‘employment’. Although these principles were mentioned in only a few of the guidelines, they are interesting additions to the framework.[14] They show that how a system is made, at what cost and under which circumstances, is also part of the development of a system. Being transparent about these aspects could well result in greater trust and acceptance; as Dignum already noted, ‘trust in the system will improve if we can ensure openness of affairs in all that is related to the system. As such, transparency is also about being explicit and open about choices and decisions concerning data sources and development processes’.[15]

The ‘environmental costs’ are also a relevant topic in light of climate change. Strubell, Ganesh and McCallum investigated the amount of energy that complex AI systems need for the training and testing of their algorithms, and concluded that authors should report training time and computational resources in order to facilitate further research.[16] In this way, transparency of developing companies about their energy use can also contribute to scientific research on this subject.
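What such reporting could look like in practice is sketched below in Python: a training run is timed and a small report with the training duration and a hardware description is written to disk. The function name, report fields and file format are illustrative assumptions, not a standard API and not the method of Strubell et al.

    # Illustrative sketch: record training time and hardware details so they
    # can be reported alongside the results, in the spirit of Strubell et al.
    # The function name and report fields are assumptions, not a standard API.
    import json
    import platform
    import time

    def report_training_run(train_fn, hardware_description,
                            output_path="training_report.json"):
        """Run the given training function and write a small transparency report."""
        start = time.time()
        train_fn()  # the actual model training happens here
        elapsed_hours = (time.time() - start) / 3600

        report = {
            "training_time_hours": round(elapsed_hours, 4),
            "hardware": hardware_description,  # e.g. "8x NVIDIA V100 GPUs"
            "platform": platform.platform(),
        }
        with open(output_path, "w") as f:
            json.dump(report, f, indent=2)
        return report

    # Example usage with a placeholder training function:
    # report_training_run(lambda: time.sleep(1), "single laptop CPU")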

As mentioned before, the first two topics were not included in most of the guidelines. Still, they have been placed in the new framework, since it is meant to provide an overview of the existing guidelines. This also shows that not all principles are valued equally: the principles about, for example, the data use and the algorithms are mentioned much more often and are probably the more crucial topics to be transparent about in the development of AI. Furthermore, it should be mentioned that not all existing guidelines have been analysed, and, due to the novelty of this subject, neither can we be sure that the existing guidelines already include all the important principles. Additions to this framework can therefore be made when important principles turn out to be missing.

That is why this framework is not meant to provide strict rules, but aims to provide an overview of the possible topics to be transparent about. Organisations can decide which aspects are relevant for their system and organisation. This framework also tries to show that being transparent does not necessarily mean that everything must be made publicly available; explaining why a principle cannot be fulfilled is also a form of transparency. Not publishing the data used can be perfectly ethical if it is done to ensure people’s privacy or national security, as Stiglitz already noted.[17]

Finally, this framework shows that there are many aspects to be transparent about and many ways to achieve transparency. So even when the workings of an algorithm are highly complex, there are still many aspects to be transparent about that could help reveal or prevent biases in the system. There is much more to an AI system than just a black box.

———————————————————

References

[1] Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14(3), 330-347. See also: Suresh, H., & Guttag, J. V. (2019). A framework for understanding unintended consequences of machine learning. arXiv preprint arXiv:1901.10002.

[2] O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books. See also: Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611-627.

[3] Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512.

[4] Meijer, A. (2013). Understanding the complex dynamics of transparency. Public administration review, 73(3), 429-439.

[5] Stiglitz, J. E. (1999). On liberty, the right to know, and public discourse: the role of transparency in public life. Globalizing rights: The Oxford amnesty lectures, 149.

[6] Sloan, R. H., & Warner, R. (2018). When Is an Algorithm Transparent? Predictive Analytics, Privacy, and Public Policy. IEEE Security & Privacy, 16(3), 18-25.

[7] Berliner, D. (2014). The political origins of transparency. The journal of Politics, 76(2), 479-491.

[8] Kizilcec, R. F. (2016, May). How much information? Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 2390-2395).

[9] Van Belkom, R. (2017). In Innovation we Trust. Right?! (Master's thesis). EURIB, Rotterdam.

[10] Stiglitz, J. E. (1999). On liberty, the right to know, and public discourse: the role of transparency in public life. Globalizing rights: The Oxford amnesty lectures, 149.

[11] Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611-627.

[12] Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication, (2020-1).

[13] Ethics Guidelines Global Inventory. https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/. Accessed on February 18, 2020.

[14] The principles on energy use and employment were mentioned in the AI Now Report (2018) as part of a whole development process that should be fair, accountable and transparent. The Ethics guidelines for trustworthy AI (HLEG, 2019) include a separate principle stating that AI should be environmentally friendly.

[15] Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer International Publishing.

[16] Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243.

[17] Stiglitz, J. E. (1999). On liberty, the right to know, and public discourse: the role of transparency in public life. Globalizing rights: The Oxford amnesty lectures, 149.
