DS/PAS 2500-1:2020

Artificial Intelligence – Part 1: Transparency

About

Transparency is necessary for validating results and establishing trust in systems that make decisions or support decision-making. This is true regardless of whether artificial intelligence is part of the decision-making process. Nevertheless, the use of artificial intelligence has made the need for transparency more apparent.

Artificial intelligence can acquire and leverage knowledge and skills to solve complex problems, typically to a greater extent than human-only systems. To do so, however, systems that use artificial intelligence often require far larger amounts of data than traditional systems in order to make decisions or to act as decision support.

If decisions cannot be rationalised, they can be difficult to trust. Artificial intelligence is often used in collaboration with humans, who must be able to understand what is being done and why.

One way to rationalise decisions is to provide one or more explanations. Explanations can take various forms, depending on the recipient’s needs. For example, the need may be a justification for an action, the relevance of a question, or an explanation of what an object is (conceptualisation).

Regardless of the type of explanation needed, transparency is usually the foundation. Transparency is therefore important for building trust in the use of such systems, and it has been identified as a key principle for artificial intelligence by numerous international experts.
