AI Ethics Self-Checklist

Self-Checklist to Practice the National Guidelines for AI Ethics

National Guidelines for AI Ethics > General-Purpose Self-Checklist > Field-Specific Self-Checklists

  1. Level 0 The National Guidelines for AI Ethics
  2. Level 1 AI Ethics Self-Checklist: General-Purpose
  3. Level 2 AI Ethics Self-Checklist: Field-Specific

1. The National Guidelines for AI Ethics

Provides national standards* to be followed by all members of society throughout the entire AI lifecycle, from development to deployment, in order to shed new light on human-centered social values and to enhance social acceptance and trust

* (Three Basic Principles) Respect for Human Dignity, Common Good of Society, Proper Use of Technology / (Ten Key Requirements) ① Safeguarding Human Rights, ② Protection of Privacy, ③ Respect for Diversity, ④ Prevention of Harm, ⑤ Public Good, ⑥ Solidarity, ⑦ Data Management, ⑧ Accountability, ⑨ Safety, ⑩ Transparency

2. AI Ethics Self-Checklist: General-Purpose

As a concrete measure for putting the National Guidelines for AI Ethics into practice, a self-checklist was developed that allows AI actors to independently examine their adherence to the Guidelines

The Self-Checklist maintains close connectivity with the Key Requirements of the Guidelines* and offers universal application**

* The Self-Checklist covers philosophical and social discourses, including ethical considerations concerning the development and utilization of AI, as well as values to be pursued and social norms.

** The Self-Checklist provides general questions so that, regardless of field or domain, all those seeking to utilize the Self-Checklist can select appropriate questions and flexibly customize them to meet relevant purposes, characteristics, and features.

3. AI Ethics Self-Checklist: Field-Specific

Provides specific usage examples derived from the Common AI Ethics Self-Checklist, which focuses on universality and comprehensiveness, so that it can be easily applied in real-world settings in a manner appropriate to the particular purposes, characteristics, and features of each field

Questions that needed to be emphasized in certain fields were selected and customized to reflect features pertinent to each field, and new questions were created to handle emerging AI ethics issues (for chatbot, writing, and image recognition in 2022)

AI Ethics Self-Checklist’s Scope of Application by Field

  • AI chatbot

    a chatbot used for information provision, customer advice, complaint handling, personalized recommendation, casual conversation, and other purposes

  • AI for writing

    an AI designed to assist with writing tasks such as document and email writing, social media posting, and copywriting

  • AI image recognition system

    an AI used for video analysis, video monitoring and object detection, and other purposes

AI actors that consult the AI Ethics Self-Checklist are encouraged to select and flexibly customize its questions to meet their respective needs and purposes.

Example of deriving checklist questions (see the schematic sketch after this example)

  • Level 0

    The National Guidelines for AI Ethics

    Key Requirement 9: Safety
    Throughout the entire process of AI development and utilization, efforts should be made to prevent potential risks and ensure safety.
    Efforts should be made to offer functions that allow users to control the operation of the AI system when clear errors or infringements occur during AI use.

    (Provides common questions to check compliance with the Key Requirement)

  • Level 1

    Self-Checklist
    (General-Purpose)

    E09.03.
    Are there procedures for continued evaluation of the safety of AI-powered outputs (e.g., periodic expert evaluations by internal departments or outside organizations, reflection of user feedback)?

    (Based on the common questions in the Safety category, provides questions customized specifically for each field)

  • Level 2

    Self-Checklist
    (Field-Specific)

    Chatbot
    Are there procedures for continued evaluation of safety (e.g., periodic expert evaluations by internal departments or outside organizations, reflection of user feedback) to prevent the chatbot from generating obscene, aggressive, or biased sentences?
    Writing
    Are there procedures for continued evaluation of the safety of the outputs produced by AI for writing (e.g., periodic expert evaluations by internal departments or outside organizations, reflection of user feedback), concerning aspects such as accuracy, clarity, and validity?
    Image Recognition
    Are there procedures for continued evaluation of safety (e.g., periodic expert evaluations by internal departments or outside organizations, reflection of user feedback) concerning the AI’s video analysis and processing outcomes?
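The derivation above can be read as a simple three-level hierarchy: a Key Requirement (Level 0) maps to one or more general-purpose questions (Level 1), each of which may have field-specific variants (Level 2). The following is a minimal, hypothetical sketch in Python of that hierarchy, using the Safety example from this section; the class and attribute names (KeyRequirement, GeneralQuestion, FieldSpecificQuestion) are illustrative assumptions only, not an official schema, and the question texts are abbreviated from the example above.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: one way to represent the Level 0 -> 1 -> 2 derivation.

@dataclass
class FieldSpecificQuestion:          # Level 2: question customized for one field
    field_name: str                   # e.g., "Chatbot", "Writing", "Image Recognition"
    text: str

@dataclass
class GeneralQuestion:                # Level 1: general-purpose checklist item
    question_id: str                  # e.g., "E09.03"
    text: str
    field_variants: list[FieldSpecificQuestion] = field(default_factory=list)

@dataclass
class KeyRequirement:                 # Level 0: one of the ten Key Requirements
    number: int
    name: str
    questions: list[GeneralQuestion] = field(default_factory=list)

# Worked example using the Safety requirement described in this section.
safety = KeyRequirement(
    number=9,
    name="Safety",
    questions=[
        GeneralQuestion(
            question_id="E09.03",
            text="Are there procedures for continued evaluation of the safety "
                 "of AI-powered outputs?",
            field_variants=[
                FieldSpecificQuestion(
                    "Chatbot",
                    "Are there procedures for continued evaluation of safety to "
                    "prevent the chatbot from generating obscene, aggressive, or "
                    "biased sentences?"),
                FieldSpecificQuestion(
                    "Writing",
                    "Are there procedures for continued evaluation of the safety "
                    "of outputs produced by AI for writing, concerning accuracy, "
                    "clarity, and validity?"),
                FieldSpecificQuestion(
                    "Image Recognition",
                    "Are there procedures for continued evaluation of safety "
                    "concerning the AI's video analysis and processing outcomes?"),
            ],
        )
    ],
)

# An AI actor could then select only the variants relevant to their field,
# in line with the guidance to customize the checklist to their own purposes.
for question in safety.questions:
    for variant in question.field_variants:
        if variant.field_name == "Chatbot":
            print(f"{question.question_id} ({variant.field_name}): {variant.text}")
```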