The National Guidelines for AI Ethics

On December 23, 2020, the Presidential Committee on the Fourth Industrial Revolution deliberated on and adopted 「the National Guidelines for Artificial Intelligence (AI) Ethics」 prepared by the Ministry of Science and ICT (MSIT) and the Korea Information Society Development Institute (KISDI).


Basic and comprehensive standards that should be followed by all members of society to implement human-centered AI

[Infographic: The National Guidelines for AI Ethics, listing the ten key requirements: Safeguarding Human Rights, Protection of Privacy, Respect for Diversity, Prevention of Harm, Public Good, Solidarity, Data Management, Accountability, Safety, and Transparency]
  1. The Highest Value: Humanity
  2. Three Basic Principles: Principles that should be considered in the development and utilization of AI to achieve AI for humanity
    • ① Respect for human dignity
    • ② Common good of society
    • ③ Proper use of technology
  3. Ten Key Requirements: Essential requirements that should be met throughout the AI system lifecycle in order to uphold the three basic principles above
    ① Safeguarding Human Rights
    • AI should be developed and utilized in a way that respects equal human rights and guarantees diverse democratic values and rights stipulated in international human rights laws and similar standards.
    • AI should not be developed or utilized in a way that violates human rights and freedom.
    ② Protection of Privacy
    • The privacy of individuals should be protected throughout the entire process of AI development and utilization.
    • Efforts should be made to minimize the misuse and abuse of personal information throughout the entire AI system lifecycle.
    ③ Respect for Diversity
    • Throughout every stage of AI development and utilization, the diversity and representativeness of AI users should be ensured, and bias and discrimination based on personal characteristics such as gender, age, disability, region, race, religion, and nationality should be minimized. Commercialized AI systems should be generally applicable to all individuals.
    • The socially disadvantaged and vulnerable should be guaranteed access to AI technologies and services. Efforts should be made to ensure equal distribution of AI benefits to all people rather than to certain groups.
    ④ Prevention of Harm
    • AI should not be used for the purpose of inflicting direct or indirect harm on humans.
    • Efforts should be made to develop measures to handle risks and negative consequences associated with AI.
    ⑤ Public Good
    • AI should be utilized not only for the pursuit of personal happiness but also for the public good of society and the common benefit of humanity.
    • AI should be utilized toward creating positive social change.
    • Diverse education programs should be implemented to maximize the benefits and minimize the negative impacts of AI.
    ⑥ Solidarity
    • AI should be utilized in a way that helps maintain solidarity among various groups and takes into account the needs of future generations.
    • Diverse stakeholders should be provided with equitable participation opportunities throughout the entire AI system lifecycle.
    • The international community should make concerted efforts for the ethical development and utilization of AI.
    ⑦ Data Management
    • Data, such as personal information, should not be used for purposes other than its intended use.
    • Throughout the entire process of data collection and utilization, data quality and risks should be carefully managed so as to minimize data bias.
    ⑧ Accountability
    • Responsible parties should be clearly defined during the process of AI development and utilization to minimize potential damage.
    • Roles and responsibilities should be clearly defined among the designers, developers, service providers, and users of AI.
    ⑨ Safety
    • Throughout the entire process of AI development and utilization, efforts should be made to prevent potential risks and ensure safety.
    • Efforts should be made to provide functions that allow users to control the operation of the AI system when clear errors or infringements occur during AI use.
    ⑩ Transparency
    • In order to build social trust, efforts should be made, while taking into account possible conflicts with other principles, to improve the transparency and explainability of AI to a level suitable for the use cases of the AI system.
    • When providing AI-powered products or services, the AI provider should inform users in advance about what the AI does and what risks may arise during its use.