Knowledge Base

1. Unacceptable Risk AI Systems

AI systems considered to pose an unacceptable threat to people and society are banned outright.

This includes:

  • Behavioral manipulation systems that influence users’ choices below the level of conscious awareness.
  • Real-time mass surveillance technologies, such as facial recognition systems in public spaces.
  • Social scoring systems, modeled on China’s “social credit” approach, that rate citizens based on their behavior.

2. High-Risk AI Systems

AI systems that have a potentially serious impact on fundamental rights, health, and safety must meet specific legal requirements. This includes:

  • AI in critical systems such as medicine, transport, education, law enforcement, and justice.
  • Systems that determine access to services, e.g., credit scoring systems or AI used in recruitment.

These systems are required to:

  • Undergo conformity assessment before deployment: systems must pass detailed safety, accuracy, and data-protection compliance testing before entering the market.
  • Be monitored and supervised: High-risk systems are subject to ongoing supervision and must be monitored for potential threats.

3. Limited Risk AI Systems

These systems may pose some risk, but it is considered less significant. For this category, the AI Act imposes transparency obligations:

  • Chatbots and virtual assistants interacting with users must clearly disclose that the user is communicating with an AI system.
  • Users must receive transparent and accurate information about the system’s AI nature and capabilities.

4. Minimal or Negligible Risk AI Systems

Most AI systems in this category are not directly regulated by the AI Act. Examples include recommendation systems in e-commerce or filters on social media.
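
The four-tier scheme above can be sketched as a simple lookup. The tier names follow the AI Act, but the example systems and the mapping below are illustrative assumptions, not legal classifications:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment and ongoing monitoring required"
    LIMITED = "transparency obligations apply"
    MINIMAL = "largely unregulated by the AI Act"

# Illustrative mapping only -- real classification requires legal analysis.
EXAMPLE_SYSTEMS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "real-time facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "e-commerce recommendation engine": RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    """Return the sketched regulatory consequence for a known example."""
    tier = EXAMPLE_SYSTEMS[system]
    return f"{system}: {tier.name} risk -- {tier.value}"

for name in EXAMPLE_SYSTEMS:
    print(obligations(name))
```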

In the context of Artificial Intelligence (AI), the International Organization for Standardization (ISO) has developed a series of standards aimed at harmonizing principles for AI system design, deployment, and evaluation. These standards are designed to help companies and organizations create, evaluate, and apply AI safely, transparently, and in compliance with regulations.

Here are the most important ISO standards concerning AI:

1. ISO/IEC 22989 – Artificial Intelligence Concepts and Terminology
Scope: Defines fundamental concepts, terminology, and theoretical frameworks related to AI. This standard is essential for understanding and communicating about AI, especially in cross-industry and international projects.
Purpose: Facilitates consistent communication among different stakeholders, ensuring a uniform understanding of AI definitions and key concepts.

2. ISO/IEC 23053 – Framework for Artificial Intelligence (AI) Systems Using Machine Learning
Scope: Develops frameworks and guidelines for building AI systems using machine learning (ML). Describes the basic stages of designing, training, and deploying ML models.
Purpose: Helps organizations apply best practices in creating and implementing ML systems, particularly in data management and processing.

3. ISO/IEC 23894 – Artificial Intelligence Risk Management
Scope: Focuses on managing risks associated with AI system implementation and usage. Describes processes for identifying, assessing, and mitigating technological, ethical, and legal risks in AI systems.
Purpose: Enables organizations to implement appropriate risk management mechanisms, which is crucial, especially in the context of EU regulations such as the AI Act.
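
As a rough illustration of the identify–assess–mitigate cycle this standard describes, a minimal risk register might score each risk by likelihood and impact. The 1–5 scale and the mitigation threshold below are arbitrary assumptions for the sketch, not values taken from ISO/IEC 23894:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) .. 5 (severe)   -- assumed scale

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def prioritize(risks, threshold=12):
    """Return risks whose score meets the (assumed) mitigation threshold,
    highest score first -- these would enter the mitigation step."""
    return sorted(
        (r for r in risks if r.score >= threshold),
        key=lambda r: r.score,
        reverse=True,
    )

register = [
    Risk("training data contains discriminatory bias", 4, 4),  # score 16
    Risk("model drift after deployment", 3, 3),                # score 9
    Risk("adversarial input causes unsafe output", 2, 5),      # score 10
]

for risk in prioritize(register):
    print(f"{risk.name}: score {risk.score}")
```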

4. ISO/IEC TR 24027 – Bias in AI Systems and AI-Aided Decision Making
Scope: This standard describes methods for identifying, assessing, and reducing bias in AI systems and AI-supported decision-making. It outlines how to monitor and improve algorithms to reduce technological bias and avoid discriminatory outcomes.
Purpose: Increases accountability by helping organizations build fairer and more transparent AI systems.
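
One way to make the standard’s goal concrete is a demographic parity check: the gap in positive-outcome rates between groups. This is a generic fairness metric used for illustration, not a formula taken from ISO/IEC TR 24027, and the decision data below are hypothetical:

```python
def positive_rate(decisions):
    """Fraction of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_diff(group_a, group_b):
    """Absolute gap in positive-decision rates between two groups.
    0.0 means parity; larger values indicate potential bias."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan decisions (1 = approved) for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6/8 = 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 3/8 = 37.5% approved

gap = demographic_parity_diff(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

A monitoring pipeline would recompute such metrics on live decisions and flag drifts beyond an agreed tolerance.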

5. ISO/IEC 24029 – Assessment of the Robustness of Neural Networks
Scope: This standard focuses on methods for assessing the robustness of neural networks, particularly against attacks and unpredictable behaviors. It provides tools for testing neural networks for their resilience to disruptions and data changes.
Purpose: Ensures high quality and operational stability of neural networks in critical applications, such as autonomous systems or medicine.
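
A heavily simplified check in the spirit of this standard: perturb inputs with bounded random noise and measure how often the model’s prediction stays the same. The toy threshold classifier and noise level are assumptions for illustration; real assessments under ISO/IEC 24029 apply formal and statistical methods to actual neural networks:

```python
import random

def model(x):
    """Toy stand-in for a trained network: classify by a fixed threshold."""
    return 1 if x >= 0.5 else 0

def robustness_score(inputs, epsilon=0.05, trials=100, seed=0):
    """Fraction of noisy trials in which the prediction is unchanged.
    epsilon bounds the perturbation; 1.0 means fully stable on this set."""
    rng = random.Random(seed)
    stable = total = 0
    for x in inputs:
        base = model(x)
        for _ in range(trials):
            noisy = x + rng.uniform(-epsilon, epsilon)
            stable += (model(noisy) == base)
            total += 1
    return stable / total

# Points far from the decision boundary are stable; 0.52 is fragile.
print(robustness_score([0.1, 0.9, 0.52]))
```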

6. ISO/IEC TR 24028 – Overview of Trustworthiness in Artificial Intelligence
Scope: This standard provides guidelines for building trust in AI systems. It describes how to design AI systems that are transparent, responsible, and trustworthy, ensuring appropriate mechanisms for explaining decisions made by AI.
Purpose: Supports organizations in building systems that gain the trust of users and stakeholders by ensuring explainability, accountability, and safety.

7. ISO/IEC TR 24030 – Use Cases and Applications of AI
Scope: Provides an overview of various AI use cases and their potential applications in different sectors of the economy, such as industry, healthcare, finance, or transportation.
Purpose: Helps companies and organizations assess how AI can be integrated into their processes and what benefits and challenges are associated with implementing AI systems.

8. ISO/IEC JTC 1/SC 42 – Committee on Artificial Intelligence
Scope: This is the joint ISO/IEC technical committee that develops broad guidelines, standards, and regulations related to AI and related technologies. The committee focuses on key aspects such as ethics, trust, safety, privacy, and AI system interoperability.
Purpose: To create international standards that facilitate the safe and sustainable deployment of AI.

9. ISO/IEC TR 24372 – Overview of Computational Approaches for AI Systems
Scope: This standard focuses on describing various computational approaches used in AI systems, including algorithms and methods employed in the machine learning process.
Purpose: Helps in selecting appropriate computational methods that best fit a given AI system application.

10. ISO/IEC 25012 – Data Quality Model
Scope: While not specific to AI, this standard addresses data quality, which is crucial in the context of artificial intelligence. It includes guidelines on the quality of data used for training AI models, such as completeness, accuracy, consistency, and data currency.
Purpose: Ensures that the data used by AI algorithms are properly processed, contain minimal errors, and are representative of the problem domain, helping to prevent technological bias.
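
Characteristics such as completeness and consistency can be checked mechanically before training. The record schema and range rule below are illustrative assumptions for the sketch, not requirements taken from ISO/IEC 25012:

```python
REQUIRED_FIELDS = ("age", "income", "label")  # assumed training schema

def completeness(records):
    """Share of records with every required field present and non-None."""
    complete = sum(
        all(r.get(f) is not None for f in REQUIRED_FIELDS) for r in records
    )
    return complete / len(records)

def consistency_errors(records):
    """Indices of records violating a simple range rule (assumed, not ISO)."""
    bad = []
    for i, r in enumerate(records):
        age = r.get("age")
        if age is not None and not (0 <= age <= 120):
            bad.append(i)
    return bad

records = [
    {"age": 34, "income": 52000, "label": 1},
    {"age": None, "income": 61000, "label": 0},  # incomplete
    {"age": 999, "income": 48000, "label": 1},   # inconsistent
]

print(f"completeness: {completeness(records):.2f}")          # 0.67
print(f"inconsistent record indices: {consistency_errors(records)}")  # [2]
```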