
@anderzzz
Created March 25, 2024 12:36
Article Nr. Article Abstract
1 The text is an excerpt from the EU AI Act, outlining its purpose and key regulatory provisions for the use, implementation, and monitoring of artificial intelligence (AI) systems within the Union, with an emphasis on safety, fundamental rights, and innovation support.
2 The text outlines the scope and applicability of the EU AI Act, detailing the types of entities and activities it applies to, as well as specific exceptions and conditions under which it does not apply. It also mentions how this regulation interacts with other Union laws and regulations.
3 The text is a comprehensive list of definitions related to the EU AI Act, providing detailed explanations of terms and concepts associated with AI systems, their operation, regulation, and potential risks. It would be useful for anyone seeking to understand the legal and technical language used in the context of AI regulation within the European Union.
4 This text is a provision from the EU AI Act, outlining the responsibility of AI system providers and deployers to ensure a sufficient level of AI literacy among their staff and other individuals involved in the operation and use of these systems.
5 The text is a detailed enumeration of prohibited AI practices according to the EU AI Act. It specifically outlines restrictions on the use of AI in various contexts, such as manipulation of behavior, exploitation of vulnerabilities, social scoring, risk assessments, facial recognition, emotion inference, biometric categorization, and real-time remote biometric identification systems, particularly in law enforcement. It also discusses the conditions and procedures for exceptions to these prohibitions.
6 The text provides detailed rules and conditions for classifying high-risk AI systems according to the EU AI Act. It includes criteria for high-risk designation, exceptions, obligations for providers, and provisions for future amendments to these rules.
7 The text outlines the conditions and criteria under which the European Commission can amend Annex III of the EU AI Act, specifically regarding the addition, modification, or removal of high-risk AI systems. It also details the factors to consider when assessing the potential risks and benefits of these AI systems.
8 The text discusses the compliance requirements for high-risk AI systems under the EU AI Act, including the responsibilities of providers to ensure their products meet these requirements and the integration of testing and reporting processes.
9 This text details the requirements and procedures for establishing and maintaining a risk management system for high-risk AI systems, as per the EU AI Act. It outlines the steps for risk identification, evaluation, and mitigation, as well as the testing procedures for these systems.
10 The provided text is a detailed section from the EU AI Act, specifically focusing on the data and data governance requirements for high-risk AI systems. It outlines the quality criteria for training, validation, and testing data sets, data governance practices, considerations for bias detection and correction, and conditions for processing special categories of personal data.
11 The text details the requirements for the technical documentation of high-risk AI systems according to the EU AI Act, including the need for such documentation before the system is put on the market, the information it should contain, and the provision for SMEs to provide this information in a simplified manner. It also mentions the Commission's role in amending these requirements in light of technical progress.
12 This text is a section from the EU AI Act that outlines the requirements for record-keeping and logging capabilities for high-risk AI systems, including what events need to be recorded and the minimum information that must be logged.
13 This text is a section from the EU AI Act, detailing the transparency and information provision requirements for high-risk AI systems. It outlines the necessary design and development considerations, instructions for use, and specific information that must be included for deployers, such as system characteristics, performance limitations, human oversight measures, and data handling mechanisms.
14 The provided text is a detailed section from the EU AI Act, specifically focusing on the requirements and guidelines for human oversight of high-risk AI systems. It outlines the responsibilities of both the AI system provider and the deployer, and the measures to ensure safety, health, and fundamental rights.
15 The text discusses the requirements and guidelines set by the EU AI Act for the design and development of high-risk AI systems, focusing on their accuracy, robustness, and cybersecurity. It also outlines the measures to be taken to ensure these aspects, including technical solutions to address AI-specific vulnerabilities.
16 The text outlines the obligations of providers of high-risk AI systems under the EU AI Act, detailing various requirements related to compliance, quality management, documentation, registration, corrective actions, and accessibility.
17 The text is a detailed enumeration of the requirements and procedures that providers of high-risk AI systems must follow according to the EU AI Act. It includes aspects such as regulatory compliance, design control, data management, risk management, post-market monitoring, and an accountability framework, among others.
18 This text details the documentation keeping requirements for providers of high-risk AI systems under the EU AI Act, including the types of documents to be kept, the duration for keeping them, and the conditions under which they should be made available to national competent authorities.
19 This text discusses the requirements for providers of high-risk AI systems, particularly in the European Union, to maintain automatically generated logs, including the duration for which these logs should be kept and specific stipulations for financial institutions.
20 This text outlines the responsibilities and corrective actions required of providers of high-risk AI systems under the EU AI Act, particularly when these systems are not in compliance with the regulation or present a risk. It also details the necessary communication with distributors, deployers, representatives, importers, and market surveillance authorities.
21 This text outlines the obligations of providers of high-risk AI systems in the EU to cooperate with competent authorities, including providing necessary information and documentation to demonstrate compliance with requirements, granting access to system logs, and adhering to confidentiality obligations.
22 This text outlines the responsibilities and obligations of authorized representatives of providers of high-risk AI systems, particularly those established in third countries, under the EU AI Act. It details the tasks these representatives must perform to ensure compliance with the regulation, including verification of conformity, cooperation with authorities, and termination of mandate under certain conditions.
23 The text outlines the obligations and responsibilities of importers under the EU AI Act, particularly in relation to high-risk AI systems. It details the steps importers must take to ensure conformity with the regulation, their duties in case of non-conformity, and their cooperation with national competent authorities.
24 The text outlines the obligations and responsibilities of distributors under the EU AI Act, particularly in relation to high-risk AI systems. It details the requirements for compliance, the actions to be taken in case of non-compliance, and the cooperation expected with national competent authorities.
25 The text outlines the responsibilities of various parties involved in the AI value chain under the EU AI Act. It details the conditions under which a party is considered a provider of a high-risk AI system, the obligations of the initial provider, the role of the product manufacturer, and the agreement between the provider of a high-risk AI system and the third party supplying components or services. It also addresses the protection of intellectual property rights and confidential business information.
26 The text is a detailed enumeration of obligations and guidelines for deployers of high-risk AI systems, as outlined in the EU AI Act. It covers aspects such as usage, legal authorizations, data handling, cooperation with authorities, human oversight, and reporting requirements.
27 This text is a detailed legal provision from the EU AI Act, outlining the requirements and procedures for conducting a fundamental rights impact assessment prior to deploying high-risk AI systems. It includes specifics on what the assessment should cover, when it should be updated, and how the results should be reported.
28 This text outlines the roles, responsibilities, and operational guidelines for notifying authorities in the context of the EU AI Act, including their establishment, conflict of interest management, confidentiality, and personnel competence requirements.
29 The text details the process and requirements for conformity assessment bodies to apply for notification under the EU AI Act, including the necessary documentation, the use of existing designations, and the need for continuous compliance monitoring.
30 The text outlines the notification procedure in the EU AI Act, detailing how notifying authorities interact with conformity assessment bodies, the Commission, and other Member States. It discusses the requirements for notifications, the process for raising objections, and the steps taken when objections occur.
31 This text outlines the requirements and responsibilities of notified bodies under the EU AI Act, including their establishment, competence, independence, impartiality, confidentiality, and liability insurance. It also details their involvement in conformity assessment activities and coordination with European standardisation organisations.
32 This text discusses the conditions under which a conformity assessment body is presumed to comply with certain requirements set out in Article 31 of the EU AI Act, specifically in relation to its conformity with relevant harmonised standards.
33 This text outlines the regulations and responsibilities of notified bodies in the EU AI Act, particularly when they subcontract tasks or use subsidiaries, including the requirement for these entities to meet certain standards, the need for transparency, and the retention of relevant documents.
34 The text outlines the operational obligations of notified bodies in the context of the EU AI Act, detailing their responsibilities in verifying the conformity of high-risk AI systems, considering provider characteristics, and sharing relevant documentation with the notifying authority.
35 The text discusses the process by which the Commission assigns identification numbers to notified bodies under the EU AI Act, and how it maintains and makes available a public list of these bodies, their numbers, and their notified activities.
36 This text details the procedures and responsibilities of notifying authorities and notified bodies under the EU AI Act, specifically in cases of changes to notifications, cessation of activities, failure to meet requirements, and the restriction, suspension, or withdrawal of a designation. It also outlines the conditions under which certificates remain valid in these scenarios.
37 This text discusses the procedures and responsibilities of the European Commission in investigating the competence of notified bodies under the EU AI Act, including the handling of sensitive information and the potential corrective measures to be taken if a body does not meet requirements.
38 This text discusses the coordination of notified bodies in relation to high-risk AI systems under the EU AI Act, detailing the responsibilities of the Commission and notifying authorities in ensuring cooperation, participation, and knowledge exchange.
39 This text discusses the conditions under which conformity assessment bodies from third countries can be authorized to perform activities of notified bodies under the EU AI Act.
40 The text discusses the regulations and standards for high-risk AI systems in the European Union, including the process of standardisation, the role of the Commission in issuing standardisation requests, and the objectives of participants in the standardisation process. It also touches on the importance of legal certainty, competitiveness, and global cooperation in AI standardisation.
41 This text details the process and conditions under which the European Commission can adopt common specifications for requirements and obligations related to AI systems, as outlined in the EU AI Act. It also discusses the role of harmonised standards, the consequences of non-compliance, and the procedure for amendments if a Member State finds the common specification insufficient.
42 This text discusses the conditions under which high-risk AI systems are presumed to be in compliance with certain requirements, according to the EU AI Act. It covers both the context of use and cybersecurity certification.
43 This text outlines the conformity assessment procedures for high-risk AI systems as per the EU AI Act. It details the conditions under which different procedures are applied, the role of notified bodies, the requirements for substantial modifications, and the Commission's authority to update or amend these procedures.
44 The text discusses the rules and procedures related to the issuance, validity, extension, and potential suspension or withdrawal of certificates for AI systems by notified bodies, as per the EU AI Act. It also mentions an appeal procedure against the decisions of these bodies.
45 This text outlines the information obligations of notified bodies under the EU AI Act, detailing what they must report to the notifying authority and to other notified bodies, particularly regarding Union technical documentation assessment certificates, quality management system approvals, and conformity assessment activities.
46 The text outlines the conditions and procedures under which exceptions can be made to the conformity assessment procedure for high-risk AI systems in the EU, including the roles of market surveillance authorities, law enforcement, and the Commission in granting, objecting to, or withdrawing such authorisations.
47 The text details the requirements and procedures for providers of high-risk AI systems in the EU to draw up, maintain, and update an EU declaration of conformity, which asserts that their AI system meets specific regulations. It also outlines the conditions under which the declaration should be submitted to national authorities.
48 The text discusses the rules and regulations regarding the application of the CE marking on high-risk AI systems under the EU AI Act, including its visibility, placement, digital application, and the inclusion of the identification number of the notified body responsible for conformity assessment procedures.
49 The text provides detailed instructions on the registration process for high-risk AI systems in the EU, specifying different procedures based on the risk level and application area of the AI system, and the type of entity deploying it. It also outlines the information required for registration and who has access to this information.
50 The text outlines the transparency obligations for providers and deployers of certain AI systems as per the EU AI Act. It details the requirements for AI systems interacting with humans, generating synthetic content, recognizing emotions or biometrics, and generating or manipulating deep fake content. It also discusses the exceptions to these obligations, the timing and manner of information disclosure, and the role of the AI Office and the Commission in implementing these obligations.
51 This text is a legal excerpt from the EU AI Act, detailing the criteria for classifying a general-purpose AI model as one with systemic risk, including the conditions under which such a model is presumed to have high impact capabilities and the Commission's role in adjusting these criteria in response to technological developments.
52 The text describes the procedures in the EU AI Act for handling general-purpose AI models that present systemic risks, including the notification process, the possibility for providers to argue against the risk classification, the Commission's power to designate models as risky, and the process for reassessment of risk designation. It also mentions the maintenance of a public list of such models, respecting intellectual property rights and confidential business information.
53 This text details the obligations and regulations for providers of general-purpose AI models according to the EU AI Act. It includes requirements for documentation, cooperation with authorities, compliance with copyright laws, and the handling of trade secrets and confidential information.
54 The provided text is a section from the EU AI Act, detailing the obligations and responsibilities of authorized representatives of providers of general-purpose AI models, particularly those established in third countries. It outlines the procedures for appointing such representatives, their tasks, and the conditions under which they operate, including their interactions with the AI Office and national competent authorities.
55 The text outlines the obligations for providers of general-purpose AI models with systemic risk under the EU AI Act, detailing requirements for model evaluation, risk assessment, incident reporting, cybersecurity, and compliance demonstration. It also addresses the treatment of confidential information.
56 This text is a detailed description of the procedures and responsibilities of the AI Office and the Board in the EU AI Act, specifically regarding the creation, implementation, monitoring, and adaptation of codes of practice for AI models. It also outlines the involvement of various stakeholders in this process.
57 The text outlines the regulations and guidelines for the establishment and operation of AI regulatory sandboxes by Member States under the EU AI Act. It details the roles and responsibilities of competent authorities, the objectives of the sandboxes, and the procedures for testing, supervision, and reporting.
58 This text details the regulations and procedures for the establishment and operation of AI regulatory sandboxes within the European Union, including eligibility criteria, application processes, and conditions for participation. It also discusses the role of these sandboxes in facilitating AI development and testing, while ensuring protection of fundamental rights and societal safety.
59 The text details the conditions under which personal data can be processed in an AI regulatory sandbox for the purpose of developing, training, and testing AI systems in the public interest, according to the EU AI Act. It outlines the specific areas of public interest, data protection measures, risk monitoring and mitigation mechanisms, and documentation and transparency requirements.
60 This text outlines the regulations and conditions under which high-risk AI systems can be tested in real-world conditions outside of AI regulatory sandboxes, as per the EU AI Act. It details the requirements for providers or prospective providers, the role of market surveillance authorities, the protection of vulnerable subjects, and the legal liabilities involved.
61 The text outlines the requirements for obtaining informed consent from participants in real-world testing of AI systems outside regulatory sandboxes, as per the EU AI Act. It details the information that needs to be provided to the subjects, their rights, and the documentation process for consent.
62 The text outlines the measures and actions that Member States and the AI Office should undertake to support providers and deployers, particularly SMEs and start-ups, in complying with the EU AI Act. This includes providing priority access to AI regulatory sandboxes, organizing training activities, facilitating communication, and adjusting fees for conformity assessments.
63 This text discusses specific exemptions and obligations for microenterprises under the EU AI Act, including the conditions under which they can use a simplified quality management system and the requirements they must still meet despite these exemptions.
64 This text discusses the establishment and responsibilities of an AI Office within the European Union, as mandated by the EU AI Act.
65 The text outlines the establishment, structure, and operational guidelines of the European Artificial Intelligence Board, as per the EU AI Act. It details the composition of the board, the roles and responsibilities of its members, the creation of sub-groups, and the procedural rules for its functioning.
66 This text outlines the responsibilities and tasks of the Board in the EU AI Act, detailing its advisory and supportive roles in the consistent and effective application of the regulation, its contribution to coordination and harmonization, and its involvement in various aspects of AI regulation implementation, including technical expertise, regulatory practices, market surveillance, and international cooperation.
67 The text outlines the establishment, composition, responsibilities, and operational procedures of an advisory forum under the EU AI Act. It details the forum's role in providing technical expertise, its diverse membership, term of office, meeting frequency, and its capacity to produce reports, opinions, and recommendations.
68 This text details the establishment, composition, responsibilities, and operational guidelines of a scientific panel of independent experts, as outlined in the EU AI Act. The panel is designed to support the enforcement of AI regulations, provide advice to the AI Office, and ensure impartiality and objectivity in their tasks.
69 This text discusses the provisions in the EU AI Act that allow Member States to access a pool of experts for support in enforcing the regulation, including the potential for fees, the organization of support activities, and the Commission's role in facilitating access to these experts.
70 The text outlines the responsibilities and requirements of national competent authorities in EU member states, as per the EU AI Act. It details the process of their establishment, their communication with the Commission, resource provisions, obligations, and their role in providing guidance and advice, especially in relation to AI systems.
71 This text describes the establishment, maintenance, and accessibility of an EU database for high-risk AI systems, including the types of data to be entered, who enters it, and who can access it. It also addresses the handling of personal data within the database and the role of the Commission as the controller.
72 The text details the requirements and procedures for post-market monitoring of high-risk AI systems as per the EU AI Act. It discusses the responsibilities of providers in establishing a monitoring system, the collection and analysis of data, the creation of a monitoring plan, and how these rules interact with existing legislation.
73 The text is a detailed enumeration of the reporting obligations and procedures for providers of high-risk AI systems in the EU, particularly in the event of serious incidents. It also outlines the responsibilities of national competent authorities and market surveillance authorities in such scenarios.
74 The text is a detailed explanation of the market surveillance and control of AI systems in the European Union, as outlined in the EU AI Act. It discusses the roles and responsibilities of various authorities, the conditions under which they can access AI system data, and the reporting obligations related to AI system compliance and potential competition law issues.
75 The text discusses the regulatory oversight and compliance enforcement of general-purpose AI systems under the EU AI Act, detailing the roles and responsibilities of the AI Office and national market surveillance authorities in monitoring, evaluating, and ensuring compliance with the Act.
76 The text discusses the role and powers of market surveillance authorities in supervising testing of AI systems in real-world conditions, as per the EU AI Act. It outlines the circumstances under which these authorities can intervene, modify, suspend or terminate testing, and the communication process between authorities in different member states.
77 This text outlines the powers and responsibilities of national public authorities in the European Union in relation to the supervision and enforcement of fundamental rights obligations under Union law, specifically in the context of high-risk AI systems. It details the process for accessing necessary documentation, the requirement for member states to identify and list these authorities, and the procedures for testing AI systems if documentation is insufficient.
78 The text is a detailed legal provision from the EU AI Act, outlining the rules and obligations regarding confidentiality, data protection, and information exchange for various entities involved in the application of the regulation, with a focus on the protection of intellectual property rights, business secrets, public and national security interests, and other sensitive information.
79 The text outlines the procedures at the national level for dealing with AI systems that present a risk, according to the EU AI Act. It details the responsibilities of market surveillance authorities, operators, and other relevant bodies in evaluating, reporting, and taking corrective actions for non-compliant AI systems.
80 The text outlines the procedures and regulations set by the EU AI Act for handling AI systems that are classified as non-high-risk by the provider. It details the responsibilities of the market surveillance authority and the provider, the potential for reclassification to high-risk, the necessary corrective actions, and the penalties for non-compliance.
81 The text outlines the Union safeguard procedure from the EU AI Act, detailing the process and timelines for objections raised by a market surveillance authority of a Member State or the Commission against a measure taken by another market surveillance authority, and the subsequent actions to be taken based on the Commission's evaluation of the measure.
82 The text is a section from the EU AI Act, outlining the procedures and responsibilities of various parties (including AI system operators, market surveillance authorities, and the Commission) when a high-risk AI system, despite being compliant with the regulation, is found to pose a risk to health, safety, fundamental rights, or public interest.
83 This text outlines the procedures and actions taken by the market surveillance authority of a Member State under the EU AI Act when there is formal non-compliance with regulations related to CE markings, EU declarations of conformity, registration in the EU database, and other technical requirements for high-risk AI systems.
84 This text discusses the designation and responsibilities of Union AI testing support structures as per the EU AI Act, including their role in performing certain tasks and providing independent technical or scientific advice.
85 This text outlines the right to lodge a complaint with a market surveillance authority under the EU AI Act, detailing the process and conditions for such complaints and the role they play in the authorities' market surveillance activities.
86 This text discusses the right to explanation of individual decision-making in the context of high-risk AI systems, as outlined in the EU AI Act. It details the conditions under which a person affected by such a decision can demand clear explanations, and also mentions exceptions to this rule.
87 This text discusses the application of a specific EU directive related to the reporting of infringements and the protection of individuals who report such infringements, in the context of the EU AI Act.
88 This text discusses the enforcement responsibilities and powers of the Commission and the AI Office in supervising providers of general-purpose AI models, according to the EU AI Act. It also mentions the potential role of market surveillance authorities in this process.
89 This text outlines the monitoring actions that the AI Office may take to ensure compliance with the EU AI Act by providers of general-purpose AI models, and the process for downstream providers to lodge a complaint about potential infringements of the regulation.
90 This text outlines the process and conditions under which the scientific panel can issue an alert about systemic risks posed by a general-purpose AI model to the AI Office, as per the EU AI Act. It also details the subsequent steps the AI Office and the Commission may take upon receiving such an alert.
91 The text outlines the powers and procedures under the EU AI Act for the Commission to request documentation and information from providers of general-purpose AI models, including the conditions, legal basis, and potential fines for non-compliance.
92 The text is a section from the EU AI Act, detailing the powers and procedures for conducting evaluations of general-purpose AI models, including the circumstances under which evaluations may be conducted, the potential involvement of independent experts, the process for requesting access to AI models, and the legal obligations of AI providers.
93 This text outlines the powers of the Commission under the EU AI Act to request measures from AI providers for compliance, risk mitigation, and product restriction, and the process of structured dialogue before such measures are requested.
94 This text discusses the application of a specific regulation from the EU AI Act to providers of a general-purpose AI model, highlighting the procedural rights of these economic operators.
95 This text discusses the encouragement and facilitation by the AI Office and Member States of the creation of codes of conduct for the voluntary application of specific requirements to AI systems. These codes, which can be developed by various stakeholders, aim to foster ethical, sustainable, and inclusive practices in AI development and use, while considering the specific interests and needs of SMEs and start-ups.
96 This text, from the EU AI Act, focuses on the Commission's responsibility to develop guidelines for the practical implementation of the regulation, including various requirements, obligations, prohibited practices, transparency obligations, and the definition of an AI system. It also discusses the need to consider the state of the art on AI, relevant standards, and the needs of SMEs, local public authorities, and sectors most affected by the regulation.
97 The text is a legal document from the EU AI Act, detailing the conditions and procedures for the delegation of power to the Commission to adopt delegated acts, including the duration, potential revocation, consultation process, notification requirements, and the process for these acts to enter into force.
98 This text outlines the procedural rules for a committee established by the EU AI Act, referencing its formation, function, and the application of specific regulations.
99 The text outlines the penalties and enforcement measures for non-compliance with the EU AI Act, including the process for determining fines, the factors considered in assessing penalties, and the obligations of Member States in reporting and implementing these measures. It also discusses the specific penalties for different types of infringements and considerations for SMEs and start-ups.
100 This text outlines the process and considerations for imposing administrative fines on Union institutions, bodies, offices, and agencies under the EU AI Act, including the factors taken into account when deciding the amount of the fine, the maximum fine amounts for different types of non-compliance, and the rights of the parties involved in the proceedings.
101 The text outlines the conditions and procedures under which the European Commission may impose fines on providers of general-purpose AI models, according to the EU AI Act. It details the potential infringements that could lead to fines, the factors considered in determining the fine amount, and the rights of the providers in such cases.
102 This text is an amendment to a regulation, specifically Regulation (EC) No 300/2008, regarding the adoption of measures related to technical specifications and procedures for approval and use of security equipment concerning Artificial Intelligence systems. It references the requirements set out in another regulation, the EU AI Act.
103 This text is an amendment to a specific article in Regulation (EU) No 167/2013, detailing the addition of a subparagraph related to the adoption of delegated acts concerning artificial intelligence systems as safety components, in accordance with the requirements of another regulation.
104 This text is an amendment to a specific article in Regulation (EU) No 168/2013, detailing the addition of a new subparagraph related to the adoption of delegated acts concerning Artificial Intelligence systems, particularly those that are safety components. It references requirements from another regulation and contains placeholders for specific regulation numbers to be inserted.
105 This text is an amendment to Directive 2014/90/EU, detailing how the Commission should consider certain requirements when dealing with Artificial Intelligence systems that are safety components, as per a specific Regulation of the European Parliament and Council.
106 The text is an amendment to a directive from the EU AI Act, specifically adding a paragraph that outlines the requirement for considering certain regulations when adopting delegated and implementing acts related to Artificial Intelligence systems that are safety components.
107 This text is an amendment to Regulation (EU) 2018/858, adding a provision about the adoption of delegated acts related to Artificial Intelligence systems as safety components, and the need to consider certain requirements from another regulation. It also includes placeholders for specific regulation details to be inserted.
108 The text details amendments to Regulation (EU) 2018/1139, specifically adding paragraphs to various articles that mandate the consideration of certain requirements from another regulation (Regulation (EU) 2024/…) when adopting acts related to Artificial Intelligence systems that are safety components.
109 This text is an amendment to a specific article in Regulation (EU) 2019/2144, which adds a paragraph about the adoption of implementing acts related to artificial intelligence systems as safety components, and the need to consider certain requirements from another regulation. It also includes instructions for inserting the number of a specific regulation.
110 This text discusses an amendment to Directive (EU) 2020/1828, introducing a new point related to the regulation of artificial intelligence, known as the Artificial Intelligence Act, within the European Union's legislative framework.
111 The text discusses the compliance requirements and deadlines for different types of AI systems under the EU AI Act, including those already in use, high-risk systems, and general-purpose AI models, in relation to the Act's enforcement date.
112 This text is a detailed outline of the evaluation and review procedures for the EU AI Act, including the frequency of assessments, the areas to be evaluated, the reporting process, and the potential for amendments based on these evaluations. It also discusses the roles of various entities in this process.
113 The text is a legal provision from the EU AI Act, detailing the timeline for the enforcement of different chapters and sections of the regulation after its official publication. It also states the binding and direct applicability of the regulation in all Member States.