The AI Act is grounded in fundamental rights and at the same time aims to protect them, while fulfilling the safety requirement it prescribes throughout the entire lifecycle of AI systems. Based on a risk classification, the AI Act sets out requirements that each risk class must meet for an AI system to be legitimately offered on the EU market and considered safe. However, despite their classification, some minimal-risk AI systems may still pose risks to fundamental rights and user safety, and therefore require attention. In this paper we explore the assumption that, although the AI Act can find broad ex litteris coverage, the significance of this applicability is limited.