Responsible AI: redefining the boundaries of technology regulation

Keywords
Artificial intelligence; AI

Abstract
AI is a driving force in today's world. Its immense capabilities, however, come with myriad risks. Autonomous weapons, privacy infringement, biometric surveillance systems and workforce replacement, to name but a few, pose grave threats to humanity's core values. To minimize those risks, experts have identified certain principles that AI systems should uphold. Transparency, robustness, safety, data governance and human oversight are among the most commonly cited of these principles. AI systems with such characteristics could earn the designation of Responsible AI (RAI) systems.
To enforce these principles, the European Union (EU) has enacted the AI Act (AIA). This Regulation categorizes AI systems according to the risks they pose. Systems deemed to pose unacceptable risks are banned from being put into service altogether. High-risk systems must comply with extensive requirements, including human oversight, transparency obligations, built-in fail-safes and record keeping. From a technical standpoint, the International Organization for Standardization (ISO) has issued two standards related to AI safety, while the National Institute of Standards and Technology (NIST) has published its own AI risk management playbook.
The AIA is a truly ambitious piece of legislation, attempting to regulate what might prove to be the final technological frontier. Its impact, however, is greatly undermined by the EU's lack of significant AI development programs to date. Most AI-related research and development is undertaken by US-based and Chinese companies, whose attitude towards the compliance requirements set by the AI Act largely amounts to ignoring them.
Moreover, both the USA and the PRC seem more focused on becoming the leading AI force than on becoming the leading force in Responsible AI. This attitude is evident in the latest decisions of the US Administration, and it is rapidly gaining ground amongst the leading AI companies as well. Fuelled by China's latest advancements in the sector, the USA's reaction tends to establish a Cold War-like arms race environment. Such a climate is not hospitable to regulatory attempts; in fact, the latest trend in the USA is to deregulate AI research in an attempt to maximize the sector's agility. The allocation of astronomical funds for AI research, coupled with a sense of combative urgency, creates the perfect conditions for the development of AI systems that will in no way be responsible.
The ineffectiveness of regional legislation such as the AIA, combined with the international power dynamics at play, underlines the importance of creating an international AI governance framework. Such a framework could lead to the adoption of common AI safety practices, slow the pace of innovation, defuse the arms race dynamic and, hopefully, better safeguard the goal of Responsible AI development.
This plan is not without weak spots. Its implementation will require great political will from all the countries involved, an ingredient that, at the moment, seems to be missing. In addition, developments during this millennium have called similar international initiatives into question. The outbreak of major wars and military conflicts, global public health breakdowns and the management of the existential climate crisis have cast doubt on the effectiveness of international organizations and multinational cooperation schemes. Last but not least, the question of the technical feasibility of AI regulation remains. Many of the challenges presented by AI, such as the alignment and control problems, the opacity and untraceability of systems, and the demand for explainable systems subject to human oversight, may not admit sustainable technical solutions, no matter how robust the international regulations we vote into effect and follow religiously.


