On October 7, Ukraine presented its regulatory roadmap for artificial intelligence. The initiative, detailed on the Ministry of Digital Transformation’s official website, is designed to help local companies prepare for the adoption of a law comparable to the European Union’s AI Act.
It also places a strong emphasis on educating citizens about safeguarding themselves against potential AI risks.
Bottom-Up Approach for Proactive Business Readiness
The core philosophy of the roadmap is a bottom-up approach: a gradual progression from lighter to stricter measures. It proposes giving businesses the tools to prepare proactively for future regulatory requirements before any laws are officially enacted.
To ensure a smooth transition, the roadmap outlines a preliminary period during which companies can adapt to the prospective rules over the next two to three years. Deputy Minister of Digital Transformation Oleksandr Borniakov explains the vision.
He stated, “We plan to create a culture of business self-regulation through various means.”
One key aspect involves encouraging companies to sign voluntary codes of conduct, attesting to their ethical use of AI. Additionally, a White Paper is in the works to familiarize businesses with the proposed approach, timing, and stages of regulatory implementation.
Drafting Ukrainian AI Legislation and Learning from EU’s AI Act
Anticipating the need for a comprehensive legal framework, the roadmap envisions a draft of Ukrainian AI legislation being unveiled in 2024. Importantly, this timing aligns with, rather than precedes, the EU’s AI Act, allowing the national law to incorporate the latest insights from the European regulations.
Referencing the European Parliament’s passage of the EU AI Act in June, the Ukrainian roadmap takes cues from its European counterpart. The EU AI Act lays down stringent rules, prohibiting certain AI services and products and imposing limitations on others.
Strategic Prohibitions and Permissions
Notable restrictions in the EU AI Act include bans on biometric surveillance, social scoring systems, predictive policing, “emotion recognition,” and untargeted facial recognition systems. Generative AI models such as OpenAI’s ChatGPT and Google’s Bard receive conditional approval: they may operate provided AI-generated outputs are clearly labeled.