Many of us presumably watched Bonanza when we were younger. Bonanza is an American Western TV series set in the Wild West of the 1860s. It wasn’t an easy life – lawlessness reigned, and justice was reserved for the strong.
However, in 2024, it’s a different story – the laws are in place and being effectively enforced. The development and use of artificial intelligence is no longer a lawless environment. Existing regulations remain in place, and now a new Artificial Intelligence Regulation has entered into force, carrying hefty fines of up to €35 million or up to 7% of a company’s global turnover. So, the answer to the question in the headline is already a pre-emptive ‘no’.
How will the new AI regulation affect everyone?
Roughly speaking, the AI Regulation does two things: it divides artificial intelligence systems into four classes and then specifies the obligations that must be met for each class. These obligations matter primarily for developers of artificial intelligence, but to some extent also for users.
Artificial intelligence systems are divided into classes according to their risks:
Examples of systems that pose an unacceptable risk include assigning a social score to citizens (as is done in China, for example) and real-time remote biometric identification of people by law enforcement (for instance, with cameras). Such activities will be banned as they unduly infringe on people’s privacy. In the case of remote identification, however, there was a long debate about whether it was more important to facilitate the fight against terrorism or to protect people’s privacy. The latter won.
High-risk systems include physical products in which artificial intelligence serves as a safety component, such as self-driving cars, as well as AI programs designed for use in a specific field, such as education or healthcare. For example, this includes a system that uses AI to grade students or diagnose patients. The Regulation imposes many obligations on such AI programs, for both developers and deployers: registering the system, implementing a risk and quality management system, ensuring the quality of training data, logging the system’s activities, and so on.
General-purpose systems are programs that are less risky and not designed for a specific use; ChatGPT is one example. The obligations here are significantly fewer, but the developer must share technical information about the system’s use.
Finally, there are systems subject to transparency obligations – for example, chatbots or deepfake generators. The main obligation is to inform. If a bank uses chatbots to communicate with its customers, for instance, customers must be informed of this at the beginning of the conversation. Beyond that, no particular risk is associated with this practice.
There are, in fact, more obligations and areas, but in the interests of brevity, I will not list them all here.
What else should you bear in mind?
It’s important to remember that, alongside the new Regulation, the development and use of AI remain subject to obligations under existing legislation, even where those laws are not specific to artificial intelligence.
In Estonia, as elsewhere in the world, copyright protects original works and the rights of their authors. Articles, programs, visuals, books, images, videos, and even this newsletter, for example, are protected. You must have the author’s permission to incorporate someone’s work into an artificial intelligence program. Developers naturally want to feed as much material as possible into an AI program so that it can evolve; in doing so, however, it must not be forgotten that the author’s permission is required. Developers often forget this, which is why several lawsuits are under way in America. The New York Times, for one, is in dispute with artificial intelligence developers over the unauthorised use of its newspaper articles.
Another critical issue is the protection of personal data. Processing personal data, such as a person’s name or picture, requires a specific legal basis, such as consent or legitimate interest. An artificial intelligence developer therefore cannot indiscriminately take people’s personal data and process it in the name of developing a program. The legal basis for doing so needs to be thought through and put on paper.
In conclusion
Artificial intelligence is not the Wild West of the old American TV series. Developers and users alike should know their rights and obligations. The fines under the new Regulation are hefty, and it is cheaper to comply with the law than to break it.
If you have any questions, contact Henri Ratnik, Co-Head of IT, IP, and Data Protection and an attorney at law at WIDEN (henri.ratnik@widen.legal).
We are WIDEN, a Baltic law firm offering its clients a full range of services and priding itself on legal advice oriented towards the client experience.
They trust us