11/6/2023 - technology-and-innovation

AI and new paradigms in assigning responsibility

By nicolas cuello



Article written by Pedro Leon Cornet (Lawyer, U.N.T – Master's candidate in Law and Economics, UTDT – Head of Legal, LUCODS)

The Royal Spanish Academy defines AI (Artificial Intelligence) as a scientific discipline devoted to creating computer programs that perform operations comparable to those carried out by the human mind, such as learning or logical reasoning. Even so, there is no clear consensus on the concepts of AI and IoT (Internet of Things), or on their scope.

Advancement and innovation have always caused short circuits in pre-established regulatory frameworks. We believe there is a way to address this problem, and it involves conceiving a right to innovation, aimed at establishing legal solutions for the tangible disruptions caused by the advance of new technologies. In this way, we understand legal tools as instruments at the service of emerging change.

This implies the need to create a legal framework that adapts to the new reality generated by the emergence of new technologies and to the complexities their implementation entails.

Let us travel back in time to when the electric lamp was created: think of the number of candle-making businesses that went bankrupt, the accidents caused by handling electricity, the new regulations on power supply, and the number of people who opposed such a change. Is that reason enough to ban the electric lamp? No. Is a regulatory framework for electricity necessary? Absolutely.

In parallel with the development of technology, the legal framework of tort law evolved toward what progress demanded. Tort law has moved from a nineteenth-century construction cemented in a strong notion of fault, in tune with the industrial revolution, toward an approach anchored in objective (strict) attribution factors. This evolution was not capricious: it came to confer a degree of legal certainty by drawing a sharp distinction between the consequences caused by a person's own fault, on a subjective plane, and liability for things, for damage, or for the risks of one's invention.

In this sense, following the recently published essay "Comparative Law Study on Civil Liability for Artificial Intelligence", the European legal systems show distinct variations and differing perspectives in their approach to civil liability, which leads them to diverge in the regulation of these technologies. The Scandinavian and Roman-law systems lean toward a more objective (strict-liability) approach when dealing with the subject, whereas the Germanic tradition has embraced far more subjective, conduct-focused parameters, which allows the threshold for assigning liability to be extended to new objects capable of carrying out actions.

Europe is approaching unanimity on the allocation of objective (strict) liability, but within the jurisdictional nuances important differences remain, especially in countries such as Germany, the Czech Republic, Croatia, Denmark and Poland, where the main factors for attributing liability still revolve around the subjective factor born of fault.

Argentine law and doctrine dispute the type of attribution that applies to liability for damage caused by these technologies, a liability that is, undoubtedly, of an objective (strict) type.

As for its regulation and place in the legal framework, Argentine law subsumes liability arising from the application of AI and IoT under the provisions of Articles 1756 to 1759 and 1769 of the Civil and Commercial Code (CCyC). Those articles break down into two types of liability: on the one hand, liability for the acts of another (liability for one's children) and, on the other, liability for damage caused by animals.

As for the autonomy of a machine, it is difficult to measure its perception of the environment and the risk it poses.

Is a machine like a child or like an animal?

Recently, a United Nations report ("Letter dated 8 March 2021 from the Panel of Experts on Libya Established pursuant to Resolution 1973 (2011) addressed to the President of the Security Council") warned of the first attack by an autonomous drone designed to operate and hold territory, during the interventions carried out by Turkey in Libya, Syria and the Caucasus.

The weapon used, the STM Kargu-2, is a drone capable of carrying out swarm operations; the company explains that it "can be effectively used against static or moving targets thanks to its real-time image processing capabilities and machine learning algorithms integrated into the platform". It is equipped with electro-optical and infrared video cameras and a laser imaging system (LIDAR) that allow it to operate fully autonomously.

Thanks to machine learning, these devices can be taught to detect and interpret the movements of troops or of military units such as tanks.

These autonomous systems are designed to attack targets without requiring authorization between an operator and the munition; their fully autonomous mode of operation means that the system itself deploys behavior based on its own perception of the environment.

And who answers for these attacks? The company argues that it only creates the technology that others use, transferring responsibility to those who acquired the system.

Whoever bought the object in order to use it argues that this intelligence observes, analyzes and acts on the basis of its own considerations. We could add a responsible third party, the one who configures the machine to direct these attacks; but the truth is that, as we have seen, once past the so-called "training set", these intelligences escape human control and begin to make their own evaluations, given their ability to form their own judgments from their own experience.

I wonder: is it enough to resolve this problem with Articles 1756 to 1759 and 1769 of the CCyC? Are we falling short in the attribution of liability? Will we turn to a right to innovation and create specific legal institutes?

nicolas cuello

CEO & Founder of LUCODS. Expert in new technologies, with more than 10 years in the innovation ecosystem, consulting for private companies, acting as a technology liaison, and speaking at technology events such as Virtuality, Smart Cities and Bafici, among others. INTI advisor on topics related to Virtual Reality and Augmented Reality. Member of the METAVERSE STANDARD FORUM. Professor at IMAGE CAMPUS in the METAVERSO track.
