The UK’s decision to leave the European Union (EU) forced a redefinition of how the two parties relate after Brexit, with changes affecting many areas such as foreign policy, security and defence, public order, police cooperation, fisheries and trade relations.
A new area of divergence can now be added in the field of technology: the UK has just published its proposal to reform its data law, and the draft points to the Boris Johnson government’s intention to move away from Europe’s more humanist position on technological development and, in particular, on artificial intelligence (AI).
The UK’s decision to distance itself from the EU on data protection is evident in the very preface of the 146-page document proposing to reform its data law, which dates from 2018 and transposes the European General Data Protection Regulation (GDPR).
In the document, presented last Wednesday, the UK Minister for Digital, Culture, Media and Sport, Oliver Dowden, indicated that, after leaving the EU, the country is now free to create a “brave new” data regime.
He noted that some aspects of the regulation remain “unnecessarily complex or fuzzy” and still cause persistent uncertainty three years after its introduction. “Our ultimate goal is to create a data regime that fosters growth and innovation, while maintaining the UK’s world-leading data protection standards.”
Goodbye to human oversight
Among the main changes proposed in the text is the modification or removal of Article 22, which guarantees a person’s right to request human review of decisions made by an algorithm. This would affect, for example, the AI systems banks use to decide whether someone can access credit, or those human resources departments use to assess candidates in selection processes.
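To illustrate the kind of safeguard Article 22 implies, the following is a minimal, hypothetical sketch, not drawn from the reform document or any real system, of how a credit-scoring pipeline might flag borderline automated decisions and honour a request for human review; every name and threshold here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    score: float            # output of an automated credit-scoring model
    approved: bool
    needs_human_review: bool

# Hypothetical values: decisions close to the cut-off are treated as "high risk"
APPROVAL_CUTOFF = 0.5
REVIEW_MARGIN = 0.1

def decide(applicant_id: str, score: float) -> Decision:
    """Approve or reject automatically, but flag borderline cases for a person."""
    approved = score >= APPROVAL_CUTOFF
    borderline = abs(score - APPROVAL_CUTOFF) < REVIEW_MARGIN
    return Decision(applicant_id, score, approved, needs_human_review=borderline)

def handle_review_request(decision: Decision) -> Decision:
    """Article 22-style safeguard: on request, a human re-examines the outcome."""
    # In a real system this would open a case for a trained reviewer;
    # here we simply mark the decision as requiring human review.
    decision.needs_human_review = True
    return decision

if __name__ == "__main__":
    d = decide("applicant-001", score=0.47)
    print(d)                          # automated rejection, flagged as borderline
    print(handle_review_request(d))   # the applicant exercises the right to review
```

The point of the sketch is simply that the human-review step sits outside the model itself, which is why the UK consultation frames it as a process that can “limit the scope of use of the system or slow it down.”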
In its new proposal, the UK government acknowledges that Article 22 provides “significant” protections in some use cases, since there may be a legitimate need for certain “high-risk” AI-driven decisions to require human review, even if this limits the scope of the system’s use or slows it down.
However, it warns that the operation and effectiveness of Article 22 are currently subject to uncertainty for two main reasons: a lack of clarity about how and when the current safeguards are meant to apply, and the article’s limited application in practice.
In this context, it adds, it should be borne in mind that the use of automated decision-making is likely to grow rapidly across many industries in the coming years. “The need to maintain the ability to provide a human review may, in the future, be inappropriate or disproportionate, and it is important to assess when this protection is needed and how it works in practice,” the document states.
The text argues that it is imperative that the UK GDPR’s provisions on automated decision-making and profiling work for both organisations and individuals. For this reason, it considers it important to examine whether Article 22 and its safeguards are compatible with the likely evolution of a data-driven economy and society, and whether it provides the necessary protection.
Before deciding on the final shape of this article, the government has launched a public consultation on the issue. It also recalls that the Working Group on Innovation, Growth and Regulatory Reform, created by the UK executive and made up of three Conservative MPs, had recommended removing the article.
Relations between Europe and the UK
Speaking to D+I, Borja Adsuara, professor and expert lawyer in digital law, strategy and communication, believes the British decision is a “bad idea”, because it will not improve the current wording of the text in terms of data protection, and a unilateral modification will disconnect the country from the continent and from European regulation.
He argues that, if this change is ultimately confirmed, it will have consequences for data exchange between the country and the Old Continent because, no matter how powerful a data regime the UK builds, it will be “isolated” from the rest of the economy.
“If they remove Article 22, they will be building a dam in the English Channel, because there can be no exchange of data with the European continent: the UK would be placing itself at a lower level of data protection by not complying with the whole of the GDPR,” Adsuara stressed, highlighting the “chaos” it would mean for a London company to have to request an international data transfer in this scenario.
Meanwhile, Idoia Salazar, president of OdiseIA, the Observatory for the Social and Ethical Impact of AI, also considers that if the UK ultimately goes ahead with this initiative, it will distance itself from European AI rules, which as a general rule require human oversight whenever a system is more than mere automation with no social or labour impact.
She added that it would also run counter to the trend currently followed by most countries in the world, which seeks to place humans at the centre of decision-making, with AI systems assisting in that process to help us develop as humans.
Along the same lines, law firm Linklaters points out that if the UK removes or modifies this article, the country will be “swimming against the tide” at a time when the EU and China are trying to regulate this area to better protect consumers.
Solutions to problems that don’t exist
Adsuara also rejects the UK government’s argument that this article could limit the development of a data-driven economy in the country, since today, three years after it came into force, there is no such problem of millions of people requesting human intervention.
“I don’t know whether this has any real basis,” said the former director general of Red.es, who added that it makes no sense to change the law over an issue that is not currently arising.
In his view, behind this decision may lie the fact that it is often no longer possible to explain the internal logic of an algorithm. At most, one could explain how it was programmed at a given moment, but with machine learning the system keeps learning, and there comes a point when the programmers or the algorithm’s supervisors themselves do not know why it made a particular decision.
In the text published on its website, Linklaters agrees with Adsuara that, although Article 22 contains “interesting and potentially significant” rights, it is rarely applied in practice and seldom seems to cause problems.
Even so, Adsuara recalled that he has been very critical of Article 22, because it looks very good on paper but is not easy to comply with: however many people request this oversight, they will not be able to understand the explanation they receive.
What worries him, he points out, is that no one has asked for an explanation of how the internal logic of these algorithms works. He therefore advocates creating an intermediary authority, staffed by trained personnel, tasked with channelling these requests for human oversight and able to respond to consumers’ queries. It could also absorb a hypothetical mass demand for oversight in the future.
Distrust of AI
For her part, Salazar pointed out that it is certainly necessary to promote the development and implementation of artificial intelligence, but warned that this goal will not be achieved if care is not taken to protect the rights inherent to every human being, “because the mistrust surrounding this technology will only be fuelled further.”
She stresses that the European Union’s goal, “which OdiseIA supports 100%”, is always to place humans at the centre, with artificial intelligence systems as a tool to support them.
“That is why an AI system is not allowed to make autonomous decisions that are relevant to a person’s life (jobs, loans, health…) without the oversight of people who specialise in these matters,” said Salazar, who recalled that this issue was included in the proposed EU regulation on AI published last April.
She adds that this is compounded by the current state of the technology in terms of explainability. “There are many AI algorithms that are ‘black boxes’, that is, they do not let you see the process they follow when ‘making a decision’. So until this problem is solved, trust in this technology (when this type of deep learning algorithm is used) remains very relative,” she said.
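As an illustration of the kind of “explainability” tooling referred to here, the following hedged sketch uses scikit-learn’s permutation importance to estimate which input features most influence an otherwise opaque model. The dataset and model are synthetic stand-ins chosen for the example, not anything used by the parties quoted in the article.

```python
# Minimal sketch: approximating an explanation for a "black box" classifier
# by measuring how much shuffling each feature degrades its accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data (e.g., anonymised application features)
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: a model-agnostic, post-hoc explanation technique
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance ~ {importance:.3f}")
```

Techniques of this kind only approximate a model’s behaviour after the fact, which is precisely why explainability of deep learning systems is still considered an open problem.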
A wrong decision
Along the same lines, the think tank We The Humans (WTH) considers the possibility of the UK removing human oversight of decision-making by AI-based systems to be “wrong”, because it would leave control of those decisions in the hands of unknown interest groups and would not help the development of AI, owing to the lack of transparency.
Faced with the UK government’s argument that human oversight is impractical and costly for driving a data-driven economy, WTH believes that such oversight is “possible and necessary”.
“It is possible because we increasingly have technology capable of explaining the results of intelligent algorithms. And it is necessary because every citizen has the right to know who made a decision and why: behind an algorithm there are always people or institutions,” the think tank argues.
For all these reasons, they argue, removing human oversight would leave decision-making in the hands of unknown stakeholders, and they stress that “modern and democratic societies cannot abdicate their responsibility and oversight in decisions that affect their citizens.”