A judge in Cartagena (Colombia) claims to have used ChatGPT as a support tool to resolve a guardianship (tutela) case concerning health care neglect

Maria Lorena Flórez Rojas, Assistant Professor at the University of Groningen and researcher at CinfonIA

On January 31st, a decision by a Colombian judge sparked a discussion about using AI to help with, write, assist in, or even decide a legal case.

What is guardianship (tutela) and how does it work in Colombia? (In a nutshell) 

Guardianship is a protection mechanism (action) that allows any person to go before the judicial authorities to obtain immediate protection of their fundamental rights when those rights are violated or threatened by the action or omission of any public authority or private individual (Art. 86 of the Colombian Constitution and Decree Law 2591 of 1991).

This action is strengthened by its recognition in human rights treaties. For example, with respect to Art. 25.1 of the American Convention on Human Rights, the Inter-American Court of Human Rights held in Advisory Opinion 8/87 that it "…is a general provision that includes the procedural institution of amparo, understood as a simple and brief judicial procedure whose purpose is to protect all the rights recognized by the constitutions and laws of the States Parties and by the Convention…", and that its existence must not only be formal (in the written text) but must also be suitable to protect human rights (in practice).

This mechanism is only applicable when there is no other suitable means of judicial defense, unless it is intended to prevent imminent harm to fundamental rights (Art. 6.1, Decree 2591 of 1991). Because its purpose is to grant protection without delay, it does not require legal knowledge to exercise (Arts. 1 and 18 of Decree 2591 of 1991), and the judge must always decide on the merits, either granting or denying the request (Art. 1 of Decree 2591 of 1991).

In this case, the judge was deciding whether to grant constitutional protection of the right to health in a judicial process initiated by the family of a minor against EPS Salud Total.


What was the case about?

The minor was born with Autism Spectrum Disorder (ASD). By order of his treating physician, he receives therapies that require urban transportation. The plaintiff stated that they lacked the economic resources to cover the co-payments and the transportation costs for the therapies. The respondent opposed the claim, stating that the minor does not belong to the population exempt from co-payments.

The judge of first instance ordered Salud Total EPS to authorize the cost of urban transportation and to exonerate the family from the co-payments, indicating that the defendant had not demonstrated that the plaintiff had the economic resources to cover such expenses. However, Salud Total EPS contested the decision, insisting that the child does not belong to the population entitled to special attention.

Thus, the judge considered that moderating fees and co-payments, in this specific case, represent barriers to access to health services that the constitutional judge must remove. (The ruling is available only in Spanish.)


How was ChatGPT used in this case?

The judge included a section in the decision about the use of ChatGPT, reproducing the questions he asked the tool to 'help' him reach the decision. His reasoning was that, under Law 2213 of 2022, it is important to incorporate ICT into judicial processes to assist the judge in various ways. The following questions and answers appear in the decision:

Question: Are autistic children exempted from paying moderation fees for their therapies?

Answer: "Yes, that is correct. According to the regulations in Colombia, children with a diagnosis of autism are exempt from paying moderator fees for their therapies. This benefit is regulated by Law 1753 of 2015 and applies to all institutions providing public or private health services that receive resources from the general social security health system".

Question: Should tutela actions in these cases be granted?

Answer: "In the event that a health service provider institution does not comply with the exoneration of moderating fees for a minor with a diagnosis of autism, the person or his legal representatives may file a tutela action to protect their rights. The tutela is a fast and effective judicial mechanism to protect fundamental rights that are violated. If it is demonstrated that the right to health of the minor with autism is being affected by the requirement to pay moderator fees, it is likely that the tutela action will be granted and the health care institution will be ordered to comply with the exoneration. However, each case is unique, and the final decision depends on the specific facts and circumstances of the case".

Question: Is requiring in these cases the moderating fee a barrier to access to health services?

Answer: "Yes, in some cases, the requirement to pay moderator's fees can be a barrier to access to health care for people diagnosed with autism, especially if they have difficulty earning an income or if their financial situation is precarious. The requirement to pay moderator fees may prevent or hinder access to treatments and therapies essential for the well-being and development of children with autism. For this reason, the regulations in Colombia exempt children with a diagnosis of autism from paying moderating fees for their therapies, with the objective of guaranteeing access to these essential health services without financial obstacles".

Question: Has the jurisprudence of the constitutional court made favorable decisions in similar cases?

Answer: "Yes, it has. The Colombian Constitutional Court has taken favorable decisions in similar cases related to the exoneration of moderator fees for people with autism. Jurisprudence has recognized the importance of guaranteeing access to essential health services for people with autism, and has held that the requirement to pay moderating fees can be a barrier to access to these services. The jurisprudence of the Constitutional Court is an important source for interpreting and applying the Constitution and laws in Colombia and its decisions are binding for all authorities and public entities in the country. Therefore, favorable decisions in similar cases may be useful as a precedent for other cases in which the requirement to pay moderating fees for persons with a diagnosis of autism is debated".


After that, the judge highlighted an important point: "The Office will resolve to add grounds for the resolution of the case based on the texts produced by the AI application https://chat.openai.com/chat, as an initiative to speed up the resolution of tutela cases. The purpose of including these AI texts is in no way to replace the judge's decision. What we really seek is to optimize the time spent drafting judgments, after corroborating the information provided by the AI."


Some thoughts and remarks

Firstly, let me be clear about something: the use of artificial intelligence, or any technology, within the administration of justice is already a fact, and it is changing traditional ways of working.

Secondly, I want to point to the Twitter thread by Juan David Gutiérrez Rodríguez about this case; if you haven't read it, please do. He explains how the system invented jurisprudential sources to support its answers.


Now, I want to point out some issues around the use of ChatGPT in general, not only in the judicial system.

What do the developers tell you about its use?

They recommend checking whether the model's responses are accurate. In fact, they ask users to help them improve the system: they specifically say that if you find an answer is incorrect, you should provide that feedback through the platform. In other words, they are aware that responses may be incorrect and that a human needs to intervene, and all of the interactions in the case described above help to train the model further. In addition, as has been pointed out, ChatGPT can make up facts, or 'hallucinate' outputs, such as invented jurisprudence: fabrications that merely appear truthful. Once again, the developers ask you to give feedback when an answer is incorrect. What they do not mention is how that feedback is reviewed: who checks whether a user's thumbs up or down is based on real sources or facts? Thus, the more we use the model, the more it will be trained, for better or for worse, as we have seen in other cases such as Tay Bot.
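To make the human-in-the-loop point concrete, here is a minimal sketch in Python of querying a language model with a mandatory verification gate before any output is reused. It assumes the official openai client library (v1.x) and an API key in the environment; the model name is illustrative, and the confirmation step is a deliberately simple placeholder for checking every cited source against official databases. It is not the workflow used in this case.

```python
# Minimal sketch: query a language model, then gate the answer behind a
# human verification step. Assumes the `openai` Python package (v1.x) and
# an OPENAI_API_KEY environment variable; the confirmation prompt stands in
# for a real check of every cited statute or ruling against official sources.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_model(question: str) -> str:
    """Send one question to the model and return its raw, unverified answer."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


def human_confirms(answer: str) -> bool:
    """Placeholder human-in-the-loop check: an operator must confirm that
    every source cited in the answer actually exists before it is reused."""
    print(answer)
    return input("All cited sources verified? [y/N] ").strip().lower() == "y"


def verified_answer(question: str) -> str | None:
    """Return the model's answer only after explicit human verification."""
    answer = ask_model(question)
    if human_confirms(answer):
        return answer
    return None  # discarded: the model may have hallucinated its sources
```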

The best example of the risk of technofascination, or decision-automation bias[1]

I cannot stress enough the necessity of evaluating the need for, and risks of, AI systems for different tasks. It is important for any entity, public or private, to assess whether it needs a specific tool, with or without AI. For instance, the judge argues that he used ChatGPT to speed up the writing process. However, writing is only one part of his tasks: the judge needs to analyze the case, review case law, and build an argumentative line between facts, law, and precedents in order to assess the specific situation. Yet it seems that the judge, in his good intention to promote the use of digital tools, forgot that he has access to other tools that are better tested for the Colombian judicial system and work with Colombian judicial databases, such as vLex or Legis Xperta.

This case could be an example of decision-automation bias, in which the person using an AI system over-relies on its outputs. Overconfidence in the impartiality or certainty of a system such as ChatGPT may hinder judges' ability to make accurate judgments and understand their surroundings, leading to an over-reliance on the outputs of automated systems. Users may lose the capacity to identify and respond to faults, errors, or deficiencies because they become comfortable with the system's signs and instructions. This may be exacerbated by underlying fears or concerns about 'disagreeing with' or 'going against' a system's results[2]. For this reason, it is important that the organization, in this case the Colombian judicial system, agrees on how these systems will be used, provides comprehensive training that covers both AI-related judgment biases and human cognitive biases, and defines how AI-supported decisions will be reviewed. If the judge's goal was to reduce the time spent writing the decision, I wonder how long it took him to go to the website, type the questions, and copy and paste the answers into the document without checking similar cases, instead of using the tools he knows and is properly trained for. If the goal was saving time, why not use speech-to-text dictation together with complementary research tools, instead of leaving the decision to a trending tool such as ChatGPT? vLex, Legis Xperta, and other research tools that also use AI, for instance to generate abstracts, are available too, and we use them every day in our research.


The risk of inconclusive evidence and lack of audit[3]

The fact that ChatGPT gives different responses to the same questions could lead to detrimental results on a broader scale than what could have happened without the new technology. It has been pointed out that the AI tool produced false and non-existent judicial cases, possibly because the system has no records of Constitutional Court rulings in English or in any other database, since these are context-specific issues. In this case, the AI tool was asked to answer questions about the Colombian health system, yet the judge decided to overlook the developer's warning about 'hallucinated' outputs such as invented jurisprudence. Knowing this, the judge was willing to take the risk and overtrust the AI's output without questioning how it came up with the response.

It is essential that any system intending to use AI technology in Colombia maintains an external audit system throughout its implementation and evolution, with a minimum level of detail describing the decision process. This additional phase could be a way to address the 'black box' problem and help ensure public confidence in these systems. Explicability is crucial to build and maintain user confidence in AI systems. In this regard, I personally thank the judge for bringing the topic to light, but I wonder how many other public entities have already used this system without declaring it[4]. It would have been different if the judge had explained why he chose this tool over another, what the verification process looked like, and how the public entity addresses the use of this tool. The degree to which explicability is required depends largely on the context and on the severity of the consequences if the result is wrong or inaccurate[5].
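To illustrate what such a minimum audit trail could look like, here is a short sketch, under my own assumptions, of an append-only log for AI-assisted queries: who asked, what was asked, what the tool answered, and who verified it against which official sources. The schema is purely illustrative, not an existing Colombian standard.

```python
# Illustrative sketch of an append-only audit trail for AI-assisted queries
# in a public entity. The schema is an assumption made for this post; each
# entry is stored as one JSON line plus a content hash so that later
# auditors can detect tampering with individual records.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class AIQueryRecord:
    case_id: str          # internal case-file number
    tool: str             # e.g. "ChatGPT (https://chat.openai.com/chat)"
    question: str         # exactly what was asked
    answer: str           # exactly what the tool returned
    verified_by: str      # name and role of the human who checked the answer
    sources_checked: str  # official databases consulted for verification
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def append_record(path: str, record: AIQueryRecord) -> str:
    """Append the record as one JSON line and return its SHA-256 hash."""
    line = json.dumps(asdict(record), ensure_ascii=False)
    with open(path, "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return hashlib.sha256(line.encode("utf-8")).hexdigest()
```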


Conclusion

It is important to recognize the positive impact that AI systems already have and will continue to have, both commercially and socially. However, it is equally important to ensure that the risks and other adverse impacts associated with these technologies are managed appropriately and proportionally.

  •  In this context, if a public entity decides to use an AI system to support its daily tasks, it must first adopt transparent guidelines on the ethical implementation and design of these systems in public entities. Public entities must strive to build or use AI systems that are reliable and auditable, so that citizens' rights are safeguarded throughout the process. On this basis, some recommendations of a particular nature are pointed out below for the implementation of AI systems in general[6].
  •  It is essential that both public and private entities adopt a design-thinking methodology, understanding that technology can be a vehicle to modify and improve users' experience with the entity, but that technology on its own does not advance the entity's transformation processes or objectives.
  •  It is important to implement a technological/algorithmic impact assessment of the tools in question, to determine whether their implementation puts fundamental rights at risk and what the mitigation mechanisms for those risks would be. This assessment must comply with transparency, traceability, and auditing guidelines so that each entity can test its AI tool and correct the risks its implementation entails.

The opinions expressed here are solely my personal thoughts.


[1] ICO and The Alan Turing Institute, ‘Explaining Decisions Made with AI’ (2022).

[2] Maria Lorena Flórez Rojas, 'Neuromarketing vs Libertad y Autonomía de las Decisiones del Consumidor' (2022) 16 Revista Brasileira de Direitos Fundamentais & Justiça 55 <https://dfj.emnuvens.com.br/dfj/article/view/1346/1064>.

[3] Maria Lorena Flórez Rojas and Juliana Leal Vargas, 'The Impact of Artificial Intelligence Tools: An Analysis of the Public Sector in Colombia' in Carolina Aguerre (ed), Artificial Intelligence in Latin America and the Caribbean: Ethics, Governance and Policies (1st edn, CETyS, UdeSA, FairLAC 2020) <https://proyectoguia.lat/wp-content/uploads/2020/10/7.-GECTI-eng.pdf>.

[4] Mark Coeckelbergh, 'Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability' [2019] Science and Engineering Ethics 1 <https://doi.org/10.1007/s11948-019-00146-8>.

[5] Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law 76.

[6] Céline Castets-Renard, 'Human Rights and Algorithmic Impact Assessment for Predictive Policing' [2021] Cambridge University Press 93 <https://papers.ssrn.com/abstract=3890283>.
