By Leonardo Neri
The National Council of Justice (CNJ) will examine a ruling signed by a federal judge of the 1st Region that was, in fact, generated with an artificial intelligence tool, ChatGPT.
The incident came to light because the ruling relied on fictitious case law attributed to the Superior Court of Justice (STJ), which the artificial intelligence had invented to support the decision. The losing party's lawyer noticed the fabrication and reported the case to the Regional Inspectorate of the Federal Courts of the 1st Region.
Judge Néviton Guedes, the Federal Courts' ombudsman for the 1st Region, issued a circular reporting the incident and advising judges and appellate judges not to rely on generative artificial intelligence tools that have not been approved by the Judiciary's oversight bodies when researching case law.
Néviton further warned that indiscriminate use of artificial intelligence gives rise to liability on the part of the presiding magistrate, a liability shared by all staff, interns and collaborators involved.
The judge noted that the CNJ, through Resolution 332/2020, authorized the use of artificial intelligence in the Judiciary but laid down ethical guidelines to ensure that it serves the well-being of those under the courts' jurisdiction and the equitable delivery of justice. In his view, such tools may do no more than assist judges.
The judge who used ChatGPT attributed the incident to a "mere mistake" resulting from work overload, stating that part of the ruling had been drafted by a court clerk. Although the initial complaint was closed by the 1st Region's Inspectorate, the case will now be examined by the CNJ.