The Gujarat High Court introduced a policy to regulate the use of artificial intelligence (AI) in its judicial and administrative functions Saturday. Acknowledging “rapid technological advancement” and the influx of “techno-savvy judicial officers,” the policy prohibits the application of AI in judicial decision-making, evaluating evidence or drafting substantive orders.
While it permits judges and court staff to use AI as a controlled administrative and research aid – such as for translating documents, checking grammar and managing cause lists – it mandates human oversight. The policy also bars the feeding of confidential case data to public AI platforms to protect privacy.
With this move, the Gujarat High Court becomes the second High Court to formalise a policy for AI use. In July 2025, the Kerala High Court became the first to issue an AI policy. This was followed by a detailed white paper on AI and the judiciary published by the Supreme Court in November 2025.
The Gujarat High Court’s ‘Policy on the Use of Artificial Intelligence’ applies to all judicial officers, registry staff, legal assistants, interns and administrative personnel across the High Court and the District Judiciary. The policy explicitly states that there shall be “no autonomous or unreviewed AI action in any judicial or administrative process”, mandating that a “qualified human officer” must always verify AI-generated outputs.
Under its permitted uses, the policy allows court personnel to leverage AI for administrative and productivity tasks, such as drafting circulars whose information is already in the public domain, and managing cause lists based on “anonymised metadata” – which means basic details about a case without any personal information identifying the people involved.
AI permitted for identifying precedents
It also permits the use of AI for legal research, including the extraction of the ratio decidendi – the core legal principle upon which a judgement is based – and identifying precedents. AI can also be used for drafting assistance, such as improving the language and structure of orders, checking for grammatical errors and machine-assisted translation, provided the “substantive legal analysis and reasoning remains entirely that of the judge”.
In the section on prohibited uses of AI, the policy states that AI “shall never be employed for any form of decision-making, judicial reasoning, substantive order drafting or judgment preparation, bail/sentencing considerations, or any substantive adjudicatory process.”
It also bans the use of AI to evaluate evidence or assess witness credibility. To protect the privacy of litigants, the policy forbids entering confidential case details, personal data of litigants or privileged communications into any public AI tool. Using AI-generated case citations without independent verification from authoritative primary sources is also barred.
The policy’s overarching philosophy is rooted in preserving the “unique constitutional mandate of dispensing justice through human conscience.”
It lays down its core guiding principles as judicial independence, human supervision, accuracy and reliability, confidentiality and data protection, fairness and non-discrimination, and competence and continued learning. Any violation of the policy would be treated as professional misconduct and would attract departmental or disciplinary proceedings.
Kerala High Court AI policy
The policy echoes the one adopted by the Kerala High Court last year. The Kerala policy, applicable to its district judiciary, similarly aimed to ensure that AI tools are used “solely as an assistive tool, and strictly for specifically allowed purposes” – these may be “routine administrative tasks such as scheduling of cases or court management” and the translation of legal texts and judgements.
It directed that the use of cloud-based AI services “should be avoided”, noting that submitting facts of a case or personal identifiers to such platforms could result in “serious violations of confidentiality.” According to the policy, AI tools “shall not be used to arrive at any findings, reliefs, order or judgment under any circumstances.”
Four months later, the Supreme Court of India released a white paper titled ‘Artificial Intelligence and Judiciary’ – in response to increased instances of as well as concerns over the use of AI by lawyers and judicial officers. It cautioned against AI “hallucinations” – a phenomenon in which AI systems generate factually incorrect or entirely fabricated information that appears coherent.
Citing instances of lawyers submitting fake, AI-generated case citations in Indian courts, the Supreme Court’s Centre for Research and Planning, which drafted the paper, said that “complete reliance on AI systems in the judiciary in the absence of proper safeguards could reduce such human articulable intervention in the legal process, potentially reducing the transparency of justice served.”
It also flagged algorithmic bias – whereby “AI-systems may disproportionately harm or benefit certain social groups at the expense of others” and the breach of privacy and confidentiality among risks associated with the use of AI in the judiciary.
Across the Kerala and Gujarat High Courts’ policies and the Supreme Court’s white paper, the most prominent shared principle is the “Human in the Loop” concept, which dictates that the ultimate responsibility and accountability for any judicial action must rest with a human judge.
