This is a guest post by the brilliant law interns Hamna, Ananya, Sakshi and Satyam who interned with the Chambers a few months back. One of the most illuminating columns I’ve read on the subject. [1]

Bail law in India is all about judicial discretion. Criticism of that discretion, and of decision-making in bail proceedings generally, has come from many quarters. Some have critiqued the courts for attaching too much importance to the ‘gravity’ of the offence. In 2012, the Supreme Court in Sanjay Chandra v. CBI held that the gravity and seriousness of an offence should not be the sole factors for rejecting an accused’s bail; rather, considerations such as the need for a smooth investigation and whether the accused is a flight risk are relevant in granting or denying bail. The same was reiterated by the Apex Court recently in Prabhakar Tewari v. State of U.P. & Ors. Additionally, the 2017 Law Commission Report on bail in the Code of Criminal Procedure has recommended excluding the nature and gravity of an offence from the factors considered while undertaking risk assessment during bail proceedings.[2]

Nevertheless, various sessions courts, high courts, and even the Supreme Court often ignore this judicial yardstick and treat the nature of an offence as a crucial factor in their risk assessment. This was evident when the Delhi High Court rejected P. Chidambaram’s bail because he was accused of a serious socio-economic offence, even after specifically noting that there was no threat of his absconding or tampering with evidence. The Supreme Court later granted him bail, but rejected the appellant’s argument that only the ‘triple test’, and not the gravity of the offence, is relevant in granting bail. Many legal scholars and lawyers have criticised the courts for leaving the position of law on this issue unclear, with the judge’s discretion remaining the norm. [3]

Abhinav Sekhri has argued that this discretion has become more unbridled over time, as there is no statutory basis for the Supreme Court’s proposition highlighted in the first paragraph. Hence the absence of uniformity; as he puts it: “Till the exercise of discretion in bail remains a black-box into which we cannot peer, the only conclusion is that no procedure established by law decides how bail applications are denied or granted.” [4]

Malcolm Gladwell, in ‘Talking to Strangers’, explores the conundrum that Judge Solomon faces while granting bail by asking himself, “Does this perfect stranger deserve his freedom?” He attacks the assumption that a judge will make a better decision by meeting the accused in person, highlighting a study that compared bail decisions made by judges in New York with those of an artificial intelligence system given the same information the judges had in the same cases. The machine won: the people on the AI system’s release list were less likely to commit a crime than the people actually released by the judges, meaning more applicants could safely have been granted bail.


India ranks 13th out of 200 countries in developing AI and has already integrated AI into agriculture, health and the judiciary. The question we ask here is: can the inconsistencies in bail jurisprudence pointed out above be addressed through AI?[5] John Tierney[6] analyses the paralysing effects of ‘decision fatigue’ and argues that it can lead to a reluctance to make trade-offs. A mentally fatigued judge will most probably deny parole and keep a person in jail so as not to risk a wrong decision.[7] Today, it is an open secret that the Indian judiciary is ailing under the pressure of doing more with limited resources. We propose an A.I.-assisted algorithm, of the kind already implemented in other countries, to help the judge maintain consistency in her decisions on bail applications. If this pandemic has revealed anything about our judiciary, it is that a judge’s role in adjudication is indispensable. This is true not just for us: across the globe, A.I. has only assisted, never substituted, judges. A brief analysis of these systems can significantly help India in making a case for AI in bail decisions.

Artificial Intelligence in the U.S. Criminal Justice System

The U.S. criminal justice system has been through a series of reforms to balance individual liberty and public safety. The current wave of reform seeks to correct the system’s frequent failure to determine who poses the greatest risk if released at the pre-trial stage. This is the context in which ‘bail algorithms’ using artificial intelligence were adopted. These algorithms rely on empirical data to determine the ‘risk’ involved in releasing an accused at the pre-trial stage, thus predicting future outcomes from past trends.

Today, as many as 60 risk assessment tools are used in jurisdictions across the U.S.[8] A uniform national risk assessment tool, though it could establish uniformity and simplicity, cannot account for local idiosyncrasies. Different states have different demographic, economic and educational structures, and consequently have access to different types of data, different risk factors and different preferences about which crimes to prioritize.[9]

Colorado Pretrial Assessment Tools (CPAT), Ohio Risk Assessment System Pretrial Assessment Tools (ORAS-PAT), Public Safety Assessment (PSA) and Virginia Pretrial Risk Assessment Instrument (VPRAI) are some of the risk assessment tools, where risk factors are formulated on the basis of pretrial misconduct.[10]  

These scores help counter judges’ decision fatigue and the arbitrariness or unfairness in pre-trial releases[11] that eventually leads to mass incarceration of persons accused even of petty offences. These tools combine “human judgment” with the “sophistication of machine learning”[12], while the ultimate decision remains with the judge.

In comparison to COMPAS, a proprietary risk assessment tool, the PSA is more transparent: the risk factors the PSA uses, and how they are weighted when calculating risk, are publicly available. [13] The PSA has nine factors, sometimes called the ‘checklist’, including a person’s age at the time of arrest, pending charges at the time of the current offence, failure to appear at a pre-trial hearing more than two years ago, and so on. The points for all nine factors are added up, and the total score becomes the public safety assessment of the accused. It helps judges estimate the likelihood that an accused will fail to appear for their court date, or their risk of committing a violent crime.[14]
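The PSA’s additive scoring lends itself to a very small sketch. Note that the factor names and point weights below are invented placeholders for illustration, not the published PSA weights:

```python
# Sketch of a PSA-style additive risk score.
# Factor names and point values are hypothetical, NOT the published PSA weights.

PSA_STYLE_WEIGHTS = {
    "age_under_23_at_arrest": 2,
    "pending_charge_at_current_offence": 3,
    "prior_failure_to_appear_over_2_years_ago": 1,
    # ...the remaining factors would follow the same pattern
}

def risk_score(applicant: dict) -> int:
    """Total the points for every factor present in the applicant's record."""
    return sum(points for factor, points in PSA_STYLE_WEIGHTS.items()
               if applicant.get(factor, False))

applicant = {"age_under_23_at_arrest": True,
             "pending_charge_at_current_offence": True}
print(risk_score(applicant))  # 2 + 3 = 5
```

The total becomes the assessment score the judge sees; the release decision itself stays with the judge.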

Artificial Intelligence in the U.K.

In 2016, Durham Constabulary, in collaboration with Cambridge University, developed a jurisdiction-specific Harm Assessment Risk Tool (HART), an artificial intelligence algorithm that assesses the degree of risk associated with a suspect by categorising them as at low, moderate or high risk of committing crimes in the succeeding two years. It is used by police officials in the UK, chiefly to decide whether a suspect should be referred to ‘Checkpoint’[15], a rehabilitation programme. It considers 34 criteria, including age, criminal history and gender, to place people in different risk categories. HART, like the American models, is not meant to replace human agency, but only to supplement it.
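HART’s three-band output, and the Checkpoint referral that hangs off it, can be pictured as a thresholding step over whatever reoffending probability the underlying model produces. The thresholds and the moderate-band referral rule below are assumptions for illustration, not Durham Constabulary’s actual cut-offs:

```python
# Sketch of HART-style risk banding.
# Thresholds and the referral rule are invented for illustration.

def risk_band(p_reoffend: float) -> str:
    """Map a model's predicted two-year reoffending probability to a band."""
    if p_reoffend >= 0.7:
        return "high"
    if p_reoffend >= 0.3:
        return "moderate"
    return "low"

def checkpoint_eligible(p_reoffend: float) -> bool:
    """Assume only moderate-risk suspects are referred to Checkpoint."""
    return risk_band(p_reoffend) == "moderate"

print(risk_band(0.45), checkpoint_eligible(0.45))  # moderate True
```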

Another system is the Offender Assessment System (OASys), used by probation officers to assess whether an offender under supervision will re-offend and the risk they pose, to develop a sentence plan for them, and to assess them continuously through their prison term or supervision period.[16]

AI to the Rescue of Brazil’s Overburdened Judiciary

In 2017, the ratio between the number of cases awaiting judgment and Brazil’s population was 1:3, and some of these lawsuits took more than seven years to be adjudicated. Justifiably, Brazil’s Supreme Court, the Supremo Tribunal Federal (STF), announced VICTOR, an artificial intelligence-based system, to salvage the judiciary. VICTOR was built on a dataset of the STF’s digitized legal documents[17], containing four terabytes of data on extraordinary appeals and millions of cases from 2017 to 2019. VICTOR classifies documents and identifies cases falling within 29 themes of ‘general repercussion’[18], i.e. those which have social relevance and therefore must be decided by the STF.

Unfortunately, there is no recourse against VICTOR’s evaluation of a case as one of general repercussion. As of today, VICTOR falls under the ambit of Brazil’s General Personal Data Protection Law (LGPD), which ‘covers the right to explanation, with clear and adequate information regarding the criteria and procedures used for the automated decision’[19]. Brazil’s use of AI in its judiciary is expanding, as seen in its implementation of the AI programs SOCRATES and SIGMA. SOCRATES reads new cases and groups together cases raising similar issues, while SIGMA assists in preparing reports, decisions and judgments in the Electronic Judicial Process (PJe) system[20].


Artificial Intelligence in Indian Judiciary

ManCorp Innovations Lab (MCIL) has assisted the Indian judiciary in adopting A.I. to counter the problem of pending cases. Former Chief Justice S.A. Bobde launched the Supreme Court Portal for Assistance in Court Efficiency (SUPACE) in early 2021. The system has everything from e-office, user and task management to automatic conversion of documents, automated extraction of facts, a chatbot, and automatic creation of a synopsis.[21]

In the latter half of 2018, MCIL developed Optical Character Recognition (OCR) and a ChatBot for the Jharkhand High Court, prompted by the wide gap between its volume of criminal cases and its number of judges. The former converts scanned documents into readable text and fixes their orientation, among other things; the latter is controlled by both text and voice commands. Together, the system answers the specifics of a particular case, such as the number of victims and accused persons and the crimes the accused are charged with. This was done by feeding the algorithm approximately 150 questions that usually occur to a judge when dealing with criminal cases. [22]

To assist the Patna High Court with the speedy allocation of cases, MCIL developed a system in which cases were auto-allocated by an algorithm informed by the kinds of cases each Bench had heard previously and their rates of disposal. [23]
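At its simplest, a history-informed allocator of this kind reduces to routing each new case to the bench with the best track record for that case type. The bench names and disposal rates below are made up for illustration:

```python
# Sketch of disposal-rate-based case allocation.
# Bench names and historical rates are invented.

DISPOSAL_RATES = {
    "Bench A": {"criminal": 0.82, "civil": 0.40},
    "Bench B": {"criminal": 0.55, "civil": 0.75},
}

def allocate(case_type: str) -> str:
    """Route the case to the bench with the highest historical
    disposal rate for that case type."""
    return max(DISPOSAL_RATES,
               key=lambda bench: DISPOSAL_RATES[bench].get(case_type, 0.0))

print(allocate("civil"))     # Bench B
print(allocate("criminal"))  # Bench A
```

A production allocator would also have to balance current workloads across benches, not just past disposal rates.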

Various other AI-informed technologies, like Natural Language Processing and JUDi, are also being developed to assist the courts with speedy disposal, pending cases, and so on. There is even an Artificial Intelligence Committee of the Supreme Court, constituted in 2019.[24] However, we propose a specific integration of an AI-informed algorithm for judges to use when adjudicating upon bail cases.

Envisioning A.I. in Indian Bail Jurisprudence

AI-based systems can potentially play a key role in bail jurisprudence by using an amalgamation of data already available from various pillars of the criminal justice system[25]. Presently, stakeholders in the criminal justice system, namely the police and prison authorities, trial courts, the National Investigation Agency (NIA), and forensics among others, have dedicated online databases storing a wealth of information about crimes, criminals and judicial decisions. All of these can and should be leveraged. With more than 97%[26] of police stations across the country registered on the Crime and Criminal Tracking Network and Systems (CCTNS), these centralized police reports are already being used by the passport authorities for first-level verification of applicants, checking their criminal antecedents on the platform with a name search.

The different information pillars are joined together by the Interoperable Criminal Justice System (ICJS), which connects individual databases and thereby ensures seamless transfer of data from one wing of the criminal justice system to another[27]. The ICJS data is already being put to use by the National Database on Sexual Offenders (NDSO), an analytics tool that creates profiles of sexual offenders based on data available from multiple branches on the ICJS and helps in the identification of repeat offenders[28].

Given that these technological developments have already been made, it is not very difficult to envisage an AI-based system that uses the data already available on platforms like CCTNS, ICJS, e-courts, face recognition systems (FRS), e-prisons, etc., to help judges assess bail pre-conditions by assigning every applicant a quantifiable value on each relevant criterion. Some of the bail considerations, like flight risk, previous criminal record and the possibility of repeat offending, can be deduced by an efficient AI-based system that profiles applicants by extracting data from the existing databases. Collection of new data points by prison, forensic and investigating agencies can be introduced gradually to improve the efficiency of the AI system by giving it more parameters for profiling an individual.
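One way to picture such a system is as a set of per-criterion lookups, one per database, combined into a single weighted profile. Everything below (the criterion names, the lookup sources, the weights) is a hypothetical sketch, not a real schema for CCTNS or ICJS:

```python
# Hypothetical sketch: combine per-criterion values drawn from separate
# databases (CCTNS, ICJS, e-prisons, ...) into one weighted bail profile.
# Criteria, lookup functions and weights are all invented.

def bail_profile(applicant_id, sources, weights):
    """Build a per-criterion profile (each value normalised to [0, 1])
    and a weighted total for one applicant."""
    profile = {c: lookup(applicant_id) for c, lookup in sources.items()}
    total = sum(weights[c] * v for c, v in profile.items())
    return profile, total

sources = {
    "flight_risk":    lambda _id: 0.2,  # e.g. derived from CCTNS/passport data
    "prior_offences": lambda _id: 0.5,  # e.g. derived from ICJS/e-prisons data
}
weights = {"flight_risk": 0.6, "prior_offences": 0.4}

profile, total = bail_profile("APP-001", sources, weights)
print(round(total, 2))  # 0.6*0.2 + 0.4*0.5 = 0.32
```

Keeping the per-criterion profile alongside the total matters: it is exactly what would have to be disclosed to the applicant so the parameters can be rebutted.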

It is important, however, to ensure that the algorithm informing this AI system is devoid of bias. In this context, the lessons learnt from HART and COMPAS must be kept in mind. For example, HART’s 34 predictors did not include factors like an assessment of family circumstances or the value of the accused’s job to her self-esteem: non-computable factors that human reasoning must take into consideration while deciding outcomes. Moreover, HART used two forms of residential postcode in its predictions[29], which could produce an adverse feedback loop targeting communities in a particular residential area.[30] The database informing the algorithm must also be refreshed at regular intervals so as to reflect current trends in deciding bail.

Even though the decision-making algorithm will not (and should not) replace the agency of the judge, its functioning must be transparent and accessible to the public and to offenders. In State of Wisconsin v. Loomis (2016), the matter went to the Supreme Court of Wisconsin because the creator of COMPAS would not reveal how the algorithm determined its risk scores or how it weighted certain factors, citing trade secrets. It is therefore essential that private-sector involvement in developing the algorithm not breach the fundamental rights to know and to freedom by citing disproportionate reasons of trade secrecy and ownership of the algorithm’s intellectual property. Even so, aspects of the algorithm’s functioning will remain beyond the comprehension of an average individual: the HART model uses 4.2 million decision points to decide each case, making it infeasible to study the process of the algorithm’s decision-making.

Evaluating the lessons learnt through these implementations of AI, it appears that simply asking an AI assistant the rhetorical ‘Bail…or Jail?’ is too simplistic. Each bail application is unique and rests within a wide spectrum of such applications. AI technology therefore needs to be sufficiently robust to evaluate each application accurately, considering its unique traits.

Algorithm Good Practices in the UK

The use of algorithms to assist judges necessitates a jurisdiction-specific framework of checks and balances for assessing those algorithms. Such a framework must be in place to evaluate the suitability of any AI tool used in bail decisions. India therefore has much to learn from ALGO-CARE™ in the UK.

Jamie Grace et al.[31] proposed that the shortcomings of algorithms in pre-custodial decisions necessitate that the proportionality[32] of the use of an algorithm be determined on a case-to-case basis. Moreover, the availability and use of data on an offender or individual in the database fed to the algorithm must be disclosed to the offender or the individuals concerned[33]. In 2018, the National Police Chiefs’ Council (NPCC) adopted and promoted the use of the ALGO-CARE™ framework. Developed[34] as a ‘proposed decision-making framework for the deployment of algorithmic assessment tools in the policing context’[35], ALGO-CARE™ is a mnemonic which stands for Advisory; Lawful; Granularity; Ownership; Challengeable; Accuracy; Responsible; Explainable.[36] It is a comprehensive framework that ethically questions every step of algorithm usage in predictive policing (especially in the case of HART). Its filters range from the ownership of the algorithm, its audit and maintenance, to its fair usage, detailed transparency, the lawful obtaining of data, and third-party justification of the outcomes of algorithm use[37]. Using the ALGO-CARE™ framework is expected to help reduce the gaps in the accuracy, efficacy and proportionality of an algorithm.

Innovating Justice Through A.I.

AI systems can also assist judges in more ways than just providing a bail-eligibility score for the applicant. Firstly, they can bring to judges’ knowledge cases of illegal detention which escape judicial scrutiny because of the massive workload and the absence of real-time data. Our criminal procedure provides the accused with many safeguards against arbitrary arrest and detention by the state, default bail being one of them. However, many undertrial prisoners are unable to secure these rights, either for want of effective legal representation or because they cannot furnish bonds and sureties. AI systems can help bring such cases before the magistrate by maintaining a dashboard of all detainees eligible for default bail, built by continuously analysing police and prison data.
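Such a dashboard is essentially a continuously refreshed filter over custody records. The sketch below loosely follows the 60/90-day periods of Section 167(2) CrPC, but the rule is simplified and the detainee data is invented:

```python
# Sketch of a default-bail dashboard: flag undertrials held past the
# statutory period without a charge sheet being filed. The 60/90-day
# split loosely follows Section 167(2) CrPC but is simplified here,
# and the detainee records are invented.

STATUTORY_DAYS = {"ordinary": 60, "serious": 90}

detainees = [
    {"id": "UT-1", "days_in_custody": 75, "offence_class": "ordinary", "chargesheet_filed": False},
    {"id": "UT-2", "days_in_custody": 75, "offence_class": "serious",  "chargesheet_filed": False},
    {"id": "UT-3", "days_in_custody": 95, "offence_class": "serious",  "chargesheet_filed": True},
]

def default_bail_eligible(d: dict) -> bool:
    """True when the statutory period has lapsed with no charge sheet filed."""
    return (not d["chargesheet_filed"]
            and d["days_in_custody"] > STATUTORY_DAYS[d["offence_class"]])

dashboard = [d["id"] for d in detainees if default_bail_eligible(d)]
print(dashboard)  # ['UT-1']
```

Re-running this filter against live police and prison feeds is what would surface such cases before the magistrate without waiting for an application.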

Secondly, AI can assist judges in finding the most relevant precedents and thus ease the decision-making process. This role assumes greater significance because judges at the trial courts don’t have the assistance of law clerks, so the burden of research and writing is borne by the judge alone. AI-enabled systems can fill this gap by searching case-law databases and listing the most relevant cases based on the facts of the application before the judge. The precedents can also be assigned a rating based on how closely they adhere to model bail guidelines computed from the codified law on bail, constitutional principles and the apex court’s directions in previous cases. With these, the judge would have a list of best cases to refer to, helping him apply his discretion without being debilitated by decision fatigue.
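The ranking step described above can be sketched as a weighted blend of two scores: fact-similarity to the application at hand, and adherence to the model bail guidelines. The case names, scores and weights below are invented placeholders:

```python
# Sketch of precedent ranking: blend fact-similarity with adherence to
# model bail guidelines. Case names, scores and weights are invented.

cases = [
    {"name": "Case X", "similarity": 0.90, "guideline_adherence": 0.60},
    {"name": "Case Y", "similarity": 0.70, "guideline_adherence": 0.95},
    {"name": "Case Z", "similarity": 0.40, "guideline_adherence": 0.80},
]

def rank_precedents(cases, w_sim=0.5, w_guide=0.5):
    """Order precedents by a weighted blend of the two scores."""
    blend = lambda c: w_sim * c["similarity"] + w_guide * c["guideline_adherence"]
    return sorted(cases, key=blend, reverse=True)

best = rank_precedents(cases)[0]
print(best["name"])  # Case Y (blend 0.825) edges out Case X (0.75)
```

How the weights are set is itself a policy choice: tilting toward `guideline_adherence` nudges judges toward the codified standard, tilting toward `similarity` toward the closest facts.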

Once such systems are introduced, constant evaluation of their efficiency in predicting behavioural patterns of bail applicants would have to be done and the results compared with those being given by courts working without the assistance of such AI-powered systems. Effective functioning would also require regularly updating the data sets, bias-free data collection by the agencies and constant monitoring of the algorithm to detect any prejudice against any class of persons. Also, to secure the rights of the accused, it would be essential to share the parameters based on which their profiling was done by the AI system, so as to provide them with sufficient opportunity to defend their position and rebut the relevancy of those parameters.  

At this moment in time, adopting A.I. to enhance governance and administration in various Indian sectors is not an inconceivable idea. As demonstrated above, the Indian judiciary is already re-imagining the problem of judicial backlog inter alia via artificial intelligence. The adjudication of bail applications, too, needs to be re-imagined by means of artificial intelligence, to promote not just efficiency but objectivity and accountability as well. Tools like SUPACE, OCR and JUDi have so far been limited to the High Courts and the Supreme Court, and are yet to be used at the trial-court level, where approximately 3.9 crore cases are pending. [38] An algorithm for bail jurisprudence should be implemented at the trial courts first, because that is where most bail applications begin their journey. At the same time, a human (and humane) judge is indispensable and irreplaceable in adjudicating bail applications: an algorithm’s sole objective remains to assist the judge, never to substitute her. Given how resource-crunched our judiciary is, assistance in the form of A.I. technology and algorithms will go a long way in ensuring qualitative decision-making.

[1] This research article is co-authored by the following interns at The Chamber of Bharat Chugh:

  1. Hamna Rehan: Faculty of Law, Jamia Millia Islamia, New Delhi, III year BA LL.B student.
  2. Ananya Narain Tyagi:  Jindal Global Law School, Sonipat, Haryana, V year BA LL.B student. 
  3. Satyam Srivastava: Campus Law Centre, Faculty of Law, University of Delhi, New Delhi, I year LL.B student.
  4. Sakshi Kakkar: MA in International Relations, King’s College London, commences LL.B in fall of 2021


[3] Abhinav Sekhri has argued that because determining the gravity of an offence requires some assessment of the facts (or merits) of the case, courts have to draw a fine line so as not to examine too many facts at the bail stage, and they often go too far in this assessment, crossing the line arbitrarily. These factors have contributed to making jail the norm and bail the exception in cases of non-bailable offences, as highlighted by the Prison Statistics 2019, which show that 70% of the inmates languishing in Indian prisons are undertrials.





[8] Monograph_March2017_Demystifying Risk Assessment_1.pdf

[9] Can AI help judges make the bail system fairer and safer? | Stanford School of Engineering

[10] Tool Selector | Bureau of Justice Assistance

[11] What Is a Bail Algorithm? How Are Bail Algorithms Used? | Nolo

[12] Can AI help judges make the bail system fairer and safer? | Stanford School of Engineering

[13] Pretrial Risk Assessment Explained | WisContext

[14] Can AI help judges make the bail system fairer and safer? | Stanford School of Engineering




[18] Ibid.











[29]



[32] p. 630-660, 658 



[35] Ibid. 

[36] Ibid.

[37] Ibid.




