AI in the Everyday in India – A Socio-Legal Workshop

On 10th January 2026, the JSW Centre for the Future of Law at NLSIU, along with the University of Amsterdam, the Centre for Interdisciplinary Methodologies at the University of Warwick, and Tilburg University, organised a socio-legal workshop titled “AI in the Everyday in India”. The event brought together a diverse group of scholars from a variety of disciplines to holistically understand how AI operates in, and influences, the everyday lives of people. The event consisted of three sessions: the first dealt with AI-related challenges to democracy and justice; the second comprised a diverse set of presentations on governance, human rights, and the infrastructures of AI; and the final panel concluded by exploring imaginations of AI for the future. The discussion opened with introductory remarks by Mr. Siddharth de Souza and Dr. Rahul Hemrajani, who situated the conference within the broader objectives of the JSW Centre for the Future of Law: to engage critically with emerging technologies and their implications for law, governance, and society.
Panel I: AI-Related Challenges to Democracy & Justice
Panellists: Nupur Chowdhury, Nikhil Purohit, Isha Ahlawat, Rajesh Kumar, and Anmol Diwan.
Chair: Rahul Hemrajani
Discussants: Siddharth de Souza and Sagnik Dutta
Mr. de Souza introduced the discussion by urging participants to step back and examine the various stakeholders who are not only impacted by AI systems but also actively shape their governance and regulation. He noted that any meaningful rights-based approach to AI governance must ask questions about, inter alia, the level of surveillance, the scope for manipulation, and the quality of data. Dr. Dutta focused on the peculiarities of the Global South, where power asymmetries between stakeholders may lead to exclusions. He stressed that post-colonial contexts often make it difficult to apply existing theoretical frameworks of digital colonialism and that such theories must be reconfigured for specific contexts. Specifically, he noted that the Indian landscape is marked by multiple small startups (as opposed to a few Big Tech conglomerates) engaged in welfare and governance functions in collaboration with the State.
This panel explored the growing application of AI in governance and its impact on democratic institutions and systems of justice, with particular attention to the criminal justice system, urban traffic control, increased state surveillance, welfare schemes, and AI slop in the Global South and post-colonial contexts. The panel discussions reflected these areas of focus and covered three broad themes: (i) interoperability (or the lack thereof) and bias in AI; (ii) governance and infrastructural landscapes; and (iii) emerging transformations, including bottom-up approaches to AI design and new forms of exclusion produced by AI systems.
AI in Judicial Decision-Making & the Inter-operable Criminal Justice System – Nupur Chowdhury
In the first presentation, Dr. Nupur Chowdhury examined the implications of AI-enabled systems for judicial decision-making in India, with a particular focus on the Integrated Criminal Justice System (ICJS). She challenged the common “comparator argument”, which posits AI as preferable because human judges are already biased. Instead, she cautioned against the uncritical adoption of optimisation-driven AI systems, where efficiency becomes a self-reinforcing justification for expanding data collection and automation. Turning to the Indian context, Dr. Chowdhury traced the evolution of the ICJS, from early efforts at digitising court records and standardising judgments through the e-Courts project, to more recent attempts at integrating policing, prosecution, forensics, CCTV systems, and courts into a single interoperable framework. She argued that this “seamless integration” risks collapsing the separation between investigation, prosecution, and adjudication, which are typically seen as three distinct stages. By positioning the police as the first interpreter of law within a tightly-bound technological system, the ICJS threatens judicial independence and undermines the very idea of a fair legal trial.
Surveillance, Predictive Policing, and Free Speech Ramifications – Nikhil Purohit
Mr. Purohit focused on AI-enabled surveillance and predictive policing, and examined the impacts of these developments on broader human rights concerns. While acknowledging the ostensibly benign objectives of such systems, ranging from crime prevention to pandemic management, he warned of their profound democratic costs. He argued that pervasive surveillance mechanisms create a chilling effect on dissent and free speech, and likened the large-scale identification and targeting of protestors to the workings of an Orwellian state. Mr. Purohit’s central argument was that predictive policing systems, which are trained on historically biased and possibly even colonial-era datasets, inevitably reproduce socio-economic and caste-based discrimination. He drew a comparison with studies from the USA, where Black neighbourhoods are disproportionately flagged as high-risk. He noted similar problems in India, exacerbated by the low accuracy rates of technologies such as facial recognition. He concluded that these concerns are not limited to India but form part of a broader systemic issue with AI-driven governance across post-colonial States.
Law, Ethics, and AI in Urban Traffic Enforcement: The Case Study of Delhi NCR of India – Rajesh Kumar
Dr. Kumar’s work focussed on the Delhi NCR region and highlighted the significant policy-level shift towards the increasing use of AI systems in traffic enforcement. The use of AI reduces the human dimension, thereby reducing instances of corruption and discrimination. However, Dr. Kumar focussed on the risks associated with such widespread use of AI in traffic enforcement: reduced discretion in levying penalties, automatic fixation of liability, increased predictive imposition of liability, fewer personal hearings, and digital exclusion. In addition, he highlighted the privacy concerns arising from the large volumes of data held by the private entities providing these AI services. Such risks, he argued, must be evaluated on the touchstone of constitutional rights under Articles 14, 19, and 21. He concluded that small-scale studies are needed to understand the granular impact of AI systems in traffic enforcement before their deployment.
Democratic Backsliding in the Global Majority: Wading Through the Swamp of AI Slop – Anmol Diwan
Mr. Diwan described the phenomenon of AI slop: the swamp of AI-generated information, not necessarily fake, which spreads on the internet, creating distortion and confusion. He focussed on its implications for democracy in Global South countries, taking the examples of India, South Africa, and Brazil. By making it harder to access accurate information, AI slop erodes trust in democratic institutions. Further, AI technology allows governments to generate and manipulate information at scale, creating a facade of democracy. Mr. Diwan argued that these small effects of AI slop aggregate and lead to democratic backsliding.
Ms. Isha Ahlawat
Ms. Ahlawat explored algorithmic integration in the welfare schemes carried out by the Government of India to combat problems like corruption. She examined the use of large datasets in profiling citizens and determining eligibility, among other uses, which marks a significant shift in the relationship between citizens and the welfare functions of the government. This automation and centralisation create problems such as false negatives, resulting in the denial of benefits to many deserving citizens. Drawing from Jeremy Waldron, she argued for the normative value of human decision-making that prioritises due process rather than mere efficiency. Discretion and vagueness are required in implementing welfare schemes, and not every process can be automated.
Panel II: Governance, Human Rights and Infrastructures of AI
Panellists: Preeti Raghunath, Suriya Krishna BS, Devyani Pande, Manpreet Singh, Krishna Ravi Srinivas
Chair: Siddharth de Souza
Discussants: Rahul Hemrajani and Siddharth M
This panel touched upon diverse themes, ranging from the environmental effects of data centres and the actors involved in AI governance, to inclusion in the context of AI and the question of legal personhood for AI.
Ecologies of AI in India – Preeti Raghunath and Suriya Krishna B S
This study focused on the environmental impacts of AI data centres, and AI infrastructure more generally, on the everyday life of people in the Global South. This special focus stems from the disconnect between those who bear the detrimental environmental effects of such data centres and those in the Global North who benefit from them. The presentation combined work that both panellists had started individually. Ms. Krishna presented the policy landscape concerning data centres, highlighting problems of path dependency in promoting the adoption of AI despite the environmental concerns it poses. Dr. Raghunath brought in perspectives from “frontier lives”: people, mainly in Telangana, who interface with AI and its infrastructures on a daily basis. This amalgamation grounded their broader policy arguments in the lived experiences of those at the frontier who experience the effects of AI.
‘Who’ is Involved in Governing AI in India and ‘how’?: The Role of State and Non-state Actors – Devyani Pande
Dr. Pande, looking through the lens of public policy, explored the role of a large number of non-state actors, such as the corporate sector, Big Tech, civil society, and non-governmental organisations, in influencing the AI regulatory framework alongside the government. She used the Advocacy Coalition Framework in a dynamic manner to explore who the actors involved are, what their narratives are, and the conflicts and similarities in their approaches to AI governance in India. She argued that, in a developing country like India, which is at a nascent stage of AI regulation, such a study will enable us to understand how AI policy is being conceived as regulatory frameworks take shape.
AI and Human Rights: A Posthuman Conundrum – Manpreet Singh
Dr. Singh intriguingly demonstrated his argument by presenting a ChatGPT-generated version of his paper, obtained by prompting it to present his ideas in an academic tone. He posited that the act of writing the paper using ChatGPT illustrates a shift in the way we conceive thinking. He conceived the paper as being co-written and co-produced by AI, and contended that thinking is no longer an exclusively individual or internal process – raising questions of agency and meaning-making. He focused on the post-human world, particularly emphasising the interaction between AI and humans. He concluded by indicating that the question of extending legal personhood to AI may be displaced, and highlighted the need to articulate post-human jurisprudence in the age of AI.
Inclusion, Innovation and AI in/for Law in India – Krishna Ravi Srinivas
Dr. Srinivas drew from ideas of equity and access to put forth an inclusive innovation approach to AI in law and AI for law. He delved into the multiple ways in which inclusive innovation has been defined, and emphasised that innovation must be inclusive both in terms of the process through which it is achieved and the problems and solutions it addresses. He emphasised the need to make Science, Technology, and Innovation (“STI”) inclusive: responsive to, and of benefit to, those sections of society that have gained little from the development of STI. He concluded by urging the audience to move towards participatory, need-based, and stakeholder-centric processes and outcomes with regard to STI.
Panel III: Imaginations and Futures of AI
Panellists: Debangana Chatterjee, Shaunna Rodrigues, Shrawani Shagun, Sanchet Sharma, Bhawna Parmar
Chair: Sagnik Dutta
Discussants: Rahul Hemrajani and Siddharth de Souza
This panel explored themes of AI personhood, political legitimacy, and algorithmic bias. It examined how algorithmic systems mirror existing power hierarchies and shape political imagination. The speakers highlighted the need to re‑centre human accountability and democratise AI governance. Together, they envisioned equitable, inclusive, and socially conscious AI futures.
The Masculine Rhythm of Algorithmic Operations and the Future of Collective Political Imagination – Debangana Chatterjee
Dr. Chatterjee discussed how algorithms are shaped by values like order, safety, and predictability, which can inadvertently reinforce algorithmic biases. She invoked the “masculine rhythm” of the algorithm to demonstrate how algorithms tend to be linear, patterned, and tech-controlled systems. While algorithmic operations do not take explicit political sides, they still influence political expression in a subtle yet significant manner. Ultimately, she explained how this masculine rhythm shapes political life by controlling and reinforcing dominant narratives.
From Assembly to Algorithm: Constitutional Intelligibility and Self-Respect in the Age of AI – Shaunna Rodrigues
Ms. Rodrigues discussed political legitimacy in contemporary democracies and how it is being reshaped by the rise of AI and algorithmic governance. She argued that political intelligibility and legal orders are deeply shaped by colonial thought, which makes legality alone insufficient for democratic legitimacy. Her paper brought forth the ideas of knowledge, progress, and self-respect as crucial means of understanding rights in the age of AI. Ms. Rodrigues also touched upon the concept of nationalism as an essential form of political identity within the global order.
Decolonising AI Personhood: Designing a Framework for the Future – Shrawani Shagun and Sanchet Sharma
The panellists discussed how giving AI legal personhood would repeat core injustices of colonial rule by allowing powerful actors to shield themselves from responsibility through corporate‑style legal fictions. Using work on colonial companies, scholarship on data colonialism, and Ruha Benjamin’s critique that technology is never neutral, the panellists demonstrated that AI systems are always the result of human choices, yet present design, deployment, and disclosure practices systematically obscure those choices and their harms. They illustrated human costs through examples like wrongful algorithmic flagging in India’s DigiYatra system and deaths linked to Aadhaar‑based welfare exclusions, where the burden of failure falls on poor households while no identifiable actor is accountable. As a solution, they rejected AI personhood entirely and proposed supervisory strict liability: in low‑risk AI, harms are to be dealt with through ordinary lawsuits, but in high‑risk AI, those “at the top” would automatically be liable. This model keeps power centred on humans rather than judgment‑proof machines, mirrors how strict liability already works in sectors like aviation, pharmaceuticals and medical devices, and ensures accountability before large AI models are deployed rather than after the fact.
Building AI Futures from Below: Centering youth voices to build equitable and accountable AI – Bhawna Parmar
This paper presented a ground-up study of how young people from marginalised communities in India understand and live with AI. Based on research in five states (Delhi, Haryana, Kerala, Gujarat, and Odisha), it demonstrated that their digital lives are “complex, messy and unguided”. With virtually no formal education on how AI works, these young people constructed their own narratives about algorithms, while surveys revealed that over half of the girls and boys surveyed believe AI chatbots are infallible. In contrast to government AI-literacy programmes that focus on workforce readiness and reproduce dominant techno-optimistic narratives, the paper interrogated whether young people are actively deciding their futures with AI, and what anxieties they carry about AI-shaped futures in India. Through these processes, young people began to articulate concerns about accountability, data transparency, data diversity, and power, and to imagine alternative AI worlds. This culminated in a youth-authored AI manifesto calling for labour-centric, creativity-supporting, community-centred, and climate-conscious AI.
The panel discussion concluded with Mr. Dutta encapsulating the themes and ideas explored across the sessions. This was followed by Dr. Hemrajani delivering the vote of thanks.