AI in the Global South: DPI as an AI Governance Approach – India AI Pre-Summit Event
This keynote address on “AI in the Global South: DPI as an AI Governance Approach” was organised by the JSW Centre for the Future of Law with the support of the Institute for New Economic Thinking, as part of the pre-summit events being held at multiple institutions in the run-up to the India AI Impact Summit in February 2026. It was a precursor to the panel discussion on “AI Governance in the Global South” held on 24th January at the NLSIU campus. Dr. Akash Kapur, Visiting Research Scholar and Lecturer, Princeton University, delivered the keynote address.
Dr. Kapur started by comparing the rise of AI with that of the internet. The internet had promised a great deal; it was believed that it would change the world. Yet, he noted, for much of the 2000s access to the internet remained limited to the rich. Digital Public Infrastructure (“DPI”), such as the Unified Payments Interface, by contrast, came to be commonly used by many, including in non-urban areas. Further, he shared that his experience of visiting two DPI summits, in Egypt and South Africa, lent credence to the fact that the Global South has become a centre for DPI. Having laid this ground by explaining how AI has been used in the public interest, he proceeded to the central question he sought to address: can AI avoid falling into the trap of serving primarily private systems? “Can we build more forms of AI designed to benefit the many instead of being a closed system?” he asked.
He proceeded to outline three forms of the intersection of AI and DPI: AI for DPI, DPI for AI, and AI as a form of DPI.
Under the first form, AI for DPI, he argued for the use of AI to enhance existing DPI modules: detecting fraud, targeting welfare programmes at particular sections of the population, translating across languages with AI models, and expanding the reach of DPI services. But he also recognised that there were concerns of transparency, and a corresponding need for accountability and procedural safeguards. In this regard, he opined that the recent AI Governance Guidelines have, to a good extent, addressed these concerns.
The second form was DPI for AI – he proposed reusing and repurposing DPI data for developing small AI models. Data from DPI would also provide locally contextual data for training such models. He acknowledged that reusing and repurposing data for a purpose different from what was consented to would pose privacy problems. But he opined that the answer does not lie in protecting individuals at all costs. He emphasised that there is a fine line between data as a private and a public good, and that properly anonymised data has immense benefits. Thus, he pointed out, this raises many thorny but fruitful questions that need to be addressed.
The third and final form was to view AI as DPI; he argued that AI must be treated as a fundamental layer within the DPI stack. For him, the “I” in “DPI” was its key component, since it represents the horizontal, cross-sectoral application of a tool. He explained that there is a lot of work being done, including his own, on what AI as a form of digital infrastructure would look like. He emphasised the importance of recognising that AI is not a single technology; it is a set of interrelated technologies. Within these technologies, it is important to identify the foundational elements which would then serve as the base on which public and private innovation can be built.
He opined that one must focus on compute, data, and foundation models when it comes to the foundational elements of AI infrastructure. He explained that there was a lot of work being done to create compute that could be provided on a public interest basis. For instance, India’s AI Mission offered over 30,000 GPUs at subsidised rates. This was in line with his argument that compute in the AI ecosystem must not be privately controlled, and demonstrated that countries are working towards this. Further, he emphasised the importance of creating open data sets, for the purposes of inference even more than training. He provided the example of weather data to illustrate that open data sets are important for real-time inference. In the case of models, he found it interesting that the market and competition itself is pushing towards openness: with the introduction of DeepSeek, OpenAI was forced to release open models. He noted that open models now function almost as well as closed models for any real-world application that may be needed for DPI. Hence, compute and data may end up being the bigger problems that one needs to focus on opening up for DPI.
He then concluded by outlining three governance takeaways applicable across the three forms above.
First, public interest is not equal to public ownership. DPI, according to him, refers to the public interest, but is often misunderstood as a reference to public sector undertakings. He saw DPI as the creation of public interest technology, with the aim of unleashing both private and public innovation. He took the example of the internet, which, according to him, unleashed perhaps the greatest wave of capitalism in history. Hence, it is not about public ownership or management, but about creating a competitive market in which better innovation for DPI is made.
Second, openness as architecture. A debate exists as to how much of DPI or public AI has to be open. He posited that openness is essential, but it does not necessarily have to mean open source. Openness can be designed into the architecture of AI – for instance, through interoperability, open APIs, and possibly even data portability. Thus, for him, public technology does not necessarily have to be open source technology.
Third, whenever a new technology emerges – cryptocurrency, for instance – we think of regulating that technology. But he finds this as tedious as reinventing the wheel. He sees value instead in existing general purpose law, such as Competition law and privacy and data-protection frameworks, which need to be updated to accommodate the new technology. People already working in these areas of law have deeply rooted knowledge and capacity within them. Areas like Constitutional law and Administrative law could help inculcate principles of due process, accountability, and contestability. He believed that there is value in building capacity within existing institutions instead of regulating each technology anew as it emerges.
He concluded by recognising that the present is a key moment for AI, and expressed the hope that his work would make a difference in ensuring public AI.
Question and Answer Session:
1. Can you elaborate on what you conceptualise as DPI? For instance, whether even UPI must be considered DPI is a debatable issue. So what are the key characteristics of DPI?
DPI is indeed a contested and evolving notion, according to Dr. Kapur. However, he mentioned that there is now wide consensus around what the World Bank considers core DPI – digital identity, digital payments, and data exchange. For him, the foundational, infrastructural notion – open, interoperable platforms that unleash activity higher up the stack – is core to DPI, as is encouraging public and private innovation. He observed that as DPI spreads around the world, governments are taking interest in it, especially when it comes to digital identity. Hence, he recognised the risk of it becoming a public-sector-focused technology.
2. During the address, you mentioned that the AI guidelines were good, but would they mean much if they do not translate into action? How do you see the approach taken by the Indian government in this respect?
Dr. Kapur opined that the guidelines are quite thoughtful, but agreed that operationalising them was a challenge. He noted that countries around the world were finding it difficult to implement technology regulation frameworks. For him, the key is neither to let emerging technologies like AI become privately controlled, nor to stifle innovation by making them completely public – a balance needs to be struck between the two.
3. You mentioned that digital infrastructure has an enabling quality – that is the right idea for DPI, the idea that the government can in some way use capital to enable growth and innovation across the public. But how does this square with the idea of public interest and ownership? There are different models of public ownership in India – for instance, DigiYatra, Aadhaar, and the National Payments Corporation of India all have different structures blending public and private ownership. Using these governance structures in the manner you spoke of is quite challenging, especially because the government can look out for the public, but the private sector needs to look out for other interests. So how does minimalism square with this?
In some ways, for Dr. Kapur, the internet is the original DPI. He opined that the technical founders of the internet did not do it for the money; they created it as an open infrastructure that they did not own or earn substantial money from. The US Government, too, did not regulate it robustly, although he acknowledged that it exercised control over the internet through ICANN well until 2015. So, for him, the government can seed public infrastructure, and thus far, he noted, India had performed well in this respect despite the issues that existed. But he was not fully convinced that DigiYatra was DPI, since it is largely an end-user application, and the DPI underlying it is actually Aadhaar.
- The idea of openness and public ownership may better serve public interest as long as capital comes from the public. How do you reconcile these contrasting ideas?
He did not have any objections to open source as such; to the extent that open source infrastructure could be built, he was in favour of it. But he mentioned that sometimes open source limits adoption, for both good and bad reasons. For instance, a country may not want to build open source infrastructure because it wants to retain control, but it could also be because its government does not have the capacity to use the open source infrastructure. But he did concede that there is some overlap between DPI and open source modules.
- This question is with respect to the institutional and regulatory architecture that India can adopt as it moves towards materialising how AI can translate into DPI. Do you see a more sector-agnostic, horizontal AI regulatory framework working in this regard, or a more sector-specific, use-driven approach?
Dr. Kapur opined that a combination of both sector-specific and sector-agnostic frameworks would be ideal. He leaned towards outcome- or purpose-specific regulatory approaches rather than purely sector-specific ones. This did not mean that AI would be regulated by, for instance, the Agriculture Department – but some aspects of AI relating to agriculture may fall under the Agriculture Ministry. AI regulation is ideally broken up across institutions and existing functions – some of it sector-specific, some function-specific, but not necessarily issuing from one specific body.
- This is a clarification regarding openness as architecture. Would open data sets also not be a part of open source AI? So what does openness as architecture really mean, and where do open data sets fit into this?
Openness exists on a spectrum, and Dr. Kapur opined that one needs to be practical about it. For instance, given a choice between a closed model made by a Silicon Valley lab and an open source model where one is unsure of the data sets used to train it, the latter is likely to be preferred. He also did not think it realistic to use only publicly available data for training purposes, since it is very scarce.
- You mentioned that general purpose law like Competition law and Administrative law is good enough to regulate emerging technologies like AI. But all of these laws are ex post as of now; they get triggered only when the problems become too big to manage. Is there a way to integrate the values that these general purpose laws possess into the architecture of DPI? For instance, by ensuring that these technologies do not violate fundamental rights?
He agreed with the observation that general purpose law is largely ex post, and that not all emerging technologies can be regulated that way. The problem was not just existing general purpose law, but also the use of existing institutions. So either existing frameworks had to be updated to deal with new technologies, or some new frameworks had to be created, which, he opined, was inevitable.
- A lot of DPI is going to replace bureaucrats and government intermediaries. How will this change the relationship between the State and the Citizen?
Dr. Kapur agreed that the relationship between the Citizen and the State was changing. Transparency and redress were, for him, the two main issues when bureaucratic decision-making is replaced by automated decision-making. He noted that technology could play a big role here; for instance, audit trails could be used to trace how a decision was made. But that did not mean dumping a large amount of data on those seeking information, in a form that they cannot understand or use. So techno-legal solutions need to be accompanied by legal-administrative solutions.
- DPI as we understand it is rule-based; there is no identity involved. But with AI, probability is involved, and one does not know what the output of AI is or what it is based on. So if AI is considered an element of DPI, how must the government regulate it, especially given the element of probability involved and its being essentially a black box?
Dr. Kapur agreed that this was an intrinsic issue with AI. He opined that having a human in the loop was a possible solution, to ensure that the human decision-making component is not eliminated entirely. Human decision-making is essential for redress mechanisms, even as automated decision-making lowers the burden on humans to make decisions. He added that this was not only a problem with DPI, but with the scale of technology in general.
- Whenever there are discussions about DPI by the government, reducing information asymmetry is an important element. There is also public scepticism about DPI, along with low awareness of the necessity of digital identification to avail numerous schemes. What do you think about redressing this information asymmetry?
Dr. Kapur noted that in an ideal world, DPI would make government and public services more accessible and perhaps reduce corruption. But he believed that people often limit DPI to its specific manifestations – in India, for instance, it is especially identified with Aadhaar-based identity. He noted that different governments have different reasons to implement DPI. Sometimes they are good reasons, such as spurring development, increasing efficiency and reducing corruption; but it could also be for greater control and surveillance. In explaining this, he sought to indirectly address the points raised in the question.
- You pointed out the value of moving away from general-purpose law – but I was wondering how we could move towards a formulation that looks at the power dynamics at work there. For instance, with high-value data sets, I would still be confused as to whether that would be AI for DPI, or DPI for AI, or probably both. My question is, is there a bargain that will have to be made one way or another? Even if a new purpose-driven law is adopted, it would be subject to the same regulatory capture to which general purpose law is subject. Or do we stick with existing institutions and tools and strengthen them?
Dr. Kapur opined that these were largely societal problems – the problem may lie with the technology, or with the person using the technology, and technology may amplify the problem. He noted that problems of power misuse and accountability still existed. But first, he saw value in the benefits we get out of technologies like DPI despite the problems with them. Second, he mentioned that one needs to look at the existing risks and the new risks that arise out of emerging technologies, and find ways to govern them; some problems, however, remain unanticipated. He used UPI to illustrate his answer: its purpose, according to him, was to break monopolies and enable smaller players to benefit, yet two global multinational players ultimately became dominant in the UPI market. So, essentially, he saw value in thinking about minimising the harm and maximising the potential that AI and DPI offer.