Panel Discussion on AI Governance in the Global South: India AI Pre-Summit Event

This event was held at the NLSIU Campus on 24 January 2026, with Mr. Amlan Mohanty, Mr. Prakash Narayanan, Ms. Eunice Huang, Mr. Jaideep Reddy, Prof. Rahul De, and Prof. (Dr.) Sudhir Krishnaswamy as panellists. The discussion was led by Dr. Rahul Hemrajani. This event was reported by Nethra J from the IJLT Editorial Team.

IJLT Editorial Team

February 19, 2026

On 24 January 2026, the JSW Centre for the Future of Law at the National Law School of India University held a panel discussion titled “AI Governance in the Global South.” The session focused on India’s recent AI Governance Guidelines and placed them within broader discussions about sovereignty, regulatory capacity, geopolitical competition, labour changes, and the political economy of technology in the Global South. Instead of viewing AI governance as just a regulatory challenge, the panel explored fundamental questions: What does “governance” mean in relation to artificial intelligence? Is it the same as legislation, or does it include institutional capacity, technical structure, standards, and state strategy? What does “Global South” mean in the context of the AI value chain? How should labour, competition, the environment, and national infrastructure be considered in governance discussions? 

The panel included perspectives from policy-making, industry, legal practice, and academia. The conversation showed that AI governance is not just a legal issue but also a matter of political economy and statecraft. The panellists included Mr. Amlan Mohanty, Non-Resident Fellow at NITI Aayog; Mr. Prakash Narayanan, Global General Counsel at L&T Technology Services (LTTS); Ms. Eunice Huang, Head of AI and Emerging Tech Policy for Asia Pacific at Google; Mr. Jaideep Reddy, Partner at Trilegal; Prof. Rahul De, independent consultant and researcher, and former Dean at IIM Bangalore; and Prof. (Dr.) Sudhir Krishnaswamy, Vice Chancellor of NLSIU. The discussion was led by Dr. Rahul Hemrajani, Assistant Professor at NLSIU.

The panel began with Mr. Amlan Mohanty’s opening remarks. He centered his comments on three main concerns: the meaning of AI governance, why India’s approach is suitable for its context, and why this framework is important for the Global South. He pointed out that while artificial intelligence has existed since the 1950s, recent advances in token generation, natural language processing, multimodal reasoning, and autonomous systems have significantly raised the stakes. Today’s AI systems can do more than create content; they can also take actions in the world. In particular, autonomous AI systems heighten concerns about accountability, safety, and the need for institutional adaptation. He argued that these technological changes should not be viewed apart from geopolitical trends. Technological capability is now a key factor in strategic power, contributing to an increasingly bipolar technological order led by the United States and China. Thus, AI governance is tied to broader discussions of defence readiness, economic competition, and national capacity.

In this context, Mr. Mohanty explained that a key decision made by the expert committee drafting the Guidelines was to avoid creating a separate “AI law”. Instead, governance was defined broadly to include awareness and capacity building, the creation of standards, institutional readiness, inclusion strategies, and harm mitigation through existing laws. The Guidelines were conceived as a strategic document rather than a strict regulatory code. Instead of mimicking the European Union’s risk-based legal model or the historically hands-off approach of the United States, India’s framework aimed for contextual adaptation and responsiveness. It was meant to be a “living” document that can be revised as technology evolves, recognizing the changing nature of AI systems and their usage. 

Finally, Mr. Mohanty stressed the need to understand the Global South politically, rather than just geographically. In the AI landscape, most countries outside the US and China occupy a similar structural position: they focus on applications rather than frontier model development, and depend on foreign computing infrastructure and foundational models while aspiring to technological independence. In South Asia and other Global South regions, he observed common features such as a practical use-case focus, linguistic diversity, ambitions for sovereign digital infrastructure, and heightened labour market vulnerability. In this view, governance issues cannot be detached from political economy positioning within the AI value chain. Thus, AI governance is not merely about regulatory design; it is about situating states within an evolving technological landscape marked by unequal power, infrastructure reliance, and development goals. Building on Mr. Mohanty’s view of AI governance as a strategic and politically aware project, the discussion shifted from issues of sovereignty and state positioning to the practical aspects of ecosystem development and industry involvement.

Ms. Eunice Huang provided a regional and industry perspective. Referencing survey data collected over three years, she noted that India and other Asia-Pacific nations show higher levels of AI optimism: about 69% in India compared to around 33% in Europe. She highlighted that individuals who have used AI tools typically feel more confident in the technology, indicating that exposure and use shape public opinion. In this way, India’s favourable attitude toward innovation reflects not just a policy choice but a broader societal view of technology as a key to economic growth and change. Ms. Huang supported the Guidelines’ ecosystem-based view of governance, agreeing that AI governance cannot be limited to formal legislation alone. It should also cover safety standards, technical testing infrastructure, supportive copyright and data governance systems, and approaches like watermarking to manage synthetic content. In this understanding, governance is embedded in institutional structure and technical architecture, not just confined to laws. At the same time, she warned against simply copying regulations from other countries, pointing to South Korea’s AI law, which closely followed the European Union’s AI Act without comparable implementation capacity. She cautioned that overly ambitious legal frameworks might fail without the necessary regulatory infrastructure. For many Global South nations, the challenge is not just writing laws but enforcing and implementing them. She identified three structural limitations affecting AI governance in the Global South: limited regulatory capacity, a reliance on English-language datasets for foundational models, and a lack of strong local data ecosystems. These issues create hurdles for inclusive and context-aware AI deployment, especially in linguistically diverse societies. Therefore, she argued that AI governance should be flexible, adaptable, and grounded in evidence rather than rigid and prescriptive. Effective governance arises not from adopting foreign regulatory models but from careful calibration between ambition, institutional capacity, and technical realities.

Mr. Prakash Narayanan shifted the focus to the corporate and labour dimensions of AI governance, reflecting on the real-world experiences of Indian service providers in global markets. He argued that the Global South should not be viewed only in terms of individual citizens or state actors. Indian companies that employ millions and provide technology services around the world also play crucial roles within the Global South, so governance frameworks must consider economic actors, business models, and labour markets alongside broader developmental goals. While he recognized the potential benefits of AI, particularly given India’s skilled workforce and linguistic strengths, he stressed that employment remains a critical issue. The earlier notion that AI would simply enhance existing jobs is fading as companies anticipate slower hiring and changing workforce needs. Increased efficiency from AI may boost productivity and profits, but it could also reduce entry-level positions and change long-term job prospects. In a services-oriented economy shifting further toward digital operations, the implications for new entrants and job growth are significant. If the Guidelines promote “AI for inclusive growth,” Mr. Narayanan asked, whose inclusion is being considered? He emphasized that inclusive growth should go beyond economic metrics to include job stability and skill development. He also supported the report’s generally light regulatory approach, suggesting that governance should not impose heavy compliance burdens where clear harms are not evident, and argued that excessive regulation could stifle innovation and divert resources from the training and responsible deployment efforts already under way in the industry. His comments surfaced an important tension: balancing pro-innovation governance with meaningful attention to labour market shifts in a Global South economy that relies heavily on service exports.

Shifting the focus to legal and institutional design, Mr. Jaideep Reddy evaluated whether India’s current legal framework sufficed to meet the challenges of AI governance. He generally agreed that the Information Technology Act, 2000 could serve as a broad law for AI regulation, but noted that some definitions and assumptions, especially the term “intermediary,” need re-evaluation in light of AI actors and functions. Section 79 of the IT Act provides safe harbour protection to intermediaries for third-party content; it was originally designed for platforms hosting user-generated content, following similar laws in the United States. However, AI systems blur traditional boundaries between developers, operators, platforms, and users, complicating the allocation of responsibility. When AI generates content rather than users uploading it directly, it becomes less clear who is legally responsible. Mr. Reddy therefore argued for clearer delineation between the different AI actors.

He raised concerns about draft amendments related to “synthetically generated information,” pointing out that the definition may be too broad. In a world where most digital content could be AI-assisted through editing tools, generative systems, or algorithmic enhancement, universal labelling requirements might become impractical and misleading. Strict rules, such as mandatory disclosures covering a set percentage of audio or video content, could impose compliance burdens without addressing the underlying harms. Finally, Mr. Reddy highlighted an essential difference between India and the United States: without sector-specific regulation, legal disputes in India often default to criminal law rather than evolving through common law. The absence of regulation in the AI space might therefore invite criminal law to fill the gap, with serious consequences. His remarks stressed the need for clear rules and well-designed institutions so that AI governance develops coherently within India’s existing legal framework rather than producing unintended legal problems.

Prof. De and Prof. (Dr.) Krishnaswamy brought a critical and evidence-based perspective to the discussion. Prof. Rahul De began by describing what he called a paradoxical AI moment: intense enthusiasm for adoption alongside significant uncertainty about actual benefits. He cited striking figures: around 2.5 billion prompts are submitted to ChatGPT daily and nearly 98% of CEOs want AI integrated into their organizations, yet a large share of AI projects, sometimes cited as high as 95%, fail to achieve meaningful results. This gap between hype and real value, he argued, calls for scepticism instead of blind optimism. Prof. De also emphasized that AI is not one technology but a family of diverse systems. He drew a crucial distinction between generative AI, especially large language models, which are probabilistic and “hallucinatory by design,” and analytical or deep learning systems, which generally provide more predictable outputs. Governance frameworks that treat AI as a single category risk causing confusion and policy misalignment.

In addition to this clarification, Prof. De pointed out several missing elements in the broader governance discussion. He highlighted the crucial role of open-source ecosystems in speeding up AI development. He argued that governance documents should acknowledge and engage with open innovation instead of focusing only on proprietary models. He also called for more emphasis on data labelling governance. He warned that labelling practices carry cultural biases and significantly affect model outputs in subtle yet powerful ways. Furthermore, he stressed the need for AI literacy, pointing out that both public and institutional players must understand how these systems are built, trained, and used in order to regulate or apply them effectively. Lastly, he suggested that governance frameworks should set clearer boundaries and specific guidelines on what should not be done instead of relying solely on vague, flexible language. In his view, AI needs to be managed as a distinct concept, not due to its mysterious nature, but because its probabilistic design and scale of use require a depth of understanding and engagement that exceeds traditional regulatory responses.

Prof. Sudhir Krishnaswamy provided a structural critique that shifted the focus from regulatory approaches to state theory and political economy. He questioned whether the India AI Governance Guidelines qualify as a true governance document, suggesting they resemble a technology policy, or more accurately, a modern form of industrial policy, rather than a conventional governance framework. He argued that the document repositioned the state not just as a rule-maker for businesses but as an early adopter and active player in technological development, reflecting earlier phases in India’s growth. He drew historical comparisons to India’s telecom liberalization in the 1980s, where policy decisions regarding switching technologies and local capabilities shaped long-term impacts. He also referenced the strategic choice to use open-source software in early computing and the pharmaceutical sector’s process-patent approach, which served as a deliberate techno-legal strategy to help domestic industries grow. These examples, he claimed, show how law and technology policy together can influence a country’s position within global value chains. 

Against this context, Prof. (Dr.) Krishnaswamy questioned what India’s appropriate techno-legal strategy for AI should be. He argued that the Guidelines do not clearly define India’s role in the AI value chain concerning compute infrastructure, semiconductor production, foundational model development, data generation, or design capabilities. While recognizing ongoing discussions about compute strategies and large language model development, he questioned whether there is a solid long-term plan to ensure India maintains a significant structural position in a decade. He stressed that references to the “Global South” should be rooted in political economy, not just in rhetorical solidarity. The key question is who controls the basic materials of AI systems, who creates and trains foundational models, where training data comes from, and who benefits from value along the chain. Without addressing these fundamental issues, governance may remain more about rhetoric than establishing solid institutional and material foundations. For Prof. (Dr.) Krishnaswamy, the main challenge is not just to write adaptable guidelines but to create a clear techno-legal strategy that positions India and, by extension, the Global South within the shifting landscape of global AI influence.

Structured Q&A Discussion

  1. What Is Missing from the AI Governance Guidelines?

A common question from the audience was about what the Guidelines lack. The panellists identified several gaps: the environmental and supply-chain impacts of AI systems, especially compute-heavy infrastructure; the lack of a clear stance on open-source ecosystems; insufficient clarity on indigenous model-building strategies; limited engagement with competition law; and the absence of structured governance for data labelling. Prof. (Dr.) Sudhir Krishnaswamy emphasized that without attention to material infrastructure, compute, semiconductors, and value-chain positioning, the document risks being more rhetorical than structural. Prof. Rahul De highlighted the importance of open-source ecosystems and the political dimensions of data labelling. In response, Mr. Amlan Mohanty acknowledged that labour and environmental issues were not thoroughly covered in the committee’s brief and suggested that governance will need to evolve gradually as clearer evidence emerges.

  2. Competition Law and Digital Market Regulation

Another question centered on the role of the Competition Commission of India (CCI) in guiding AI deployment. The main concern was that AI systems are part of digital market structures, and any significant governance discussion must consider competition remedies, fines, and structural changes in platform markets. The panel generally agreed that AI deployment will be affected by competition law and digital market rules, making interdepartmental coordination within government crucial. Mr. Mohanty responded by stressing the need for systematic risk assessments and more transparency across the AI value chain as necessary governance components. He suggested that competition issues connect with broader accountability measures.

  3. Regulation versus Trust-Based Self-Regulation

Another discussion focused on whether AI governance in India should adopt a precautionary regulatory approach or favour trust-based self-regulation. Mr. Prakash Narayanan supported minimal proactive regulation, arguing that interventions should occur only after real harm becomes apparent, rather than on the basis of potential risks. However, Mr. Jaideep Reddy warned that in India’s legal landscape, regulatory gaps are usually filled by criminal law enforcement rather than by gradual evolution through civil law, which raises the stakes of regulatory silence. Ms. Eunice Huang advocated an evidence-based, adaptive governance model, cautioning against rigid, overly specific frameworks that may outpace regulatory capacity. Mr. Mohanty sought a middle ground, calling for well-calibrated action, avoiding both blanket precaution and total restraint, with interventions tailored to specific sectors and risk profiles.

  4. Data Labelling and Embedded Bias

Data labelling governance became a separate topic of interest. Prof. Rahul De pointed out how labelling practices can entrench cultural and epistemic biases, affecting model outputs in ways that reinforce existing structures. He insisted that governance cannot overlook the politics of training data. Prof. (Dr.) Krishnaswamy expanded on this concern, warning against exploitative labour models reminiscent of earlier outsourcing practices, in which poorly paid annotators perform invisible digital work. The conversation highlighted that data governance is not just a technical issue; it is deeply political, affecting labour conditions, authority over knowledge, and social representation.

  5. Is AI Fundamentally Different?

A conceptual question arose regarding whether AI should be treated simply as another technology within existing legal frameworks or as a fundamentally different category. Mr. Mohanty pointed to two features that he believes distinguish AI systems: their probabilistic nature and their emergent capabilities. Prof. De agreed that AI should be treated as a separate concept, not because it is unprecedented, but because its probabilistic design and extensive use require deeper understanding and engagement in governance discussions.

  6. Central versus Fragmented Regulation

The final set of questions turned to institutional design: centralised versus sectoral or fragmented regulation. Ms. Huang noted that “many jurisdictions in the Asia Pacific region are not pursuing horizontal AI laws; rather, AI governance is often sectoral or adaptive in nature.” The panellists debated whether India’s model represents “governance by architecture,” where norms are built into digital public infrastructure, in contrast to Singapore’s market-driven regulatory philosophy centered on assurance.

Conclusion

The panel discussion revealed that AI governance in the Global South is less a question of regulatory detail than one of structural political economy. India’s AI Governance Guidelines represent a deliberately strategic, pro-innovation framework that declines to transplant foreign regulatory models and instead emphasizes contextual adaptation and agility. At the same time, unresolved questions remain about how the Guidelines will address labour displacement, environmental externalities, competition regulation, open-source positioning, and the articulation of a coherent techno-legal strategy. Ultimately, the discussion suggested that AI governance is not simply about identifying harms to be managed through compliance-based models, but about how we conceive the role of the state in a rapidly changing technological environment and how we position the Global South within a highly asymmetrical global architecture.
