Algorithmic Insurance and Resource Pooling: The Missing Piece in SEBI’s AI/ML Governance Framework
I. Introduction
The Securities and Exchange Board of India (SEBI) released a consultation paper on June 20, 2025, addressing the ethical application of artificial intelligence (AI) and machine learning (ML) in the securities market. The step signals that SEBI has taken cognizance of the potential threats AI/ML technologies pose to the Indian capital markets, and the initiative is noteworthy because it seeks to keep pace with the evolving international securities regime. Yet while innovation is crucial for growth and for consistency with global markets, the consultation paper leaves some of the most important aspects in a grey area, notably accountability and compensatory mechanisms. For instance, if a brokerage firm’s AI model fails and causes investor losses, the paper offers no clarity on who bears responsibility, especially when the firm lacks the financial capacity to compensate. This article identifies this gap and proposes a solution that SEBI could consider: algorithmic insurance and risk-pooling mechanisms tailored to the Indian securities ecosystem.
II. The Unaddressed Problem of Compensation and Risk
SEBI’s 2025 consultation paper creates a strong governance framework for AI/ML usage in financial markets, requiring dedicated oversight teams and backup mechanisms. However, it overlooks a critical issue: who pays when these systems fail? While SEBI correctly places compliance responsibility on brokers, mutual funds, and exchanges (Section 5.1(e), SEBI Consultation Paper, June 2025), it does not address financial liability when AI systems cause harm.
Consider a mid-sized brokerage whose AI trading advisor misinterprets turbulent market conditions, leading to heavy losses for investors. The existing framework is silent on the source of compensation, particularly where the brokerage cannot absorb the financial burden itself. Its directors face potential accountability under Section 166 of the Companies Act, 2013, which demands reasonable care and diligence. Section 166, however, is directed mainly at breaches of fiduciary duty or negligence; it does not capture situations in which directors, acting in good faith, lack the technical expertise to fully understand complex AI systems. With highly specialized and opaque technologies such as machine learning models, this creates a grey zone: a board may responsibly decide to adopt an AI-based tool on the strength of expert certifications, and the tool may subsequently behave in unpredictable ways.
Such instances expose the limitations of existing fiduciary standards when applied to algorithmic governance and highlight the need for financial protection mechanisms, such as pooled-risk funds or algorithmic insurance, to safeguard investors without attributing liability where there is no demonstrable misconduct.
III. Algorithmic Insurance – A Global Trend
Algorithmic insurance is an emerging concept in global financial regulation. It is a form of risk transfer under which financial institutions are indemnified against losses arising from the failure of AI systems, algorithmic misjudgements, and unforeseen decision errors. Much as cyber-insurance covers the fallout of data breaches, algorithmic insurance would act as a safety net, protecting institutions, and by extension investors, against the dynamic risks that come with AI adoption.
Regulators have begun examining how insurance and liability frameworks can be used to manage AI-related risks. Legal scholars have drawn parallels with the Price-Anderson Act in the United States, which created a government-backed insurance and liability mechanism for nuclear incidents, and suggest a similar structure could provide coverage for AI where existing insurance is inadequate. In the European Union, expert groups and the European Parliament have proposed mandatory liability insurance for high-risk AI applications, since an operator’s insolvency may leave victims uncompensated; these proposals represent a shift towards an institutional, financially backed safety net against AI harm. Similarly, the United Kingdom’s Automated and Electric Vehicles Act 2018 introduces a statutory single-insurer approach under which the insurer bears initial liability when a vehicle is driving itself, giving victims a clear route to compensation.
Across the world, this model is gradually catching on, and private insurers are supplementing regulatory efforts by rolling out specialized AI liability coverage for these new risks. Lloyd’s of London has issued policies addressing liability linked to AI, covering training data flaws, inadvertent algorithmic bias, and software malfunctions, risks that traditional insurance rarely reached because legacy frameworks such as standard errors and omissions (E&O) coverage did not contemplate them. Similarly, Armilla Insurance Services launched an affirmative AI liability insurance policy, underwritten by certain Lloyd’s syndicates, that goes beyond retrofitting AI risk into traditional technology policies by establishing a dedicated framework for AI-driven harms. These policies provide clarity on third-party liability arising from AI malfunctions, clarity that generic tech E&O policies failed to offer. The distinction matters because conventional frameworks were designed to manage processes driven by human beings, not autonomous systems with adaptive behaviours.
In addition to indemnity, these AI-specific policies often include legal defence cover, given that disputes over AI failures will frequently raise complex questions of algorithmic causation and negligence requiring specialized legal representation. Armilla’s recently launched AI liability product (underwritten at Lloyd’s), for example, expressly covers defence costs and court expenses arising from AI performance shortfalls, including phenomena such as hallucinations and model drift, areas where traditional tech E&O policies would most likely have a coverage gap or a low sublimit. These new models were developed in response to what insurers call ‘silent AI’ exposures, where a standard policy does not state clearly whether AI risk is covered, leaving organizations unsure whether they are actually protected.
These trends highlight an important fact: AI presents a novel risk profile that traditional compliance audits and legacy liability policies can no longer adequately address. Legacy policies covering cyber events, general liability, or E&O were built for conventional, human-driven risks and do not map neatly onto self-learning, dynamic systems; they often exclude or only ambiguously address AI perils, leaving AI liability unpredictable and underwritten in an ad hoc, reactive manner. In contrast, algorithmic insurance solutions are emerging as essential components of AI governance, not just as financial protections but as structural safeguards that align legal accountability with technological complexity.
IV. Risk Pooling Mechanism for Indian Markets
A possible adaptation for India could draw on financial structures that already exist, such as mutual funds, where risks and returns are shared collectively by investors. Mutual funds aggregate modest contributions from a large number of participants into a professionally managed pool, so that risks and rewards are distributed rather than concentrated. This kind of collective risk-sharing provides built-in financial resilience for individual investors and smooths volatility through diversification. From a regulatory perspective, a comparable risk-pooling mechanism could be designed for brokers and other intermediaries using AI systems. Under this model, each participating firm would contribute a fixed or variable premium, and compensation could be paid from the fund to affected investors when an intermediary fails to fulfil its obligations.
India already has precedents for financial safety nets designed to protect investors and small depositors in the event of institutional failure. The Investor Education and Protection Fund (IEPF), established under the Companies Act, 2013, enables investors to reclaim unclaimed dividends, matured deposits, and shares that have remained inactive for seven consecutive years. The Deposit Insurance and Credit Guarantee Corporation (DICGC), a wholly owned subsidiary of the Reserve Bank of India, provides insurance coverage of up to ₹5 lakh per depositor per bank in the event of bank failure. At the global level, the UK’s Financial Services Compensation Scheme (FSCS) offers comparable protection, compensating consumers up to £85,000 when a regulated financial firm fails. Regulation can thus build institutional resilience and consumer trust through pooled-risk frameworks and statutory compensation systems. These frameworks illustrate a critical principle: when systemic risks or failures occur, individual restitution is often impractical, making pooled-risk structures essential for preserving confidence and market stability.
Applying this logic to AI failures, where complex, automated decisions can amplify harm across markets, a similar risk-pooling mechanism could act as a financial backstop, ensuring investor protection even when an intermediary lacks sufficient resources to compensate. Building on this, SEBI could set up an Algorithmic Risk Guarantee Fund to which AI-using firms make mandatory contributions, calibrated to the complexity and scale of their AI systems. For instance, firms deploying high-risk, fully autonomous trading algorithms or portfolio advisors would contribute more than those using limited, rule-based systems, given the former’s greater potential for systemic harm. The fund would be triggered in clearly defined scenarios, but this raises an important question: what qualifies as an “algorithmic malfunction”? Should protection apply only when systems deviate from their intended code, or also when they operate exactly as programmed but produce undesirable, harmful outcomes such as cascading sell-offs during volatile markets? A threshold that is too high might leave many investors exposed, defeating the very purpose of risk pooling.
Additionally, establishing eligibility would necessitate a well-defined claims process, raising a critical question: who should bear the responsibility of proving a malfunction, the intermediary, the investor, or an independent adjudicating body? These design considerations underscore why the framework must be both precise and flexible, ensuring meaningful protection without encouraging moral hazard. Importantly, participation should remain conditional on compliance with SEBI’s AI governance standards, so that the fund incentivizes responsible innovation rather than serving as a safety net for negligent practices.
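To make the calibration idea concrete, the following is a minimal, purely illustrative Python sketch of how a hypothetical Algorithmic Risk Guarantee Fund might collect premiums scaled to each firm’s AI risk tier and exposure, and settle verified claims pro rata if the corpus falls short. The tier names, rates, and figures are assumptions made for illustration only; they are not drawn from SEBI’s consultation paper and are not a proposal of actual pricing.

```python
from dataclasses import dataclass

# Hypothetical risk tiers: more autonomous, higher-risk systems pay a higher rate.
# These rates are illustrative assumptions, not regulatory figures.
TIER_RATES = {
    "rule_based": 0.0005,          # limited, deterministic tools
    "advisory_ml": 0.0015,         # ML-assisted advisory systems
    "autonomous_trading": 0.0040,  # fully autonomous trading algorithms
}

@dataclass
class Member:
    name: str
    ai_exposure: float  # notional value routed through AI systems (in rupees)
    tier: str

def annual_contribution(member: Member) -> float:
    """Premium calibrated to the scale and risk tier of the member's AI systems."""
    return member.ai_exposure * TIER_RATES[member.tier]

def settle_claims(fund_balance: float, claims: dict[str, float]) -> dict[str, float]:
    """Pay verified investor claims in full, or scale them down pro rata if the fund is short."""
    total = sum(claims.values())
    if total <= fund_balance:
        return dict(claims)
    ratio = fund_balance / total
    return {claimant: amount * ratio for claimant, amount in claims.items()}

if __name__ == "__main__":
    members = [
        Member("Broker A", ai_exposure=5_00_00_00_000, tier="autonomous_trading"),
        Member("Broker B", ai_exposure=1_00_00_00_000, tier="advisory_ml"),
        Member("Broker C", ai_exposure=50_00_00_000, tier="rule_based"),
    ]
    fund = sum(annual_contribution(m) for m in members)
    print(f"Fund corpus after one year: Rs. {fund:,.0f}")

    # Verified claims after a hypothetical algorithmic malfunction at one member firm.
    claims = {"Investor X": 1_20_00_000, "Investor Y": 80_00_000}
    for claimant, amount in settle_claims(fund, claims).items():
        print(f"{claimant}: Rs. {amount:,.0f}")
```

The sketch captures only the two design choices discussed above, graded contributions and a pooled payout with a pro-rata fallback; questions of eligibility, proof of malfunction, and adjudication would sit outside any such calculation.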
V. Benefits and Challenges
Implementing algorithmic insurance or a risk-pooling mechanism would yield several long-term benefits. First, it would enhance institutional stability. If an AI-related market failure occurs, such as an erroneous mass liquidation by multiple bots, a financial backstop would mitigate panic and prevent a contagion effect. Investors would retain trust in the system, knowing that a compensation framework exists. Second, such a scheme could spur innovation by reducing the fear of liability among smaller firms and startups. By providing a financial cushion, algorithmic insurance would allow these players to compete with larger institutions on a more level playing field, without the deterrent of catastrophic losses from a model failure.
However, the proposal is not without challenges. One significant concern is moral hazard, the possibility that firms will become less cautious if they know they are insured. Another is the difficulty in pricing algorithmic risk, especially in a diverse market where AI tools vary in transparency, purpose, and robustness. Actuarial modelling would be difficult in the absence of historical loss data for AI malfunctions. Regulatory enforcement would also need to evolve. SEBI would need to maintain a registry of AI-related failures, define what constitutes “algorithmic malfunction,” and perhaps coordinate with the Insurance Regulatory and Development Authority of India (IRDAI) to structure and monitor such a scheme.
Nevertheless, these challenges are not insurmountable. Moral hazard can be mitigated through a combination of calibrated, risk-based premiums (with firms deploying more complex or opaque AI systems paying higher contributions), regular third-party audits to ensure compliance with SEBI’s AI governance framework, and strict exclusions for negligent conduct or wilful non-compliance. The concept could also be tested through a limited rollout under SEBI’s regulatory sandbox, on the lines of those implemented by IRDAI and the RBI, to assess the viability of such a fund without exposing the entire mechanism to a volley of legal and regulatory questions.
VI. Conclusion
SEBI’s 2025 consultation paper represents a meaningful attempt to address AI integration in securities markets, yet it falls short in one crucial area: what happens when AI systems fail and investors suffer losses? While the paper does well to stress internal governance and keeping humans in the loop, these measures alone feel incomplete without some form of financial safety net. As AI becomes more deeply embedded in trading algorithms and investment advice, we are essentially creating new categories of systemic risk that our current protections were not designed to handle. The solution is not revolutionary; we already have models that work, such as the Investor Education and Protection Fund and the Deposit Insurance and Credit Guarantee Corporation. An algorithmic insurance scheme or an industry-wide risk pool could provide the backstop that is currently missing. This is not just about protecting individual investors; it is about maintaining confidence in markets that increasingly rely on automated decision-making. SEBI has a real chance to set a global precedent for AI-era financial regulation, but only if it is willing to think beyond traditional oversight and adopt protective mechanisms that match the scale of the technological transformation under way.
Keywords – SEBI AI/ML guidelines, Algorithmic Insurance, Risk Pooling Mechanism.
* The authors are third year students at Chanakya National Law University, Patna.