Amnesty International stated that the Indian AI Impact Summit 2026 did not secure concrete commitments to halt “destructive practices” by governments and technology companies, warning that the gathering failed to meaningfully address the human rights risks posed by artificial intelligence.
The organisation pointed to the continued deployment of AI tools in contexts such as predictive policing, biometric surveillance, and automated welfare administration, and argued that voluntary pledges and industry standards are no substitute for enforceable regulation capable of preventing rights violations and ensuring access to remedy.
Criticism of the summit’s outcomes extended to its framing and priorities. The Internet Freedom Foundation wrote that India’s AI Impact Summit “promises little more than spectacle,” arguing that the event foregrounded technological ambition and geopolitical positioning while avoiding firm accountability measures. Similarly, a coalition of digital rights groups reported that the summit did not meaningfully incorporate recommendations from grassroots organisations, including calls for transparency obligations and independent oversight mechanisms.
Concerns about AI’s impact on marginalised communities were also highlighted in analyses released alongside the summit. An international non-profit organisation has documented how AI systems can disproportionately harm racial and religious minorities, migrants, and low-income groups, particularly when used in border management, law enforcement, and access to public services. In a country like India, where caste and religion shape social hierarchies, such biases can cause irreparable harm to already vulnerable and frequently targeted communities, including migrants.
In April 2024, Amnesty International had warned that automated social protection systems in India and elsewhere risk excluding individuals from essential welfare benefits due to flawed data, algorithmic bias, and insufficient human oversight.
Policy analysts have also urged governments to place human rights at the centre of AI governance. The Observer Research Foundation argued that AI policy must place “people at the heart of the AI story,” emphasising participatory governance and safeguards against algorithmic discrimination. The analysis highlighted the need to embed rights protections at the design stage of AI systems rather than relying on post hoc correction.
The summit took place amid growing international efforts to address AI governance gaps. In July 2024, the UN General Assembly adopted a resolution aimed at bridging the artificial intelligence divide for developing countries and promoting equitable access to AI technologies. While the resolution emphasised cooperation and capacity-building, Amnesty maintained that global commitments must be matched by domestic legal frameworks that clearly prohibit rights-violating applications of AI.