Call for Inputs – Protection of human rights defenders in the digital age
This response is submitted by AlSur, a consortium of eleven civil society and academic organizations based in Latin America working together to strengthen human rights in the region’s digital environment (underlined phrases mark different answers).
1. Legislative and regulatory measures
Historically, AlSur has highlighted significant gaps and problems in the legal frameworks governing state surveillance in Latin America. These frameworks were designed in a pre-internet era and, as a result, do not account for the current capabilities of digital technologies.
The absence of adequate human rights safeguards in these regulations has been a constant concern for digital rights organizations. In this context, it has become evident that, amid these gaps, states’ capabilities to massively collect personal data, use spyware, or directly access private infrastructure are growing, compromising fundamental rights such as privacy, freedom of expression, freedom of association, and the free development of personality.
Despite advocacy efforts and repeated reports of abuses and risks, which underscore the urgency of updating these regulatory frameworks, recent fast-track reforms in Mexico, Ecuador, and Argentina have not followed a rights-protecting approach. At the same time, in Colombia, no substantive progress has been observed toward a reform that incorporates a different perspective, aimed, among other things, at implementing the standards established by the Inter-American Court of Human Rights in the case of Colectivo CAJAR v. Colombia. Across Latin America, regulatory frameworks related to cybersecurity, surveillance, and information integrity are expanding state powers, often without adequate safeguards.
AlSur comparative case studies across Brazil, Chile, Colombia, El Salvador, Mexico, Peru, and Paraguay show a regional trend toward the expansion of surveillance powers, often justified under national security frameworks and implemented without adequate safeguards. These developments increase the risk of arbitrary interference with privacy and have a chilling effect on human rights defenders. Other research shows how, in Peru, surveillance regimes operate under national security frameworks with limited transparency, enabling secret access to communications data without sufficient accountability mechanisms. Similarly, in Paraguay, cybersecurity and cybercrime laws include broad and ambiguous provisions that may criminalize legitimate digital activities of HRDs.
The research done by AlSur members has identified several legal tools used to restrict HRDs’ rights, among them:
➔ Legal frameworks across the region—specifically surveillance and national security laws—enable the interception of communications and access to personal data under broad and often vague provisions, particularly regarding data retention.
➔ Cybercrime legislation with vague definitions such as “unauthorized access”.
➔ Criminal procedural mechanisms expanding digital evidence collection.
➔ Administrative and judicial processes lacking transparency.
From a human rights perspective, these mechanisms often fail to meet the standards of legality, necessity, and proportionality required under international human rights law.
Evidence suggests that restrictive regulatory approaches are spreading across the region through policy diffusion and harmonization processes: similar legal and institutional models are replicated across countries without incorporating adequate human rights safeguards.
Cybercrime and disinformation frameworks are increasingly replicated across countries, often influenced by global narratives around security and misinformation, without incorporating adequate safeguards. This contributes to a broader pattern of shrinking civic space in Latin America.
2. Digital communications
Beyond legal frameworks, infrastructure and connectivity conditions significantly affect HRDs.
Research in the Amazon region shows increasing dependence on private connectivity providers, including satellite services, raising concerns about governance, control, and unequal access.
Interviews with quilombola leaders in Brazil highlight how connectivity inequalities affect communication capacity, the visibility of their struggles, and the autonomy of their communities.
An Internet Mapping Initiative shows how the internet infrastructure in Latin America is unevenly distributed and often concentrated, reinforcing territorial inequalities and limiting secure access for communities and defenders.
According to research done by AlSur member organizations, HRDs in the region face a range of technology-facilitated attacks, including online harassment and threats, gender-based digital violence, surveillance and data exposure, and coordinated disinformation campaigns.
These attacks often intersect with offline risks. For example: (a) environmental defenders in Penco (Chile) and Matopiba (Brazil) face both digital harassment and physical intimidation linked to extractive conflicts; (b) in Jalisco and Querétaro (Mexico), digital surveillance and harassment are embedded in broader contexts of territorial conflict; and (c) evidence from Paraguay and Colombia shows that human rights defenders face recurrent digital attacks, including hacking (up to 38.9% of cases in Paraguay), phishing, and identity theft, which directly undermine trust in communications and organizational security.
Evidence from countries such as Mexico, El Salvador and Colombia shows the use of surveillance technologies, including spyware, in contexts affecting journalists and human rights defenders, increasing risks of intimidation, harassment, and self-censorship.
These patterns demonstrate how digital attacks contribute to offline violence, intimidation, and repression.
Research published by AlSur and our member organizations shows how, in politically sensitive contexts including elections and social unrest, digital technologies are used to target and delegitimize HRDs, amplify disinformation, and expand surveillance.
In politically sensitive contexts, surveillance tools have been used to monitor civil society actors, raising concerns about their potential misuse to influence democratic processes and restrict civic space.
These dynamics undermine democratic participation and increase risks of criminalization and stigmatization of HRDs.
The research done by AlSur and member organizations has identified that women HRDs face disproportionate and gender-specific risks, including sexualized harassment, threats of sexual violence, and coordinated online abuse campaigns.
Marginalized groups, including Indigenous, Afro-descendant, and rural communities, face compounded risks due to structural discrimination, territorial disputes, and limited access to secure infrastructure.
These intersecting risks are consistent with obligations under CEDAW (Convention on the Elimination of All Forms of Discrimination against Women) and the UN Declaration on the Rights of Indigenous Peoples (UNDRIP).
Content moderation has been a concern for AlSur due to its impact on human rights. Platform governance plays a critical role in shaping HRDs’ safety and visibility. Research findings indicate that content moderation systems are often opaque and inconsistent, that automated systems fail to capture local context, and that cooperation with state authorities may enable over-removal or surveillance.
Studies show that platform interventions can have uneven impacts, sometimes limiting legitimate expression while failing to adequately protect users from abuse.
Evidence from multiple countries, including a recent report by AlSur, also highlights increasing cooperation between state authorities and private actors, including telecommunications and technology companies, facilitating access to data and expanding surveillance capabilities.
This raises concerns under the UN Guiding Principles on Business and Human Rights.
AI technologies are exacerbating risks to HRDs by scaling disinformation and manipulation, enabling automated surveillance and profiling, and reinforcing biases against marginalized groups.
AI systems also introduce opacity and accountability gaps, making it difficult for HRDs to challenge decisions affecting their visibility or safety.
These concerns align with recent UN Human Rights Council calls to regulate emerging technologies and protect defenders from misuse of AI and surveillance tools.
3. Digital restrictions to privacy
The procurement and abuse of digital surveillance tools by both State and non-State actors, in a context where the topic remains almost entirely unregulated, have created several critical risks for HRDs:
➔ Infiltration and Unauthorized Access: HRDs face constant monitoring through the infiltration of their devices, allowing attackers unauthorized access to information.
➔ Stalking and Physical Risk: Surveillance often involves monitoring and stalking actions, which can lead to physical threats or attacks in the non-digital world.
➔ Chilling Effect and Self-Censorship: The mere possibility of being under surveillance generates a "chilling effect". HRDs restrict opinions or activities for fear of reprisals.
➔ Illegitimate Criminalization: State actors use surveillance to gather information that is later used to criminalize HRDs.
➔ Specific Attacks on Women Defenders: Women HRDs face gendered attacks, including digital sexual violence and smear campaigns that question their morality or professional ethics.
In the Matopiba region (Brazil), environmental defenders opposing agro-industrial expansion have faced surveillance, intimidation, and pressures linked to land conflicts, illustrating how digital and physical threats intersect in territorial disputes.
In recent reports, AlSur documents a steady increase in governments’ use of surveillance technologies (such as video surveillance systems, facial recognition, social media monitoring, and mass data collection) in Latin American countries.
These automated technologies are primarily used by government agencies in public safety, immigration control, and public spaces without sufficient democratic oversight or transparency. The implementation of facial recognition includes the ability to identify individuals in public spaces without their explicit consent, which poses direct risks to privacy, freedom of movement, and freedom of expression, undermines the right to non-discrimination, and poses challenges to the freedom of peaceful assembly.
Challenges that AlSur has been highlighting regarding these public policies are as follows:
➔ Lack of comprehensive regulatory frameworks and a reactive approach: According to AlSur, regulatory initiatives in the region tend to be reactive and piecemeal, lacking robust legal frameworks that explicitly integrate human rights into the regulation of AI systems. This means that decisions regarding AI are made without clear criteria for the protection of fundamental rights.
➔ Vague definitions of AI and automated systems. The regulatory proposals analyzed by AlSur tend to use broad or vaguely technical definitions of “AI” or “automated technologies” without clearly defining what is being regulated. This vagueness can leave significant impacts on rights—such as privacy, equality, and non-discrimination—outside the scope of regulation.
➔ Insufficient algorithmic transparency. AlSur has emphasized that many automated systems used by the state lack mechanisms to ensure transparency and explainability—that is, that affected individuals can understand how and why an automated decision was made. This lack of transparency hinders oversight, accountability, and access to redress in the event of harm.
➔ Risk of discrimination and exclusion. AlSur has warned that without clear safeguards, automated systems can reproduce or amplify existing biases and lead to discriminatory decisions or social exclusion, for example in the classification of beneficiaries of social programs, services, or administrative decisions.
➔ Lack of citizen participation and dialogue with civil society. AlSur has observed that regulatory processes and the adoption of AI systems lack formal mechanisms for consultation and effective participation by civil society and affected communities. This undermines the legitimacy of decisions and hinders the incorporation of human rights perspectives into the governance of these technologies.
➔ Promotion of AI without human rights impact assessments. Several public policies and state strategies promote the use of AI for economic development, innovation, or state modernization without integrating robust human rights impact assessments (privacy, equality, access to services, freedom of expression, etc.). This can lead to the adoption of technologies that negatively affect these rights without mitigation mechanisms.
➔ Reliance on technical narratives and a lack of critical perspectives. AlSur notes that many official debates treat AI as a purely technical or economic issue, relegating discussions of its ethical, social, and human rights implications to a secondary level or ignoring them entirely. This results in frameworks that fail to address the structural risks of automated technologies.
➔ Risk of surveillance and social control technologies. Complementary reports by AlSur, such as the one on facial recognition and surveillance technologies, show a growing adoption of biometric and automated surveillance systems by states lacking robust rights protection frameworks, which poses a significant risk to privacy, freedom of expression, and freedom of movement.
Furthermore, on the specific topic of the expansion of biometric infrastructure and increased monitoring of public and digital spaces, research by AlSur and our member organizations concludes that these technologies introduce new vulnerabilities:
➔ Identification during Protests: Facial recognition systems are being used by States to identify and potentially target individuals participating in peaceful protests.
➔ Social Media "Cyber-patrolling": Monitoring of digital spaces (cyber-patrolling) creates a chilling effect on freedom of assembly and association, as activists fear their online interactions are being recorded for future repression.
➔ Loss of Anonymity: The use of biometric data removes the ability for HRDs to remain anonymous in public spaces, which is often essential for their safety when working in hostile environments.
➔ State-funded Troll Armies: In digital spaces, coordinated operations—often funded by State structures—use surveillance data to viralize attacks and delegitimize HRDs.
Research by our member organizations indicates that encryption remains a contested territory with significant implications for HRDs, in particular:
➔ Essential Protection for Safe Operations: HRDs rely on encrypted platforms (e.g., WhatsApp or Signal) to organize, communicate, and exchange information securely.
➔ Threats from Cybercrime Legislation: New regulatory frameworks, such as broad cybercrime laws, are sometimes used as an excuse to bypass privacy protections or criminalize the use of secure communication tools.
➔ Insecurity for Environmental Defenders: For those defending land and territory, a lack of robust digital security and encryption skills makes them particularly vulnerable to harassment and threats.
The research in the region shows how advances in Artificial Intelligence (AI) have significantly intensified the threats to the privacy and safety of HRDs:
➔ Automated Surveillance: AI-assisted digital surveillance allows for mass monitoring that is more efficient and harder to detect than traditional methods, impacting the rights to assembly and association.
➔ Systematic Identification: AI systems are used to systematize interactions and uncover accounts, but they are also used by governments to monitor and detect "malicious attacks" from civil society.
➔ Unpredictable Legal Impacts: AI-driven data processing can lead to HRDs being targeted based on predictive analytics, which may result in preemptive arrests or the suspension of state benefits.
➔ Privacy Invasions in Real-Time: Tools using AI to perform Open Source Intelligence (OSINT) gathering can discover social connections and monitor the behavior of HRDs in real-time without their consent.
There are also other risks that are less evident but equally important. In Brazil, the licensing process of a large-scale data center linked to TikTok in Ceará has raised serious concerns about environmental compliance and regulatory integrity, highlighting how digital infrastructure projects can affect local communities and expose environmental defenders.
4. Corporate responses
AlSur reports suggest that companies are largely failing to adequately meet their human rights responsibilities, particularly in relation to surveillance technologies.
Evidence indicates that private companies actively develop, market, and export surveillance technologies (including facial recognition and spyware) across Latin America. These technologies are often sold without sufficient human rights due diligence or safeguards, and there is limited transparency regarding clients, contracts, and safeguards.
For instance, the mapping of facial recognition systems in the region shows extensive deployment through public-private partnerships, with little evidence of companies assessing risks to affected populations, including HRDs.
Similarly, earlier research documents show how private surveillance technologies have been acquired and used abusively by states, with companies continuing to operate despite known risks of misuse.
This suggests that companies are not systematically identifying or mitigating risks to HRDs, particularly in high-risk contexts.
The evidence indicates that existing corporate approaches are insufficient and ineffective. AlSur reports highlight the absence of robust due diligence mechanisms prior to the sale of surveillance technologies, the lack of effective oversight or accountability once technologies are deployed, and weak or non-existent remedial mechanisms for affected individuals.
The persistence of abusive practices over time, documented between 2018 and 2025, suggests structural deficiencies in corporate governance models rather than isolated failures. Current corporate approaches do not adequately prevent misuse, nor do they provide effective remedies when violations occur.
AlSur reports identify several structural challenges:
➔ Lack of transparency: Contracts, procurement processes, and technical specifications are often classified or inaccessible. This prevents civil society from assessing risks, monitoring deployment, and seeking accountability.
➔ Asymmetry of information and power: Companies possess technical knowledge and control over systems, while civil society lacks access to system design and operational details.
➔ Weak regulatory frameworks: States often fail to impose due diligence obligations or transparency requirements, which allows companies to operate with limited scrutiny.
➔ Corruption risks: Procurement processes for surveillance technologies are vulnerable to irregularities and lack of oversight, further complicating accountability.
Based on the recommendations identified in the AlSur reports, companies should:
1. Implement robust human rights due diligence, prior to sale and throughout the lifecycle of technologies, including specific assessment of risks to HRDs.
2. Increase transparency. Companies should publish their clients (where possible), the types of technologies deployed, and their safeguards and policies.
3. Establish accountability and remedy mechanisms, including accessible complaint mechanisms and cooperation with independent oversight bodies.
4. Refrain from high-risk sales, avoiding the supply of technologies to contexts with documented abuse and a lack of safeguards.
5. Strengthen anti-corruption safeguards: Ensure integrity in procurement and contracting processes.