Indigenous Defenders Face Reprisals and the New Threat of AI, Artificial Intelligence: U.N. Expert Mechanism Begins in Geneva
Brenda Norrell, Censored News, July 14, 2025
GENEVA -- The United Nations Expert Mechanism on the Rights of Indigenous Peoples opened today. Priorities included the increase in reprisals against those speaking out at the U.N., the destruction caused by mining, and the new threat of AI, artificial intelligence, using Indigenous knowledge without consent. Among the victories is new progress on the Rights of Nature, resulting in legal recognition for rivers and mountains.

President Lauber said intimidation, harassment and reprisals against Indigenous human rights defenders, including women, who attend the United Nations as representatives have increased.
Lauber described this as a serious attack on those standing up for human rights, and an attack on the United Nations itself. Calling for respect for those engaging with the United Nations system, he said his office will keep a watchful eye on reprisals during this session and throughout the year.
U.N. High Commissioner for Human Rights Volker Türk of Austria said Indigenous Peoples' rights are being violated by mining, extractive industries, deforestation, and development. Many Indigenous Peoples lack formal recognition of their land rights, making it harder to defend their territories.

Türk said it will be important for human rights to be central to the development, use and application of AI. Without human rights guardrails, these rapid developments risk reproducing and entrenching the inequalities of our world, including those experienced by Indigenous Peoples.
Türk said there is also growing recognition of Indigenous approaches and worldviews.
"In New Zealand, following the Treaty of Waitangi, certain rivers have been granted legal identity. In South Asia too, some rivers have specific legal recognition. Likewise, certain mountains and land of significance to Indigenous Peoples in various countries have been endowed with rights of their own."
"And just two weeks ago, in an important advisory opinion on States’ obligations to protect and restore ecosystems, the Inter‑American Court of Human Rights specifically noted that State actions must respect Indigenous knowledge."
"All participants are expected to behave with integrity and respect toward all participants," Cunningham said.
Cunningham warned participants to show decorum and refrain from "abusive and disrespectful language or derogatory or inflammatory remarks."
The United Nations Expert Mechanism on the Rights of Indigenous Peoples began in Geneva today with a prayer of thanks. The prayer began with the weeds, the medicine contained there, closest to the ground, and then the strawberry, the first fruit that comes out on Turtle Island. It gave thanks for the trees, the main one being the Maple, and the medicine it provides; for the four-legged and the ones that fly; and for the waters, all the different levels of the waters, from the little streams up to the big oceans. It gave thanks for the sustenance, the corn, beans and squash; for the wind, all here on this earth; and for the thunder, and the jobs that they are doing to protect us. Then thanks went to the Great Warrior the Sun, for providing the heat; to Grandmother Moon, and everything that she does for the women and the men; and to the Grandmother the Stars, on which the ceremonies are based. Thanks went to Handsome Lake, the Messenger who brought us this religion; to the Four Protectors, who guide and keep the mind straight, to make sure arrival is safe and people are treated well; and to the Creator.
Watch https://webtv.un.org/en/asset/k1v/k1v9muwdx3
New at Censored News
Breaking the Silence: Poisoned Water and Oppression in Russia, Amazon and Okinawa
Indigenous Peoples spoke as the U.N. Expert Mechanism on the Rights of Indigenous Peoples began on Monday, with a focus on data sovereignty.
https://bsnorrell.blogspot.com/2025/07/poisoned-water-and-oppression-breaking.html
Notes:
AI Surveillance
Censored News asked Grok, Twitter/X's new AI, how AI, artificial intelligence, surveillance tools are used to track people. Grok responded:
"AI-powered surveillance tools track people by analyzing vast amounts of data from various sources, leveraging advanced algorithms to identify, monitor, and predict behaviors. Here's how they work:
- Facial Recognition: AI systems process video feeds from CCTV cameras or other sources to identify individuals by comparing facial features against databases. For example, systems like those used in some cities can match faces in real-time with a 90%+ accuracy rate, depending on conditions and database quality.
- Behavioral Analysis: AI tracks movement patterns, gestures, or activities through video analytics. Machine learning models detect anomalies, like unusual crowd behavior or specific actions (e.g., loitering), flagging them for review. This is often used in public spaces or retail.
- Data Aggregation: AI integrates data from multiple sources—social media, phone GPS, credit card transactions, or internet activity—to create detailed profiles. For instance, geolocation data from apps can pinpoint a person’s location within meters, often cross-referenced with other metadata.
- Predictive Policing: AI algorithms analyze historical crime data and real-time inputs to predict where crimes might occur or who might be involved. Tools like PredPol have been used by police departments to allocate resources, though they’ve faced criticism for bias.
- License Plate Recognition: AI scans vehicle plates in real-time via cameras on roads or parking lots, tracking movements across cities. Systems like those used by law enforcement can log millions of plates daily, building travel pattern databases.
- Social Media Monitoring: AI tools scrape public posts, images, or metadata to track individuals’ activities or sentiments. Governments and companies use this for everything from counterterrorism to marketing, often with natural language processing to detect keywords or emotions.
- Biometric Tracking: Beyond faces, AI can use gait analysis, voice recognition, or even heartbeat patterns (via specialized sensors) to identify people, even in crowds or without clear visuals.
- Network Analysis: AI maps relationships by analyzing communication patterns—calls, emails, or messaging app data—to identify networks of interest, often used in intelligence operations.
These tools are deployed by governments, law enforcement, private companies, and even authoritarian regimes. For example, posts on X have highlighted concerns about systems like China’s social credit system, which uses AI to score citizens based on tracked behaviors, though specifics are often debated or speculative. Privacy risks include mass data collection, misidentification (e.g., facial recognition errors disproportionately affecting certain groups), and lack of transparency."
-- Social media monitoring analyzes public posts to track activities, used by governments for security and companies for marketing.
-- AI can use gait, voice, or heartbeat patterns for identification, especially in high-security areas.
-- New tools like Veritone’s Track use body size and clothing to track individuals, bypassing facial recognition bans.
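To make the matching step in the facial recognition and biometric notes above concrete, here is a minimal sketch, in Python, of how a captured face could be compared against a database of stored face embeddings. It is a hypothetical illustration only, not the method of any specific vendor or agency: embed_face() is a placeholder name standing in for a real face-embedding model, and real systems add detection, quality checks, and calibration that this sketch omits.

# Minimal sketch of embedding-based face matching (hypothetical, for illustration).
# embed_face() is a placeholder, not a real API.
import numpy as np

def embed_face(image):
    """Stand-in for a neural network that maps a face image to a fixed-length vector."""
    raise NotImplementedError("replace with a real face-embedding model")

def match_face(probe_embedding, database, threshold=0.6):
    """Return the identity whose stored embedding is closest to the probe,
    or None if even the best match is farther than the distance threshold."""
    best_id, best_dist = None, float("inf")
    for identity, stored in database.items():
        dist = float(np.linalg.norm(probe_embedding - stored))  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist < threshold else None

The threshold is what turns a distance into a yes/no match; setting it too loosely is one source of the misidentification errors the Grok response mentions.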
Question: The U.N. High Commissioner for Human Rights said AI is using Indigenous Peoples' data and knowledge without consent; is this true?
AI Overview
Yes, the U.N. High Commissioner for Human Rights has expressed concerns about the use of Indigenous Peoples' data and knowledge in the development of AI without their free, prior, and informed consent (FPIC).
Here's a breakdown of the concerns:

Data Exploitation and Digital Colonialism: AI systems are often trained on vast datasets, including Indigenous languages, traditional knowledge, and cultural expressions, sometimes collected without consent and stored in centralized databases. This raises concerns about a new form of "digital colonialism," where Indigenous data is treated as an exploitable resource for monetization without benefit to the communities.
Reinforcing Bias and Misrepresentation: AI systems can inherit and amplify biases present in the data they are trained on, which may underrepresent or misrepresent Indigenous Peoples, according to DESA Publications. This can perpetuate harmful stereotypes and discrimination.
Lack of Data Sovereignty and Control: Indigenous Peoples are advocating for the right to control and govern data related to their communities, also known as Indigenous Data Sovereignty, according to The Conversation. This means having the authority to decide how their data is collected, stored, used, and shared.
Challenges to Intellectual Property and Cultural Heritage: AI's ability to generate new cultural expressions based on existing Indigenous art and knowledge raises questions about who owns the generated content and whether it constitutes cultural appropriation, says the American Bar Association.
In response to these concerns, there is a growing movement to develop ethical AI frameworks that respect Indigenous rights and knowledge systems. This includes advocating for:

Free, Prior, and Informed Consent (FPIC): Ensuring Indigenous Peoples freely and voluntarily agree to the use of their data and knowledge.
Indigenous Data Sovereignty: Empowering Indigenous communities to govern their data according to their values and priorities.
Indigenous-led AI development: Involving Indigenous Peoples in all stages of AI development and deployment.
Ethical guidelines and regulations: Establishing guidelines for AI development that address concerns about bias, discrimination, and cultural appropriation.
While AI presents potential opportunities for Indigenous Peoples in areas like language revitalization and cultural preservation, according to DESA Publications, it's crucial to address these concerns and ensure that AI development respects Indigenous rights and benefits their communities.
Question: Is Grok using Indigenous Peoples' data and knowledge without permission?
Evaluating data usage
It's possible Grok uses Indigenous data without permission, given AI systems often pull from public internet sources.
There's no direct evidence xAI or Grok specifically does this, but ethical concerns about data sovereignty remain.
I'm thinking this aligns with broader AI issues, yet without specific statements, it's hard to confirm.
Given the user's likely interest, I'll focus on explaining the potential risks and xAI's practices clearly.
Reports from the U.N. and other bodies highlight that AI systems often use Indigenous data without consent, which can lead to exploitation. While xAI follows industry norms, they haven't addressed Indigenous data specifically.
Despite extensive searches, there's no confirmation that Grok uses Indigenous data without permission. The absence of explicit policies doesn't rule it out, though.
Copyright Brenda Norrell, Censored News. Content may not be used without written permission.