AI as Provocateur, Partner, and Possibility: Reclaiming Epistemic Justice in the Age of Artificial Intelligence
- Eda Tibet
- Jun 2
- 7 min read

In an age where artificial intelligence (AI) increasingly shapes our imaginations of the future, from climate models to humanitarian interventions, it is vital to pause and ask: Whose knowledge is shaping AI? And more urgently: Who benefits, who is harmed, and who is simply ignored?
These questions animated the workshop “AI as Provocateur, Partner, and Possibility,” a collaborative knowledge exchange with KFPE (Swiss Alliance for Global Research Partnerships). Hosted by the Wyss Academy for Nature at the University of Bern and convened by a transdisciplinary team of researchers and practitioners, the workshop brought together environmental economists, geographers, film scholars, anthropologists, AI developers, and community organizers.
The goal was not merely to debate the ethics of AI, but to reimagine how AI might serve a plurality of worlds: human and nonhuman, visible and invisible, dominant and marginal.
Modeling the Present: AI’s Role in Sustaining or Disrupting Systemic Inertia
💬 “As a Partnership, AI reflects our systemic inertia as much as our capacity for transformation.” — Jan Göpel

Dr. Jan Göpel (Postdoctoral Researcher, Land Systems and Sustainability Transformations at the Wyss Academy for Nature, Universität Bern) opened the workshop with a keynote examining the use of AI in modeling environmental transitions. Drawing on his work with Integrated Assessment Models (IAMs), he highlighted the paradox at their core: “These models are used to imagine transformation, yet are constrained by what we already know and accept. They tend to exclude systemic alternatives like degrowth because they don’t align with the dominant economic paradigm.”
He pointed out that citation-based datasets, often English-language and peer-reviewed, become gatekeepers for what counts as “realistic.” This epistemic bottleneck sidelines local, indigenous, and activist forms of knowledge. “The problem is not just bias,” Göpel argued, “but systemic exclusion. We’re hardcoding narrow futures into our decision-making.”
One audience member asked: What would an anti-colonial modeling approach look like? Göpel replied, “It means starting with the community and co-producing the scenario space, rather than filtering it through an academic or institutional lens.”
Seeing Cinematically: Participatory AI Visualisation Evocations
💬 “As a Possibility, with AI we can create new cinematic genres and imagine what that can do to our imagination in future-building for systems change.” — Eda Elif Tibet

Dr. Eda Elif Tibet (Postdoctoral Researcher, Land Systems and Sustainability Transformations at the Wyss Academy for Nature, Universität Bern) then introduced a new idea, AI Visualisation Evocations, a visual methodology that engages AI not as a data collection tool, but as a relational partner. Her work centers on cameraless, AI-generated visuals co-created with scientists and stakeholders in biodiversity conservation, especially in sacred or restricted territories where photography is either unethical or forbidden.
In one survey she conducted, AI-generated images unexpectedly triggered strong emotions among scientists: longing, grief, connection. She realized that AI was not merely creating content; it was surfacing subconscious values when accompanied by the right questions. This aligned closely with the Bridging Values project, in which scientists are invited to reflect on transdisciplinarity.
Her workflow involves feeding these AI visuals into reflexive, participatory dialogues alongside memory work, sensory recall, and narrative exercises. “It’s about inviting storytelling and making visible the emotional and epistemic fractures in our work.”
A participant in the audience raised a concern about consent: What if a generated image accidentally misrepresents or triggers a community? Tibet responded, “That’s why the process must be slow, iterative, and co-facilitated. AI here isn’t extractive; it becomes a mirror, and therefore an evocation.”
Tibet concluded by proposing the creation of new cinematic genres through AI and local communities working together, which she described as “Mythoreal Ethno-Fantasmic Cinema.” She shared the example of a storyboard trailer for her upcoming feature film Into the Steppes (2027), set in the fractured mind of a character symbolised within the surreal landscape of Cappadocia, a story narrated by archetypes rather than main protagonists, which she was able to produce overnight thanks to the immense speed of AI. She then posed a question to the audience: “Imagine what this could do to our collective psyche in imagining and future-building for systems change around the globe. No other generation has experienced such speed in thinking and dreaming; we are indeed living in revolutionary times.”
Reading Migration Through the Machine: Opportunities and Pitfalls of Text Classification
💬 “We must always be aware of the social biases of commercial LLMs when using them for research, and adapt and anticipate the prompts we use.” — Remo Agovic

Remo Agovic (Wyss Academy for Nature at the University of Bern) shared early findings from his experiment using GPT-3.5 to classify German newspaper articles on migration. His goal was to analyze how different outlets framed migrants—as threats, victims, contributors, or otherwise.
The results revealed both potential and danger. “LLMs are powerful,” Agovic said, “but they often over-classify. They pick up on words like ‘border’ or ‘crime’ and jump to conclusions. Context and nuance are lost.”
His research confirmed that prompt design and task separation (e.g., classifying one variable at a time) improved performance. But he stressed the importance of human oversight: “The model is a tool—but without epistemic humility and subject-matter expertise, it can reinforce rather than question biases.”
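To make the idea of task separation concrete, the sketch below shows what classifying a single framing variable per model call might look like. It is a minimal illustration assuming the OpenAI Python client; the label set, prompt wording, and review fallback are assumptions for the sake of example, not Agovic’s actual pipeline.

```python
# Minimal sketch of one-variable-at-a-time classification with an LLM.
# Labels, prompt wording, and fallback handling are illustrative assumptions,
# not the pipeline presented at the workshop.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

FRAMES = ["threat", "victim", "contributor", "other"]

def classify_frame(article_text: str) -> str:
    """Classify a single variable (how migrants are framed) for one article."""
    prompt = (
        "You are annotating German newspaper articles about migration.\n"
        "Classify ONLY how migrants are framed in the article below.\n"
        f"Answer with exactly one label from: {', '.join(FRAMES)}.\n\n"
        f"Article:\n{article_text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # deterministic output suits annotation tasks
        messages=[{"role": "user", "content": prompt}],
    )
    label = response.choices[0].message.content.strip().lower()
    # Guard against over-classification: anything outside the label set
    # is flagged for human review instead of being silently accepted.
    return label if label in FRAMES else "needs_human_review"
```

In this style of setup, each additional variable (for example tone or topic) would get its own narrowly scoped call, and any answer outside the expected labels is routed back to a human annotator rather than trusted blindly.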
In the discussion that followed, one participant from the Swiss media sector asked if LLMs could be used to counter xenophobia. “Yes,” Agovic answered, “but only if the training data is consciously curated. We need a critical AI literacy—across journalists, policymakers, and the public.”
From Silicon Valley to Madagascar: Localized AI for Rural Empowerment
💬 “A local farmer in a remote village of North-Eastern Madagascar got tired of waiting for the rare visit of a veterinarian and asked ChatGPT about which vaccine to use, where to buy it, what type of syringe and needle depth to apply—and managed to vaccinate the pigs himself.” — Ntsiva Andriatsitohaina

Ntsiva Andriatsitohaina (Land Systems and Sustainability Transformations at the Wyss Academy for Nature, Universität Bern), representing WA/LRA–Madagascar and the Full Circle Initiative, delivered a powerful testimony on grassroots AI adaptation. In rural areas where internet access is scarce and literacy uneven, AI is made accessible through young “coaches” who help farmers translate their needs into ChatGPT prompts on everything from pig farming to bureaucratic hurdles.
“The coach becomes a bridge. AI is not replacing wisdom; it’s amplifying it.”
She also emphasized that without localized mediation, AI risks being irrelevant or even harmful. “Context is everything. We’re inventing our own approach, with our own challenges.”
Her intervention prompted rich discussion on power asymmetries. One question came from a Kenyan participant: Can this model be replicated in other postcolonial contexts? Ntsiva replied, “Yes, but only if communities co-lead from the beginning.”

Rethinking Intelligence: From Ecofeminist Ethics to Multispecies Justice
💬 “As a Provocateur, AI is never neutral. It is entangled in geologies of extraction, labour regimes, and epistemic violence. Its intelligence is built on top of layers of mined minerals, scraped data, and flattened contexts.” — Svitlana Lavrenciuc

Svitlana Lavrenciuc (Land Systems and Sustainability Transformations at the Wyss Academy for Nature, Universität Bern) offered the most radical provocation of the day. She challenged the very ontology of AI, arguing that current models reflect a masculinist, extractivist worldview. “We assume intelligence is predictive, competitive, and linear. But in many cultures, intelligence is relational, cyclical, even ritual.”
Drawing on ecofeminist theory and indigenous knowledge systems, she proposed sympoietic AI: systems that co-create with ecosystems, listen before acting, and are accountable to more-than-human beings.
She cited speculative design methods such as AI rituals, multispecies protocols, and cameraless archives as ways to “unlearn” the dominant metaphors of control and objectivity. “We must reimagine AI as a composting practice, not just a computing one.”
Her talk ended with a question for all: What would it mean to build an AI that helps us mourn, rather than manage, the losses of the Anthropocene?
Literacy Beyond Letters: Pictograms and the Politics of Access
💬 “Generative AI can answer almost everything. But who gets to ask?” — Marcel Wälti

The final presentation by Marcel Wälti Rettenmund (WALRA / University of Teacher Education Lucerne) introduced a participatory pilot project in Madagascar using icon-based, voice-enabled AI. Designed for non-literate users, the system uses familiar pictograms and oral interfaces to support access to information on health, finance, and farming.
“Instead of typing, people can point to images and speak their language,” Marcel explained. “It’s intuitive, but only if the icons are culturally validated. A chicken can mean food or ritual, depending on the region.”
This sparked debate on universal design vs. cultural specificity. One participant suggested: Could such systems be layered, like oral archives? Marcel responded, “Yes, we’re working on modularity. But it must always be co-designed. Accessibility without agency is not equity.”
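As a rough illustration of how an icon-and-voice interface could hand a question to a language model, the sketch below combines a selected pictogram with a transcribed voice message into a single prompt. The pictogram registry, function names, and prompt template are hypothetical, not the design of the pilot itself; speech-to-text and text-to-speech steps are assumed to sit on either side of this function.

```python
# Hypothetical sketch of an icon-plus-voice interface layer.
# The pictogram registry, prompt template, and example values are illustrative,
# not the actual design of the Madagascar pilot.

# Each pictogram maps to a topic; meanings must be culturally validated per
# region (a chicken may signal food in one place and ritual in another).
PICTOGRAMS = {
    "chicken": "poultry care and feeding",
    "coin": "household finances and savings",
    "leaf": "crop health and farming",
    "cross": "basic health questions",
}

def build_prompt(pictogram_id: str, transcribed_speech: str, language: str) -> str:
    """Combine an icon selection with a transcribed voice message into one prompt."""
    topic = PICTOGRAMS.get(pictogram_id, "general question")
    return (
        f"Answer in simple spoken {language}, suitable for being read aloud.\n"
        f"Topic chosen by pictogram: {topic}.\n"
        f"The user said: {transcribed_speech}"
    )

# The spoken question would come from a speech-to-text step, and the model's
# answer would be returned to the user through text-to-speech.
print(build_prompt("chicken", "My hens stopped laying eggs this week.", "Malagasy"))
```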
Toward Plural Futures: AI for and by Communities
As the day closed, the group reflected on common threads. Across case studies and geographies, five core principles emerged:
- AI must be co-created, not just applied—involving communities from the start, not as end-users but as epistemic partners.
- Plural knowledges are vital—oral traditions, grassroots innovations, and indigenous cosmovisions are not optional; they are necessary to democratize AI.
- Contextual and open-source tools matter—infrastructure must be low-cost, transparent, and locally adaptable.
- Ethics is a process, not a checklist—AI should be relational, accountable, and capable of discomfort, not just efficiency.
- We must reimagine what intelligence is—from mastering data to mediating between worlds, from prediction to poetic care.
As we prepare for the KFPE conference on June 20th, 2025 in Bern, this workshop has laid a crucial foundation. It reminds us that AI is not destiny. It is a set of situated, socio-technical choices. And these choices must be informed by ethics, imagination, and relation.
If we are courageous enough to widen the circle of voices and let go of singular futures, AI might just become not only a tool but a teacher, a companion, and a possibility for plural ways of living well on a damaged planet.
LINE UP & All Abstracts
Willingness to Change: What is AI Telling Us? by Dr. Jan Göpel
AI Visualisation Evocations: A New Transdisciplinary Tool for Participatory Action Research in Global Protected Areas by Dr. Eda Elif Tibet
AI Text Classification: Narrative Divide, Divergent Perspectives and Media Stereotyping of Immigrants by Prof. Kai Gehring and Remo Agovic
The Role of ChatGPT in Helping Deliver Relevant Information to Local Communities from Remote Areas by Ntsiva Andriatsitohaina
Ethics in the Web of Beings: Decentering the Human in Environmental AI by Svitlana Lavrenciuc
AI and Iconographics: Bridging the Language Divide in Global Partnerships by Marcel Wälti
Knowledge Exchange Workshop for Conservation Science and Global Partnerships
Co-Conveners: Dr. Eda Elif Tibet & Dr. Jan Göpel, Postdoctoral Researchers
Affiliation: Land Systems and Sustainability Transformations, Wyss Academy for Nature at the University of Bern
Workshop date: 27 May 2025, 14:00-16:00 CET
In collaboration with KFPE, Swiss Alliance for Global Research Partnerships