Responsible Artificial Intelligence Network


Our Mission

We inform the responsible development, deployment, adoption and governance of AI technologies and research. We do so by promoting an innovation ecosystem that welcomes diverse voices, reduces inequalities, embraces critical thinking, collaborates across disciplines, and enables the wellbeing of people, places and planet.

RAIN's Theory of Change

Shape Practices

We provide civil society with an accessible and inclusive voice in matters concerning AI; inform industry practices that strive for the responsible development, deployment, use and governance of AI; and engage policymakers in dialogues, providing diverse voices to the development of AI-related policies and regulation.

Promote Literacy

We encourage critical thinking about AI technologies and the power dynamics behind their use, enabled by collaborations among diverse stakeholders; and we model transparent, accurate and inclusive communication about AI, challenging overinflation and misrepresentation of AI performance and capabilities while identifying the positive impacts relevant developments may have.

Ensure Impact

Acknowledging the cost and privilege involved in community volunteering, we seek sponsorship, partnerships and other funded opportunities for delivering workshops on responsible AI and conducting innovative research into the values and voices underpinning AI.

The Status Quo

Fiction and Reality

AI advancements have been inspired by fiction and pursued by science. Ancient myths and stories of artefacts imbued with life are abundant, as are modern depictions of computational systems capable of human intelligence. These are the stories we draw on when making sense of AI technologies. Meanwhile, "AI" as a field of research and development was only formalised in the 1950s. It is through the scientific process of hypothesis-testing and expert scrutiny that many advancements in mathematics, engineering and computer science have lent themselves to the disparate tools and systems we now call "AI."

Unintended Consequences

These days, AI advancements are often driven by the prospect of commercial gain. The undue hype surrounding AI in recent times has attracted an inordinate amount of investment and a wave of startups focused on developing and deploying AI-powered tools. In our excitement about AI, fuelled by fictions and imaginings, we have lost sight of technology's role in solving problems affecting people, places and planet. Indeed, whilst AI as a range of technologies and research agendas can help solve many challenges, it is currently too often used in ways that perpetuate climate and social injustices. After all, AI, like all things, is a product of its time. Therefore, we must begin by envisioning a future world that is conducive to beneficial AI-related activities, from design, development and deployment, through to adoption and governance.

The Tech Futures we Strive For

AI advancements must enable the wellbeing of people, places and planet. In the future we strive for, human life thrives through a flourishing natural environment; AI tools and systems are used to tackle social issues; and technologies serve to provide clear and necessary solutions. AI is attuned to nature, geared towards intentional social impacts, and embedded in equitable innovations.

Attuned to Nature

In the future we strive for, human activity enables a healthy natural environment. Humanity is at peace with nature, and technological advancements serve to strengthen this relationship. More specifically, we approach AI in a way that is conscious of its environmental impacts, ensuring minimal or even positive impact throughout the AI lifecycle.

Intentional Social Impacts

In the future we strive for, we advance AI technologies and research in ways that are intentional in their social impacts. On the one hand, comprehensive impact assessments are common practice, and "unintended consequences" are considered thoroughly. On the other hand, pertinent decision makers, policymakers, institutions and organisations are held to account where errors occur.

Equitable Innovations

In the future we strive for, nuanced understandings of real human needs inform targeted technological innovations. Commercial opportunity is secondary to the social gains made through thoughtful innovations. AI technologies are reconceptualised as solutions to real problems.

Harmonious Knowledge Systems

In the future we strive for, diverse forms of knowledge are open to one another. Insights can be gained from new links between geographically, linguistically and culturally distinct perspectives on the world. Science is enriched by new connections between knowledge-building artefacts, such as libraries, universities, oral histories and science labs.

Credit where Credit is Due

To cite this page:

Responsible Artificial Intelligence Network (2026) Theory of Change, RAIN, https://rain.ngo/our-theory-of-change [Date accessed ...]

Divider images, from left to right:

Lone Thomasky & Bits&Bäume / Better Images of AI / CC BY 4.0

Yutong Liu & Digit / Better Images of AI / CC BY 4.0

Elise Racine / Better Images of AI / CC BY 4.0


Copyright © 2026 Responsible Artificial Intelligence Network - All Rights Reserved unless stated otherwise

Supported by volunteers and funded by Kairoi
