Linear cause‑effect relationship
Description
This model shows a linear cause–effect relationship, in which the "effect" increases in direct proportion to the "cause." While this is the simplest model to understand, it is also the least realistic given the complex nature of AI impacts. Still, it serves as a useful starting point for exploring more nuanced models.
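As a rough sketch, the relationship can be written as effect = k × cause. The short Python snippet below illustrates this; the constant k and the 0–10 cause scale are purely illustrative assumptions, not values drawn from any real data.

```python
import numpy as np

# Minimal sketch of a linear cause-effect relationship: effect = k * cause.
# The constant k and the 0-10 "cause" scale are illustrative assumptions only.
k = 0.8
cause = np.linspace(0, 10, 101)   # hypothetical "cause" values
effect = k * cause                # effect grows in direct proportion to cause
```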
S‑shaped cause‑effect relationship
Description
This model shows a classic S-shaped cause–effect relationship: small increases in the "cause" lead to minimal early change, followed by a rapid rise in "effect" — which eventually levels off as further increases in cause have diminishing returns.
This pattern is common in many real-world systems. One example is the toxicology dose-response curve: once all the target cells or organisms are affected, increasing the dose has little additional effect.
It also appears in innovation cycles, where early adoption is slow, followed by rapid growth—until the innovation reaches saturation and is eventually overtaken by new developments.
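One common way to sketch this shape is with a logistic function. The snippet below is a minimal illustration; the ceiling L, steepness k, and midpoint x0 are illustrative assumptions rather than fitted values.

```python
import numpy as np

# Minimal sketch of an S-shaped (logistic) cause-effect curve.
# L (ceiling), k (steepness) and x0 (midpoint) are illustrative assumptions.
L, k, x0 = 1.0, 1.5, 5.0
cause = np.linspace(0, 10, 101)
effect = L / (1 + np.exp(-k * (cause - x0)))  # slow start, rapid rise, eventual plateau
```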
Exponential cause‑effect relationship
Description
This model illustrates an exponential cause–effect relationship, where small increases in the "cause" initially lead to modest changes in the "effect" — but these changes accelerate rapidly, with no clear upper limit.
Exponential relationships often appear in the early stages of new developments, including emerging AI technologies. However, they rarely continue indefinitely. A key question is how long the exponential growth will persist before leveling off or shifting into a different pattern.
This model is especially important because people tend to underestimate how quickly exponential effects can escalate — especially if they expect a more gradual, linear progression.
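For comparison with the linear and S-shaped sketches above, the snippet below generates an exponential curve; the starting value a and growth rate r are illustrative assumptions. Plotted alongside a linear curve, the two look similar at first and then diverge sharply, which is exactly why gradual-growth intuitions tend to fail here.

```python
import numpy as np

# Minimal sketch of an exponential cause-effect curve: effect = a * exp(r * cause).
# The starting value a and growth rate r are illustrative assumptions.
a, r = 0.1, 0.6
cause = np.linspace(0, 10, 101)
effect = a * np.exp(r * cause)    # modest at first, then accelerating with no built-in ceiling
```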
Jagged cause‑effect relationship
Description
This model builds on the classic S-shaped curve, but introduces unpredictable jumps, dips, and plateaus along the way. While the general pattern still follows slow beginnings, rapid acceleration, and eventual tapering, the "effect" no longer increases smoothly with the "cause."
The "jagged" behavior reflects real-world systems where unexpected events, hidden feedback loops, or external disruptions create irregular outcomes, something especially common in complex, adaptive systems like those shaped by AI. It also captures situations where uneven AI adoption produces unevenly distributed consequences, which in turn drive unpredictable responses or "effects."
Importantly, this model is not fully reversible: even if the cause is reduced, the system may not return to its previous state, due to the lingering effects of past volatility and continuing "jaggedness."
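One way to sketch this behavior is to start from the S-curve above and add occasional random jumps and dips that accumulate over time. The shock sizes and probabilities below are illustrative assumptions; because the shocks are cumulative, lowering the cause later does not reproduce the earlier states, echoing the partial irreversibility described above.

```python
import numpy as np

# Minimal sketch of a "jagged" curve: an underlying S-shape disturbed by random
# jumps and dips. Shock sizes and probabilities are illustrative assumptions.
rng = np.random.default_rng(42)
cause = np.linspace(0, 10, 101)
base = 1.0 / (1 + np.exp(-1.5 * (cause - 5.0)))        # smooth S-shaped trend
shocks = rng.choice([0.0, 0.08, -0.05], size=cause.size,
                    p=[0.8, 0.1, 0.1])                  # mostly flat, with occasional jumps and dips
effect = base + np.cumsum(shocks)                       # shocks accumulate, so the path never fully "undoes" itself
```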
Complex/chaotic cause‑effect relationship
Description
This model begins with a seemingly stable, predictable relationship between cause and effect—but after reaching a tipping point, the system becomes unstable and increasingly chaotic. Small changes in cause start producing disproportionate and unpredictable effects – even when the "cause" is lowered.
This reflects behavior often seen in complex systems when they're pushed beyond a critical threshold. Patterns break down, feedback loops amplify disruptions, and outcomes become erratic.
Once the system enters this chaotic state, it can’t simply be "reversed." Rolling back the cause doesn’t return things to the way they were, because the structure of the system itself has changed.
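A classic toy illustration of a tipping point into chaos is the logistic map. The sketch below treats the map's growth parameter r as the "cause"; the specific values are illustrative assumptions, and the map captures only the onset of erratic, highly sensitive behavior, not the irreversibility described above.

```python
# Minimal sketch of a tipping point into chaos, using the logistic map
# x_{n+1} = r * x_n * (1 - x_n). Treat r as the "cause": below roughly r = 3.57
# the long-run behaviour is stable or periodic; above it, it becomes erratic
# and highly sensitive to tiny changes. Values are illustrative assumptions.
def long_run_states(r, x0=0.5, warmup=500, keep=20):
    x = x0
    for _ in range(warmup):        # discard transient behaviour
        x = r * x * (1 - x)
    states = []
    for _ in range(keep):          # sample the long-run behaviour
        x = r * x * (1 - x)
        states.append(x)
    return states

stable = long_run_states(2.8)      # settles to a single repeating value
chaotic = long_run_states(3.9)     # wanders erratically and never settles
```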
Hysteresis in cause‑effect relationship
Description
This model resembles a smooth S-curve relationship—where the effect rises gradually, then rapidly, and finally levels off as the cause increases. However, when the cause is reduced, the effect doesn't immediately follow. Instead, there's a lag or “overhang” in the response due to hysteresis.
This lag reflects systems where past inputs continue to shape present outcomes—such as cultural shifts, behavioral habits, institutional inertia, or emotional imprinting. Once a certain threshold is crossed, the system retains a memory of that state.
Even if the cause is scaled back, the effect may remain elevated for some time, or follow a different path altogether.
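One way to sketch hysteresis is to let the effect follow one S-curve while the cause rises and a lagging, shifted curve while it falls. The midpoints used below are illustrative assumptions; plotting effect_up against cause_up and effect_down against cause_down produces the open loop characteristic of hysteresis.

```python
import numpy as np

# Minimal sketch of hysteresis: the effect follows one S-curve as the cause rises,
# but a shifted (lagging) curve as the cause falls, so it stays elevated on the
# way down. The midpoints of the two paths are illustrative assumptions.
def s_curve(x, midpoint):
    return 1.0 / (1 + np.exp(-1.5 * (x - midpoint)))

cause_up = np.linspace(0, 10, 101)
cause_down = cause_up[::-1]
effect_up = s_curve(cause_up, midpoint=5.0)       # rising path
effect_down = s_curve(cause_down, midpoint=3.0)   # falling path lags: effect stays high longer
```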
Further Information and Resources
Additional resources, including important concepts, articles, initiatives, and web resources.
Effect as value gain/loss
A useful way to think about effect in these models is as a gain or loss of "value," where value can take on a wide variety of meanings, depending on different situations and circumstances. It may, for instance, represent profits or GDP growth. Or health, wellbeing and happiness. Or even social justice, dignity, autonomy, and personal fulfilment.
Value is often associated with what individuals, communities or organisations see as important, or care about.
Framing effect in this way allows the consequences of different causes to be assessed in terms of how they potentially threaten, or enable the growth of, existing or aspirational value. For instance, certain actions may lead to a loss of health, or conversely to a gain in health and well-being. In the same vein, not taking specific actions can lead to similar outcomes or effects, underlining that inaction is itself a cause, with effects that may or may not be desirable.
Novel governance models and decision frameworks
Using hypothetical cause-effect models like these is useful for thinking about and exploring unconventional and innovative approaches to the responsible development and use of AI. Emerging approaches that are potentially useful include (but are not limited to) the following. The link by each one opens a session in Perplexity to explore the approach further:
- Agile governance: A suite of tools, practices and attitudes that allow organizations and individuals to respond fast to unexpected trends and outcomes. [Explore further using perplexity]
- Sandboxing: Creating "safe spaces" where new technologies can be tried out under relaxed regulations so that new approaches to oversight can be learned fast. [Explore further using perplexity]
- Anticipatory governance: A foresight-based model that uses scenario planning, trend analysis, and horizon scanning to proactively shape policy rather than reacting to crises. [Explore further using perplexity]
- Algorithmic Governance: Using code and legal frameworks together (e.g., smart contracts, programmable compliance) to automate and enforce policies in complex or borderless tech environments. [Explore further using perplexity]
- Decentralized Autonomous Organizations-based governance: Inspired by blockchain, these are rules-based, code-driven forms of governance where decisions are executed automatically based on smart contracts. Governance tokens or voting power enable decentralized participation. [Explore further using perplexity]
- Scenario planning: Methods and tools that allow plausible futures to be explored and actively moved toward or avoided. [Explore further using perplexity]
- Participatory decision making: Using deliberative democracy and crowdsourced policymaking in ways that enable diverse populations to weigh in on complex tech issues through digital platforms, citizen juries, or online deliberation tools. [Explore further using perplexity]
- Values-Based or Mission-Oriented Governance: Using shared societal goals (e.g., climate action, equity) as a north star to guide policy and investment decisions, aligning technological development with public purpose. [Explore further using perplexity]
- Risk innovation and orphan risks: Practical approaches to navigating "orphan risks" by framing risk as a threat to the value held by an organization and its key stakeholders. [Explore further using perplexity]
- Care: Using the concept of care to guide decision-making and governance. [Explore further using perplexity]
Leading Initiatives in Responsible Innovation and AI
- OECD AI Principles [Link to more information]
- OECD AI Policy Observatory [Link to more information]
- World Economic Forum AI Governance Alliance [Link to more information]
- IEEE Global Initiative 2.0 on Ethics of Autonomous and Intelligent Systems [Link to more information]
- Partnership on AI Responsible Practices for Synthetic Media [Link to more information]
- Global Partnership on AI Responsible AI [Link to more information]
- NIST AI Risk Management Framework [Link to more information]
- Responsible AI Institute [Link to more information]
- Stanford Institute for Human-Centered AI (HAI) [Link to more information]
- Ada Lovelace Institute [Link to more information]
- UK Centre for Data Ethics and Innovation (CDEI) [Link to more information]
- Responsible AI UK [Link to more information]
Key Articles and Reports on Responsible AI & Innovation
- US Blueprint for an AI Bill of Rights (archive) [Link to more information]
- EU Artificial Intelligence Act [Link to more information]
- UNESCO Recommendation on the Ethics of AI [Link to more information]
- OECD: Advancing Accountability in AI [Link to more information]
Additional resources
- ASU Future of Being Human AI articles and papers [Link to more information]
- Responsible innovation and AI acceleration (ASU) [Link to more information]
- Responsible Innovation and Artificial Intelligence: A Comprehensive Briefing (ASU) [Link to more information]
- Future of Being Human Substack (AI) [Link to more information]
- Risk Innovation tools and resources [Link to more information]
About the tool
The Responsible AI Trajectories Tool was developed to help understand and navigate the complex consequences of developing and using artificial intelligence. As AI rapidly reshapes our world, decisions about its use carry increasingly significant and often unpredictable effects. By visualizing potential cause-and-effect relationships, the tool fosters a mindset of responsibility, helping users anticipate possible risks, avoid potential harm, develop a more nuanced understanding of AI impacts and implications, and work toward more beneficial outcomes.
Whether you’re a developer, policymaker, or everyday user, the tool is intended to encourage thoughtful engagement with AI’s potential impact and promote considered and informed choices.
About the models
The six models here illustrate different hypothetical cause–effect relationships (linear, S-curve, exponential, jagged, chaotic, and hysteresis) associated with decisions around the development and use of AI. They're designed to help you explore how AI-related actions (or inactions) might lead to different outcomes under a variety of real-world conditions.
Each cause–effect relationship is shaped by context. What causes change in one situation may not have the same effect in another. Effects may be positive or negative, immediate or delayed. And causes can stem from deliberate decisions — or from inaction, neglect, or oversight.
In many cases, effects can be understood as changes in "value" — though what counts as value varies widely. For a business, it might mean profit; for a government, GDP or public well-being; for a university, knowledge generation or education-based impact; for an individual, a sense of autonomy, health, or purpose.
Value is also intimately tied to what's important to us and what we care for. In this way, the concept of care is deeply connected with how cause and effect are understood and interpreted in the context of AI, and with how the different cause-effect models are applied.
This is how "cause" and "effect" might look across different sectors:
- Governments: Relevant effects may include changes in GDP, health, equity, or trust — driven by AI policy decisions or implementations, or even by failure to act.
- Businesses: Relevant effects might include profit shifts, market position, or stakeholder trust — depending on how AI is used, and even claims that are made about its use and impact.
- Not-for-profits: Relevant effects may include increased or reduced leverage, impact, and engagement — shaped by whether AI is used ethically and effectively, or not.
- Universities: Relevant effects might include learning outcomes, research breakthroughs, or reputational shifts — linked to the use or avoidance of emerging AI tools and how AI is framed and discussed.
- Individuals: Relevant effects could range from improved productivity and well-being, to misinformation, dependency, behavior changes, and depression — depending on how AI is integrated into daily life.
The key takeaway: cause–effect relationships in the AI landscape are diverse, complex, and deeply context-dependent. What works (or causes harm) in one situation might play out very differently in another.
By exploring these different models, you can begin to think more critically about what responsible AI development and use might look like—not just for yourself or your organization, but for society as a whole.
Such explorations also pave the way to thinking about unconventional and innovative approaches to reducing the chances of adverse consequences occurring or getting out of hand, and to ensuring that positive outcomes emerge as advanced AI is developed and used. For more information see the More Info tab.