Where AI meets
human context
across cultures

AI East West (AIEW) is an applied, interdisciplinary initiative that uses cross-cultural perspectives and human experience to map where AI systems misalign with real-world contexts, and to translate those insights into more contextually grounded approaches to responsible AI that serve our pluralistic social realities.

Now happening: Research, Translated — Field notes from CHI 2026
How We Work

Culture as a
comparative lens to
surface fault lines

  • Technical ↔ Social
  • Model ↔ Context
  • Abstract principle ↔ Lived experience
  • What AI does ↔ What people understand
  • Universal standard ↔ Local reality
  • Research ↔ Practice

AI East West works in the in-between space: where systems meet social contexts, where research meets practice.

Rather than treating AI risks as purely technical or regulatory problems, we focus on the human layer, using a cross-context lens to map where AI can serve our diverse societies better.

What we do:
Three dimensions of Responsible AI

An integrated system in which each dimension informs the others.

Work lane 01

Responsible Deployment

For organisations, we identify where their AI deployment breaks in the real-world contexts it actually operates in, before risks and harms scale.

  • Diversity-aware red teaming for AI tools
  • Cross-region AI policy and risk scan
  • Responsible AI workshops for governance, tech, and strategy teams
  • Multilingual robustness and fairness review
Work lane 02

Individual Judgment

Beyond tool-based literacy, we nurture science-backed and socially aware judgment, so everyday users know when to trust, when to question, and when to push back.

  • Role- and domain-based AI ethics sessions for non-technical teams, informed by research and translated into decision-making guidance relevant to each field
  • Cultural dialogue on AI interpretation and bias
  • Awareness of over-reliance risks and escalation pathways
  • Tailored materials for legal, care, education, and media domains
Work lane 03

Stories in Between

Where human experience becomes insight that shapes responsible AI practice.

  • Human Stories interview series (EN/CN/JP)
  • Cross-context synthesis and comparative insight
  • Multilingual digital formats for public reach
  • Community events and facilitated dialogues
See this in action
Research, Translated
We speak to leading AI & HCI researchers and translate frontier insights into multilingual memos for the people AI actually affects. Field notes from CHI 2026.
Explore memos →
Project Spotlight

Stories in Between:
a structured inquiry into
how AI is lived across societies

Listening · Thinking · Translating across contexts

In 2026, we are developing a multilingual interview-based mini-book:

  • Story-driven and research-informed
  • Designed for broader publics who live and work alongside AI, but rarely see their own experience reflected in technical literature or mainstream media
  • Brought together by a team with backgrounds in AI research, journalism, and consulting, working across Europe and Asia
  • Short- to long-form social content cascading across languages

This series uses human stories as a form of structured qualitative inquiry – not to draw simple conclusions, but to surface important questions that technical and policy discourse rarely asks.

  • ✏️

    Manga artist

    Navigating authorship and creativity: who made this, and does it matter?

  • ⚖️

    Lawyer

    Reasoning through accountability: when AI is in the chain, where does responsibility land?

  • 📚

    Historian

    Questioning synthetic narratives: what happens to memory and truth?

  • 🤖

    Robotics designer

    Asking what machines should never do in care work: where is the irreducible human?

In-Depth: Why This Matters

AI doesn't fail only
at the model level —
it fails in context

Fault line 01

Built in partial contexts

AI systems are developed within specific technical, linguistic, and cultural frames. A model built for chat is increasingly expected to make consequential decisions; benchmarks designed for one language are applied as if they measure universal capability; training data reflecting one society's norms is deployed across many.

For example, over 6,000 of the world's 7,000+ languages remain under-represented in AI, and only 5 of 24 top-ranking LLMs report multilingual safety alignment.¹ ² But language is just one dimension of a broader pattern: partial contexts treated as complete ones.
Fault line 02

Risk lives in the human layer

The human layer is wherever AI outputs are used to score, rank, include, exclude, predict, or prioritize: in hiring, law, education, care, creative work, information, and public services. Across these domains, people interpret and act on AI through the lens of their roles, cultures, and lived experience.

Research shows, for example, that East Asian and Western users bring fundamentally different expectations to AI relationships and trust.³ ⁴ These gaps don't arise in the model. They emerge where diverse people encounter the same system with different stakes.
Fault line 03

Governance lags deployment

AI governance frameworks are globally uneven and often classify risk by domain rather than by how harm actually propagates: through decisions, interpretations, and cultural contexts. The EU builds trust through rights-based legal design; China regulates for societal stability; the US defaults to market self-governance. These are only three of many diverging approaches worldwide.⁵

But the deeper problem is operational: teams deploying AI across jurisdictions face expectations that don't translate, standards that don't align, and accountability gaps that widen with every context the system enters.⁶

¹ Joshi et al. (2020), "The State and Fate of Linguistic Diversity and Inclusion in the NLP World," ACL 2020. aclanthology.org
² Yong et al. (2025), "The State of Multilingual LLM Safety Research," EMNLP 2025. arxiv.org
³ Folk, Wu & Heine (2025), "Cultural Variation in Attitudes Toward Social Chatbots," Journal of Cross-Cultural Psychology. DOI: 10.1177/00220221251317950
⁴ Malfacini, K. (2025), "The Impacts of Companion AI on Human Relationships," AI & Society. springer.com
⁵ Stanford HAI (2025), AI Index Report 2025, Chapter 6: Policy and Governance. hai.stanford.edu
⁶ OECD (2025), "Algorithmic Management in the Workplace," OECD AI Papers. oecd.org

Behind Our Name

More than geography.
A philosophy of method.

English
East-West

In English, "East-West" names a civilizational axis often used as shorthand for cultural difference itself. We use it not as a binary or an exhaustive map, but as a starting point for comparative inquiry into how AI is built, governed, and understood across different societies, stakes, and assumptions.

日本語 · Japanese
東西

In Japanese, 「東西」 can name an East-West transport line connecting two points, or refer to two civilizations in relation across history, conveying how we situate ourselves in both physical machines and human knowledge.

中文 · Chinese
東西

Beyond geographical direction, 「東西」 in Chinese also means "things" and "matters": the tangible and the abstract, the object and the knowledge, even an action and a thought.

These layered meanings spotlight the pluralism of human experience, and the tension that pluralism surfaces in realizing human-centered AI, where there is no one-size-fits-all answer.

'East–West', as we use it metaphorically, is a lens for comparative inquiry. The same technology, even the same AI model, carries different opportunities and risks for real people across cultures, industries, and governance regimes. What we call "Responsible AI" must be built on pluralistic understanding, not narrow assumptions.

Stay in Touch

Follow this work
as it takes shape

Whether you research, build, teach, or simply live alongside AI, we'd love to know who you are and what matters to you. Fill in the short form below to stay connected.

By signing up, you agree to occasional updates from AI East West. We respect your data and won't share it.