AI East West (AIEW) is an applied, interdisciplinary initiative that uses cross-cultural perspectives and human experience to map where AI systems misalign with real-world contexts, and to translate those insights into contextually grounded approaches to responsible AI that serve our pluralistic social realities.
Now happening: Research, Translated — Field notes from CHI 2026 →
AI East West works in the in-between space: where systems meet social contexts, where research meets practice.
Rather than treating AI risks as purely technical or regulatory problems, we focus on the human layer – using a cross-context lens to map where AI can serve our diverse societies better.
An integrated system where each informs the others.
For organisations, we identify where their AI deployments break down in the real-world contexts they actually operate in, before risks and harms scale.
Beyond tool-based literacy, we nurture science-backed, socially aware judgment in everyday users, so they know when to trust, question, and push back.
Where human experience becomes insight that shapes responsible AI practice.
In 2026, we are developing a multilingual interview-based mini-book:
This series uses human stories as a form of structured qualitative inquiry – not to draw simple conclusions, but to surface important questions that technical and policy discourse rarely asks.
Navigating authorship and creativity: who made this, and does it matter?
Reasoning through accountability: when AI is in the chain, where does responsibility land?
Questioning synthetic narratives: what happens to memory and truth?
Asking what machines should never do in care work: where is the irreducible human?
AI systems are developed within specific technical, linguistic, and cultural frames. A model built for chat is increasingly expected to make consequential decisions; benchmarks designed for one language are applied as if they measure universal capability; training data reflecting one society's norms is deployed across many.
The human layer is wherever AI outputs are used to score, rank, include, exclude, predict, or prioritize: in hiring, law, education, care, creative work, information, and public services. Across these domains, people interpret and act on AI through the lens of their roles, cultures, and lived experience.
AI governance frameworks are globally uneven and often classify risk by domain rather than by how harm actually propagates: through decisions, interpretations, and cultural contexts. The EU builds trust through rights-based legal design; China regulates for societal stability; the US defaults to market self-governance. These are only three of many diverging approaches worldwide.⁵
¹ Joshi et al. (2020), "The State and Fate of Linguistic Diversity and Inclusion in the NLP World," ACL 2020. aclanthology.org
² Yong et al. (2025), "The State of Multilingual LLM Safety Research," EMNLP 2025. arxiv.org
³ Folk, Wu & Heine (2025), "Cultural Variation in Attitudes Toward Social Chatbots," Journal of Cross-Cultural Psychology. DOI: 10.1177/00220221251317950
⁴ Malfacini, K. (2025), "The Impacts of Companion AI on Human Relationships," AI & Society. springer.com
⁵ Stanford HAI (2025), AI Index Report 2025, Chapter 6: Policy and Governance. hai.stanford.edu
⁶ OECD (2025), "Algorithmic Management in the Workplace," OECD AI Papers. oecd.org
In English, "East–West" names a civilizational axis often used as a shorthand for cultural difference itself. We use it not as a binary or an exhaustive map, but as a starting point for comparative inquiry into how AI is built, governed, and understood across different societies, stakes, and assumptions.
In Japanese, 「東西」can name an East–West transport line connecting two points, or refer to two civilizations in relation across history, conveying how we situate ourselves in both physical machines and human knowledge.
Beyond geographical directions, 「東西」in Chinese also means "things" and "matters": the tangible and the abstract, the object and the knowledge, even an action and a thought.
These multiple meanings spotlight the pluralism of human experience, and the tension it surfaces in realizing human-centered AI, where there is no one-size-fits-all answer.
'East–West', as we use it metaphorically, is a lens for comparative inquiry. The same technology, even the same AI model, carries different opportunities and risks for real people across cultures, industries, and governance regimes. What we call "Responsible AI" must be built on pluralistic understanding – not narrow assumptions.
Whether you research, build, teach, or simply live alongside AI, we'd love to know who you are and what matters to you. Fill in the short form below to stay connected.
By signing up, you agree to occasional updates from AI East West. We respect your data and won't share it.