Saturday, June 07, 2025

Reflections from CHI 2025 in Yokohama

I just returned from an unforgettable week at CHI 2025 in Yokohama, Japan, where I had the opportunity to share research, reconnect with brilliant minds in HCI, and dive into new global conversations about how we design technology that truly works for everyone. CHI (the Conference on Human Factors in Computing Systems) is one of the top international conferences on human-computer interaction, bringing together researchers to explore how people interact with technology. This year’s CHI felt especially timely: with AI rapidly reshaping how we build, communicate, and even understand each other, the urgency to design with dignity and care has never been greater.

👩🏽‍💻 Presenting Our Paper: Generative AI, Accessibility & the Need for “AI Timeouts”

At CHI, I presented our paper, “The Impact of Generative AI Coding Assistants on Developers Who Are Visually Impaired.” In an era where tools like GitHub Copilot and ChatGPT are transforming how developers code, we asked: What happens when these tools are used by developers who are blind or visually impaired?

Using an Activity Theory framework, our team conducted a qualitative study with blind and visually impaired developers as they worked through coding tasks with generative AI assistants. We uncovered:

  • Cognitive Overload: Participants were overwhelmed by excessive AI suggestions, prompting a call for “AI timeouts” that let users pause or slow down AI help to regain control (a minimal sketch of this idea follows the list).
  • Navigation Challenges: Screen readers made it difficult to distinguish between user-written and AI-generated code, creating context-switching barriers.
  • Seamless Control: Participants expressed a desire for AI that adapts to their pace and supports their workflows, rather than taking over.
  • Optimism & Friction: Despite the challenges, developers were hopeful—highlighting the importance of designing AI systems grounded in lived experience.
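
To make the “AI timeout” idea concrete, here is a minimal sketch of how a suggestion gate might work inside an editor extension. Everything here (the names, the gate design, the 30-second window) is my own illustration, not an interface from our paper or from any real Copilot API.

```typescript
// Hypothetical sketch of an "AI timeout": a gate that lets a developer
// pause incoming AI suggestions and regain control of the editor.
// All names and numbers are illustrative; nothing here is from the paper.

type Suggestion = { text: string; source: "ai" | "user" };

class SuggestionGate {
  private resumeAt = 0; // epoch ms when AI suggestions may flow again

  /** Pause AI suggestions for `ms` milliseconds (the "AI timeout"). */
  timeout(ms: number): void {
    this.resumeAt = Date.now() + ms;
  }

  /** Lift the pause immediately. */
  resume(): void {
    this.resumeAt = 0;
  }

  /** Drop AI suggestions while paused; user-written text always passes. */
  filter(s: Suggestion): Suggestion | null {
    if (s.source === "ai" && Date.now() < this.resumeAt) return null;
    return s;
  }
}

// Usage: a screen-reader user hits a shortcut to pause AI help for 30
// seconds, reviews their own code without interruption, then resumes.
const gate = new SuggestionGate();
gate.timeout(30_000);
console.log(gate.filter({ text: "aiCompletion()", source: "ai" })); // null (paused)
console.log(gate.filter({ text: "myOwnCode()", source: "user" })); // passes through
```

A design like this would also pair naturally with the navigation finding above: because every suggestion carries a source tag, a screen reader could announce AI-generated code differently from user-written code.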

This work offers design implications not just for accessibility, but for how to meaningfully integrate AI into diverse real-world workflows.

🔍 CHI 2025 Paper Highlights:

  • NightLight: A smartphone-based system that passively collects ambient lighting data through built-in sensors to help pedestrians choose safer nighttime walking routes. 70% of users altered their routes when shown light-augmented maps, demonstrating the power of low-cost, AI-powered safety interventions (a rough sketch of the route-scoring idea follows this list).
  • VR and Team Belonging (Mariana Fernandez, Notre Dame): A study on how VR can support inclusion in newly formed teams. Newcomers felt significantly more accepted in VR settings compared to in-person ones, thanks to avatar anonymity and immersive design reducing social pressure.
  • AI in Social Care (South Korea): An analysis of a real-world deployment of an LLM-powered voice chatbot used to check in on socially isolated individuals. Contrary to expectations, the system increased workload for frontline workers due to unforeseen maintenance burdens, revealing the “invisible labor” of human-AI collaboration.
  • Political Ideology & App Use: A U.S.-based study using structural equation modeling to show how political beliefs—not just privacy concerns—influenced adoption of COVID contact-tracing apps. People were more willing to share health data to help others than for self-protection, highlighting the deep sociopolitical layers of tech trust.
  • Participatory GenAI for Reentry (Richard Martinez, UC Irvine): A co-design study with formerly incarcerated youth using AI for creative and entrepreneurial projects. Participants designed novel GenAI use cases based on their lived experiences, reshaping notions of expertise in AI and spotlighting the need for infrastructure that serves the margins first.
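
As a thought experiment on the NightLight idea, here is a rough sketch of lighting-aware route scoring, where crowd-sensed lux readings penalize dark street segments. The data shapes, the 10-lux threshold, and the cost formula are all my assumptions; the paper’s actual pipeline is surely more sophisticated.

```typescript
// Hypothetical sketch of NightLight-style route scoring: crowd-sensed lux
// samples penalize dark street segments. The data shapes, the threshold,
// and the cost formula are illustrative assumptions, not the paper's method.

interface LuxSample { lat: number; lng: number; lux: number; }
interface Segment { id: string; lengthM: number; samples: LuxSample[]; }

const WELL_LIT_LUX = 10; // assumed comfort threshold, not from the paper

// Mean illuminance of a segment; unsampled segments count as fully dark.
function meanLux(seg: Segment): number {
  if (seg.samples.length === 0) return 0;
  return seg.samples.reduce((sum, s) => sum + s.lux, 0) / seg.samples.length;
}

// Cost = walking distance, inflated up to 2x as the segment gets darker.
function segmentCost(seg: Segment): number {
  const darkness = Math.max(0, 1 - meanLux(seg) / WELL_LIT_LUX);
  return seg.lengthM * (1 + darkness);
}

// Pick whichever candidate route is cheaper under the lighting-aware cost.
function saferRoute(a: Segment[], b: Segment[]): Segment[] {
  const cost = (route: Segment[]) =>
    route.reduce((sum, seg) => sum + segmentCost(seg), 0);
  return cost(a) <= cost(b) ? a : b;
}

// Usage: a short, dark route vs. a slightly longer, well-lit one.
const dark: Segment[] = [{ id: "alley", lengthM: 400, samples: [{ lat: 35.45, lng: 139.64, lux: 1 }] }];
const lit: Segment[] = [{ id: "main", lengthM: 500, samples: [{ lat: 35.45, lng: 139.63, lux: 40 }] }];
console.log(saferRoute(dark, lit).map(s => s.id)); // ["main"]
```

Treating unsampled segments as fully dark is a deliberately conservative default: the system steers people away from streets it knows nothing about.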

🎤 Keynote by Mutale Nkonde: Designing AI with Curiosity and Accountability

This year’s keynote by Mutale Nkonde—AI policy researcher and founder of AI for the People—delivered a powerful critique of current AI design paradigms. Drawing from her background in journalism and digital humanities, she called for a socio-technical approach to AI that centers lived experience, history, and adaptability.

  • Main Argument: AI is often misused “in the wild” because designers fail to anticipate the messy, social realities in which systems operate.
  • Example: Google’s LLM generating Nazi content was framed as a failure of foresight—designers had not imagined the breadth of harm users could elicit.
  • Call to Action: Combine social science frameworks with red teaming—adaptive, adversarial simulations that surface hidden risks by testing evolving AI tools in diverse cultural and political contexts.
  • Critique: She acknowledged the tension between tech and social science communities, urging mutual learning rather than an “us vs. them” stance.

Takeaway: If we want AI to serve the public good, we must not only anticipate user behavior but also stress-test systems with the communities who will be most affected.

💬 Co-Organizing the Workshop on Explainable AI

I also co-organized the “Explainable AI in the Wild” workshop, where we asked: How do we build explainable AI that works for real people? We explored questions of power, cultural context, and transparency—moving beyond technical definitions to address who needs explanations and why.

🌏 Yokohama: A City of Reflection and Futurism

Yokohama provided the perfect backdrop to reflect on global futures in tech. From stunning harbor views to the mix of tradition and innovation, it was the ideal setting for a conference rooted in community and vision.

One of the personal highlights of the trip was reconnecting with my PhD advisor, who attended my talk and offered thoughtful feedback. We also had dinner with his lab—my academic siblings—and spent the evening exchanging stories, laughing, and reflecting on the winding paths of our research journeys. It was deeply inspiring to hear about his current work and to receive career advice grounded in years of experience navigating academia, mentoring, and interdisciplinary research.

Moments like these remind me of the value of mentorship and how much we grow by staying connected to those who helped shape our intellectual foundations. I left that dinner energized and grateful for the ongoing guidance and camaraderie in our academic lineage.

✨ Final Thoughts

CHI 2025 reaffirmed a belief I hold deeply: the future of AI must be human-centered, justice-oriented, and designed in partnership with those most often excluded.

Our tools can either widen the gaps—or help bridge them. It’s on us to choose the latter.

Until next time, CHI. 🫶

#CHI2025 #AccessibleTech #AIForGood #ParticipatoryDesign #GenerativeAI #XAI #HumanCenteredAI #DigitalJustice #HCI #InclusionInTech #Yokohama
