Google AI: language and vision

Advancing AI for everyone

Obviously, Google AI (or “Research”) is a huge part of the company. To narrow it down a bit, my work has been on the horizontal AI UX team.

On AI UX, I have worked across Research with a number of teams focused on language fairness and machine vision.


My role

TITLE — UX designer

TL;DR — I design everything from Google product features to future explorations of how Research teams might apply AI and ML technologies. I focus on fairness applications, especially with regard to machine perception.

ORGANIZATIONAL IMPACT — Our narrative work has put a spotlight on ASR fairness and led to multiple future research and engineering commitments.


Impact

1500+

Project proposal views across Google

10+

Research teams I’ve designed and launched work with


Selected project summaries

All work is under NDA; the projects presented here have fully launched to the public. I cannot share design screenshots from Google AI.

Full case studies are available where pertinent.

ASR FAIRNESS + AAVE

MY ROLES — UX designer, storyteller

PROBLEM SPACE — (from the associated white paper)

While recent work suggests that there are racial disparities in the performance of ASR systems for speakers of African American Vernacular English, little is known about the psychological and experiential effects of these failures.

Basically, a group of researchers wanted to understand the effects of ASR's racial disparities, specifically for African American speakers.
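
For context, "performance" in studies like this is typically quantified with word error rate (WER): the word-level edit distance between what a speaker said and what the system transcribed, divided by the length of the reference. A minimal sketch of the metric (my own illustration, not code from the paper):

```kotlin
// Word error rate: Levenshtein distance over word tokens,
// normalized by the length of the reference transcript.
fun wer(reference: String, hypothesis: String): Double {
    val ref = reference.lowercase().split(Regex("\\s+")).filter { it.isNotEmpty() }
    val hyp = hypothesis.lowercase().split(Regex("\\s+")).filter { it.isNotEmpty() }
    if (ref.isEmpty()) return if (hyp.isEmpty()) 0.0 else 1.0

    // dp[i][j] = edit distance between the first i reference words
    // and the first j hypothesis words.
    val dp = Array(ref.size + 1) { IntArray(hyp.size + 1) }
    for (i in 0..ref.size) dp[i][0] = i
    for (j in 0..hyp.size) dp[0][j] = j
    for (i in 1..ref.size) {
        for (j in 1..hyp.size) {
            val sub = if (ref[i - 1] == hyp[j - 1]) 0 else 1
            dp[i][j] = minOf(
                dp[i - 1][j] + 1,       // deletion
                dp[i][j - 1] + 1,       // insertion
                dp[i - 1][j - 1] + sub  // substitution or match
            )
        }
    }
    return dp[ref.size][hyp.size].toDouble() / ref.size
}

fun main() {
    // Two of five words mis-recognized gives WER 0.4. Reported disparities
    // are gaps in average WER between speaker groups.
    println(wer("please call my mother now", "please call my other cow"))
}
```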

SOLUTION — (also from the associated white paper)

We incorporate the insights and lessons learned from sociolinguistics in our suggestions for linguistically responsive ways to build more inclusive voice systems that consider African American users’ needs, attitudes, and speech patterns.

The research team ran a diary study to elucidate the lived experience of these failures. The study suggests that the effects are both strong and widespread. We want to change that.

Another designer and I synthesized the data into a visual narrative, which was shared across the company. I also created design probes of the aforementioned "linguistically responsive ways" to support future research studies.

IMPACT — The visual narrative was one of the most-viewed slide decks at Google in 2021, and it secured buy-in for multiple future research commitments.

But really, the personal impact is what's most important. I'm White. I speak "American English." I can barely fool speech recognition models even when I deliberately alter my voice. This experience forced me to evaluate the way I [re]present the voices of others.


MOBILE VISION UX (NOW PART OF ML KIT)

MY ROLE — UX designer

PROBLEM SPACE — Machine vision algorithms can produce a wide range of output data. How might we support engineers and scientists in taking an ML model, fine-tuning it, and displaying its output in human-friendly ways? (A minimal sketch of this kind of developer surface appears after this summary.)

SOLUTION — A series of prototyping engagements with the Perception team that allowed ML scientists and engineers to deploy and iterate on machine vision models.

IMPACT — Engagement output is used by hundreds of scientists and engineers across Research and AI. The work was eventually folded into ML Kit.

This work also inspired feature work in many iterations of Google Lens, TensorFlow.js, and the Pixel camera.
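
Since the design work itself is under NDA, here is a feel for the developer surface this fed into: ML Kit's public image-labeling API returns raw labels and confidences, and the "human-friendly" part is everything the app does with them. A minimal sketch in Kotlin; the confidence threshold and the bitmap source are illustrative assumptions, not details from the engagements above.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Run ML Kit's on-device image labeler over a bitmap and print
// readable results. The 0.7 threshold is an illustrative choice.
fun labelImage(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    val labeler = ImageLabeling.getClient(
        ImageLabelerOptions.Builder()
            .setConfidenceThreshold(0.7f)
            .build()
    )

    labeler.process(image)
        .addOnSuccessListener { labels ->
            // Model output arrives as (text, confidence) pairs; the UX
            // question is which of these to surface, and how.
            for (label in labels) {
                println("${label.text}: ${"%.2f".format(label.confidence)}")
            }
        }
        .addOnFailureListener { e ->
            println("Labeling failed: ${e.message}")
        }
}
```

The design question in the problem space above lives almost entirely in that success listener: which labels to show, at what confidence, and in what words.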