
Culture, biases, and artificial intelligence

By Karl Ostroski and Alice Leong
[Image: a couple working at home]

A couple in my (Karl’s) family were planning their wedding. While going over the proposed guest list, one partner asked their mother for the address of their childhood neighbors. “You’re going to invite your neighbors to our wedding?” asked the other partner, surprised, as their family didn’t even know these neighbors.

Further down the list, the second partner asked their mother, “What are Aunt Maria’s kids’ names?” “Wait,” the first partner asked, “you’re going to invite people to our wedding whose names you don’t even know?” “They’re family!” the second partner stated firmly, assuring everyone there was no more conversation to be had on the topic.

Many of us can relate to similar family experiences: common points of friction as we decide who counts as a key part of our lives while preparing for milestone events. And this extends beyond those events. We all have preferences, or biases, that affect how we show up in almost every aspect of our lives. Uncovering the source of these biases helps us identify our own values and know each other better.


But how is wedding planning related to artificial intelligence?


The connection between this anecdote of a couple planning their wedding guest list and artificial intelligence (AI) lies in how our biases influence what we include or exclude in our decision-making. Like wedding planning, AI depends on human input (the people using AI tools) and human output (the people building AI tools), both influenced by our inherent biases, assumptions, values, and experiences.

With the popularity, praise, and potential impacts of AI taking a prominent place in politics and economic pursuits, many have called for a pause, or at least a slowing, of AI’s trajectory until we can better control its output. And for good reason. While many consider AI to be a consolidator of facts and inherently neutral, human biases actually have a huge impact on AI, from how models are trained to the prompts we provide. Consider where bias can exist in AI…


[Image: a work session with five laptops]

Bias in AI can negatively affect project scope, product satisfaction, legal justice, and more. Therefore, most would agree that bias is something we should consciously mitigate in AI. However, unless you’re employed to compile datasets or develop and test algorithms, this bias can feel like it lives out there in the cloud or on some large computer, external to us and beyond our control. The reality is that we inject our cultural values by making assumptions about what AI could and should be able to do. As such, we, the average AI users, are also a source of bias in AI.


Who codes matters, how we code matters, why we code matters, and who uses our code also matters

Clarissa Koszarek

Data Scientist at Slalom


For instance, prompting AI to “Give me an elevator pitch to recruit someone to my organization” may seem like a simple and appropriate way to prepare for meeting someone at a conference. The purpose of this prompt is to convey that your company will benefit from their presence and that they’d enjoy working there. An elevator pitch makes sense if you come from a business culture that values directness, informality, and expediency. Contrast that with cultures where an effective proposal depends on full context and elegance in the words selected. An elevator pitch might not be understood and may even create a negative impression. The person being approached may wonder, “That’s really all they have to say about their company?” or “Why are they being so abrupt and aggressive?”

While you could improve your AI prompt (“Give me an employment pitch for someone who comes from X culture”), the premise still assumes that such a relationship would begin with words, directness, and brevity, which, as our colleague Justin Zamarripa shared, doesn’t translate well to every culture.
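To make that adjustment concrete, here is a minimal sketch, in Python, of how a recruiting prompt might be parameterized with cultural context before it is sent to a model. The style notes and the build_recruiting_prompt helper are hypothetical illustrations for this article, not validated cultural guidance or an actual tool.

```python
# Hypothetical sketch: parameterizing a recruiting-pitch prompt with cultural context.
# The style notes below are illustrative placeholders, not validated cultural guidance.

CULTURAL_STYLE_NOTES = {
    "direct-brief": (
        "Be concise and direct; lead with the concrete benefits of joining."
    ),
    "context-rich": (
        "Open with relationship-building context, favor a warm and unhurried tone, "
        "and establish shared values before mentioning the role."
    ),
}


def build_recruiting_prompt(company: str, role: str, style: str) -> str:
    """Assemble a prompt that asks the model to tailor its register to a cultural style."""
    style_note = CULTURAL_STYLE_NOTES.get(style, CULTURAL_STYLE_NOTES["direct-brief"])
    return (
        f"Write a short recruiting message for a {role} candidate on behalf of {company}. "
        f"{style_note} "
        "If a brief written pitch is unlikely to land well with this audience, say so and "
        "suggest a better first step, such as an introduction through a mutual contact."
    )


if __name__ == "__main__":
    print(build_recruiting_prompt("ExampleCo", "data engineer", "context-rich"))
```

Even a sketch like this only relocates the bias: the premise that the relationship begins with a short written pitch is still baked into the prompt.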

Our hypothetical recruiting attempt also warrants additional consideration of the data used to train the AI model.


AI needs large datasets to train models, but what if the absence of data is the data itself?


In the US, our focus on hyper-individualism draws us to speak out, say it loud, and if you see something, say something. As a white, male US citizen and a consultant in business and technology, I (Karl) come from a culture that emphasizes the written word, hence this article for Medium. Given that AI thrives on data, how do we account for scenarios where the data doesn’t exist, or exists in a different medium, because of distinct cultural values and experiences? For example:

Oral versus written

In some post-Soviet countries, the possibility that written meeting documentation could be used against a participant has resulted in a preference for a verbal understanding of what was discussed. The impact on AI: there may be no electronic data on attendees, inputs, or decisions.

Silence versus spoken

When gauging reaction to a business decision or government policy, an instinct may be to gather responses from social media. However, many cultures agree with Plato that “an empty vessel makes the loudest sound.” The absence of an employee’s or citizen’s response communicates something important that our AI algorithms aren’t programmed to capture.

Relationships

In the example of our betrothed couple from the beginning of this article, the priorities for the guest list are influenced by culture. The first partner’s family is of European American descent from the Midwest, a very kind and neighborly culture where knowing people on your block is the norm. The second partner’s family are recent immigrants from Latin America with a focus on blood relations. Growing up in a large extended family, they didn’t have time to get to know their neighbors; there were already enough cousins for connection and community. The two sets of values were simply in opposition. If either partner used AI for their guest list, wedding vows, or honeymoon ideas, you can imagine the possible cultural misses.


[Image: a little girl with a robot]

Given the imminent future of AI, we must include cultural intelligence, or CQ, as part of our tool set to mitigate human bias in our technical solutions. This will require people with different experiences, values, and perspectives who can translate that diversity into nuanced data gathering, coding, testing, and prompting of AI tools. We also need leadership that understands how thoughtfulness and intentionality are critical in our hiring, team composition, goals, and safeguards for those using AI. We would do well to consider when an AI tool is insufficient for our needs, and to recognize that we may bring even our well-intended biases and assumptions to the prompting table.

Slalom data scientist Clarissa Koszarek reminds us that “who codes matters, how we code matters, and why we code matters.” I would expand upon that to add that who uses our code also matters. We need cultural awareness in place to (1) build bias-free tools, (2) recognize when our tools are insufficient, and (3) educate ourselves and others on how to mitigate both our own bias and the tool’s bias through prompt engineering.




Let’s solve together.