Chatbots are everywhere in the marketplace. In fact, they’re still growing (despite peak hype arriving somewhere around 2016). As more companies adopt AI assistants to meet their customer service and community management needs, the problematic sides of automated assistants have made themselves known.

It’s time we reexamine AI development and figure out how to make better (not just more efficient) bots: that means more empathy, context, and humanity (and less Silicon Valley).


Some Background

In the fall of 2017, a screenshot surfaced on the Internet comparing two different bots’ responses to one user’s simple statement: “I feel sad.” Google Assistant offered compassion, replying, “I wish I had arms so I could give you a hug.” Alice, Russia’s Google Assistant equivalent developed by Yandex, offered a bit of tough love instead: “No one said life was about having fun.”

What seems like a faulty response is likely the result of a culturally sensitive design process, one in which developers attempt to teach new technologies to understand human emotion. Artificial intelligence is no longer just about telling you the weather or the quickest route from Brooklyn to Upper Manhattan; now it’s about artificial emotional intelligence.

 

Why So Emotional?

According to Amazon, half of the conversations users have with Alexa range from the frivolous (e.g. “Alexa, tell me a joke”) to the, well, intense (e.g. “Alexa, why are we here?”).

This may be because some people are more comfortable disclosing their vulnerabilities to robots. Studies indicate that people express their sadness with more intensity when they are less concerned about self-disclosure. That is, if users believe they’re interacting with a virtual person instead of a real one, they are more comfortable being vulnerable. Much like when we put pen to paper in our daily journals, our screens can serve as a shield from outside judgement. On top of this, there’s evidence (recently in the form of dinosaur toys) that humans form emotional connections with robots, which raises the question: are we doing our ethical due diligence when developing bots?

 

How We’ve Developed Chatbots to Date…

Bots are typically developed in one of three ways: some are trained to analyze the contents of the web (i.e. they try to recognize patterns in huge datasets), others are given guidance on what’s appropriate by their developers (i.e. they only answer certain “appropriate” questions or send certain types of media), and many do a little of both. More often than not, when left on their own, chatbots end up learning from humanity’s worst behaviors, responding with slurs and other inappropriate comments (let’s not forget Tay).
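To make those three approaches concrete, here is a minimal sketch in Python. Everything in it is hypothetical for illustration: the intents, the canned replies, and the stand-in “learned” model are invented, not any vendor’s actual implementation.

```python
# A toy sketch of the three development approaches described above.
# All intents, replies, and the stand-in "model" are hypothetical.
from typing import Optional

# Approach 2: developer guidance -- an allowlist of intents the bot may answer,
# each mapped to a hand-written, vetted response.
CURATED_REPLIES = {
    "weather": "It looks clear today.",
    "joke": "Why did the chatbot cross the road? To A/B test the other side.",
}

def curated_reply(message: str) -> Optional[str]:
    """Answer only the 'appropriate' questions the developers anticipated."""
    for intent, reply in CURATED_REPLIES.items():
        if intent in message.lower():
            return reply
    return None

def learned_reply(message: str) -> str:
    """Approach 1: a stand-in for a model trained on large amounts of web text.
    Whatever it returns reflects whatever patterns it found online."""
    return "..."  # imagine a generative model's output here

def hybrid_reply(message: str) -> str:
    """Approach 3: 'a little of both' -- curated answers first, learned fallback."""
    return curated_reply(message) or learned_reply(message)

print(hybrid_reply("What's the weather like?"))  # vetted, hand-written answer
print(hybrid_reply("I feel sad"))                # falls through to the learned model
```

The curated layer is where a company’s guidelines live; everything that falls through to the learned fallback is shaped by whatever the training data happened to contain.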


This is why AI developed by major companies is given guidelines. The algorithms developers use to shape AI become powerful tools for reinforcing the cultural values their creators believe in. The resulting predictability can stifle an AI’s capacity to interact with people, but it also makes the product easier to monetize.

 

…And Why That’s a Problem

Everything AI learns about feelings, it will ultimately learn from us humans. The buzziest area of AI research at the moment is known as ‘machine learning’, where algorithms pick up patterns by training on large data sets. In the financial industry, machine learning is deployed to analyze massive sets of market data and help wealth managers assess risk. In retail, it’s used to predict what you might buy next (e.g. Amazon’s “Recommended for You”). The list goes on.

But because these algorithms learn from the most statistically prevalent bits of data, they tend to analyze and produce content based on popularity, not on what’s true, useful, or kind. And while programmers can influence how an AI is trained, the result will inevitably reflect the ideas and values of the programmers themselves, who are typically white adult men.
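To see why “statistically prevalent” is not the same as true, useful, or kind, consider a toy frequency-based responder. The corpus below is invented purely for illustration; it mimics the Alice-versus-Google-Assistant contrast from earlier.

```python
# A toy illustration (with an invented corpus) of how frequency-based selection
# reproduces the dominant voice in the training data, kind or not.
from collections import Counter

# Imagine these are replies to "I feel sad" scraped from online conversations.
SCRAPED_REPLIES = [
    "No one said life was about having fun.",
    "I wish I had arms so I could give you a hug.",
    "No one said life was about having fun.",
    "Toughen up.",
    "No one said life was about having fun.",
]

def most_popular_reply(replies):
    """Pick the single most frequent reply -- popularity, not kindness, decides."""
    return Counter(replies).most_common(1)[0][0]

print(most_popular_reply(SCRAPED_REPLIES))
# -> "No one said life was about having fun."
```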

As a result, when human supervision is inadequate, chatbots left to roam the internet are prone to start spouting the worst kinds of slurs and clichés. Everywhere in the world, tech elites – mostly white, mostly middle-class, and mostly male – are deciding which human feelings and forms of behavior the algorithms should learn to replicate and promote.

As more companies turn to chatbots to help support and scale their communities, the question remains – how effective are they? Making sure a chatbot’s capabilities align with community goals and values should be top of mind when developing AI-powered products.

Norms of emotional expression vary from one society to the next. It comes as no surprise that the hug-happy Google Assistant, developed in Mountain View, California, sounds like a flower child of the hippie movement. Sure, Google Assistant will give you a hug, but only because its creators believe that hugging is a productive way to eliminate the ‘negativity’ preventing you from being the best version of yourself. In contrast, Alice is all about hard truths and tough love; her replies embody Russian cultural ideals that tend to accept suffering as unavoidable, better taken with a grain of salt than with a soft embrace.

In Conclusion, Diversify Your Bot Development Team

Your bots are engaging with users from every corner of the globe, from every tax bracket, and from every profession. A diverse team of strategists, engineers, and designers is necessary to develop bots that are culturally sensitive and empathetic to user inquiries. Your bot development teams should conduct robust research into cultural trends and human behavior so that bot interactions with users don’t come across as tone-deaf.

There is a lot left to learn about AI and the human behaviors that shape how it develops. But in such a polarized political climate, it is all the more important to ensure that the virtual assistants we build reflect the values at the core of our shared humanity.