Can AI Have Emotional Intelligence?

Krystal Kirkland

Software Engineer

Dr. Rana el Kaliouby is the author of Girl Decoded and a leading expert on technology, empathy, and the ethics of AI.

This past June, Affectiva, the company she co-founded, was acquired by Smart Eye. In this virtual sit-down, we set out to learn more about what inspires Dr. el Kaliouby and how new innovations will change the way we interface with technology and connect and communicate as humans.

Q: Dr. el Kaliouby, how did you get started exploring the role of emotion in today’s technology-driven landscape?

el Kaliouby: I was born in Cairo, and my parents were both involved in the technology industry, so I was exposed to and inspired by the digital world at a very young age. My education and career pursuits led me to Cambridge and later MIT, which meant I spent a lot of time in front of devices communicating with family back home. There was very little face-to-face interaction, and what struck me was that, although I was in constant touch with family and friends through technology, it was almost impossible to tell what was going on with my loved ones from an emotional and mental standpoint.

Q: What was your biggest takeaway from this experience? 

el Kaliouby: It became clear to me that the majority of our communication is conveyed through non-verbal cues: facial expressions, tone of voice, and body language. But, for the most part, those signals are lost when we’re on our smartphones and other devices. 

When I got deeper into my research in computer science and artificial intelligence, it became obvious that technology has a lot of cognitive intelligence (IQ) but no emotional intelligence (EQ). This is problematic not only for how we interface with technologies, but also for how we connect and communicate with one another.

Q: How did that impact your research and your career pursuits?

el Kaliouby: I set out to humanize technology in new ways. I knew emotional intelligence and the ability to sense others’ cognitive states could help the systems being developed adapt in real time. It gave us a golden opportunity to re-imagine how we connect with machines and each other, and there were several areas where that opportunity was clear and the need was immediate.

Q: How did you go about pursuing these opportunities? 

el Kaliouby: At Affectiva, the company recently acquired by Smart Eye, our approach was to design software that can understand emotional and cognitive states by analyzing facial expressions through a device’s camera.

One of our challenges was that there were so many ways this kind of technology could impact society. We chose to focus on big problems where we could improve or even save lives and where our innovations fit naturally into other ecosystems that were growing exponentially.

After analyzing developments in the automotive industry, we set out to help address issues with automotive safety and the in-vehicle experience.

Q: Tell us more.

el Kaliouby: While car and truck manufacturers were already using external cameras for a range of safety and operational applications, we realized that if cameras were used inside these same vehicles, we could use artificial intelligence and machine learning to identify complex and nuanced emotions and cognitive states, from whether a driver is drowsy or falling asleep to whether they are texting or driving under the influence. If necessary, our system can send an alert or even take over a vehicle with self-driving capabilities.
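
To make that concrete, here is a minimal sketch in Python of the kind of in-cabin monitoring loop she describes, assuming a per-frame drowsiness score in [0, 1]. The classifier stub, threshold, and frame source are all hypothetical stand-ins, not Affectiva’s actual implementation.

```python
# Illustrative sketch of a driver-state monitoring loop (hypothetical,
# not Affectiva's actual code). The classifier is a stand-in stub; a real
# system would run a trained vision model on in-cabin camera frames.
import random
from collections import deque

DROWSY_THRESHOLD = 0.8   # assumed alert threshold
WINDOW = 30              # smooth over ~1 second of frames at 30 fps

def drowsiness_score(frame) -> float:
    """Stub returning a per-frame drowsiness estimate in [0, 1]."""
    return random.uniform(0.7, 1.0)  # placeholder simulating a drowsy driver

def monitor(frames):
    recent = deque(maxlen=WINDOW)
    for frame in frames:
        recent.append(drowsiness_score(frame))
        # Average over a window so a single blink doesn't trigger an alert.
        if len(recent) == WINDOW and sum(recent) / WINDOW > DROWSY_THRESHOLD:
            yield "ALERT: sustained drowsiness detected"

if __name__ == "__main__":
    simulated_frames = range(300)  # stand-in for a camera feed
    for alert in monitor(simulated_frames):
        print(alert)
        break
```

Smoothing over a window of frames is a common way to avoid alerting on a momentary signal such as a single blink.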

Using automotive AI to make our roads safer is the first application of our technology. But once you have a deep understanding of what’s taking place inside the vehicle, you can use deep learning based on observed states to adapt cabin conditions such as music, lighting, and temperature, ultimately making the occupants more comfortable.
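
As a hypothetical illustration of that adaptation step (not the actual system), a rule table mapping an inferred occupant state to cabin settings might look like the sketch below; in practice, a learned model would drive these adjustments rather than hard-coded rules.

```python
# Hypothetical mapping from an inferred occupant state to cabin settings.
CABIN_PRESETS = {
    "drowsy":   {"music": "upbeat", "lighting": "bright", "temperature_c": 19},
    "stressed": {"music": "calm",   "lighting": "soft",   "temperature_c": 22},
    "neutral":  {"music": "user_default", "lighting": "user_default", "temperature_c": 21},
}

def adapt_cabin(inferred_state: str) -> dict:
    """Return cabin settings for an inferred occupant state."""
    # Fall back to neutral settings for states we don't recognize.
    return CABIN_PRESETS.get(inferred_state, CABIN_PRESETS["neutral"])

print(adapt_cabin("drowsy"))
```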

Q: As applications that integrate emotional intelligence into computing expand, what are some of the risks that concern you and how are you addressing them? 

el Kaliouby: We’ve seen examples of how artificial intelligence and machine learning can be used in nefarious ways or have unintended consequences, so we’re aware of how these technologies can go wrong. We’ve been vocal and diligent about understanding how technologies that sense emotional states can be used to manipulate or discriminate.

Practically speaking, we carry this over into our business in the types of companies we work with and how they intend to use our technology. If a use case infringes on privacy, if the consumer doesn’t consent to being observed, or if the company plans to do surveillance or lie detection, for example, we turn away the business.

From a technical standpoint, there’s a risk that you are perpetuating bias at a global scale if the teams building, training, and deploying algorithms and models aren’t from diverse backgrounds. 

For example, our data labeling team in Cairo, many of whom wear hijabs, pointed out that the model training data wasn’t reflective of people who looked like them and other minority populations. Once the gap was identified, we introduced new data to better train our models. It was the diversity of our team that allowed us to evolve our models to be more representative of a broader population.

Here’s another example: a European car manufacturer was interested in using our facial expression technology to better understand and improve its in-cabin experience. However, we discovered their data set was based on a homogeneous population of white males, predominantly with blue eyes, and knew it would not be representative of the manufacturer’s global customer base. It’s easy to predict in this scenario that the results wouldn’t be accurate for some populations because the data set was too homogeneous.
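
One simple guard against this kind of homogeneity is to audit how each demographic group is represented in the training data before building the model. The attribute names and threshold in the sketch below are hypothetical, not the team’s actual tooling.

```python
# Hypothetical dataset audit: flag demographic groups that fall below a
# minimum share of the training data.
from collections import Counter

MIN_SHARE = 0.05  # assumed floor for any single group's representation

def audit(samples, attribute):
    """Report each group's share of the data and flag underrepresented ones."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- underrepresented" if share < MIN_SHARE else ""
        print(f"{attribute}={group}: {share:.1%}{flag}")

# Toy records standing in for image metadata.
data = [{"skin_tone": "light"}] * 96 + [{"skin_tone": "dark"}] * 4
audit(data, "skin_tone")
```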

Q: What’s your approach to addressing these kinds of bias issues once models and algorithms move from the building and testing phases to implementation?

el Kaliouby: Once in production, you have to be able to observe and get insights into how the model is performing. If something is off, you need visibility into the problem, but also the tools to answer why there is a disconnect between the lab and the real world. Finally, you have to have the flexibility to iterate on and re-train the models to ensure they are performing as desired and that there are no instances in which the system is biased against minority groups. At the end of the day, if AI technologies aren’t ethical, they’re bad for society and bad for business. If AI can’t work for all people as intended, there’s little benefit to using it in the first place.
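
As a sketch of that kind of production visibility, one could slice a model’s live accuracy by group and flag any group that falls well below the overall rate. The field names, tolerance, and toy log here are assumptions for illustration, not the actual monitoring stack.

```python
# Hypothetical production check: compare per-group accuracy against the
# overall rate and flag groups whose shortfall exceeds a tolerance.
MAX_GAP = 0.05  # assumed tolerance for per-group accuracy shortfall

def bias_report(predictions):
    """predictions: list of dicts with 'group' and 'correct' (bool) keys."""
    overall = sum(p["correct"] for p in predictions) / len(predictions)
    for g in sorted({p["group"] for p in predictions}):
        subset = [p for p in predictions if p["group"] == g]
        acc = sum(p["correct"] for p in subset) / len(subset)
        if overall - acc > MAX_GAP:
            print(f"group {g}: accuracy {acc:.1%} vs overall {overall:.1%} -- investigate")

# Toy production log: group B underperforms and should be flagged.
log = ([{"group": "A", "correct": True}] * 90 + [{"group": "A", "correct": False}] * 10
       + [{"group": "B", "correct": True}] * 70 + [{"group": "B", "correct": False}] * 30)
bias_report(log)
```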

Q: We’ve talked about diversity in AI; what’s your perspective on diversity in tech in general?

el Kaliouby: It’s improving. Representation among minority groups is on the rise at VC firms, and VC funding for founders from diverse backgrounds is increasing as a result, although not as quickly as I would like to see. I have always been mindful of the makeup of the VC firms I pitch during fundraising, and I lean toward firms with a representative partner base and investment and operating teams. Firms that are more diverse tend to better understand the challenges we’re trying to address and can add more value to our business.

More broadly, in almost every conversation I have, I bring up diversity and women in tech, whether it’s with men, women, investors, other start-ups, or customers. It’s such an important topic. We need to build a full ecosystem of women and diverse leaders. Specifically as it relates to women, our resolution should be to continue to improve representation for female founders and funders and to give young girls more role models to look up to, stepping up to the plate and helping to elevate others.