Current Research

Emergent Leadership and Interpersonal Emotional Regulation


Do those who manage the emotions of others also emerge as leaders? In this project we test whether group members grant leadership to those who engage in interpersonal emotion regulation. To test this idea, we are collecting data from leaderless groups of soldiers and students: recruits to military units, who work in leaderless groups during their tryouts for an elite unit, and students working together in leaderless groups on a class task. In both samples, participants report their actions toward regulating the emotions of others and select the individual they believe is the leader of the group. Initial findings suggest that those who report attempting to improve the group's emotions were more likely to be selected as leaders. This work is with Prof. Gil Luria.

Digital Emotional Labor


In this project, conducted with a large Israeli service company, we examine differences in the emotional labor that service providers engage in when serving customers by phone versus by chat. In the chat service, providers attend to 15 customers at once, while on the phone only one conversation is handled at a time. We focus on the emotional elements of these interactions. Emotional labor has mostly been studied in face-to-face interactions, and work on chat interactions is scarce. To gain an in-depth understanding of the differences between the types of service, we use qualitative methods, including in-depth interviews with service providers and team managers. We are interested in understanding how service providers handle various platforms, including phone calls, the company website, WhatsApp, Facebook, and faxes. We investigate how phone-based call centers differ from chat service and other digital platforms in order to understand how service providers deal with customer feelings. This work is with Dr. Ella Glikson, Dr. Einat Lavee, and Prof. Allison Gabriel.

Chatbots' Emotional Displays


In a series of lab and field studies we test various elements of chatbots, specifically their “emotional display”. Are chatbots that display emotions assessed more favorably? Would a chatbot that apologizes for failed service be evaluated more positively than one that does not – even though it is obvious that the emotional display is pre-programmed and automatic? Would making a chatbot more anthropomorphized improve customer satisfaction? This work is with Dr. Ella Glikson.

The Effect of Emotion Regulation Strategies in Health Care Settings


In this project we examine how emotion regulation strategies performed by medical staff affect patients' evaluations of their care, and whether these strategies also have an impact on the staff's own feelings. Regulating the emotions of patients and caregivers can benefit both patients and staff and significantly improve perceptions of the service and treatment process, because emotions affect perception, thinking, and behavior. Will emotion regulation strategies among the nursing staff increase patients' satisfaction and positive feelings about the treatment? Do the effects differ for staff and patients? Which strategy will be most effective for patients? To answer these questions, we are studying four medical departments in a large public hospital. Each department will receive a workshop focusing on a different emotion regulation strategy. We then follow the staff as well as the patients in each department, surveying them every three months for one full year. This will allow us to test the effectiveness of each interpersonal emotion regulation strategy both for those engaging in the regulation (medical staff) and for those who are the targets of the regulation (patients and visitors). This work is with Prof. Karen Niven.

Emotional Feedback by Human vs. Avatar

It is well established that emotion displays provide information and influence others, but do emotion displays from an avatar have similar effects? When emotion displays are by definition pre-programmed and inauthentic, do they still influence others the way human emotion displays do? In this project we use VR technology in which participants receive feedback on a motor task (bow-and-arrow target shooting). The feedback either provides only performance information (e.g., "the shot was on target", "the shot missed the target") or adds emotional content as well – positive (e.g., "the shot was on target – really happy with your performance!") or negative (e.g., "the shot missed the target – I am really disappointed in your performance"). The feedback is given by video after the performance in the virtual setting, either by a human or by an avatar. This project will allow us to test the effects of emotions in feedback as well as the impact of interacting with emotional computerized agents. This work is with Dr. Tal Krasovsky and Dr. Michal Kafri.

Intergroup Emotion Authenticity Bias


Displays of emotion can at times be judged as inauthentic or even strategic. In this work we test whether there is an intergroup bias to judge the emotion displays of outgroup members as inauthentic and to assess the emotion displays of ingroup members as more authentic. In this international collaboration with Prof. Agneta Fischer from the University of Amsterdam and Prof. Masi Noor from Keele University, we have collected data in Amsterdam, the UK, and Hong Kong, and are currently collecting data in Israel. In the Israeli segment we look specifically at assessments of the authenticity of emotion displays following condemnations of violent attacks by ingroup or outgroup members. We believe that this authenticity bias could be another element that fuels division in intergroup relations.