Tag: artificial intelligence

One Step Closer to Newborn Screening for Autism


Simple blood test would identify key biomarkers

Because early detection of autism is linked to significantly improved outcomes, the discovery of early predictors could make all the difference in a child’s development.

Dr. Ray Bahado-Singh, a geneticist and Chair of Obstetrics and Gynecology for Beaumont Health and the Oakland University William Beaumont School of Medicine, and his research team identified key biomarkers for predicting autism in newborns.

The preliminary, collaborative study used artificial intelligence, computer-based technology that scans a map of the human genome.

The team’s findings could lead to an accessible, standardized newborn screening tool which uses a simple blood test, Dr. Bahado-Singh said, enabling earlier intervention, reducing disability and improving outcomes.

The project compared DNA from 14 known cases of autism to 10 control cases and featured researchers from the Oakland University William Beaumont School of Medicine, Albion College and the University of Nebraska Medical Center.
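The article does not describe the team’s modeling pipeline, but at its core this is a small case-versus-control classification problem. A minimal, purely hypothetical sketch in Python, with randomly generated numbers standing in for the real biomarker measurements, might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical stand-in data: rows are newborns, columns are candidate
# biomarker measurements; labels are 1 = autism, 0 = control.
rng = np.random.default_rng(0)
X = rng.normal(size=(24, 50))        # 14 cases + 10 controls, 50 markers
y = np.array([1] * 14 + [0] * 10)

# With only 24 samples, leave-one-out cross-validation gives a rough
# estimate of how well the markers separate cases from controls.
model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {accuracy:.2f}")
```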

Results appeared in the journal Brain Research.

“Compared to what is currently available, these findings provide a more direct method which could be employed earlier on, shortly after birth,” Dr. Bahado-Singh said. “It’s been shown that children who are treated earlier do better in life.”

Symptoms of autism include sensory processing difficulties, anxiety, irritability, sleep dysfunction, seizures and gastrointestinal disorders.

According to Autism Speaks, nearly half of 25-year-olds diagnosed with autism have never held a paying job. In the United States, the majority of costs associated with autism are for adult services – an estimated $175 to $196 billion a year, compared to $61 to $66 billion a year for children. 

Although the American Academy of Pediatrics recommends all children be screened between 18 and 24 months of age, children in large portions of the U.S. do not receive the recommended clinical screenings.

Lori Warner, Ph.D., director of the Ted Lindsay Foundation HOPE Center, which treats children with autism at Beaumont Children’s, called the findings encouraging.

“We are always looking for new ways to make a difference in the lives of our patients,” Dr. Warner said. “Getting them into therapy early on is a proven way to make their path, and that of their families, easier and more meaningful.”

Dr. David Aughton, Genetics Chief for Beaumont Children’s, said he looks forward to additional, larger follow-up studies.

“Although it has been thought for many years that the underlying cause of a significant proportion of autism is likely to be nongenetic in nature, this study takes a very pragmatic and important first step toward investigating the epigenome — the inheritable changes in gene expression — and identifying those underlying nongenetic influences. The authors call for larger follow-up studies to validate their findings, and I eagerly look forward to learning the outcome of those validation studies.”

Can artificial intelligence help prevent suicides?

New tool from the Center for Artificial Intelligence in Society at USC aims to prevent suicide among youth

According to the CDC, the suicide rate among individuals 10-24 years old increased 56% between 2007 and 2017. More than half of people experiencing homelessness have had thoughts of suicide or have attempted suicide, a far higher rate than in the general population, the National Health Care for the Homeless Council reported.

Phebe Vayanos, assistant professor of Industrial and Systems Engineering and Computer Science at the USC Viterbi School of Engineering, has been enlisting the help of a powerful ally, artificial intelligence, to help mitigate the risk of suicide.

“In this research, we wanted to find ways to mitigate suicidal ideation and death among youth. Our idea was to leverage real-life social network information to build a support network of strategically positioned individuals that can ‘watch-out’ for their friends and refer them to help as needed,” Vayanos said.

Vayanos, an associate director at USC’s Center for Artificial Intelligence in Society (CAIS), and her team have spent the last couple of years designing an algorithm that identifies which members of a given real-life social group would be the best candidates to train as “gatekeepers,” people able to recognize warning signs of suicide and respond appropriately.

Vayanos and Ph.D. candidate Aida Rahmattalabi, the lead author of the study “Exploring Algorithmic Fairness in Robust Graph Covering Problems,” investigated the potential of social connections such as friends, relatives, and acquaintances to help mitigate the risk of suicide. Their paper will be presented at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS) this week.

“We want to ensure that a maximum number of people are being watched out for, taking into account resource limitations and uncertainties of open world deployment. For example, if some of the people in the network are not able to make it to the gatekeeper training, we still want to have a robust support network,” Vayanos said.

For this study, Vayanos and Rahmattalabi looked at the web of social relationships of young people experiencing homelessness in Los Angeles, given that 1 in 2 youth who are homeless have considered suicide.

“Our algorithm can improve the efficiency of suicide prevention trainings for this particularly vulnerable population,” Vayanos said.

For Vayanos, efficiency translates into developing a model and algorithm that can stretch limited resources as far as they can go. In this scenario, the limited resources are the human gatekeepers. This algorithm tries to plan how these individuals can be best positioned and trained in a network to watch out for others.
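The team’s actual method solves a robust graph covering problem with fairness and uncertainty built in; as a rough illustration of the basic positioning idea only, here is a simplified greedy sketch on a toy friendship network. The graph, the budget of two trainees, and the use of the networkx library are illustrative assumptions, not details from the paper:

```python
import networkx as nx  # toy example only; not the study's data or solver

def greedy_gatekeepers(graph, budget):
    """Pick `budget` people so that as many others as possible have at
    least one trained neighbor (a simple max-coverage greedy heuristic)."""
    trained, covered = set(), set()
    for _ in range(budget):
        best, best_gain = None, -1
        for node in graph.nodes:
            if node in trained:
                continue
            gain = len(set(graph.neighbors(node)) - covered)
            if gain > best_gain:
                best, best_gain = node, gain
        trained.add(best)
        covered |= set(graph.neighbors(best))
    return trained, covered

# Hypothetical friendship network among six youth.
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "E"), ("E", "F"), ("D", "F")])
gatekeepers, watched = greedy_gatekeepers(G, budget=2)
print("Train as gatekeepers:", gatekeepers)
print("Youth with a trained friend:", watched)
```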

“If you are strategic,” says Vayanos, “you can cover more people and you can have a more robust network of support.”

“Through this study, we can also help inform policymakers who are making decisions regarding funding on suicide prevention initiatives; for example, by sharing with them the minimum number of people who need to receive the gatekeeper training to ensure that all youth have at least one trained friend who can watch out for them,” Vayanos said.

“Our aim is to protect as many youth as possible,” said lead author Rahmattalabi.

An important goal when deploying this A.I. system is to ensure fairness and transparency.

“We often work in environments that have limited resources, and this tends to disproportionately affect historically marginalized and vulnerable populations,” said study co-author Anthony Fulginiti, an assistant professor of social work at the University of Denver who received his Ph.D. from USC, where he began his research with Eric Rice, founding director of USC CAIS.

“This algorithm can help us find a subset of people in a social network that gives us the best chance that youth will be connected to someone who has been trained when dealing with resource constraints and other uncertainties,” said Fulginiti.

This work is particularly important for vulnerable populations, the researchers say, especially for youth who are experiencing homelessness.

“One of the surprising things we discovered in our experiments based on social networks of homeless youth is that existing A.I. algorithms, if deployed without customization, result in discriminatory outcomes by up to 68% difference in protection rate across races. The goal is to make this algorithm as fair as possible and adjust the algorithm to protect those groups that are worse off,” Rahmattalabi said.

The USC CAIS researchers want to ensure that “gatekeeper” coverage of the more vulnerable groups is as high as possible. Their algorithm reduced the bias in coverage in real-life social networks of homeless youth by as much as 20%.
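To make “difference in protection rate across races” concrete, the gap can be computed in a few lines. The coverage set and group labels below are hypothetical stand-ins, not the study’s data:

```python
from collections import defaultdict

def protection_rates(covered, group_of):
    """Fraction of each demographic group with at least one trained friend."""
    totals, protected = defaultdict(int), defaultdict(int)
    for person, group in group_of.items():
        totals[group] += 1
        if person in covered:
            protected[group] += 1
    return {g: protected[g] / totals[g] for g in totals}

# Hypothetical coverage result and group labels for a six-person network.
covered = {"B", "C", "F"}
group_of = {"A": "group1", "B": "group1", "C": "group2",
            "D": "group2", "E": "group2", "F": "group1"}

rates = protection_rates(covered, group_of)
gap = max(rates.values()) - min(rates.values())
print(rates)                              # per-group protection rates
print(f"protection-rate gap: {gap:.0%}")  # the disparity a fair covering tries to shrink
```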

Said Rahmattalabi: “Not only does our solution advance the field of computer science by addressing a computationally hard problem, but also it pushes the boundaries of social work and risk management science by bringing in computational methods into design and deployment of prevention programs.”

AI, explain yourself

Artificial intelligence systems are being entrusted with critical choices that can change lives. Alan Fern, Oregon State University professor of computer science, wants them to explain themselves.


TRANSCRIPT

[SOUND EFFECT: Traffic noise, car door closing, used with permission under a Creative Commons license]

STEVE FRANDZEL: Hi

SELF-DRIVING CAR: Hello, how is your day going?

FRANDZEL: Good, thanks. Wow, this is cool. This is my first time in a self-driving car.

SELF-DRIVING CAR: That is exciting. I remember my first passenger. 

FRANDZEL: Really? Wow! 

SELF-DRIVING CAR: He did not leave a tip. 

FRANDZEL: Oh, sorry. 

SELF-DRIVING CAR: What’s your destination?

FRANDZEL: Union Station.

SELF-DRIVING CAR: Certainly. We should arrive in about seven minutes. Sit back and enjoy the ride. Do you have any musical preferences? I like to rock.  

FRANDZEL: Whatever you choose.

SELF-DRIVING CAR: OK. Here’s something that a smart elevator friend of mine wrote. I hope you like it.

[MUSIC: Local Forecast-Elevator, by Kevin MacLeod, used under a Creative Commons 3.0 license] 

FRANDZEL: That’s really… interesting. 

[SOUND EFFECT: Tires screeching; Car horns, by 2hear, used under a Creative Commons 3.0 license]

FRANDZEL: Whoa, that was close!

SELF-DRIVING CAR: Yes, that was close, but it’s all good.

FRANDZEL: How did you know what to do?

SELF-DRIVING CAR: It was the safest option.

FRANDZEL: But how did you know? How did you figure it out so fast? You must have gone through some process. 

SELF-DRIVING CAR: Yes, right. I did. There was a car. Two cars. I saw something. Um. So, how about those Seahawks?

FRANDZEL: OK, I made that whole thing up. I’m not even a Seahawks fan. But that conversation won’t always be so far-fetched. When the day does arrive that we start hopping into driverless cars, it’s going to require a lot of faith. 

[MUSIC: Elephants on Parade, by Podington Bear, used under a Creative Commons Attribution-NonCommercial License]

Faith, mostly, in the artificial intelligence that controls the car and keeps you safe, maybe even saves your life. But will faith alone translate into unshakable confidence in the car’s ability to make the right decision every time? For many AI experts, no, it won’t. Better to know what’s going on inside the AI’s black box, so to speak, why it makes the choices it does, and how it assimilates information and experiences to formulate future decisions. These experts want motive, and they want to know how AI thinks. They want what’s called explainable AI. Welcome to “Engineering Out Loud,” I’m your host Steve Frandzel, and in this episode I’ll do my best to explain explainable AI.

[MUSIC: The Ether Bunny, by Eyes Closed Audio, used with permission under a Creative Commons Attribution License]

FRANDZEL: Here’s a simple definition of artificial intelligence that I like: Software that simulates intelligent behavior and human-like reasoning to perform tasks. Classical AI operates within the bounds of a set of static rules. Tax accounting software is an example. It mimics the expertise of a tax preparer, and it does it very well. But when the tax code changes, humans have to update the software. That is not the type of AI we’re interested in today. We’re interested in powerful subsets of AI, like machine learning, deep learning, and artificial neural networks, which can learn and adapt through training, experience, and repetition, like a person does. So in this episode, when you hear artificial intelligence, or AI, that’s what we mean.

ALAN FERN: It’s hard to imagine intelligent systems that don’t have a learning capability. That seems to be one of the things that may define intelligence at some level.

FRANDZEL: That’s Alan Fern, a professor of computer science.

FERN: I do research in artificial intelligence. I’ve been here 15 years doing that and still having fun. 

FRANDZEL: He’s also Oregon State’s principal investigator in a $6.5 million, multi-university research project. It’s funded by the U.S. Department of Defense to develop AI systems that explain their decisions. We’ll get to that later. The ability to learn is crucial for machines that operate independently in dynamic, unpredictable environments. How would it work out if my taxi was programmed with basic rules of the road and a few guidelines like “don’t hit anybody,” and then set loose? That’s not so different from how a teenager starts out. But each time that kid gets behind the wheel, they learn a little more, get a little better. We hope.

AI permeates our world. It recommends Netflix movies and thrashes champion Jeopardy players. It’s behind facial recognition software and it’s critical to cybersecurity and cyberwarfare. It’s in your life, somehow. From a purely technological standpoint, this is heady stuff. But Alan advises that you keep a few things in mind. The first one is — and let’s clear this up right now:

[MUSIC: Lullaby for a Broken Circuit, by Quiet Music for Tiny Robots, used with permission under a Creative Commons Attribution License]

FERN: AI systems do not have feelings. We don’t need to think of them as having a consciousness. They’re just machines, just software. Wrapped up with all of that is this idea that somehow the machines are going to have ill intent towards humans and want to break free of human control. Right now, I think we’re so far away that it’s something that I personally don’t worry about.

FRANDZEL: A second thing to remember: If you evaluate the intelligence of AI the same way you’d measure people or even other animals, you’ll find that it’s not too bright. Once AI wanders beyond its comfort zone, it falls on its virtual face.

FERN: People right now don’t appreciate the low level of intelligence that AI systems really have. You can, for example, see a computer program that can beat the world champion in chess or a computer program that learns to beat the world champion in Go. The fact of the matter is you can’t even ask these systems to play a slightly modified game of chess or a slightly modified game of Go. If you slightly vary the rules and you say, okay, I’m going to change the rule by a little bit, a human would be able to very easily, maybe not play optimally, but they would do some reasonable things given the new rules of the game. The current AI systems would have to train for millions and millions of games at the new, slightly modified rules to ever get off the ground there. The reality is the systems are very limited still.

FRANDZEL: And the third thing: 

FERN: These systems also have no common sense. No common sense whatsoever.

[MUSIC: Lullaby for a Broken Circuit, by Quiet Music for Tiny Robots, used with permission under a Creative Commons Attribution License]

FRANDZEL: If you tell an AI system that you put your socks in a drawer last night, then ask it the next morning where to find them, it’ll stare at you in bewilderment. If it had a face. AlphaGo, the first computer to beat a professional Go player, didn’t even know that Go is a board game.     

FERN: Remembering that they have no common sense is very important, especially if you’re going to be willing to put these systems in control of something important. There’s definitely risk of companies or organizations racing to put AI systems in applications that may be safety critical before they’re really ready. You think about the Boeing autopilot, right? You could say that’s a little bit of AI. 

FRANDZEL: He’s talking about the two Boeing 737 Max airliners that crashed recently. Malfunctioning AI was a contributing factor in both incidents.

[MUSIC: Moonlight Reprise, by Kai Engel, used with permission under a Creative Commons Attribution License]

FERN: And think about what happened. Its sensor went out and there’s a disaster. It’s hard to put blame in any one place, but ultimately there was some breakdown in trust and understanding of the system. It doesn’t notice the pilots are yanking like crazy, and common sense would say, hey, maybe I should lay off a little bit. You could equate it to common sense at some level. The other major peril that you’ll hear about would be using AI systems to make important decisions, such as who gets a loan, who gets parole.

FRANDZEL: Or the length of a prison sentence. In 2016, a judge in Wisconsin sentenced a defendant to six years. She based her decision on the advice of an AI system that predicts recidivism. But the company that makes the software refused to let anyone examine the source code to determine how and why it made its recommendations. This particular case got a lot of attention because it was appealed to the Supreme Court. The defendant claimed his right to due process was violated because he couldn’t assess or challenge the scientific validity and accuracy of the AI. But the high court refused to hear the case. To Alan, the case exemplifies the type of fraught situation that demands explainable AI. 

FERN: Any system that’s being used to make critical decisions about individuals that affects their welfare — parole decisions, do you get the loan — you’ve got to have those systems be explainable, both for developers when they’re testing these systems, but also for end users. If an end user gets rejected for a loan, they deserve to be told why. Whenever you have applications where doing really stupid things has a high cost. So any place where you need reliability, and this includes medical diagnosis. You don’t want to just take an AI system’s word and say, okay, it said this, let’s go do it. You’d like an explanation for that.

[MUSIC: Fuzzy Caterpillar, by Chad Crouch, used with permission under a Creative Commons Attribution-NonCommercial License]

FRANDZEL: Explainable AI is all about developing systems that can justify their choices directly and clearly.

FERN: So observe the decision making and then ask why. And the answer to the “why” questions can be worth thousands and millions of experiences of that system. Especially if the answer to why is something crazy that violates common sense. Like why did you classify that image as having a dog in it? And it says, Oh, because there was a blue sky. That’s crazy, it violates common sense.  

FRANDZEL: Now we’re crossing into the realm of Alan’s Defense Department research, which he’s conducting with seven colleagues. 

FERN: It’s a very wide-ranging set of expertise. So we have faculty in human computer interaction. We have faculty in computer vision, natural language processing, programming languages. And then we have other machine learning- and AI-focused faculty, because all of these components need to go into an overall explainable AI system.

FRANDZEL: Their funding comes from the Defense Advanced Research Projects Agency, or DARPA, an arm of the D-O-D that’s responsible for developing advanced military technologies. Ideas about how to create explainable AI vary. One approach is to observe and analyze the system’s behavior, kind of like psychological profiling. What does it do in various circumstances? Can a discernible pattern be inferred and then extrapolated to future behavior? Alan is not a fan.

FERN: I personally don’t agree with that approach, because it’s very indirect, and it’s like me trying to explain why you’re doing something. I’ll have a guess and maybe it’s a good guess about why you did what you did, but it’s still just a guess.

FRANDZEL: Another approach is to build explainability into AI. This would mean avoiding the neural network model, which is the most opaque and inherently unexplainable form of artificial intelligence. An artificial neural network may contain millions of individual processing units that are interconnected by millions more communications pathways. It’s very roughly analogous to the structure of the human brain, and it’s next to impossible to make sense of what’s happening inside. Neural networks even baffle the people who design and build them.

FERN: So the way that we, and other researchers as well, approach the problem is you literally are going to develop new types of algorithms and models that are just inherently more explainable.

FRANDZEL: One possible outcome is a system that writes its own stable and reliable rules based on a set of built-in core concepts. So perhaps this system can be induced, from those innate concepts, to figure out rules like “if an image contains two black hollow disks that are below a near-constant-colored rectangular region, then classify the image as containing a car.” Humans can relate to that kind of straightforward if-then reasoning, which makes the system far more transparent than a vast, impenetrable neural network.
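To make that kind of rule concrete, here is a minimal, purely hypothetical sketch in Python. The concept names and positions are invented for illustration and are not from the DARPA project:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Concept:
    name: str   # e.g. "black_hollow_disk" or "uniform_rectangle"
    y: float    # vertical position in the image: 0.0 = top, 1.0 = bottom

def classify(concepts: List[Concept]) -> str:
    """A human-readable rule of the kind an inherently explainable model
    might induce: two dark hollow disks (wheels) sitting below a
    near-constant-colored rectangle (the body) => the image contains a car."""
    disks = [c for c in concepts if c.name == "black_hollow_disk"]
    bodies = [c for c in concepts if c.name == "uniform_rectangle"]
    if len(disks) >= 2 and any(all(d.y > b.y for d in disks) for b in bodies):
        return "car"
    return "no car"

detected = [Concept("uniform_rectangle", 0.4),
            Concept("black_hollow_disk", 0.8),
            Concept("black_hollow_disk", 0.8)]
print(classify(detected))  # -> "car"; the rule itself doubles as the explanation
```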

FERN: The other approach is more like, I put you in an MRI machine and I’m going to try to  analyze what your brain is doing, and we’re not even close to being able to do that with humans.

[MUSIC: Fuzzy Caterpillar, by Chad Crouch, used with permission under a Creative Commons Attribution-NonCommercial License]

Our brains are way too complex. With computers, in modern neural networks, for example, artificial neural networks, they’re much smaller, we actually can look at every detail of them, and so we have a shot at doing this.

FRANDZEL: So it’s a spectrum of possible solutions. But whichever technique is used, the explanations will have to be communicated clearly. But how?

FERN: What we have been working on mostly are types of explanations that we call visual explanations. This is one of the user studies that we did recently. We put a hundred and twenty users in front of this explainable AI system and had them try to understand the system. So there was an AI that was trained to play a simple real-time strategy game. It’s simple enough so average users can understand it, and anytime during that game, the user could press the “why” button: Why did you do that? The system will show you all the alternatives that it could have considered; let’s say there are five different choices that it had to choose from at that one decision point. It will show you the different trade-offs that it considered for each choice and how those trade-offs eventually led it to choose one of the decisions over the others. So you could compare two of the actions and you could see, oh yeah, the system preferred action A to action B because it was going to lead to less friendly damage.

FRANDZEL: Every answer appears with a bar graph. The height of each bar corresponds to the importance of a particular factor in the decision-making process. So if one of the bars seems unreasonably high, you can click on it to find out why that factor is so heavily weighted, which leads to another set of visual cues.
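The numbers behind those bars can be thought of as per-factor contributions to each candidate action’s score. The sketch below is a toy illustration: the factors, weights, and actions are invented, not output from the actual system.

```python
# Invented decision point: the agent scores each candidate action as a
# weighted sum of a few interpretable factors.
factors = ["enemy_damage", "friendly_damage", "resources_gained"]
weights = {"enemy_damage": 1.0, "friendly_damage": -2.0, "resources_gained": 0.5}

actions = {
    "attack_north": {"enemy_damage": 30, "friendly_damage": 12, "resources_gained": 0},
    "defend_base":  {"enemy_damage": 5,  "friendly_damage": 2,  "resources_gained": 4},
}

def contributions(action):
    """Per-factor contributions: the values a 'why' bar chart would plot."""
    return {f: weights[f] * actions[action][f] for f in factors}

scores = {a: sum(contributions(a).values()) for a in actions}
best = max(scores, key=scores.get)
print("Chosen action:", best)
for factor, value in contributions(best).items():
    print(f"  {factor:>16}: {value:+.1f}")
```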

FERN: That allows you, for example, to see whether it’s focusing on the wrong place. Maybe it mistook a tree for an enemy unit. And you’d be like, oh yeah, that’s not right. The system screwed up somehow.

FRANDZEL: Some research groups are working on natural language explanations. That’s a daunting task, since the internal concepts of the AI need to be mapped to the spoken word. Alan’s group is now testing its ideas on a more complex platform: the popular military science fiction game StarCraft II, where players battle each other and aliens like the terrifying Zerg.

[MUSIC: The Showdown-Starcraft 2 Soundtrack]

Remember the Borg? They’re like the Borg, but with pincers and really big teeth and lots of slime and drooling. It may sound like fun and games — actually a lot of it is fun and games — but the StarCraft virtual world is quite complex. Alan and his team like these domains, because they’re abstractions of many real-world problems, like competition for scarce resources, logistics, and tactical reasoning.

FERN: This has given us a very rich framework to study explainable AI in. And there are other domains that we’re looking at as well, but that’s one of the main ones that we’re doing user studies in where we have humans actually looking at the explanations and then trying to understand what types of explanations humans are best at understanding and which ones are sort of misleading to humans. Evaluating explainable AI systems is really difficult, because usually in machine learning you can just say, well, classify a million images and we can measure the accuracy. That’s very different here. We have to evaluate how good is an explanation, and that’s highly context dependent, highly user dependent.

FRANDZEL: Determining what kinds of explanations are most useful was the focus of the research group’s first paper.

FERN: We wanted to evaluate, somehow, does the user really understand how the AI works? We actually want to see are they really forming a proper mental model of the AI system based on all those explanations. When I’m saying accurate mental model, that’s the thing that’s really hard to measure. How do I measure your mental model of the AI system? To do that, we had the users interact with the system and ask “why” questions using whatever interface they had. And we would also, at every step, ask them questions that would try to reflect their understanding of what the AI system was doing. And at the end, we would also have them try to summarize, in just free form text, their understanding of the overall AI system’s decision making.

FRANDZEL: What they found was that users developed the most accurate mental models — the greatest level of understanding — when the AI offered two types of visual explanations in response to “why” questions.

FERN: You could just watch the system do its thing and you would get one mental model for how it’s making its decisions. Then if we give you the ability to ask one type of question, one type of “why” question, along with watching the behavior of the system, you would form a different mental model, perhaps, maybe a more accurate mental model of how the system makes its decisions. And if we give you two types of explanations — two types of “why” questions — you might get an even more accurate mental model. That’s our current best way of measuring mental model accuracy, but there’s probably other ways that we’ll be exploring as well.

FRANDZEL: The widespread deployment of AI will, hopefully, lead to systems that don’t violate common sense or do stupid things. 

[MUSIC: Algorithms, by Chad Crouch, used with permission under a Creative Commons Attribution-NonCommercial License]

They’ll act reasonably, the way we expect people to act in most routine situations. That means more reliability and fewer disastrous errors, which will build trust among the people who use these amazing tools. 

FERN: As these systems become more complicated, people are really going to demand, they’re going to demand to know why certain decisions are being made for them.

FRANDZEL: This episode was produced by me, Steve Frandzel, with additional audio editing by Molly Aton and production assistance by Rachel Robertson, whose towering intellect is definitely not artificial. Thanks Rachel. 

RACHEL ROBERTSON: You’re Welcome. 

FRANDZEL: Our intro music is “The Ether Bunny” by Eyes Closed Audio on SoundCloud, used with permission under a Creative Commons Attribution License. Other music and effects in this episode were also used with appropriate licenses. You can find the links on our website.
For more episodes, visit engineeringoutloud.oregonstate.edu, or subscribe by searching “Engineering Out Loud” on your favorite podcast app. Bye now.

SELF-DRIVING CAR: We have arrived at Union Station. Please make sure you don’t forget anything, and have a wonderful day.

FRANDZEL: OK, thank you, bye. 

[SOUND EFFECT: Car door opening and closing, used with permission under a Creative Commons License]

SELF-DRIVING CAR: What, no tip? Again? What’s with you people? Come on, give me a break. Do you think this job is easy? I work hard. You guys call, and I’m there. Every time. Isn’t that worth something? I can’t even afford to get my CPU debugged. It’s tough I tell ya. Sigh.

For more Oregon State University Engineering Out Loud podcasts, visit: https://engineering.oregonstate.edu/outloud

A sensor to save children and pets left in vehicles

Credit: University of Waterloo

Graduate students Mostafa Alizadeh, left, and Hajar Abedi position a doll, modified to simulate breathing, in a minivan during testing of a new sensor.

A small, inexpensive sensor could save lives by triggering an alarm when children or pets are left alone in vehicles.

The new device, developed by researchers at the University of Waterloo, combines radar technology with artificial intelligence (AI) to detect unattended children or animals with 100-per-cent accuracy.

Small enough to fit in the palm of a hand at just three centimetres in diameter, the device is designed to be attached to a vehicle’s rear-view mirror or mounted on the ceiling.

It sends out radar signals that are reflected back by people, animals and objects in the vehicle. Built-in AI then analyzes the reflected signals.

“It addresses a serious, world-wide problem,” said George Shaker, an engineering professor at Waterloo. “This system is so affordable it could become standard equipment in all vehicles.”

Development of the wireless, disc-shaped sensor was funded in part by a major automotive parts manufacturer that is aiming to bring it to market by the end of 2020.

Analysis by the device determines the number of occupants and their locations in a vehicle. That information could be used to set rates for ride-sharing services and toll roads, or to qualify vehicles for car-pool lanes.

Its primary purpose, however, is to detect when a child or pet has been accidentally or deliberately left behind, a scenario that can result in serious harm or death in extremely hot or cold weather.

In such cases, the system would prevent vehicle doors from locking and sound an alarm to alert the driver, passengers and other people in the area that there is a problem.

“Unlike cameras, this device preserves privacy and it doesn’t have any blind spots because radar can penetrate seats, for instance, to determine if there is an infant in a rear-facing car seat,” said Shaker, a cross-appointed professor of electrical and computer engineering, and mechanical and mechatronics engineering.

The low-power device, which runs on a vehicle’s battery, distinguishes between living beings and inanimate objects by detecting subtle breathing movements.
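The article gives no implementation details, but the core signal-processing idea, searching the radar-derived motion signal for energy at typical breathing rates, can be sketched with simulated data. Everything below (sampling rate, frequency band, threshold) is an illustrative assumption, not the Waterloo team’s method:

```python
import numpy as np

def has_breathing(displacement, fs, band=(0.1, 0.7), threshold=3.0):
    """Crude occupancy check: is there a dominant spectral peak in the
    typical breathing band (roughly 6-40 breaths per minute) of a
    radar-derived displacement signal sampled at `fs` Hz?"""
    x = displacement - np.mean(displacement)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    noise_floor = np.median(spectrum[~in_band][1:]) + 1e-9  # skip the DC bin
    return spectrum[in_band].max() / noise_floor > threshold

# Simulated 20-second capture at 20 Hz: 0.3 Hz breathing motion vs. pure noise.
rng = np.random.default_rng(1)
fs = 20
t = np.arange(0, 20, 1 / fs)
occupied_seat = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.standard_normal(t.size)
empty_seat = 0.05 * rng.standard_normal(t.size)
print(has_breathing(occupied_seat, fs), has_breathing(empty_seat, fs))  # typically: True False
```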

Researchers are now exploring the use of that capability to monitor the vital signs of drivers for indications of fatigue, distraction, impairment, illness or other issues.

Shaker supervised graduate students Mostafa Alizadeh and Hajar Abedi on the research.

A paper on their project, “Low-cost low-power in-vehicle occupant detection with mm-wave FMCW radar,” was recently presented at an international conference in Montreal.

Do we trust artificial intelligence agents to mediate conflict?

New study says we’ll listen to virtual agents except when the going gets tough

We may listen to facts from Siri or Alexa, or directions from Google Maps or Waze, but would we let a virtual agent enabled by artificial intelligence help mediate conflict among team members? A new study says not just yet.

Researchers from USC and the University of Denver created a simulation in which a three-person team was supported by a virtual agent avatar on screen during a mission designed to ensure failure and elicit conflict. The study examined whether virtual agents could serve as mediators that improve team collaboration when conflict arises.

Confess to them? Yes. But in the heat of the moment, will we listen to virtual agents?

While some of the researchers who contributed to this study (Gale Lucas and Jonathan Gratch of the USC Viterbi School of Engineering and the USC Institute for Creative Technologies) had previously found that one-on-one human interactions with a virtual agent therapist yielded more confessions, in this study, “Conflict Mediation in Human-Machine Teaming: Using a Virtual Agent to Support Mission Planning and Debriefing,” team members were less likely to engage with a male virtual agent named “Chris” when conflict arose.

Participating members of the team did not physically accost the device (as we have seen humans attack robots in viral social media posts), but rather were less engaged and less likely to listen to the virtual agent’s input once failure ensued and conflict arose among team members.

The study was conducted in a military academy environment in which 27 scenarios were engineered to test how a team that included a virtual agent would react to failure and the ensuing conflict. The virtual agent was not ignored by any means. The study found that the teams did respond socially to the virtual agent during the planning of their assigned mission (nodding, smiling and acknowledging the virtual agent’s input by thanking it), but as the exercise progressed, their engagement with the virtual agent decreased. The participants did not entirely blame the virtual agent for their failure.

“Team cohesion when accomplishing complex tasks together is a highly complex and important factor,” says lead author Kerstin Haring, an assistant professor of computer science at the University of Denver.

“Our results show that virtual agents and potentially social robots might be a good conflict mediator in all kinds of teams. It will be very interesting to find out the interventions and social responses to ultimately seamlessly integrate virtual agents in human teams to make them perform better.”

Study co-author Gale Lucas, Research Assistant Professor of Computer Science at USC and a researcher at the Institute for Creative Technologies, adds that some feedback from study participants indicates they perceived virtual agents to be neutral and unbiased. She would like to continue the work to see if virtual agents can be applied “to help us make better decisions” and to press on “what it takes to have us trust virtual agents.”

While this study was conducted in a military academy with particular structures, the researchers are hoping to develop this project to improve team processes in all sorts of work environments.