Notre Dame Expert: Host of problems with Facebook deepfake ban
Tim Weninger, associate professor in the Department of Computer Science and Engineering at the University of Notre Dame, says Facebook’s newly announced ban on deepfakes is good news for democracy but presents a number of challenges in the fight against the spread of misinformation.
Weninger is an expert in disinformation and fake news, web and social media, data mining and machine learning.
“This is good news for democracy and a good business policy for Facebook, whose users don’t want to be lied to by the content they see,” Weninger said. “If Facebook becomes flooded by fake or misleading content, then users will abandon the site.”
But, Weninger adds, the policy presents a host of problems and challenges.
“Most obvious is the technological question of how Facebook will determine which content is AI-faked and which is not. It’s clear that deepfake technology will soon be usable by the masses. And when that happens, Facebook won’t have the capacity to filter fake videos manually. Notre Dame and others are working on deepfake detectors, but these automatic detectors won’t catch everything.
“Second is the effect that this deepfake ban will have on the actual problem. It’s often said that ‘a lie can travel around the world before the truth can get its pants on.’ So, if a deepfake is created, shared and quickly taken down, the damage is done; it will live forever. And there is little that a maligned political candidate or brand can do to fix it.
“In my opinion, deepfakes are some mix of identity theft and slander. And there ought to be a legal remedy or judicial recourse available to the victims of deepfakes.”
A method with roots in AI uncovers how humans make choices in groups and social media
The choices we make in large group settings — such as in online forums and social media — might seem fairly automatic to us. But our decision-making process is more complicated than we know. So, researchers have been working to understand what’s behind that seemingly intuitive process.
Now, new University of Washington research has discovered that in large groups of essentially anonymous members, people make choices based on a model of the “mind of the group” and an evolving simulation of how a choice will affect that theorized mind.
Using a mathematical framework with roots in artificial intelligence and robotics, UW researchers were able to uncover the process by which a person makes choices in groups. They also found that their approach predicted a person’s choice more often than more traditional descriptive methods. The results were published Wednesday, Nov. 27, in Science Advances.
“Our results are particularly interesting in light of the increasing role of social media in dictating how humans behave as members of particular groups,” said senior author Rajesh Rao, the CJ and Elizabeth Hwang professor in the UW’s Paul G. Allen School of Computer Science & Engineering and co-director of the Center for Neurotechnology.
“In online forums and social media groups, the combined actions of anonymous group members can influence your next action, and conversely, your own action can change the future behavior of the entire group,” Rao said.
The researchers wanted to find out what mechanisms are at play in settings like these.
In the paper, they explain that human behavior relies on predictions of future states of the environment — a best guess at what might happen — and the degree of uncertainty about that environment increases “drastically” in social settings. To predict what might happen when another human is involved, a person makes a model of the other’s mind, called a theory of mind, and then uses that model to simulate how one’s own actions will affect that other “mind.”
While this strategy works well for one-on-one interactions, modeling individual minds in a large group is much harder. The new research suggests that humans create an average model of a “mind” representative of the group even when the identities of the others are not known.
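To make the idea concrete, here is a minimal Python sketch of this kind of belief updating, in which an agent maintains a probabilistic estimate of the average group member’s tendency to contribute and nudges that estimate to reflect its own influence on the group. The class name, parameters and influence term are illustrative assumptions; the paper’s actual mathematical framework is more elaborate.

```python
class GroupMindModel:
    """Minimal sketch: a Beta-distributed belief over the 'average
    group member's' probability of contributing. Illustrative only;
    not the paper's exact formulation."""

    def __init__(self, alpha=1.0, beta=1.0, influence=0.1):
        self.alpha = alpha          # pseudo-counts of observed contributions
        self.beta = beta            # pseudo-counts of observed free-rides
        self.influence = influence  # assumed effect of own action on the group

    def cooperation_estimate(self):
        """Mean of the Beta belief: the expected contribution rate."""
        return self.alpha / (self.alpha + self.beta)

    def update(self, n_contributed, n_total, own_contribution):
        """Bayesian update from one round's observed contributions,
        plus a small shift reflecting the player's own influence."""
        self.alpha += n_contributed
        self.beta += n_total - n_contributed
        # Assumption: one's own action nudges the believed group tendency.
        if own_contribution:
            self.alpha += self.influence
        else:
            self.beta += self.influence
```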
To investigate the complexities that arise in group decision-making, the researchers focused on the “volunteer’s dilemma task,” wherein a few individuals endure some costs to benefit the whole group. Examples of the task include guarding duty, blood donation and stepping forward to stop an act of violence in a public place, they explain in the paper.
To mimic this situation and study both behavioral and brain responses, the researchers put subjects in an MRI scanner, one by one, and had them play a game. In the game, called a public goods game, the subject’s contribution to a communal pot of money influences others and determines what everyone in the group gets back. A subject can decide to contribute a dollar or to “free-ride,” that is, withhold the dollar in the hope that others will contribute enough to the pot.
If the total contributions exceed a predetermined amount, everyone gets two dollars back. The subjects played dozens of rounds with others they never met. Unbeknownst to the subject, the others were actually simulated by a computer mimicking previous human players.
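The round structure is simple enough to state in code. Here is a minimal sketch of one round of the thresholded public goods game described above; the dollar endowment and two-dollar reward come from the description, while the threshold value and group size in the example are assumptions.

```python
def play_round(choices, threshold, endowment=1.0, reward=2.0):
    """One round of the thresholded public goods game: each player
    either contributes their dollar (1) or free-rides (0); if total
    contributions reach the threshold, everyone receives the reward."""
    pot = sum(choices)
    success = pot >= threshold
    payoffs = []
    for contributed in choices:
        kept = 0.0 if contributed else endowment  # free-riders keep their dollar
        payoffs.append(kept + (reward if success else 0.0))
    return payoffs, success

# Example with assumed values: five players, three contributions needed.
payoffs, success = play_round([1, 1, 0, 1, 0], threshold=3)
```

Note that when the round succeeds, a free-rider ends with three dollars (the kept dollar plus the reward) while a contributor ends with two, which is precisely what makes the game a dilemma.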
“We can almost get a glimpse into a human mind and analyze its underlying computational mechanism for making collective decisions,” said lead author Koosha Khalvati, a doctoral student in the Allen School. “When interacting with a large number of people, we found that humans try to predict future group interactions based on a model of an average group member’s intention. Importantly, they also know that their own actions can influence the group. For example, they are aware that even though they are anonymous to others, their selfish behavior would decrease collaboration in the group in future interactions and possibly bring undesired outcomes.”
In their study, the researchers assigned mathematical variables to these actions and created their own computer models for predicting what decisions a person might make during play. They found that their model predicts human behavior significantly better than reinforcement learning models, in which a player learns to contribute based only on how the previous round did or didn’t pay out, regardless of other players, as well as more traditional descriptive approaches.
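For contrast, the reinforcement learning baseline described above can be sketched as a purely payoff-driven update that ignores any model of other players. The function name and its parameters are illustrative assumptions, not the authors’ actual baseline implementation.

```python
def rl_update(p_contribute, contributed, payoff, baseline=1.0, lr=0.05):
    """Sketch of a model-free baseline: nudge the probability of
    contributing up or down based only on whether the last round
    paid off better than expected, ignoring the other players."""
    advantage = payoff - baseline
    direction = 1.0 if contributed else -1.0
    p = p_contribute + lr * advantage * direction
    return min(max(p, 0.01), 0.99)  # keep the probability in (0, 1)
```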
Given that the model provides a quantitative explanation for human behavior, Rao wondered whether it might be useful when building machines that interact with humans.
“In scenarios where a machine or software is interacting with large groups of people, our results may hold some lessons for AI,” he said. “A machine that simulates the ‘mind of a group’ and simulates how its actions affect the group may lead to a more human-friendly AI whose behavior is better aligned with the values of humans.”
Co-authors include Seongmin A. Park, Center for Mind and Brain at UC Davis and Institut des Sciences Cognitives Marc Jeannerod, France; Saghar Mirbagheri, Department of Psychology, New York University; Remi Philippe, Mariateresa Sestito and Jean-Claude Dreher at the Institut des Sciences Cognitives Marc Jeannerod.
This research was funded by the National Institute of Mental Health, National Science Foundation, and the Templeton World Charity Foundation.