Moral framing is more than simply telling right from wrong. Moral narratives shape people’s worldview, influence social dynamics and provide insight into society as a whole. Morality and moral narratives have long been the purview of religious leaders and philosophers, but more recently psychologists and communication scientists have turned their attention toward devising an empirical account of how people make moral judgments.
Last year, UC Santa Barbara’s René Weber published a groundbreaking study that reexamines how we conduct research into moral intuitions and recommends a new way forward. In the same article he introduced a sophisticated software suite that demonstrates how moral theory can be investigated and applied to the processing of real-world narratives at a large scale. The paper has won the Best Article of the Year Award for 2018 from the journal Communication Methods and Measures and the Association for Education in Journalism and Mass Communication.
“All members of UC Santa Barbara's Media Neuroscience Lab are very honored and excited about this award and the recognition that comes with it,” said Weber, a professor in the university’s communication department and principal researcher at UC Santa Barbara’s Media Neuroscience Lab.
His group collaborates with computer scientists and psychologists in this line of research. “We are an interdisciplinary campus. We are proud that UCSB holds interdisciplinarity in high regard,” said Weber, who holds doctorates in both medicine (psychiatry/cognitive neuroscience) and the natural sciences (psychology).
Moral intuitions are a part of every society, but their specifics can differ between cultures and individuals. Upbringing, personality and life experiences all interact with an individual’s inbuilt moral intuitions. However, social psychologists recognize five broad moral foundations that all humans share. These are issues dealing with:
- Care or Harm
- Fairness or Cheating
- Loyalty or Betrayal
- Authority or Subversion
- Purity or Desecration
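Dictionary-based coding is one common way such foundation categories are operationalized in text analysis. The sketch below is a minimal, hypothetical illustration: the keyword sets are invented for this example and are not drawn from any published moral foundations dictionary, nor from MoNA itself.

```python
# Toy dictionary-based tagger for the five foundations listed above.
# The keyword sets are invented placeholders, not a real lexicon.
FOUNDATION_LEXICON = {
    "Care or Harm": {"care", "protect", "hurt", "suffer"},
    "Fairness or Cheating": {"fair", "equal", "cheat", "fraud"},
    "Loyalty or Betrayal": {"loyal", "ally", "betray", "traitor"},
    "Authority or Subversion": {"obey", "law", "defy", "rebel"},
    "Purity or Desecration": {"pure", "sacred", "filthy", "profane"},
}

def tag_foundations(text):
    """Return the foundations whose keywords appear in the text."""
    words = set(text.lower().split())
    return sorted(name for name, lexicon in FOUNDATION_LEXICON.items()
                  if words & lexicon)

print(tag_foundations("The law must protect victims from fraud"))
# → ['Authority or Subversion', 'Care or Harm', 'Fairness or Cheating']
```

Real systems go well beyond exact keyword matching (stemming, context, machine learning), but the sketch shows the basic idea of mapping words in a message onto foundation categories.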
According to Weber, these foundational categories are reflected in the varied systems of moral judgment and decision-making we see around the world. And different people may see the same issue through different lenses, depending on which moral foundation they’re most sensitive to.
“It’s a little bit like language,” he explained. “All humans have the capacity for language, but there are different languages with different words and grammar. Something similar applies to humans’ moral judgment. All humans possess inborn moral intuitions that can be categorized along five broad categories, but then the environment and cultural context influence how these capacities become relevant,” he said.
To study moral intuitions, researchers first have to distinguish moral from non-moral content and behavior. A behavior or message might be unusual without being inherently good or bad: eating dessert before the main course may be judged unusual, even inappropriate, but not right or wrong.
Scientists then need to make sense of the moral behaviors and messages they find. This is not a straightforward task. Each individual has different moral sensibilities, which can shift with context, and both a person’s intentions and the outcomes of their actions influence moral judgment. These individual differences make it difficult to agree on how to assign specific behaviors and messages to the various moral foundations. Different reviewers, or coders as they’re called, may assign different categories to the same content.
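The disagreement problem described here is typically quantified with intercoder reliability statistics. A minimal sketch, using simple percent agreement and invented labels (none of this is MoNA's actual data or method):

```python
# Two coders label the same five messages with moral foundations.
# Percent agreement is the fraction of items where their labels match.
def percent_agreement(coder_a, coder_b):
    """Fraction of items on which two coders assigned the same label."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

coder_a = ["care", "fairness", "loyalty", "care", "authority"]
coder_b = ["care", "loyalty",  "loyalty", "harm", "authority"]
print(percent_agreement(coder_a, coder_b))  # → 0.6
```

In practice, content-analysis research usually reports chance-corrected coefficients such as Cohen's kappa or Krippendorff's alpha rather than raw percent agreement, since coders can agree by chance alone.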
Past research tried to tame this complexity by training a group of “expert” coders. But Weber argued this misses the point. Moral intuitions are just that: intuitions, deeply ingrained in human nature. Training coders encourages them to make choices consciously rather than tapping into their subconscious. In this light, too much training can actually be counterproductive, Weber said.
A New System
Out of this realization, Weber and his lab developed MoNA, the Moral Narrative Analyzer, a system that combines computer algorithms, large-scale text mining and evaluations from a large, diverse group of people to analyze real-world moral behaviors and messages. This innovation, together with their argument against the status quo, earned the team the Best Article of the Year Award.
Their paper points out the folly of relying on just a couple of quickly trained coders to extract moral information, which has been the standard practice for years. “This, of course, poses the critical question of how valid this literature is,” Weber said, quickly pointing out that this corpus includes his own previous work.
Using MoNA, Weber can peer into the global zeitgeist in real time. The system can process content from up to 30,000 online news outlets every 30 minutes, sourced from the Global Database of Events, Language, and Tone (GDELT), a Google- and National Science Foundation-funded database that captures the world’s news and events every 15 minutes. MoNA filters this torrent into a stream of high-quality information from 1,000 to 1,500 sources. The system also supports targeted analyses of moral framing for major regional and global events, or for specific locations, organizations and people.
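The filtering step described here can be pictured as a simple quality gate over incoming articles. Everything in the sketch below, including the field names, the quality scores and the threshold, is a hypothetical placeholder for illustration, not MoNA's actual selection logic:

```python
# Reduce a large pool of outlets to a smaller high-quality stream by
# keeping only articles from sources that clear a quality threshold.
# Scores and threshold are invented placeholders.
def filter_sources(articles, quality_scores, threshold=0.8):
    """Keep articles whose source meets the quality threshold."""
    return [a for a in articles
            if quality_scores.get(a["source"], 0.0) >= threshold]

articles = [
    {"source": "outlet_a", "headline": "Court rules on landmark case"},
    {"source": "outlet_b", "headline": "Celebrity gossip roundup"},
]
quality_scores = {"outlet_a": 0.95, "outlet_b": 0.4}
print(filter_sources(articles, quality_scores))
```

A real pipeline at this scale would filter on many signals at once (language, duplication, outlet reputation), but the principle of winnowing tens of thousands of sources down to a trusted subset is the same.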
Tuesday, August 20, 2019