A couple of years ago, I worked with Marilena Hohmann and Karel Devriendt on a method to estimate ideological polarization on social media: the tendency of people to have more extreme opinions and to avoid contact with people holding a different opinion. Studying ideological polarization is interesting, but it misses a crucial piece of the puzzle: what happens when differing opinions – which may or may not be trying to avoid each other – collide? Are people actually having a debate and an exchange of ideas, or are they escalating to name-calling and generally toxic behavior?
Answering that question requires a method to estimate affective polarization, rather than merely ideological polarization. Once Marilena and I were done working with the latter, we rolled up our sleeves to work on the former. The result is the paper “Estimating affective polarization on a social network”, which appeared a few days ago in PLoS One.
The objective appears simple: to quantify what people with differing opinions do when they interact. Unpacking this objective requires some care, though. One could think that this is a simple correlation test: if people use more toxic language the more they disagree, then affective polarization is high.
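To make this naive approach concrete, here is a minimal sketch. All the data below is synthetic and purely illustrative — in practice, the opinion scores and the toxicity scores would come from the actual social media data, and the paper's actual estimation procedure is more involved than a single Pearson correlation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-interaction data: each row is one exchange between two users.
n = 500
opinion_a = rng.uniform(-1, 1, n)             # opinion of user A, in [-1, 1]
opinion_b = rng.uniform(-1, 1, n)             # opinion of user B, in [-1, 1]
disagreement = np.abs(opinion_a - opinion_b)  # how far apart the two users are

# Fake toxicity scores that loosely increase with disagreement, plus noise.
toxicity = 0.3 * disagreement + rng.normal(0, 0.1, n)

# The naive estimate: Pearson correlation between disagreement and toxicity.
r = np.corrcoef(disagreement, toxicity)[0, 1]
print(f"disagreement-toxicity correlation: {r:.2f}")
```

On this synthetic data the correlation comes out clearly positive, which the naive view would read as high affective polarization. The next paragraph explains why that reading can fail.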
Such an approach, however, ignores that people might hate each other so much that they refuse to communicate altogether, or that they are forcibly separated. An example is r/the_donald. For a time, it was one of the most active subreddits on Reddit, creating a strongly polarized environment. At some point, the Reddit admins decided to ban the subreddit altogether, which resulted in an exodus of users. In the data, one would see a decrease in affective polarization, because there was less toxicity. In reality, discourse had become so toxic it had to stop, which we argue is a sign of growing, not decreasing, affective polarization.

So we still need to track the network of interactions, just like we did for ideological polarization, because ideology and affect are intertwined. Marilena and I spent a lot of blood and tears trying to be smart about finding a solution, but in the end – as is often the case – the simple route was the best one. We decided to add the affective component to the ideological polarization measure we already had. The older measure captures the social separation, while the correlation between disagreement and toxicity captures the affective component.
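The idea of combining the two components can be sketched as follows. This is a toy combination I am writing for illustration, not the paper's actual formula: here I simply average a social separation score with the (clipped) disagreement–toxicity correlation, so that a heavily segregated network registers as polarized even when there are too few cross-faction interactions to observe any toxicity.

```python
import numpy as np

def affective_polarization(separation, disagreement, toxicity):
    """Toy combination of a social separation score (in [0, 1]) with the
    affective component. Purely illustrative; the paper's measure differs.
    """
    if len(disagreement) < 2:
        # Too few cross-cutting interactions to estimate the affective
        # component -- but separation alone still signals polarization.
        affect = 0.0
    else:
        # Clip negative correlations to zero: disagreement that *reduces*
        # toxicity is not affective polarization.
        affect = max(0.0, np.corrcoef(disagreement, toxicity)[0, 1])
    return 0.5 * separation + 0.5 * affect

# Factions talk a lot and disagreement tracks toxicity: both terms contribute.
print(affective_polarization(0.2,
                             np.array([0.1, 0.5, 0.9]),
                             np.array([0.0, 0.4, 0.8])))

# Factions barely talk (a single cross-group interaction): the correlation
# is undefined, but the high separation keeps the estimate high.
print(affective_polarization(0.9, np.array([0.1]), np.array([0.0])))
```

The key design point is that the separation term keeps the measure from dropping to zero when the factions stop talking, which is exactly the r/the_donald failure mode described above.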
Once we had such a measure, we ran a case study, analyzing the evolution of the social discourse about COVID-19 on former Twitter (RIP). We used data from February to July 2020, filtered using a set of keywords used in the early pandemic debate. Initially, the results were a bit confusing. While we did find a modest rise in affective polarization levels, it seemed that affective polarization was mostly a flat line.
This ran a bit counter to our expectations, but analyzing the social separation and the affective components separately told an insightful story (thanks reviewer #1 for prodding us in this direction, we owe you big time 🙂 ).
The clear pattern was that, in the first couple of weeks, there was low social segregation but a high affective component. After this initial shock, social segregation skyrocketed and by week 9 it plateaued, while the affective component went down.
This is consistent with a narrative of a new topic coming onto the scene. As the topic is new, no one knows where they stand exactly, so everyone tends to interact with everyone (low social segregation). However, feelings run high, both because of the emergency itself and – possibly – because of previous conflicts between the users, which leads to renewed toxicity. As people get used to the new scenario and clear factions emerge and stabilize, social segregation suddenly kicks in, and the factions stop talking, which also reduces the chances of using toxic language against the opposing side.
I think this exemplifies beautifully why the measure is useful. If we didn’t have a network measure, we would conclude that affective polarization was low after the first few weeks of the pandemic, because there was no correlation between disagreement and toxicity. Instead, affective polarization was still growing, and we failed to see the correlation because polarization was so high that people weren’t even talking to each other any more.
There’s more work to do, of course, because we only tested a tiny scenario. Marilena and I are working on the final piece of our polarization trilogy, where all these great tools we built are finally put to use. Stay tuned!



































