New publication – “False sense of security and a flurry of misplaced trust: the construction of trust in and by Facebook” by Balázs Bodó, Márton Bene and Zsolt Boda

Bodo, B., Bene, M., & Boda, Z. (2025). False sense of security and a flurry of misplaced trust: the construction of trust in and by Facebook. Information, Communication & Society, 1–20.

https://www.tandfonline.com/doi/full/10.1080/1369118X.2025.2482651

In the age of digital platforms, trust is no longer rooted exclusively in personal relationships or institutions; it is increasingly mediated by technology. Our study explores how trust in and through social media is constructed across Europe, focusing on Facebook, a platform widely used in many countries. It asks a timely and critical question: why do users continue to trust Facebook, its users, and the content they see there, despite growing awareness of the platform's risks? To answer this, we conducted a large-scale survey in 2022 across seven European countries: Estonia, France, Germany, Greece, Hungary, Portugal, and the Netherlands.

We defined three main dimensions of trust: trust in the Facebook platform itself, trust in its users, and trust in the information shared on the platform. We then examined how these forms of trust are influenced by users’ perceptions of risk and by three potential “pillars” of trust: (1) confidence in their own ability to recognize and avoid harm, (2) belief in Facebook’s ability to protect them through self-regulation, and (3) belief in the state’s role in regulating platforms.

The findings paint a nuanced picture. First, users who perceive high risk are less likely to trust Facebook as a platform, but this distrust does not extend to other users or to the content they encounter: people tend to blame the company while still believing in what they see and whom they interact with. Second, the strongest factor shaping trust is the perceived effectiveness of Facebook's self-regulation. When users think the platform actively protects them from harm, whether through moderation or algorithmic control, their trust increases significantly.

A secondary but still relevant factor is users' self-confidence. Those who believe they can manage risks themselves, by recognizing and avoiding manipulation, also report higher trust, although this sense of control has a weaker effect than confidence in Facebook's own efforts. Interestingly, awareness of risk alone does not increase trust; action and perceived ability matter more than simple vigilance.

Contrary to expectations, belief in government regulation does not significantly affect trust. Users do not seem to associate platform safety with external oversight, suggesting that regulation is largely invisible in everyday platform experiences.

One of the study's most important conclusions is that these sources of trust do not substitute for one another. Rather than one pillar compensating for the weakness of another, they reinforce each other: trust is strongest when users have confidence in both Facebook and themselves, and it collapses when either pillar is missing.
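One way to make this reinforcing logic concrete is as a positive interaction term in a regression-style model. The sketch below is purely illustrative and not the paper's reported specification; here S stands for users' self-confidence in managing risk and F for the perceived effectiveness of Facebook's self-regulation:

Trust = β₀ + β₁·S + β₂·F + β₃·(S × F), with β₃ > 0

A positive interaction coefficient β₃ means the pillars amplify each other: predicted trust is highest when S and F are both high, while a low value on either pulls the whole expression down, matching the pattern in which trust collapses once a pillar is missing.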

Ultimately, the research suggests that Facebook users may be operating under a false sense of security: they trust a platform with a questionable track record and overestimate their own ability to stay safe. This dynamic rests digital trust on a potentially fragile foundation and underscores the urgent need for more transparent and effective governance of online spaces.