By Dr. Debra Zahay, Dr. Janna M. Parker, and Dr. Kevin W. James

The research was conducted independently of several funding organizations in the spirit of academic inquiry.

Abstract

This research addresses two focal questions: 1) how do consumers perceive the power that social media platforms exercise when banning specific legal business models, and 2) do consumers consider such bans fair? The research design operates at the consumer level, using an online experiment to understand consumer opinions regarding the fairness of a ban based on either the company type or the decision process behind the ban. Results of this exploratory study suggest consumers are most concerned that the processes and procedures in any ban are fair, rather than with the type of business model (direct selling vs. traditional retailing). This research suggests avenues for public policy determinations regarding the extent to which social media platforms should be allowed to censor companies and business models at will.

Literature Review

Motivation

This research explores consumer sentiment regarding the banning of specific business models from social media platforms. The study’s initial motivation emerged when TikTok became the first social media platform to ban multi-level marketing companies (MLMs), hereafter referred to as direct sellers, from making social media content (Social Selling News, 2021). Not only was direct selling initially misclassified as a Ponzi-style pyramid, but TikTok’s community guidelines also prohibited independent distributors from using their accounts to sell products and prohibited direct selling companies from using their business accounts to promote products. TikTok claimed it wanted to protect users from fraud and scams (Tiffany, 2020). However, at the same time, platforms allowed content such as live streaming and paid advertising from psychics and tarot card readers, which many consider fraudulent business forms (Mughal, 2021).

Direct selling is still misclassified as a fraudulent practice in the TikTok community guidelines because it has a hierarchical organizational structure; recruiting salespersons for direct selling companies is therefore prohibited on the platform. The fraud and scams section of the community guidelines states that “recruiting for companies that sell products or services in a pyramid structure through independent distributors (multi-level marketing or MLM)” is not allowed (TikTok, 2024). While the guidelines, revised in May 2024, are narrower than the 2020/2021 guidelines, which banned all direct selling content, and take a more nuanced approach focused on recruitment, they still demonstrate a misunderstanding of how the direct selling business model operates. This example also illustrates the power of social media platforms to change their policies at will.

How Social Media Platforms ‘Police’ the Internet

While social media usage has benefits, it has also provided an online outlet for the emergence of consumer misbehaviors toward individuals and brands, such as trolling (Golf-Papez & Veer, 2022), collaborative brand attacks (Rauschnabel et al., 2016), and ‘canceling’ (Parker, James, & Zahay, 2024). In this complex environment, social media platforms themselves often engage in controversial behavior by banning accounts and individuals. Over the past few years, some social media platforms have effectively become ‘custodians’ of the internet, policing content without supervision (Gillespie, 2018). Justification for assuming this role is based on Section 230 of the Communications Decency Act (CDA), which gives these platforms a broad mandate to oversee content, as it protects internet firms from civil liability for their activities in restricting content (West, 2017).

Method

Exploring Consumer Perceptions of Platform Power

A two-stage process was used to investigate the research questions listed above. An initial qualitative study comprising 31 in-depth interviews of undergraduate and graduate business students indicated that consumers believe the social media platforms’ ability to ban companies and business models has the potential to suppress entrepreneurship and, therefore, stunt economic growth. Next, Prolific was used to recruit subjects, who completed a survey on Qualtrics. A between-subjects randomized experiment with a 2 × 2 design was used: 2 (company type: a national retailer launching a direct selling product line vs. a direct selling company launching a new product line) × 2 (ban type: a ban based on posted platform community guidelines vs. a ban at employee discretion).

Subjects were randomly assigned to one of the following scenarios, which described fictional companies banned from a fictional social media site after a user complained to the platform; each scenario is explained in Table 1. Each scenario began with the following statement:

“Please read the following scenario carefully. You will be asked a series of questions that will require you to think of the fictional direct selling company and the fictional social media platform in the scenario when answering the questions that follow.” Each scenario contained a retailer type and a ban type. Finally, a summary paragraph was included at the end of each scenario.

Subjects were almost all daily social media users, with the majority using social media more than once a day, as illustrated in Table 2 below.

In each scenario, the entity is portrayed as using a legal MLM business model for the product. To ensure that subjects read and understood the scenario presented, after reading the scenario and moving to the next page, subjects answered a manipulation-check question for their condition in which they identified the type of business and the type of ban. Additionally, to ensure that subjects read the survey items, two attention-check items requiring them to check a specific answer were randomly inserted into the survey. Finally, a minimum completion time, based on the average completion time in a pretest (n = 67), was established. Twenty-four subjects who answered at least one attention-check item incorrectly, missed a manipulation check, or finished in less than this minimum time were eliminated from the sample. The final sample size is N = 135 (49.6% male, 54.8% never married, 42.2% with at least a four-year degree, 37.7% earning less than $50,000 a year, and 61.5% between ages 18 and 34).

After reading the scenario, subjects were asked to think of the fictional company and fictional social media platform as they answered survey questions. Using a seven-point Likert scale ranging from “very unfair” to “very fair,” social media account termination fairness was measured with an adapted version of Konovsky and Cropanzano’s (1991) scale, which comprises two dimensions focusing on distributive justice (α = .90) and procedural justice (α = .96). An example item for procedural justice is “The account ban process used by the social media platform was fair,” and an example item for distributive justice is “The company’s social media account termination by the social media platform was fair.” Ban reaction (α = .96), a new scale developed from the qualitative interviews, initially had nine items, but an exploratory factor analysis (EFA) resulted in dropping two items. Subjects used a five-point Likert scale ranging from “not very likely” to “very likely” to rate their reaction to discovering a company had been banned from a social media platform. Example items from this scale are “Likely to decide not to do business with the company” and “Likely to be suspicious of the company.” Similarly, four items for the new platform power (α = .78) scale were reduced to three due to cross-loadings and low loadings (Hair et al., 2010). An example item for platform power is “The social media platform has all the power in their favor,” rated on a five-point Likert scale ranging from “strongly disagree” to “strongly agree.”
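The internal-consistency reliabilities reported above (e.g., α = .96 for ban reaction) can be computed for any subject-by-item score matrix. A minimal sketch in Python follows; the `scores` matrix is illustrative data invented for the example, not the study's actual responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point responses for a three-item scale (illustrative only)
scores = np.array([
    [5, 6, 5],
    [2, 3, 2],
    [7, 7, 6],
    [4, 4, 5],
    [1, 2, 1],
], dtype=float)
print(round(cronbach_alpha(scores), 2))
```

Highly inter-correlated items push alpha toward 1; the conventional ≥ .70 threshold (Hair et al., 2010) is what the reported scales exceed.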

Results

MANCOVA (multivariate analysis of covariance) in SPSS 29.0 was used to analyze the results, which are presented in Table 3 below. Box’s test of equality of covariance matrices and Levene’s test of equality of error variances met the requirement of being non-significant (Hair et al., 2010).
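For readers without SPSS, the between-subjects logic of such an analysis can be sketched for a single dependent variable. The following is a minimal univariate sum-of-squares decomposition (not the full multivariate MANCOVA) for a balanced 2 × 2 design; the ratings are simulated, not the study's actual data.

```python
import numpy as np

def two_way_ss(y: np.ndarray) -> dict:
    """Sum-of-squares decomposition for a balanced 2x2 between-subjects design.
    y has shape (2, 2, n): levels of factor A (e.g., ban type) by
    levels of factor B (e.g., company type) by n replicates per cell."""
    grand = y.mean()
    n = y.shape[2]
    a_means = y.mean(axis=(1, 2))       # marginal means of factor A
    b_means = y.mean(axis=(0, 2))       # marginal means of factor B
    cell = y.mean(axis=2)               # cell means
    ss_a = 2 * n * ((a_means - grand) ** 2).sum()
    ss_b = 2 * n * ((b_means - grand) ** 2).sum()
    ss_cells = n * ((cell - grand) ** 2).sum()
    ss_ab = ss_cells - ss_a - ss_b      # interaction
    ss_within = ((y - cell[:, :, None]) ** 2).sum()
    ss_total = ((y - grand) ** 2).sum()
    return dict(A=ss_a, B=ss_b, AB=ss_ab, within=ss_within, total=ss_total)

# Hypothetical 7-point fairness ratings, 3 subjects per cell (illustrative only)
rng = np.random.default_rng(0)
y = rng.normal(loc=[[5.1, 5.0], [4.4, 4.3]], scale=0.5,
               size=(3, 2, 2)).transpose(1, 2, 0)
ss = two_way_ss(y)
```

Each main-effect F statistic is then the effect's mean square over the within-cells mean square, e.g. `ss['A'] / (ss['within'] / (y.size - 4))` with 1 numerator degree of freedom.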

Results indicate that subjects were concerned with the fairness of the process (procedural justice) but not the outcome (distributive justice) in banning business models. Ban type as a main effect yielded a significant effect on procedural justice. Specifically, the procedural justice mean is higher when the ban was for violating community guidelines than when the ban was at an employee’s discretion (M Community Guidelines = 5.07, S.D. = 1.58; M Employee Discretion = 4.37, S.D. = 1.60). The company type manipulation did not affect procedural or distributive justice. An examination of the differences in means for each item indicated that respondents were more comfortable with a ban imposed because community guidelines were violated, but less comfortable with a ban made at a platform employee’s discretion.

Ban type and company type were both significant main effects when platform power is the dependent construct. Specifically, the effect of ban type on platform power was significant, and the means indicate the perception that the platform exerted its power over the company more in the employee discretion scenario than in the violating community guidelines scenario (Platform Power M Employee Discretion = 3.25, S.D. = .71; Platform Power M Violating Guidelines = 2.83, S.D. = .85).

The company type main effect on platform power was also significant. Specifically, when the company is a direct seller, subjects believe platform power was exercised more than when the company is a retailer (Platform Power M Direct Seller = 3.23, S.D. = .70; Platform Power M Retailer = 2.87, S.D. = .86).

In addition, bans made at employee discretion were particularly viewed as an abuse of platform power. However, the way the company makes money, through direct selling or retailing, was less important to consumers than organizational justice, both distributive and procedural.

Public Policy and Managerial Implications of Platform Power

While the field of social media has seen numerous academic studies, there has been a noticeable gap in research into the actual power that social media platforms wield and consumer attitudes toward these platforms. The results of this exploratory study indicate that consumers perceive social media platforms to have excessive control over business bans, a sentiment that prevails regardless of the type of company. Respondents’ primary concerns when evaluating their attitude toward a specific company ban revolve around distributive justice (the fairness of the outcome) and procedural justice (the fairness of the process used to ban business models).

Since social media platforms are the main source of news for many consumers, trust is a key issue in social media. Because consumers are most concerned that the processes and procedures in any ban are fair, making these processes and procedures transparent can improve consumer trust. Detailed explanations of platform actions, independent oversight and audits, a robust and effective appeals process, and user empowerment, in terms of user control over content and a role in decentralized platform governance, could enhance positive perceptions of procedural justice in these cases.

For practitioners in any industry, but in the direct selling channel specifically, these results are good news. While social media platforms (TikTok in particular) have been eager to ban or censor their content, consumers care less about the type of company being banned than about the overall fairness and justice exercised by the social media platform. Platforms should be aware that their actions on behalf of consumers are not always appreciated. Businesses that find themselves on the wrong side of the platform policing issue should determine the source of the ban (employee discretion versus community guidelines) and consider whether they might use consumer sentiment in favor of procedural justice to defend their business presence on social media.

If platform actions have negatively impacted a brand, standard crisis management procedures, such as acknowledging what has happened and taking corrective action, could be helpful. If the platform’s decisions do not appear to follow the guidelines of procedural and process fairness, it might be useful to appeal both to the platform and to the brand’s followers directly. While issues should be raised with the platform, these decisions often take a long time to reverse. Appealing directly to customer communities, with testimonials about the unfairness of the action, might do more toward rebuilding trust than appealing to the platform itself. Key findings of this research are summarized in Figure 1.

These findings strongly suggest a potential need for public policy interventions in social media platform governance, focusing not only on more specific guidelines but also on procedural justice, or fairness, concerns in banning a legal business model. However, public policy alone cannot ensure fair marketplace access. Platform self-regulation must also play a role, as seen recently in the actions of several social media platforms, most notably Facebook (Meta), which admitted that it engaged in censorship and bowed to government pressure on key issues. Since this research was conducted, the platform has, of its own accord, reduced the role of fact-checkers and relaxed content moderation policies (Rosenberg, 2025). These actions have yielded cautious optimism that platforms will respond favorably to consumer and business concerns. This study also highlights the importance of trade organizations such as the Direct Selling Association (DSA), whose efforts to change TikTok’s policy on direct selling organizations and their representatives have been successful, at least for now.

Finally, these findings have significant implications for future academic research. The study suggests a promising opportunity for a comprehensive conceptualization and examination of social media platform power, as well as social media platform activism, akin to the extensive studies on consumer activism. The findings also suggest opportunities for further research into the public policy implications of giving social media platforms such broad powers. More research funding would be welcome in this area, whether provided by public or private sources.

References

  • Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
  • Golf-Papez, M., & Veer, E. (2022). Feeding the trolling: Understanding and mitigating online trolling behavior as an unintended consequence. Journal of Interactive Marketing, 57(1), 90–114. https://doi.org/10.1177/10949968221075315
  • Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis. Prentice Hall.
  • Konovsky, M. A., & Cropanzano, R. (1991). Perceived fairness of employee drug testing as a predictor of employee attitudes and job performance. Journal of Applied Psychology, 76(5), 698–707.
  • Mughal, A. (2021, October 20). The improbable appeal of TikTok Tarot: You’d think that having a reading delivered via machine algorithm would make it feel less useful or relevant. You’d think wrong. Wired. https://www.wired.com/story/the-improbable-appeal-of-tiktok-tarot/
  • Parker, J. M., James, K. W., & Zahay, D. (2024). Cancel culture for human brands and firms: Punishment versus accountability. In A. Close Scheinbaum (Ed.), Corporate cancel culture and brand boycotts: The dark side of social media for brands. Routledge/Psychology Press.
  • Rauschnabel, P. A., Kammerlander, N., & Ivens, B. S. (2016). Collaborative brand attacks in social media: Exploring the antecedents, characteristics, and consequences of a new form of brand crises. Journal of Marketing Theory and Practice, 24(4), 381–410. https://doi.org/10.1080/10696679.2016.1205452
  • Rosenberg, S. (2025, January 10). Zuckerberg and Meta say good riddance to fact-checking. Axios. https://www.axios.com/2025/01/10/mark-zuckerberg-joe-rogan-facebook-censorship-biden
  • Social Selling News. (2021, February). TikTok imposes ban on direct selling content.
  • Tiffany, K. (2020, December 17). The internet is starting to turn on MLM. The Atlantic. https://www.theatlantic.com/technology/archive/2020/12/tiktok-bans-multilevel-marketing-mlm/617422/
  • West, S. (2017). Raging against the machine: Network gatekeeping and collective action on social media platforms. Media and Communication, 5(3). https://doi.org/10.17645/mac.v5i3.989