Facebook faced a great deal of criticism in September, following revelations published in the Wall Street Journal. One revelation was that Facebook had built a program, called XCheck, that routed public figures around its standard content moderation process so they could be handled differently, largely to avoid PR problems. The program had metastasized to more than five million users, with little oversight, allowing some very prominent figures to engage in harassment and misinformation that Facebook’s moderation mechanisms might otherwise have removed. The Washington Post asked Robyn Caplan and me to discuss this revelation in light of research we had conducted on YouTube’s policies for its paid creators. There too, the platform engaged in what we called “tiered governance,” quietly treating different categories of users in different ways. We argued that, while this is not inherently a bad practice, and could even be a responsible approach if handled correctly, its obvious pitfall is failing to disclose it, which can leave users confused, frustrated, and suspicious about how and why they are being treated differently.