If you've followed me for a while, you know I was hyped about composable moderation when it first launched. Providing a community with a means to moderate itself is essential in a decentralized social space, serving the dual goals of keeping the experience pleasant and upholding freedom of speech.
I haven't spoken about it much lately due to life, work, more life, more work, and other things, but with the broader ATmosphere rapidly gaining traction I feel like it's time to bring it up again.
"It" is, of course, my belief that Bluesky's implementation of labelers so far has not lived up to its original expectation as the enabler of a competitive "marketplace of moderation labelers". In fact, if anything I believe it has encouraged the centralization of moderation in the app. Admittedly I have no data to support this, only vibes and my own experience with the reporting workflow.
My long-standing beef with the reporting flow is specifically the requirement to select a labeler. Imagine if Instagram asked users to select the name of a moderation team member when reporting an inappropriate post. Users don't care who handles the report, nor should they. They just want to see action taken.
By asking users to select a labeler, I believe Bluesky is, perhaps unintentionally, driving them to select Bluesky's own moderation service because that's what users have been trained to expect from traditional social media outlets.
This bias, combined with the natural result that only one labeler gets the report, undercuts the purpose and promise of the marketplace that would give users a wide array of choices in how their feeds are moderated.
How can this be fixed? I'm glad you asked.
I think the key to truly independent moderation lies in changing how labelers register themselves and how reports are distributed. Specifically, labelers should include which topics they want to handle. Then when a user reports a post or account, all the labelers interested in that topic are notified. Each can then determine how to best handle the report for their subscriber base. A strict labeler advertised as suitable for children and prudes might automatically hide all reported posts until they can be manually reviewed, whereas a more laid-back labeler might take minimal or no action unless the reported violation is severe.
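As a rough sketch of the fan-out described above, the dispatch logic might look like the following. This is purely illustrative: the names (`Topic`, `Report`, `Labeler`, `dispatchReport`) are made up for this post and are not real AT Protocol APIs.

```typescript
// Hypothetical sketch: topic-based report fan-out to labelers.
// None of these types exist in the AT Protocol today; they illustrate
// the proposal that labelers register topics and reports are broadcast.

type Topic = "spam" | "harassment" | "nsfw" | "misinfo";

interface Report {
  subject: string; // at:// URI of the reported post or account
  topic: Topic;
  reason: string;
}

interface Labeler {
  did: string; // the labeler's DID
  topics: Set<Topic>; // topics it registered interest in
  handleReport(report: Report): void; // its own policy: hide, review, ignore...
}

// Notify every labeler that registered interest in the report's topic,
// instead of routing the report to a single user-selected labeler.
function dispatchReport(report: Report, labelers: Labeler[]): Labeler[] {
  const interested = labelers.filter((l) => l.topics.has(report.topic));
  for (const l of interested) {
    l.handleReport(report); // each applies its own policy for its subscribers
  }
  return interested;
}
```

Under this scheme, the strict "suitable for children" labeler and the laid-back one would both implement `handleReport`, each deciding independently what label (if any) to emit for its own subscriber base.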
Implementing these changes to the reporting workflow would accomplish several worthy goals:

- It removes the confusing need to choose a labeler when reporting a post or account.
- It removes the inherent bias towards selecting the official moderation labeler.
- Most critically, it would foster the competitive marketplace of labelers that could provide a wide array of custom alternatives to suit the varying tastes of each of Bluesky's 35+ million members.
I hope this is something the team will be able to find time to address in the coming year as they continue to balance spinning plates on every available appendage.