Stack Overflow, Inc. has decreed a near-total prohibition on moderating AI-generated content in the wake of a flood of such content being posted to, and subsequently removed from, the Stack Exchange network. This tacitly allows incorrect information ("hallucinations") and unfettered plagiarism to proliferate on the network, posing a major threat to the integrity and trustworthiness of the platform and its content.
We, the undersigned, are volunteer moderators and curators of Stack Overflow and the Stack Exchange network. Effective immediately, we are enacting a general moderation strike on Stack Overflow and the Stack Exchange network, in protest of this and other recent and upcoming changes to policy and the platform that are being forced upon us by Stack Overflow, Inc.
Our efforts to effect change through proper channels have been ignored, and our concerns disregarded at every turn. Now, as a last resort, we are striking out of dedication to the platform that we have put over a decade of care and volunteer effort into. We deeply believe in the core mission of the Stack Exchange network: to provide a repository of high-quality information in the form of questions and answers, and the recent actions taken by Stack Overflow, Inc. are directly harmful to that goal.
Specifically, moderators are no longer allowed to remove AI-generated answers on the basis of being AI-generated, outside of exceedingly narrow circumstances. This results in effectively permitting nearly all AI-generated answers to be freely posted, regardless of established community consensus on such content.
In turn, this allows incorrect information (colloquially referred to as "hallucinations") and plagiarism to proliferate unchecked on the platform. This destroys trust in the platform, as Stack Overflow, Inc. has previously noted.
In addition, these policies disregard the leeway historically granted to individual Stack Exchange communities to determine their policies, by making changes without the input of the community, overriding community consensus, and outright refusing to reconsider their position.
Until this matter is resolved satisfactorily, we will be pausing activities including, but not limited to:
- Raising and handling flags.
- Running SmokeDetector, the anti-spam bot.
- Closing or voting to close posts.
- Deleting or voting to delete posts.
- Reviewing tasks in the various review queues.
- Running various other bots designed to assist in moderation, such as detecting plagiarism, low-quality answers, and rude comments.
Until Stack Overflow, Inc. retracts this policy change to a degree that addresses the concerns of the moderators, and allows moderators to effectively enforce established policies against AI-generated answers, we are calling for a general moderation strike, as a last-resort effort to protect the Stack Exchange platform and users from a total loss in value. We would also like to remind Stack Overflow, Inc. that a network that entirely relies on volunteers for its moderation model cannot then consistently ignore, mistreat, and malign those same volunteers.
Signed,
I think this needs to be clarified a little. My interpretation is that we cannot delete content merely because it is AI-generated, nor suspend a user solely for posting AI-generated content. We can, however, still act on accompanying behavior: for example, if a user makes many posts in a short period of time and bumps up old questions, we can deal with that behavior.
However, it is probably also important to mention that the rules in place make it nearly impossible to address potential plagiarism arising from copied generated text. There are specific mod messages for plagiarism, but since the text was generated, and the same text is unlikely ever to be generated again, plagiarism is not feasible to prove.
I thought this, too; however, it is now unclear. The company appears to interpret the requirements to announce changes 60 days in advance and to offer 30 days for discussion and review as applying only to explicit changes to the text of the moderator agreement itself. Policies are included by reference in section iv. Personally, I interpret the agreement as covering changes to everything included by reference: the Code of Conduct, the Terms of Service, the Privacy Policy, and "all other officially announced moderator and user policies". The company appears to exclude material included by reference and to consider only changes to the legal text of the agreement itself. It's not clear who is actually correct.
Is it worthwhile to talk about harms? I want to make sure this remains something that everyone can sign onto, but it could be worth expressing our belief that being hamstrung in how we deal with a class of content is likely to do harm to our communities and to the people who trust and rely on the content on Stack Exchange sites.