In my free time, I help run a small Mastodon server for roughly six hundred queer leatherfolk. When a new member signs up, we require them to write a short application—just a sentence or two. There’s a small text box in the signup form which says:
Please tell us a bit about yourself and your connection to queer leather/kink/BDSM. What kind of play or gear gets you going?
This serves a few purposes. First, it maintains community focus. Before this question, we were flooded with signups from straight, vanilla people who wandered into the bar (so to speak), and that made things a little awkward. Second, the application establishes a baseline for people willing and able to read text. This helps in getting people to follow server policy and talk to moderators when needed. Finally, it is remarkably effective at keeping out spammers. In almost six years of operation, we’ve had only a handful of spam accounts.
I was talking about this with Erin Kissane last year, as she and Darius Kazemi conducted research for their report on Fediverse governance. We shared a fear that Large Language Models (LLMs) would lower the cost of sophisticated, automated spam and harassment campaigns against small servers like ours in ways we simply couldn’t defend against.
Besides finding better ways to positively recognize bots, we also need quicker ways to realize “false alarm, this user is actually legit”.
For example, users should have the option to pin posts and comments to their profile, and I suggest providing at least two separate ‘tabs’ for this on the public profile: one tab for the usual “posts and comments you would like the world to see”, and another for “some recent, complex interactions between you and other (established) users that, in your eyes, prove quite well you’re not a bot”. The purpose is simply to save others, worried that you might be a bot, the time of combing through your posts in search of signs of humanity. Yes, this can be gamed to some degree (what can’t?). But at a technical level, the feature is little more than a copy of the “pin” feature, which would be nice to have anyway, so we get an appreciable improvement in our ability to tell users from bots for very little programming effort.
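To illustrate just how small the change is, a pinned item only needs one extra field: a label saying which tab it belongs to. The sketch below is purely hypothetical — `PinTab`, `PinnedItem`, and `profile_tabs` are invented names for illustration, not part of Mastodon’s actual data model or API:

```python
# Minimal sketch, assuming a pin is (item id, tab label). All names are
# hypothetical; a real implementation would hang off Mastodon's existing
# status-pinning tables.
from dataclasses import dataclass
from enum import Enum


class PinTab(Enum):
    SHOWCASE = "showcase"  # posts and comments you'd like the world to see
    HUMANITY = "humanity"  # interactions meant to show you're not a bot


@dataclass
class PinnedItem:
    item_id: str
    tab: PinTab


def profile_tabs(pins: list[PinnedItem]) -> dict[PinTab, list[str]]:
    """Group a user's pins into the public profile tabs."""
    tabs: dict[PinTab, list[str]] = {t: [] for t in PinTab}
    for pin in pins:
        tabs[pin.tab].append(pin.item_id)
    return tabs
```

The only difference from an ordinary pin list is the `tab` field and the grouping step, which is the point of the proposal: almost no new machinery.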
Except that spammers can curate that set of posts and comments just as legitimate users can.
Like I said: it can be gamed to some degree, but what system can’t?