Given how Reddit now makes money by selling its data to AI companies, I was wondering what the situation is for the fediverse. Typically you can block AI crawlers using robots.txt (The Verge reported on it recently: https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders). But this only works per domain/server, and the fediverse is all about many different servers interacting with each other.
So if my kbin/lemmy or Mastodon server blocks OpenAI's crawler via robots.txt, what does that even mean when people on other servers that don't block the crawler boost my posts on Mastodon, or when I reply to theirs? I suspect that unless every server I interact with blocks the same AI crawlers, I cannot prevent my posts from being used as AI training data.
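For context, a per-instance block looks something like this. It would live at the web root of a single instance (the domain `example.social` here is a made-up placeholder); `GPTBot` is OpenAI's documented crawler token and `CCBot` is Common Crawl's:

```
# robots.txt served at https://example.social/robots.txt (hypothetical instance)
# Ask OpenAI's crawler not to index anything on this domain
User-agent: GPTBot
Disallow: /

# Same for Common Crawl, whose dumps are widely used for AI training
User-agent: CCBot
Disallow: /
```

The catch, as described above, is that this file only governs crawling of that one domain. Federated copies of the same posts on other instances are served under those instances' own robots.txt files, so the block doesn't follow the content around.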
It is unfortunate, but we are giving our data away freely, just as we did on Spezzit. IMHO it would be great to block efforts to monetize Lemmy for AI, but that is not what we signed up for.
Lemmy is neither private nor closed. That's just the way it works.
Contributing to an open forum means the data will get harvested. If it were closed, there would be fewer views; open is what we have now.
Companies will train on what we post, but at least we are not handing it (directly) to a centralized service. To me that compromise is enough.