Interested in the intersections between policy, law and technology. Programmer, lawyer, civil servant, orthodox Marxist. Blind.



  • 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 5th, 2023

  • I do not think it is a very good analogy. I do not see how this would turn into a broadcast medium. Though I do agree it can feel less accessible and there is a risk of building echo chambers.

    Not so concerned about that–people being able to set their own tolerances for whom they want to talk to is fine with me. But if the system moves towards allowlists, it becomes more cliquish and finding a way in becomes more difficult. It would tend towards centralisation simply because of the popularity of certain posters/instances and how scale-free networks behave when they’re not managed otherwise.

    It’s most likely a death sentence for one-person instances, which is not ideal. On the other hand, I’ve seen people running their own instance give up on the idea when they realized how little control they have over what gets replicated to their instance and how much work is required to moderate replies and the like. In short, the tooling is not quite there.

    I run my instance and that’s definitely not my experience. Which is of course not to say it can’t be someone else’s. But something, in my opinion not unimportant, is lost when it becomes harder to find a way in.


  • I’m concerned that people are already eager to bury the fediverse and unwilling to consider what would be lost. The solutions I keep hearing in this space all seem to hinge on making the place less equal, more of a broadcast medium, and less accessible to unconnected individuals and small groups.

    How does an instance get into one of these archipelagos if they use allowlists?

    Same thing with reply policies. I can see the reason why people want them, but a major advantage on the fedi is the sense that there is little difference between posters. I think a lot of this would just recreate structures of power and influence, just without doing so formally–after all the nature of scale-free networks is large inequality.





  • What do I think the history is? A record of the sites I visited.

    What do I think the history isn’t? A correlated record of which advertisements I’ve been exposed to, and which conversions I’ve made, that gets sent to people who are not me.

    Pretty relevant distinction. It’s one thing for me to track myself; it’s quite another for that tracking to be sent to others, however purportedly trustworthy they are.


  • I’d like people to STOP PRETENDING that the only plausible reason why someone doesn’t agree with this is that we don’t understand it. Yes, I understand what this does. The browser tracks which advertisements have been visited, the advertiser indicates to the browser when a conversion action happens, and the browser sends this information to a third-party aggregator which uses differential techniques to make it infeasible to deanonymise specific users. Do I get a pass?

    Yes, this is actively collaborating with advertising. It is, in the words of Mozilla, useful to advertisers. It involves going down a level from being tracked by remote sites to being tracked by my own browser, running on my own machine. Setting aside the issues of institutional design and the possibility for data leaks, it’s still helping people whose business is to convince me to do things against my interest, to do so more effectively.
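    To make the mechanism I described above concrete: a minimal sketch of the aggregation step, under my own assumptions. Mozilla’s actual design hands encrypted reports to an aggregation service rather than computing a plain noisy sum, so this is only an illustration of the differential-privacy idea, not their implementation; the function names are mine.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sample from a Laplace(0, scale) distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def aggregate_conversions(reports, epsilon, rng):
    # Sum per-browser 0/1 conversion reports, then add Laplace noise.
    # A counting query has sensitivity 1, so scale = 1/epsilon gives the
    # released total epsilon-differential privacy: no single browser's
    # report can be confidently inferred from the noisy aggregate.
    return sum(reports) + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)  # seeded for reproducibility
reports = [1, 0, 1, 1, 0, 0, 0, 1, 0, 1]  # 5 conversions among 10 browsers
noisy_total = aggregate_conversions(reports, epsilon=1.0, rng=rng)
```

    The advertiser only ever sees `noisy_total`, never the individual reports–which is exactly why it still works for them in aggregate, and exactly my objection.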



  • I don’t have a complete solution, but I have a vector, and this points in the opposite direction: it is, according to its own claims, useful to advertisers.

    The solution passes through many things, but it probably has to start with changing the perception of advertising from a necessary nuisance into a needless, avoidable, and unacceptable evil. Collaboration does not help in this regard. Individual actions, such as blocking advertising, refusing any tracking from sites, deploying masking tools, using archives and mirrors to get content, and consciously boycotting any product that manages to escape the filtering, are good but insufficient.


  • Whatever opinion you may have of advertising as an economic model, it’s a powerful industry that’s not going to pack up and go away.

    Fuck that. Not if we don’t make it. That’s precisely the point. Do not comply. Do not submit. Never. Advertising is contrary to the interests of humanity. You’re never going to convince me becoming a collaborator for a hypothetically less pernicious form is the right course of action. Never. No quarter.

    We’ve been collaborating with Meta on this,

    That makes it even worse.

    any successful mechanism will need to be actually useful to advertisers,

    And therefore inimical to humanity in general and users in particular.

    Digital advertising is not going away,

    Not with that attitude.

    but the surveillance parts could actually go away

    Aggregate surveillance is still surveillance. It is still intrusive, it still leverages aggregate human behaviour in order to harm humans by convincing them to do things against their own interest and in the interest of the advertiser.

    This is supposedly an experiment. You’ve decided to run an experiment on users without consent. And you still think this is the right thing–since you claim the default is the correct behaviour.

    I cannot trust this.





  • For me the weirdest part of the interview is where he says he doesn’t want to follow anyone, that he wants the algorithm to just pick up on his interests. It’s so diametrically opposed to how I want to intentionally use social networks and how the fedi tends to work that it’s sometimes hard to remember there are people who take that view.






  • Worth considering that this is already the law in the EU. Specifically, the Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market has exceptions for text and data mining.

    Article 3 has a very broad exception for scientific research: “Member States shall provide for an exception to the rights provided for in Article 5(a) and Article 7(1) of Directive 96/9/EC, Article 2 of Directive 2001/29/EC, and Article 15(1) of this Directive for reproductions and extractions made by research organisations and cultural heritage institutions in order to carry out, for the purposes of scientific research, text and data mining of works or other subject matter to which they have lawful access.” There is no opt-out clause to this.

    Article 4 has a narrower exception for text and data mining in general: “Member States shall provide for an exception or limitation to the rights provided for in Article 5(a) and Article 7(1) of Directive 96/9/EC, Article 2 of Directive 2001/29/EC, Article 4(1)(a) and (b) of Directive 2009/24/EC and Article 15(1) of this Directive for reproductions and extractions of lawfully accessible works and other subject matter for the purposes of text and data mining.” This one’s narrower because it also provides that, “The exception or limitation provided for in paragraph 1 shall apply on condition that the use of works and other subject matter referred to in that paragraph has not been expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online.”

    So, effectively, this means scientific research can mine data freely, with no opt-out available to rightsholders, while other uses, such as commercial applications, can mine data only where the rights have not been reserved through machine-readable means.
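    For illustration, one way the Article 4 “machine-readable means” can be expressed in practice is the W3C TDM Reservation Protocol (TDMRep), which signals a reservation through a `tdm-reservation` HTTP header or HTML meta tag with the value “1”. The helper below is a hypothetical sketch of how a crawler might honour that signal, not any particular crawler’s code; the function name and dict-based inputs are my own.

```python
def tdm_reserved(http_headers, html_meta):
    # Hypothetical check for an Article 4 machine-readable opt-out,
    # following the TDMRep convention: "tdm-reservation" set to "1"
    # (in a response header or a meta tag) means TDM rights are reserved.
    if http_headers.get("tdm-reservation") == "1":
        return True
    return html_meta.get("tdm-reservation") == "1"

# A site reserving TDM rights via an HTTP response header:
reserved = tdm_reserved({"tdm-reservation": "1"}, {})
# A site with no reservation signal at all (mining stays permitted
# under Article 4, assuming lawful access):
unreserved = tdm_reserved({}, {})
```

    Note this only matters for the Article 4 exception: under Article 3, a research organisation could ignore the signal entirely.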


  • Clearly this particular suit by this particular person is iffy. However, I don’t think this framing is very good: the fact Wikimedia is headquartered elsewhere shouldn’t make it immune from being sued where an affected party lives.

    Also, this part of the article seems a bit contradictory:

    Just because someone doesn’t like what’s written about them doesn’t give them the right to unmask contributors. And if the plaintiff still believes he’s been wronged by these contributors, he can definitely sue them personally for libel (or whatever). What he has no right to demand is that a third party unmask users simply because it’s the easiest target to hit.

    Ok, but how does he sue them personally without knowing who they are? It’s fine to say this shouldn’t be regarded as libel (I agree; it’s a factual point and should be covered by exceptio veritatis or whatever), but I think it’s a bit dishonest to say “you can’t hit Wikimedia, go after the individual users” while also insisting Wikimedia shouldn’t be forced to reveal who they are.

    It would be much better if the court considered this information to be accurate and in the public interest.

    Of course the GDPR cuts two ways here, because political information is an especially protected category, with certain exceptions (notorious information). So I’m not sure how the information on this person’s affiliation to the far right was obtained and so on.