• 0 Posts
  • 60 Comments
Joined 2 years ago
Cake day: June 14th, 2023






  • Yes, this is what I said. In situations where a work can conceivably be considered co-authored by a human, those components get copyright. However, whether that activity constitutes contribution, and how it is demarcated across the work, is determined on a case by case basis. This doesn’t mean any inpainting at all renders the whole work copyright protected--it means that it could in cases where it is so granular and corresponds so directly to human decision making that it’s effectively digital painting. This is probably a higher bar than most expect but, as is not atypical with copyright, it is a largely case by case, quantitative/adjudicated, vibes-based determination.

    The second situation you quoted is also standard and effectively stands for the fact that an ordered compilation of individually copyrighted works may itself have its own copyright in the work as a whole. This is not new and is common sense when you consider the way large creative media projects work.

    Also worth mentioning that none of this obviates the requirement that registrations reasonably identify and describe the AI generated components of the work (presumably to effectively disclaim those portions). It will be interesting to see a defense raised that a holder failed to do so and thereby committed a fraud on the Copyright Office, thus losing their copyright in the work as a whole (a possible penalty for committing fraud on the Office).


  • The CO didn’t say AI generated works were copyrightable. In fact, the second part of the report very much affirmed their earlier decisions that AI generated content is necessarily not protected under copyright. What you are probably referring to is the discussion the Office presented about joint-works-style pieces--that is, where a human performed additional creative contributions to the AI generated material. In that case, the portions that were generated by the human contributor are protected under copyright, as expected. Further, they made very clear that what constitutes creative contribution, and thus gets coverage, is determined on a case by case basis. None of this is all that surprising, nor does it refute the rule that AI generated material, having been authored by something other than a human, is not afforded any copyright protection whatsoever.



  • For sure. I personally think our current IP laws are well equipped to handle AI generated content, even if there are many other situations where they require a significant overhaul. And the person you responded to is really only sort of maybe half correct. Those advocating for, e.g., there to be some sort of copyright infringement in training AI aren’t going to bat for current IP laws-- they’re advocating for altogether new IP laws that would effectively further assetize intangibles and allow even more rent seeking in them. Artists would absolutely not come out ahead on this and it’s ludicrous to think so. Publishing platforms would make creators sign those rights away, and large corporations would be the only ones financially capable of acting in this new IP landscape. The likely compromise would also attach a property right to the model outputs, so it would actually become far more practical to leverage AI generated material at commercial scale since the publisher could enforce IP rights on the product.

    The real solution to this particular issue is to require that all models that output materials to the public at large be open source, and that all outputs distributed at large be marked as generated by AI and thus effectively in the public domain.




  • It could of course go up to SCOTUS and effectively a new right could be legislated from the bench, but that is unlikely, and the nature of these models, combined with what has counted as a copy for effectively as long as US copyright has operated, means that merely training and deploying a model is almost certainly not copyright infringement. This is a pretty common consensus among IP attorneys.

    That said, a lot of other very obvious infringement is coming out in discovery in many of these cases. Like torrenting all the training data. THAT is absolutely an infringement, but it is effectively unrelated to the question of whether lawfully accessed content being used as training data retroactively makes its access unlawful (it really almost certainly doesn’t).


  • Even in your latter paragraph, it wouldn’t be an infringement. Assuming the art was lawfully accessed in the first place, like by clicking a link to a publicly shared portfolio, no copy is being encoded into the model. There is currently no intellectual property right invoked merely by training a model-- if people want there to be, and it isn’t an unreasonable thing to want (though I don’t agree it’s good policy), then a new type of intellectual property right will need to be created.

    What’s actually baffling to me is that these pieces presumably are all effectively public domain as they’re authored by AI. And they’re clearly digital in nature, so wtf are people actually buying?


  • What we need is robust, decentralized, multimodal energy production fit for the local area where it is installed and contributing to a well maintained distributed grid with multiple redundancies and sufficient storage, so that incidental costs are minimized and uptime is effectively 100%. Energy is a tool and its generation is a category of tools; whining about people developing a better screwdriver rather than only using hammers is counterproductive when we’re trying to build a house for as many people as possible that doesn’t fucking kill everyone.




  • No, the best choice would have been to empower the one party that actually has, albeit often ignored but nevertheless significant, pro-Palestinian and anti-Zionist members as part of its membership. And then to focus on improving that party and/or breaking up the duopoly along the natural and clear lines of opportunity present in the sole big tent party that’s left in US politics.

    At the end of the day, there are no significant pro-Palestinian voices in the Republican party. There are in the Democratic party. And that indisputable fact alone should inform a strategic vote.

    That said, people are stupid, and I don’t really blame non- and 3rd party voters for Democrats losing and the resulting shit show-- blame and culpability fall squarely on the many people who actually, specifically voted for this. But it would be nice if those people would try to learn from and admit their incredibly disastrous error in judgement.




  • Regardless of training data, it isn’t matching to anything it’s found and squigglying shit up or whatever was implied. Diffusion models are trained to iteratively convert noise into an image based on the text prompt and the current iteration’s features. This is why they take multiple passes, and why the image visibly transforms over multiple steps, starting as an undifferentiated soup of shape and color and progressively gaining structure. My point was that they aren’t doing some search across the web, either externally or via internal storage of scraped training data, to “match” your prompt to something. They are iterating from a start of static noise, through multiple passes, to a “finished” image, where each pass’s transformation of the image is a complex and dynamic probabilistic function built from the training data, but not one that directly maps back to it in any way we’d recognize.
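
    To make that concrete, here is a minimal, hypothetical sketch of what that sampling loop looks like in Python. The names (denoise_step, sample_image) and the toy noise-prediction rule are made up for illustration; a real diffusion model uses a trained network and a proper noise schedule. The structural point stands, though: generation starts from pure random noise and is refined in place over many passes, with no retrieval of or matching against stored training images anywhere in the loop.

```python
import numpy as np

def denoise_step(x, t, prompt_embedding):
    # Stand-in for a trained noise-prediction network (e.g. a U-Net).
    # A real model would predict the noise component conditioned on the
    # text embedding and timestep; here we just shrink values toward zero
    # so the loop runs end to end.
    return 0.1 * x

def sample_image(prompt_embedding, steps=50, shape=(64, 64, 3)):
    # Start from pure Gaussian noise -- no lookup or "matching" against
    # stored training images happens anywhere in this loop.
    x = np.random.randn(*shape)
    for t in reversed(range(steps)):
        predicted_noise = denoise_step(x, t, prompt_embedding)
        # Each pass subtracts a little of the predicted noise, so the image
        # gradually resolves from undifferentiated static into structure.
        x = x - predicted_noise
    return x

image = sample_image(prompt_embedding=np.zeros(128))
print(image.shape)  # (64, 64, 3)
```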