• 7 Posts
  • 32 Comments
Joined 1 year ago
Cake day: June 19th, 2023






  • zabadoh@lemmy.ml to Linux@lemmy.ml · I had a journey
    1 year ago

    I disagree somewhat.

    A lot of high tech development comes with a greed motive, e.g. IPO, or getting bought out by a large company seeking to enter the space, e.g. Google buying Android, or Facebook buying Instagram and Oculus.

    And conversely, a lot of open source software is a copy of a commercially successful product, albeit one that only becomes widely adopted after the original enters the enshittified phase of its life.

    Is there a Lemmy without Reddit? Is there a Mastodon without Twitter? Is there LibreOffice without Microsoft Office and decades of commercial word processors and spreadsheets before that? Or without OpenOffice becoming enshittified, for that matter? Is there qBittorrent without an enshittified uTorrent? Is there PostgreSQL without IBM’s DB2?

    The exception that I can see is social media and networked services that require active network and server resources, like Facebook, YouTube, or even Dropbox and Evernote.

    Okay, The WELL is still around and is arguably the granddaddy of all online services, and has avoided enshittification, but it isn’t really open source.



  • If you read the article, evidently the IWW’s customer service was better.

    An international union actually makes sense if you think about today’s corporate landscape.

    Modern large corps are very international, with facilities for production, distribution, and retail all over the globe, so striking in just one country isn’t effective: funds and production can be shifted quickly to a different city, state, or country.

    The pandemic demonstrated how tightly connected the supply chains are, so striking just one or a few parts can have ripple effects on the bottom line.


  • zabadoh@lemmy.ml to Memes@sopuli.xyz · Possibly
    1 year ago

    Capitalist: Kids, now you too can come and experience the Craft of Mining! Here’s your personalized helmet lamp and pickaxe! Anything useful that you dig up belongs to me. Notresponsibleforsideeffectssuchasdeathandinjury.



  • Okay, I found SurfOffline that does the trick without too much hassle, but…

    It’s verrrrrrrry slooooooooow.

    It uses Internet Explorer as a module and fetches each individual resource separately instead of copying files from IE’s cache, which is weird and slow, especially when hundreds of images are involved.

    And SurfOffline doesn’t appear to be supported anymore, i.e. the support email’s inbox is full.

    edit: Aaaaand SurfOffline doesn’t save to .html files with a directory structure!!! It stores everything in some kind of SQL database, and it only exports to .mht and .chm files, which are deprecated Microsoft web-archive and help file formats!!!

    What it does have is a built-in web server that only works while the program is running.

    So what I plan to do is have the program up but doing nothing, while I sic HTTrack on the 127.0.0.1 web address for my ripped website.

    HTTrack will hopefully “extract” the website to .html format.
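    For anyone trying the same trick, the HTTrack step sketched above would look something like this on the command line. The port number (8080) is an assumption — use whatever port SurfOffline’s built-in server actually reports:

    ```shell
    # Mirror the locally served site to plain .html files on disk.
    # 127.0.0.1:8080 is a guess at SurfOffline's server address;
    # check what the program shows while it is running.
    httrack "http://127.0.0.1:8080/" -O "./site-mirror" -%v
    # -O  output directory for the mirrored .html tree
    # -%v show progress while it crawls
    ```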

    Whew, what a hassle!