• 0 Posts
  • 29 Comments
Joined 8 months ago
Cake day: December 27th, 2023


  • smb@lemmy.mltoAndroid@lemdro.idSearching for exact app names in the Play Store
    17 days ago

    ads with install buttons are always traps. and traps are always bad (except snmp traps, those are good but unreliable)

    same way ads at download pages stating “proceed to download” are traps.

    also ads at search result pages stating “1 2 3 4 … next” are traps too.

    for the “sponsored” note: there is no boundary here that makes it clear what that ‘sponsored’ refers to. without any boundary it could be for something above it, below it, to the side, or maybe even something that opens when you click on “sponsored” itself (seen it that way once). it could be for an ad that just failed to load (noticed the free space above that “sponsored” text? maybe the ad loads a bit later, just to shift the real content down so you “accidentally” click on the ad that loads intentionally late, to make this very accident likely to happen). if you use adblockers - which you should do for security reasons anyway - then after full page load you’ll often see “sponsored” or “advertising” even without the ad it was meant for. so a single “sponsored” without a clear boundary showing what the sponsored content would be does not mark anything as an ad; it is purely meaningless, and the lack of such a boundary is always intentional, to distract users from what they wanted and to trap them somehow.

    a clear thumbs-down for ‘zoho assist’ from me here, just for paying for (or trying out for free, or such) that type of advertising.

    And in most cases, ads simply being ads are traps too, by the very concept of ads.

    around 80% of all things i actually still wanted after i bought them were recommendations by people i met in person. 15% were things recommended by real persons i met on the internet. around 5% were things i bought without them being recommended by anyone (not even an ad). things i still wanted after buying them because of an ad are nearly nonexistent. ok, i stopped watching television in 1997, have a sticker on my postbox that forbids throwing ads in (works where i live), use dns entries to remove most ads in my network, use browsers/extensions that remove most crapjunkwastelitterrubbishads, and skip webpages that still show too many ads or too offensive cookieterrorbanners. i use google search only sometimes, for comparison of results, but near to zero for actual searching. i feel safe to say i am not that much distracted by ads. (however, open source projects and authors do get money from me on a monthly basis where i want to support them, either directly sent from my bank account or indirectly.)

    for me personally, an ad just saying “you might like this” drives me away from that product. if it needs or wants an ad, i don’t want it, even more so the more it states how difficult and horrible my life would be without the product or how easy it’ll be with it. go away, ad-needing products; get recommended personally by those who actually use you, not by those who want to sell you. period. there is no better ad than a true recommendation, and it’s also free: no marketing monkey needs to get paid for bs, only an actually good product is needed… and there we see what types of products actually need ads…

    once in my life i discovered a product that i first explicitly did not buy for a decade because of its awful ad, but bought another decade later by absentminded accident, and found it to be a good product despite its awful ad. then they increased the packaging/reduced the product within, to cover up a price increase at the cost of more waste, so i abandoned that product again and found something cheaper and more eco friendly instead. yes, the cheaper one is really not as good, but i feel better with it and especially less betrayed by the vendor, so the eco one is the better one altogether. and also i think it’s better to buy products you don’t see ads for, because this behaviour could actually fix this advertising storm in the long run, so in this way too it’s the better choice to buy products that don’t have ads.

    again:

    An ad with an install button is always a trap, even more so when the real install follows a single misclick on it. i’d say it would be quite fair to downvote/zero-star an app for how foolishly sneaky it was positioned in the search results, if it is shown like an actual result with a f’ing install button. its advertising style is always also part of the brand and the product itself. maybe make a sport out of that: click the clickbait install buttons only to downvote the app for being intrusive, and uninstall it again without even starting the app once, just to train advertisers to do it right instead of wrong next time. maybe. but for security reasons better don’t do that (at least not on a device with sensitive data on it).

    please do not blame users for falling for ads. the advertising industry has had centuries to learn how to trap users, and literally thousands or millions of marketing guys, designers, psychologists, neurologists or whatever, only to learn and establish new abusive ways to distract and trap users. but a user only has his own lifespan to counteract that and learn to avoid those manipulations, and he also has other important stuff to do in his life.

    please don’t blame users for being humans. blame the industry where it is intentionally abusive, inhumane and/or counterproductive.


  • you should definitely know what type of authentication you use (my opinion)!! the agent can hold the key forever, so if you are just not asked again when connecting once more, that’s what the agent is for. however it’s only in ram, so stopping the process or rebooting ends that, of course. if you didn’t reboot meanwhile, maybe try unloading all keys from it (ssh-add -D, then check with ssh-add -L) and see what the next login is like.
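for reference, a quick sketch of checking and clearing what an agent holds; it starts a throwaway agent so it does not touch the one you may already be running:

```shell
# start a throwaway agent just for this demonstration
eval "$(ssh-agent -s)" >/dev/null
ssh-add -L || true            # list public keys the agent holds (none yet here)
ssh-add -D                    # delete all identities from the agent
ssh-agent -k >/dev/null 2>&1  # kill the throwaway agent again
```

with your real agent, just run the two ssh-add commands; after -D the next login has to authenticate again, which tells you whether the agent was what made logins passwordless.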

    btw: i use ControlMaster/ControlPath (with timeouts) to further reduce the number of passwordless logins and speed things up when running scripts or things like ansible, monitoring via ssh etc. then everything goes through the already-open channel and no authentication is needed for the second connection any more; it gets really fast then.
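a minimal sketch of what that looks like in ~/.ssh/config (the host pattern and timeout are just examples):

```
Host *.example.net
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m   # keep the master channel open 10 minutes after last use
```

the first ssh to a matching host authenticates and becomes the master; later connections (scp, ansible, monitoring checks) reuse its channel without authenticating again.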


  • The whole point of ssh-agent is to remember your passphrase.

    replace passphrase with private key and you’re very correct.

    passphrases used to log in to servers using PasswordAuthentication are not stored in the agent. i might be wrong on the technical details of how the private key is actually stored in RAM by the agent, but in the context of ssh passphrases that could be used directly for login to servers, saying the agent stores passphrases is at least a bit misleading.

    what you want is:

    • use key authentication, not passwords
    • disable PasswordAuthentication on the server once you have set up and secured (some sort of backup of) ssh access with keys instead of passwords.
    • if you always want to provide a short password for login, then don’t use an agent, i.e. unset that environment variable and check ssh_config
    • give your private key a password that fits your needs (the average time it should take attackers to guess that password vs. the time you need overall to exchange the pubkey on all your servers)
    • change the private key immediately every time someone might have had access to the password-protected privkey file
    • do not give others access to your account on your pc, so you don’t have to change your private key too often.
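the key-generation part of those steps, sketched as commands; paths, names and the passphrase are placeholders, and the server-side pieces are shown as comments because they belong on the server:

```shell
# generate a passphrase-protected ed25519 keypair (illustrative path)
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -a 64 -f "$dir/id_ed25519" -N 'a-long-passphrase-of-your-choice'
ls "$dir/id_ed25519" "$dir/id_ed25519.pub"

# install the public key on each server (hypothetical host), e.g.:
#   ssh-copy-id -i "$dir/id_ed25519.pub" user@server.example
# then, once key login works, in the server's /etc/ssh/sshd_config set:
#   PasswordAuthentication no
# and reload sshd
```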

    also an idea:

    • use a token that stores the private key AND is PIN protected, as in: it locks itself after a few tries with a wrong pin. this way the “password” needed for logins can be minimal, while at the same time the private key is protected from being copied. but even then one should not let others have access to the same machine (of course not as root) or account (as user, but better not at all), as an unlocked token could also be used to place a second, attacker-provided key on the server you wanted to protect.

    all depends on the level of security you want to achieve. additional TOTP could improve security too (but beware that some authenticator providers might have “sharing” features which could compromise the TOTP token even before its first use).


  • My theory is that you already have something providing ssh agent service

    in the past some xserver environments started an ssh-agent for you just in case, and for some reason i don’t remember, that was annoying, so i disabled it and started my agent in my shell environment the way i wanted.

    another possibility is that there are other agents, like the gpg-agent, which afaik can also handle ssh keys.

    but i would also look into $HOME/.ssh/config to see if something is configured there that matches the hostname, the ip, or (with wildcards) parts of them, which could interfere with key selection; the .ssh/id_rsa key should IMHO always be tried if key auth is possible and no (matching) key is known to the ssh process, that is, unless something is already configured…

    not sure if a system-wide /etc/ssh/ssh_config would interfere there too; maybe have a look there as well, as this behaviour seems a bit unexpected if not specially configured to do so.
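to see what the client config actually resolves to for a given host (without connecting), `ssh -G` is handy; the hostname here is just an example:

```shell
# print the effective client configuration for a host, after
# ~/.ssh/config and /etc/ssh/ssh_config have been applied
ssh -G somehost.example | grep -iE '^(identityfile|identityagent|user|hostname) '
# for live debugging of which keys are actually offered during login:
#   ssh -v somehost.example
```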



  • smb@lemmy.mltoProgrammer Humor@programming.dev"prompt engineering"
    5 months ago

    that a moderately clever human can talk them into doing pretty much anything.

    besides that, LLMs are good enough to let moderately clever humans believe that they actually got an answer that was more than guessing and probabilities based on millions of troll messages, advertising lies, fantasy books, scammer webpages, fake news, astroturfing, propaganda of the past centuries including the currently made-up narratives, and a quite long prompt invisible to that human.

    cheerio!




  • after looking at the ticket myself i think the relevant things IMHO are:

    • a person filed a bug report due to not seeing which changes in the new version caused a different behaviour
    • that person seemed pushy, first telling the dev where patches should be sent (is this normal? i guess not; better let the dev decide where patches go or, in this case, whether patches are needed at all), then coming up with ceo-style wording (highly visible, customer experience of an untested but nevertheless released-to-live product is bad due to this (implicitly “your”) bug)
    • the pushiness is counterbalanced by “please help”
    • free-of-charge consulting was given by the one pointing out that the changes are likely visible in the changelog (i did not look, though); they nevertheless pointed to the parameter, which assumes RTFM (if the docs were indeed updated), noting that a default value had changed and its behaviour could be adjusted using that parameter.

    up to there that person - belonging to M$ or not (don’t know and don’t care) - behaved IMHO rather correctly: submitting a bug report for something that looked like one, being a bit pushy, wanting priority, trying to command, but still formally at least “asking” for help. but at that point the “bug” seemed resolved to me; it looks like the person either was not reading the manual and changelog, or maybe the manual or changelog lacks that information, but that was not stated later, so i guess that person just read neither the changelog nor the manual.

    instead - so it seems to me - that person demanded immediate and free-of-charge consulting on how exactly the switch should be used in that specific use case, which would imply the dev looks into the example files and maybe does trial and error himself, just so that that person neither needs to invest the time to learn to use the software the company depends on, nor to hire a consultant to do the work.

    i think (intentional or not) abusing a bug tracker to demand free-of-charge end-user consulting from a dev is a bad idea, unless one wants(!) to actively waste the precious time of the dev (whom that high-priority ticket for the highly visible, already-released product relies on) or has even worse intentions like:

    • uploading example files with exploits in them, pointing to the exact versions that include the RCE vulnerability the sample file would abuse; the “bug” was just reported because it fits the version needed for exploitation, and pressure was made by naming big companies, maybe to make the dev run a vulnerable version on his workstation before someone finds out, so that an upstream attack could take place directly on the dev’s workstation. but that’s just a fictive worst-case scenario.

    to me this clearly looks like a “different culture” problem. in companies where everyone is paid by basically the same employer, abusing an internal bug tracker for quick internal consulting would probably be seen as normal and best practice, because the dev who knows and actually works on the code likely has the solution right at hand without thinking much, while the other person, who is in charge of quick-fixing an untested product already released live to customers, has neither sufficient knowledge of how the thing works, nor the time to learn it or at least read changelogs and the manual, nor the time to learn the basics of general upstream software culture.

    in companies, the https://en.m.wikipedia.org/wiki/Peter_principle could be a problem that imho likely leads to such situations, but this is a guess, as i know nobody working there, and i am not convinced that that person is in fact working for the named company; instead, a name shows up in that ticket that i would take as a reason not to rely too much on names in ticket systems always being real names.

    the behaviour that causes the bad postings here in this lemmy thread is to me likely “just” a culture problem, and that person would be well advised to get to know the open source culture, netiquette etc. and to learn to behave differently depending on whom, where and how they communicate with, what to expect, and how to interact productively to the benefit of their upstream too, which is so often the “real price” in open source. it could be that in the company that rolled out the untested product, it is seen as best practice to immediately grab the dev who knows a software and let him help you with whatever you can’t do on your own (for whatever reason), whenever you manage to encounter one =]

    i assume the pushiness likely comes from their hierarchy. it is not uncommon that so-called leaders just push pressure downwards, because they maybe have no clue about the thing and don’t want to gain that clue; but that i cannot know, it’s just a picture in my head. in a company that seems to put pressure on releasing an untested product to customers, though, i guess i am not too wrong with the direction of that assumption. what the company maybe should learn is that releasing untested and/or unfinished products to live is a bad habit. but i also assume that if they wanted to learn that, they would have started learning it roundabout two decades ago. again, i do not know which company that person works - or worked - for; it could be just a subcontractor of the named one too. and it could also be that the pushiness (telling it’s for m$, that it’s live, has impact on customers etc.) was really decided by someone up the ladder who has literally no experience at all in how to handle upstream in such situations. hierarchies can be very dysfunctional sometimes, and in companies, saying “impact on customers” is sometimes the same as saying “boss says asap”.

    what i would suggest their customers (those who were given a beta version as production-ready) should learn: when someone (maybe) continuously delivers differently than advertised, then after experiencing this a few times, the customer would be insane to assume that that bad behaviour will vanish through pure hope plus throwing money into hands where money maybe already didn’t improve habits for, assumingly, decades. And when feeding the ever-hungry with money does not resolve the problems, maybe looking towards those who do have a non-money-dependent, grown-up culture could actually provide more really usable products. Evaluation of new solutions (which one would really be best for a specific use case, i.e.) or testing new versions before really rolling them out to live might be costly, especially when done thoroughly, but can provide a lot of really valuable stability otherwise unreachable by those who only throw money at shareholders of brands and maybe rely on pure hope for all the rest. Especially when that brand maybe even officially announced the removal of their testing department ;+) what should a sane and educated customer expect then? but again, to note: i do not know which companies really are involved and how exactly. from the ticket i do not see which company that person directly works for, nor whether the claim that m$ is involved is a fact or just a false claim in hope of quicker help (companies already too desperate to test products before going live could be desperate again, in need of even more help, when their bad habits have piled up too long and begin falling on their heads).


  • the xz vulnerability was enabled by a superfluous dependency on systemd: xz was only the library that was abused through systemd’s superfluous dependency hell. sshd does not use xz, but libsystemd does depend on it. sshd does not need systemd, but it was attacked through that library dependency.

    we should remove any pointless dependencies that can be found on a system, to prevent such attacks in the future by reducing dependency-based attack vectors to a minimum.

    also we should increase the overall level of privilege separation, where systemd is a prime bad example; just look at the init binary and its capability zoo.

    The company that hired “the” systemd developer should IMHO really start to fix these issues!

    so please hold your “$they have fixed it” back until the root cause that made the xz dependency-level attack possible in the first place has really been fixed =)

    Of course pointing it out was good, but now the root cause should be fixed, not just a random symptom that happened to be the first visible attack that used this attack vector introduced by systemd.


  • looking at the official timeline it is not completely a microsoft product, but…

    1. microsoft hated all of linux/open source for ages, even publicly called it a cancer etc.
    2. microsoft suddenly stopped its hatespeech after the long-term “ineffectiveness” (as in: not destroying) of its actions against the open source world became obvious over time
    3. systemd appeared on stage
    4. everything within systemd is microsoft style. journald is literally microsoft logging; how services are “managed”, started etc. is exactly the flawed microsoft service management; how systemd was pushed onto distributions is similar to how microsoft pushes things onto its victi… eh… “customers”. systemd breaks its promises like microsoft does (i.e. it has never been a drop-in replacement, just like microsoft claimed its OS to be secure while making actual use of the separation of users from admins, i.e. by filesystem permissions, first “really” in 2007, with the need of an extra click, where unix already used permissions for such protection in 1973). systemd causes chaos and removes deterministic behaviour from linux distributions (i.e. before systemd, windows was the only operating system that would show different errors at different times during installation on the very same perfectly working hardware; now on systemd distros similar chaos can be observed too). there AFAIK still exists no definition of the “binary” protocol of journald; every normal open source project would have made that official definition in the first place, but the systemd developers’ statement was like “we take care of it, just use our libraries”, which is microsoft style for “use our products”. the superfluous systemd features do more harm than they help (journald’s “protection” from log flooding uses like 50% of cpu cycles for a huge amount of wanted and normal logs, while a sane logging system would happily use only 3% cpu for the very same amount of logs per second, whilst not throwing away single log lines like journald does; thus journald exhaustively and pointlessly abuses system resources for features that do more harm than the thing they were said to help with in the first place). making the init process a network-reachable service looks to me as bad as when microsoft once put its web rendering engine (iis) into kernelspace to be a bit faster, while still being slower than apache and adding insecurity that later was an abused attack vector. systemd adds pointless dependencies all along the way, like microsoft does with its official products, to put some force on its customers for whatever official reason they like best. systemd was pushed onto distributions with a lot of force and damage, even onto distributions that had, in their very roots, the freedom of choice NOT to force their users onto a specific init system (and the push to place systemd inside those distros went even further, circumventing the unstable->testing->stable rules, like microsoft does with its patches i.e.). this list is very far from complete and still no end is in sight.
    5. “the” systemd developer is finally officially hired by microsoft

    i said that systemd was a microsoft product long before its developer was hired by microsoft in 2022. And even if he hadn’t been hired by them, systemd would still be a microsoft-style product in every important way, with all that is wrong in how microsoft does things: beginning with design flaws, added insecurities and unneeded attack vectors, added performance issues, false promises, usage bugs (like: i’ve never seen a just-logged-in user get directly logged off again on a linux system, except when systemd wants to stop-start something in the background with its ‘fk y’ attitude, where one would “just try to log in again and not think about it”, like with any other of microsoft’s shitware), ending in insecure and unstable systems where one has to “hope” that “the providers” will take care of it without continuing to add even more superfluous features, attack vectors etc., as they always did until now.

    systemd is, in every way i care about, a microsoft product. And systemd’s attack vectors through “needless dependencies” have just been added to the list of things “proven” (not only predicted) to be as bad as any M$ product in this regard.

    I would not go as far as to say that this specific attack was done by microsoft itself (how could i?), but i consider it a possibility, given the fact that they once publicly named linux/open source a “cancer” and that their “sudden” change to “support the open source world” looks to me like the poison “Gríma” used on “Théoden”, as well as some other observations and interpretations. however, i strongly believe that microsoft secretly very much “likes” every single bit of damage that any of systemd’s pointlessly added dependencies or other flaws could do to linux/open source. and why shouldn’t they like any damage done to any of their obvious opponents (as in money-gain and “dictatorship” power)? it’s a us company, what would one expect?

    And if you want to argue that systemd is not “officially” a product of the microsoft company… well, people also say “i googled it” when they mean “i used one of the search engines actually better than google.com”; same with other things like “tempo” or “zewa” where i live. since the systemd developer works for microsoft, and it seems he works on systemd as part of this work contract, and given all the microsoft-style flaws within from the beginning, i consider systemd a product of microsoft. i think systemd overall also “has components” of apple products, but these are IMHO none of a technical nature and thus far from being part of the discussion here; also, apple does not produce “even more systemd”, and apple has, in my experience, very different flaws that i have not encountered in systemd (yet?), thus it’s clearly not an apple product.



  • Before pointing to vulnerabilities of open source software in general, please always look into the details: who - and, if so, “without any need”, thus also maybe “why” - introduced the actual attack vector in the first place. The strength of open source in action should not be seen as a deficit, especially not in such a context.

    To me it looks like an evil-ish company has put lots of effort over many years into injecting its very own steady attack-vector increase through an “otherwise” needless introduction of uncounted dependencies into many distros.

    such a ‘needless’ dependency is liblzma for ssh:

    https://lwn.net/ml/oss-security/20240329155126.kjjfduxw2yrlxgzm@awork3.anarazel.de/

    openssh does not directly use liblzma. However debian and several other distributions patch openssh to support systemd notification, and libsystemd does depend on lzma.

    … and that was where and how the attack then surprisingly* “happened”
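whether a given system’s sshd is linked that way can be checked directly; the path and output vary per distro, and on patched debian-style builds you would see libsystemd and, transitively, liblzma show up:

```shell
# list sshd's linked libraries and filter for the suspects;
# prints a fallback line when nothing (or no sshd) is found
ldd /usr/sbin/sshd 2>/dev/null | grep -E 'systemd|lzma' \
    || echo "no systemd/lzma linkage found here"
```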

    I consider the attack vector here to have been the superfluous systemd with its excessive dependency cancer, thus the result of using a Microsoft-alike product. Using M$-alike code, what would one expect to get?

    *) no surprises here; let me predict that we will see more of these attack vectors in action in the future. as an example, have a look at the init process: systemd changed it into a ‘network’-reachable service. And look at all the “cute” capabilities it was designed to “need” ;-)

    however, distributions free of microsoft(-ish) systemd are available for all who do not want the “microsoft experience” in otherwise security-driven** distros

    **) like doing privilege separation instead of the exact opposite by “design”


  • there was a study saying that there is no single “best” way of learning; it is best to combine multiple ways, like with an app, by book, listening to audio only (i listened to radio stations via internet and got some exercise for free), a bit of talking, visiting a country that only speaks that language, and so on. trying everything a bit, in parallel.

    that is because our brain learns better when given more different types of “connections” to learn through.

    i started with duolingo (website only, not the app, and only the free parts) 4 years ago and now i speak quite fluently. but i also partly read a book about grammar, visited a spanish-speaking country (more than once), watched movies with subtitles only in my language, and did lots of phone calls in spanish only.

    my advice is:

    look at free apps, whatever pleases you, take chances, listen to the sound (movies, radio), try to speak, and read easy books or go through exercise books.

    duolingo is good for keeping going while not really motivated, as the shortest unit that counts is really only minutes, and one can choose to do something that is already easy. this way at least continuity is kept, even if the pace is down for a while. and it is much easier to pick up the pace again when you haven’t really stopped.


  • i am happy to have a raspberry pi setup connected to a VLAN switch; the internet is behind a modem (like bridged mode) connected by ethernet to one switchport, while the raspi routes everything through one tagged physical GB switchport. the setup works fine with two raspis and failover, without tcp disconnections during an actual failover, only a few seconds of delay when that happens; so basically voip calls recover after seconds and streaming is not affected, while in a game a second off might already be too much. however, as such hardware failures happen rarely, i am running only one of them anyway.

    for the firewall i am using shorewall, while for some special routing i also use the unbound dns resolver (one can easily configure static results for any record) and haproxy with sni inspection for specific https routing, for the rather specialized setup i have.
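the static-records part in unbound is just a few lines; a sketch with made-up names and addresses:

```
# unbound.conf: answer one name locally with a fixed address
server:
    local-zone: "printer.home.lan." static
    local-data: "printer.home.lan. IN A 192.168.10.23"
```

“static” here means unbound answers only from the local-data entries for that zone instead of resolving it upstream.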

    my wifi is done by an openwrt device, but i only use it for having separate wifis bridged to their own vlans.

    thus this setup allows for multi-zone networks at home, like a wifi for visitors with daily changing passwords and another for chromecast or home automation, each with their own rules, hardware redundancy and special tweaking; everything that runs on gnu/linux is possible, including pihole, wireguard, ddns solutions, traffic statistics, traffic shaping/QoS, traffic dumps, or even SSL interception if you really want to import your own CA into your phone and see what data your phone’s apps (those that don’t use certificate pinning) are transferring when calling home, and much more.

    however, regarding ddns, it sometimes feels safer and more reliable to have a somehow reserved IP that would not change. some providers offer rather cheap tunnels for this purpose. i once had a free (ipv6) tunnel at hurricane electric (besides another one for IPv4), but now i use VMs in data centers.

    i do not see any ready-made product that is this flexible. however, to me the best ready-made router system seems to be openwrt: you are not bound to a hardware vendor, get security updates longer than with any commercial product, can copy your config 1:1 to a new device even if the hardware changes, and have the possibility to add packages with special features.

    “openwrt” is IMHO the most flexible ready-made solution for long-term use. “pfsense” is also very worth looking at and has some similarities to openwrt while being different.



  • smb@lemmy.mltoLinux@lemmy.mlBtw
    6 months ago

    a woman would take care of a literal horse instead of going to therapy. i don’t see anything wrong there either.

    just that a horse is way more expensive, cannot be put aside for a week of vacation (could a notebook be put aside?), and one cannot make backups of horses or carry them with you when visiting friends. Horses are way more cute, though.


  • sorry if i might repeat someones answer, i did not read everything.

    it seems you want it for “work”; that suggests stability, and maybe something like LTS is sort of the way to go. This also means older but stable packages. maybe better choose a distro that separates new features from bugfixes; this removes most of the hassle that comes with rolling releases (like: every single bugfix comes with two more new bugs, one removal/incompatible change of a feature that you relied on, and at least one feature that cripples stability or performance whilst you cannot deactivate it… yet…)

    likely there is at least some software you will want to update outside the regular package repos, like i did for years with chromium, firefox and thunderbird, using a shellscript that compared the current version with the latest remote one, to download and unpack it if needed.
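the core of such a script is just a version comparison; a minimal sketch (the versions are hardcoded here; in practice `latest` would come from the project’s release page, the url is hypothetical):

```shell
current="115.2.0"   # e.g. read from the installed binary's --version output
latest="115.3.1"    # e.g. latest=$(curl -s https://example.org/latest-version.txt)

# sort -V understands version ordering, so the last line is the newest version
newest=$(printf '%s\n%s\n' "$current" "$latest" | sort -V | tail -n1)
if [ "$latest" != "$current" ] && [ "$newest" = "$latest" ]; then
    echo "update available: $current -> $latest"
    # here the script would download and unpack the new tarball
fi
```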

    however, maybe some things NEED a newer system than you currently have; thus if you need such software, maybe consider running something in VMs, maybe using ssh and X11 forwarding (oh my, i still don’t use/need wayland *haha)

    as for me, i like to have some things shared anyway, like my emails in an IMAP store accessible from my mobile devices and some files synced across devices using nextcloud. maybe think outside the box from the beginning. no arch-like OS gives you the stability that the long-stable things like debian or redhat/centos offer, but be aware that some OSes might suddenly change to rolling release (like centos, i believe) or include rolling-release software made by third parties without respecting their own rules about unstable/testing/stable branches, and thus might cripple their stability by such decisions. better stay informed about whether what you update to really is what you want.

    but for stability (like at work) there is nothing more practical than ancient packages that still get security fixes.

    roundabout the last 15 years or more i only reinstalled my workstation or laptop for:

    • hardware problems, mostly aged disks, like an ssd’s wear level going down (while recovery from backup or direct syncing is not reinstalling, right?)
    • OS becomes EOL. that’s it.

    if you choose to run servers and services like imap and/or nextcloud, there is some gain in being able to quickly switch workstations without having to clone/copy everything: just place some configs there and you’re done.

    A multi-OS setup is more likely to cover “all” needs, and tools like x2vnc exist and can be very handy then; i nearly forgot that i was working on two very different systems when i had such a setup.

    I would suggest making recovery easy: maybe put everything on a raid1 and make sure you have an offsite and an offline backup with snapshots, so in case something breaks you just need to replace hardware. that’s the stability i want for the tools i work with, at least.

    if you want to use a rolling-release OS for something work-related, i would suggest making sure that no one external (their repo, package manager etc.) could ever prevent you from reinstalling the exact version you had at an exact point in time (snapshots of repos, install media etc.). then put everything into something like ansible and verify that reapplying old snapshots is straightforward for you; then (and not earlier) i would say those OSes are ok for something you consider as important as “work”. i tried arch linux at a time when they had already stopped supporting the old installer while the “new” installer wasn’t yet ready for use at all; thus i never really got into long-term use of arch linux for something i rely on, because i couldn’t even install the second machine with the then-broken install procedure *haha

    i believe one should consider NOT tinkering too much on the workstation. having to fix something you personally broke “before” being able to work on something important is the opposite of awesome. better have a second machine instead, a swappable harddrive, or use VMs.

    The exact OS is IMHO not important. i personally use devuan, as it is not affected by some instability annoyances that are present in ubuntu and probably some more distros that use that same software. at work we monitor some of those bugs of that software within ubuntu, because it creates extra hassle; we work around those, so it’s mostly just a buggy, annoying thing visible in monitoring.


  • i have to admit that my point ‘just don’t do it’ in reality does not guarantee the prevention of any trouble. it is still possible to be sued for things someone else did.

    also one suggestion to think about:

    if the seller just sprays some random changes over a book for every sold copy, one would have differences in “every” sold copy compared to every other sold copy. by blindly changing those parts to something else, you could reveal which exact two/three copies you had for diffing.

    UPDATE: someone else here had the same thought a bit earlier…

    my suggestion to not do it stays the same ;-)

    it could be interesting to figure out how these things work and what could be done to prevent or circumvent such prevention, but actually doing it seems risky no matter what.