• 0 Posts
  • 160 Comments
Joined 2 years ago
Cake day: July 9th, 2023

  • Could you let me know what sort of models you’re using? Everything I’ve tried has basically been so bad that it was quicker and more reliable to do the job myself. Most of the models can barely write boilerplate code accurately and securely, let alone anything even moderately complex.

    I’ve tried to get them to analyse code too, and that’s hit and miss at best, even with small programs. I’d have no faith at all that they could handle anything larger; the answers they give would be confident and wrong, which is easy to spot with something small but much harder to catch with a large, multi-process system spread over a network. That’s hard enough for humans, who have actual context, understanding and domain knowledge, to do well, and I’ve personally not seen any evidence that an LLM (which is what I’m assuming you’re referring to) could do anywhere near as well. I don’t doubt that they flag some issues, but without a comprehensive human review of the system architecture, implementation and code, you can’t be sure what they’ve missed, and if you’re going to do that anyway, you’ve done the job yourself!

    Having said that, I’ve no doubt that things will improve. Programming languages have well-defined syntaxes, so they should be some of the easiest types of text for an LLM to parse and build a context from. If that can be combined with enough domain knowledge, a description of the deployment environment and a model that’s actually trained and tuned for code analysis and security auditing, it might be possible to get results similar to a human’s.


  • I’m unlikely to do a full code audit, unless something about it doesn’t pass the ‘sniff test’. I will often go over the main code flows, the issue tracker, mailing lists and comments, positive or negative, from users on other forums.

    I mean, if you’re not doing that, what are you doing, just installing it and using it??!? Where’s the fun in that? (I mean this at least semi-seriously; you learn a lot about the software you’re running if you put in some effort to learn about it.)


  • ‘AI’ as we currently know it is terrible at this sort of task. It’s not capable of understanding the flow of the code in any meaningful way, and tends to raise entirely spurious issues (see, for example, the problems the curl author has had with being overwhelmed by bogus reports). It also won’t spot actually malicious code that’s been included with any sort of care, nor will it find intentional behaviour that would be harmful or counterproductive in the particular scenario in which you want to use the program.


  • It’s an interesting observation. We observe the world in landscape because our eyes are positioned to give us a good balance between binocular vision and spotting predators in our peripheral vision, but most of our interactions are portrait, I suspect due to our upright posture. Most of the instances you mentioned are with things that either are, or are evolutions of things that were, designed around the fact that we are taller than we are wide.

    It would be interesting to observe whether animals with a different posture interact differently.



  • A closed group of users can all have a seed ratio of 1.0, but it’s a bit of a contrived setup. For simplicity, the following examples assume that each file is the same size, but this also works for other combinations.

    Consider the smallest group: two users. If user A seeds a file which user B downloads, whilst B seeds a different file which A downloads, both users will have a ratio of 1.0, as they’ve uploaded and downloaded the same amount.

    For three users, A seeds a file, then B and C each download a different half, which they then share with each other. A has a total (upload, download) of (1, 0), whilst B and C each have (0.5, 1). If you repeat this with B seeding and A and C downloading, then C seeding to A and B, each peer ends up uploading two files’ worth of data and downloading two files’ worth, for a ratio of 1.0 each.

    You can keep adding peers and keep the ratios balanced, so it is possible for all the users on a private tracker to have a 1.0 ratio, but it’s very unlikely to work out like that in real life, which is why trackers offer other ways to boost your ratio. There’s a quick sketch of the three-user case below.
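
    To make the arithmetic concrete, here’s a rough sketch (Python, with invented names, assuming each file is one unit of data) of the three-user example above: each peer takes one turn seeding, and in the other rounds downloads half the file from the seeder and half from the other leecher.

```python
# Illustrative sketch only: invented names, each file is 1 unit of data.
from collections import defaultdict

def seed_round(seeder, leechers, totals, file_size=1.0):
    """One file: the seeder uploads half to each of the two leechers,
    and the leechers swap their halves with each other."""
    half = file_size / 2
    for leecher in leechers:
        totals[seeder]["up"] += half       # seeder -> leecher
        totals[leecher]["down"] += half    # half received from the seeder
        totals[leecher]["up"] += half      # their half sent to the other leecher
        totals[leecher]["down"] += half    # other half received from the other leecher

peers = ["A", "B", "C"]
totals = defaultdict(lambda: {"up": 0.0, "down": 0.0})

# Each peer takes one turn seeding a file to the other two.
for seeder in peers:
    seed_round(seeder, [p for p in peers if p != seeder], totals)

for p in peers:
    up, down = totals[p]["up"], totals[p]["down"]
    print(f"{p}: up={up}, down={down}, ratio={up / down}")  # up=2.0, down=2.0, ratio=1.0
```

    Every peer ends up having uploaded and downloaded two files’ worth, i.e. a ratio of exactly 1.0, matching the totals above.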



  • I’ve read the NYT article, and I can’t see anywhere that the author ‘sincerely considers the idea that Rachel Griffin-Accurso, the popular children’s entertainer known as Ms. Rachel, might be financially compensated by Hamas.’ Instead they report that ‘the advocacy group StopAntisemitism’ ‘sent a letter urging Attorney General Pam Bondi to investigate whether Accurso is receiving funding to further Hamas’s agenda.’

    The article as a whole seems pretty positive towards Ms. Rachel; it uses her comments to point out how bad things are in Gaza, and insinuates that StopAntisemitism are the problematic ones.


  • They seem to have completely lost sight of the fact that a phone is a tool. I don’t want ‘springy’ animations when I tap a button, I want my tool to do what I intend. I don’t want notifications that ‘subtly’ stretch when I dismiss a different notification, I want the dismissed notification to go away and the others to close up around it.

    What I do want is a phone that works securely, quickly, efficiently, doesn’t waste battery on nonsense, and doesn’t distract me from what I’m doing. I guess we get ‘pretty’ geegaws instead.



  • ‘It sucks that we need such an extensive amount of work put in to make devices private’

    The issue is that, short of the extremes suggested in places like privacyguides, you’re not really making the device private. You could argue that you’re making it more private, but the counter-argument is that you’re still leaking so much data that you haven’t significantly improved your situation.

    Doing something is probably better than doing nothing, but it’s not going to satisfy those who seek actual privacy. If you’ve got a particular leak that you’re worried about, it’s definitely worth looking to address it, though.



  • notabot@lemm.ee to xkcd@lemmy.world · xkcd #3081: PhD Timeline (3 months ago)

    I did the same: pressed on it for the text, got sent straight to the video, and swore under my breath in admiration. In the current climate what he’s done isn’t risk free, despite the fact that it a) should be, and b) shouldn’t be needed in the first place.

    Nothing but respect for people calling out the crimes of this administration, and when it’s someone with an unrelated platform and an audience, so much the better.


  • notabot@lemm.ee to Android@lemmy.world · What’s your take on biometric security? (3 months ago)

    For proper user authentication the model always used to be that the user should present three things: something they knew (a password), something they had (an OTP from a device, for instance), and something they were (a biometric). The idea being that, even if a remote attacker got hold of the username and password, they didn’t have the final factor, and if the user was incapacitated or otherwise forced to provide a biometric, they wouldn’t necessarily supply the password (or on really secure systems, they’d use a ‘panic’ password that would appear to work, but hide sensitive information and send an alert to the security team).

    Now we seem to be rushing into a system where you have only two factors: the thing you have, namely your phone, and the other thing you have, namely a fingerprint or your face. Notably, you can’t really change either of those, especially your biometrics, so they’re entirely useless for security. Instead your phone should require a biometric and a password to unlock: the biometric being ‘the thing you are’, the phone ‘the thing you have’, and the password being ‘the thing you know’. There’s a rough sketch of that kind of check below.

    So, yes, I’m entirely against fingerprint unlocking.
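
    As a purely illustrative sketch (all names invented, not any real platform’s unlock API), requiring all three factors together looks something like this:

```python
# Illustrative only: invented names, not a real unlock API.
from dataclasses import dataclass

@dataclass
class UnlockAttempt:
    has_trusted_device: bool  # something you have: the phone itself
    password_ok: bool         # something you know: verified against a stored secret
    biometric_ok: bool        # something you are: fingerprint/face match

def may_unlock(attempt: UnlockAttempt) -> bool:
    # All three factors must be present; a biometric on its own is never enough.
    return attempt.has_trusted_device and attempt.password_ok and attempt.biometric_ok

# A stolen phone plus a lifted fingerprint still fails without the password.
print(may_unlock(UnlockAttempt(has_trusted_device=True, password_ok=False, biometric_ok=True)))  # False
print(may_unlock(UnlockAttempt(has_trusted_device=True, password_ok=True, biometric_ok=True)))   # True
```

    The point is simply that a lifted fingerprint, or a shoulder-surfed password, on its own shouldn’t be enough to unlock the device.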