I tend to agree. As a developer, there's little need to go the extra mile for accurate browser detection without the UA unless it's for fingerprinting. Most feature sets are widely supported, and where they aren't you have a polyfill or some shim to make them work. So in the case of fingerprinting, you try not to rely fully on anything the user can alter easily.
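The detect-then-shim pattern described above can be sketched like this; `Array.prototype.includes` is just an assumed example of a feature you might backfill, not something the comment specifically names:

```javascript
// Detect a missing feature and install a shim only when it's absent.
// Array.prototype.includes is an illustrative example; modern engines ship it natively.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (value) {
    // Minimal shim: does not handle NaN or the fromIndex argument like the real spec.
    return this.indexOf(value) !== -1;
  };
}

console.log([1, 2, 3].includes(2)); // true
```

On engines that already provide the feature, the shim never runs, so native behavior is preserved.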
Browser detection is rarely done through User Agent lookup anymore. Nowadays we determine browser capabilities through feature detection.
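A minimal sketch of what feature detection can look like in practice. The specific probes below are illustrative assumptions, not a canonical list:

```javascript
// Feature detection: probe for the capability itself instead of parsing the UA string.
function hasFeature(scope, name) {
  // Guard with typeof-style checks so the probe never throws
  // in environments where the object itself is missing.
  return typeof scope !== "undefined" && scope !== null && name in scope;
}

// Example probes (illustrative features only):
const supportsPromise = hasFeature(globalThis, "Promise");
const supportsServiceWorker =
  typeof navigator !== "undefined" && "serviceWorker" in navigator;

console.log({ supportsPromise, supportsServiceWorker });
```

Because each probe tests the actual capability, the result stays correct even when a browser spoofs or freezes its UA string.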
Say no more, I’m sold
Al Jazeera live broadcast shows Hamas rocket breaking up above the hospital shortly before explosion
I had an idea about this today but I don’t know enough about Lemmy to confirm it. Thought I’d run it by you just in case.
Could you create a post and lock it normally, then directly edit the Postgres row to unlock the post? I'm wondering whether this would federate the lock but not the unlock, so all outside users would see a locked post while all internal users see it unlocked.
Possible edge case: users who subscribe to the community after the unlock will receive the initial data dump of posts and this will include the post in its current unlocked state.
However, this would be an easy way to block the majority from commenting on a post while maintaining a seamless experience for your internal users.
Wouldn’t it make a difference in cases where the nameserver and host are not the same entity?
If the intention is to have an internal, instance-only post, I believe such a thing could be enforced with an automoderator bot. I had a lot of success throwing the Lemmy API into an AI and generating my own moderator bot from that. Could work for you.
Fair point, I agree there should be such a check. It seems for now that the only ones affected were people who intentionally tried to mess with it. Catching everything will be a hard goal to reach, because what's fine and healthy for some could trigger a deadly allergic reaction in others. There will always have to be some personal accountability on the part of the person preparing a meal to make sure what they're making is safe.
That's a bit of a dramatic take. The AI makes recipe suggestions based on ingredients the user inputs. These users entered things like bleach, glue, and other non-food items specifically to generate non-food recipes.
I wonder if they’ll renew the twitter.com domain name