Morton up in here spreading free salt.
Someone interested in many things.
Patching a newer version of the YouTube app resolved the playback issues I was having.
Perhaps, but I sucked at touch typing when I was younger.
No idea; does autocorrect even exist as a built-in feature on Windows? I’ve never really tried using anything like that.
Oh, and here’s a one-off test I just did without autocorrection turned on. With a few more tries, I’m sure I could get up to 100+.
Ironically, I can almost type as fast on my phone (102 WPM PB) as I can on most keyboards (110 WPM PB), and that’s with my weird, improper method of touch typing. These scores are from the 15-second word test on MonkeyType.
I feel like my obsession with Mavicas has just been dismissed as invalid.
We do something similar over at !mavica@normalcity.life, but with photos. Of course, we’re using old floppy disk cameras, so the compression, aberration, and CCD weirdness is indeed authentic.
I forget: are Lemmy’s Active and Hot sorts chronological? They’re pretty decent, but I do find that stale content sometimes gets stuck in one sort and not the other.
Tbh, I haven’t really had this issue in a few weeks. I’m tempted to think it’s usage-related, and could possibly indicate that my memory allocation for the DB is still too high.
Like I said, I’m aware of extant measures to try and steer models, but people often assume a level of craftsmanship in censoring models that simply does not exist. Jailbreakchat.com is an endless stream of examples of this very fact; it’s very hard, especially with the limited context lengths of current models, to effectively give them any hard directives.
And going back to foundational models, which are essentially free of censorship: they will still exhibit a similar level of political bias unless prompted otherwise. All this to say that, discounting OpenAI’s attempts to control their models, the model itself will inherently learn from and mirror the real-world biases of the text it was trained on. Those biases happen to fall along lines that often ignore subtlety in debates regarding legality and morality.
It’s hard to say what LLMs are “programmed” to do, as they’re largely untamed beasts of text prediction. In fact, I suspect their built-in biases are less the result of pre-prompting or post-foundational-model training and more just a reflection of what a lot of people tend to think online. In a way, it’s more that people in general often equate illegality with immorality.
You can see similar biases in many of the open-source LLMs floating around. Even though they’re built largely outside of big corporate cultures and large-scale monetary incentives, they still retain a lot of political bias that tends to heavily favor governmental measures.
ChatGPT: Your argument is invalid because it doesn’t change the legal reality of things.
Me: The legal reality needs to be changed.
You can if you want. Reply here with the link if you do (or mention me if that’s a thing on Lemmy).
Yeah, mine have technically happened after reboots too, although things typically take at least a few days for the problem to creep up. This past time, I got a whole week in before things went to crap.
I did that a while ago, and unfortunately, it didn’t really help. I don’t think it’s an issue of RAM, but rather a daemon or something periodically going nuclear with resource utilization. A configuration issue, perhaps?
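If it helps, here’s the kind of thing I’ve been meaning to run to catch the culprit in the act: a rough sketch (untested on my actual setup; psutil is a third-party library you’d have to install) that logs the biggest memory hogs once a minute, so a spike can be matched to a process after the fact.

```python
# spike_logger.py: log the top memory consumers once a minute (rough sketch)
import time

import psutil  # third-party: pip install psutil


def top_procs(n=5):
    """Return info dicts for the n processes using the most memory right now."""
    procs = [
        p.info
        for p in psutil.process_iter(["pid", "name", "cpu_percent", "memory_percent"])
    ]
    # Sort by memory share; swap the key to cpu_percent if CPU is the concern
    procs.sort(key=lambda info: info["memory_percent"] or 0.0, reverse=True)
    return procs[:n]


if __name__ == "__main__":
    while True:
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        for info in top_procs():
            cpu = info["cpu_percent"] or 0.0  # may read 0.0 on the first pass
            mem = info["memory_percent"] or 0.0
            print(f"{stamp} pid={info['pid']} {info['name']} cpu={cpu}% mem={mem:.2f}%")
        time.sleep(60)
```

Grepping the output around the time of a spike should at least narrow it down to PostgreSQL, lemmy_server, or something else entirely.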
The problem is that an update inherently involves restarting everything, which tends to solve the problem anyway. Whether the update fixed things or the restart just temporarily papered over them is something you can only find out a few days later.
I’ll save this to look at later, but I did use PGTune to set my total RAM allocation for PostgreSQL to 1.5GB instead of 2GB. I thought this solved the problem initially, but the problem is back, and my config is still at 1.5GB (set in MB, as something like 1536MB, to avoid confusion).
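For reference, the kind of values PGTune suggests for that target look roughly like this in postgresql.conf (these numbers are illustrative, not copied from my live config):

```
# postgresql.conf: illustrative PGTune-style values for a ~1.5GB target
shared_buffers = 384MB          # roughly a quarter of the target
effective_cache_size = 1152MB   # planner hint, not an actual reservation
maintenance_work_mem = 96MB
work_mem = 4MB                  # per sort/hash operation, so it multiplies under load
max_connections = 100
```

One thing I keep reminding myself: work_mem is allocated per operation, not per connection, so actual usage under load can overshoot the nominal 1.5GB even with a “correct” config.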
This issue occurred a few weeks ago as well, even when we had very little traffic. We still have peanuts compared with other instances.
Oh, and for completeness:
We’ve deleted the vast majority of the spam bots that spammed our instance, are currently on closed registration with applications, and have had no anomalous activity since.
Our server is essentially always at 50% memory (1GB/2GB), 10% CPU (2 vCPUs), and 30% disk (15-20GB/60GB) until a spike. Disk utilization does not change during a spike.
Our instance is relatively quiet, and we probably have no more than ten truly active users at this point. We have a potential uptick in membership, but this is still relatively slow and negligible.
This issue has happened before, but I assumed it was fixed when I changed the PostgreSQL configuration to use less RAM. That said, this is the longest lead-up time yet before the spikes started.
When the spike resolves itself, the instance works as expected. The service interruptions seem to stem from a drastic increase in resource utilization, which could be caused by some software component I’m not aware of. I used the Ansible install for Lemmy and have only modified certain configuration files as required: for the most part, I’ve only raised client_max_body_size in the nginx configs for larger images and added settings for an SMTP relay to the main config.hjson file. The spikes occurred before these changes, which leads me to believe they’re caused by something I haven’t yet explored.
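In case anyone wants to compare notes, those two tweaks look roughly like this (values are illustrative, and the hjson field names are from Lemmy’s example config, going off memory):

```
# nginx: inside the relevant server/location block
client_max_body_size 20M;
```

```
# config.hjson: SMTP relay settings
email: {
  smtp_server: "smtp.example.com:587"
  smtp_login: "lemmy"
  smtp_password: "CHANGEME"
  smtp_from_address: "noreply@example.com"
  tls_type: "starttls"
}
```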
These issues occurred on both 0.17.4 and 0.18.0, which seems to indicate it’s not a new problem stemming from a recent source code change.
I had no idea FOSS tax software was a thing. Huh. I’ll try and play around with it at some point and let you know.