• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • marmarama@lemmy.world to DevOps@programming.dev · "DSLs are a waste of time" · 1 year ago

    Pulumi code ends up looking like a DSL anyway, given how much of the top-level pulumi package you have to pull in to do anything vaguely complicated.

    Only now, compared with Terraform, you need to worry about resource ordering and program flow, because when you have a dependency between resources, the resource object you depend on has to be instantiated (within the program flow, I mean - Pulumi handles calculating the ordering of actual cloud resource creation) before the dependent resource. This gets old really quickly if you’re iterating on a module that creates more than a few interdependent resources. So much cut, paste, reorder. FWIW CDK has the same issue, and for the same reason - because it’s using a general-purpose programming language to model a domain which it doesn’t fit all that well.
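
    To make the ordering point concrete, here’s a minimal Python sketch using the pulumi_aws provider (resource names are just placeholders): the BucketObject references bucket.id, so the Bucket has to be constructed earlier in the program flow, even though Pulumi works out the actual cloud-side ordering itself.

    ```python
    import pulumi
    import pulumi_aws as aws

    # This has to come first in the program flow, because the object below
    # references bucket.id. Move it after the BucketObject and the program
    # fails, even though Pulumi itself knows how to order the cloud calls.
    bucket = aws.s3.Bucket("logs")

    readme = aws.s3.BucketObject(
        "readme",
        bucket=bucket.id,   # implicit dependency via the Output
        content="hello",
    )

    pulumi.export("bucket_name", bucket.id)
    ```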

    I like Pulumi and it’s got a lot going for it, especially if you have complex infrastructure requirements. You get a bunch of little quality of life enhancements that I wish Terraform would adopt, like cloud state management by default, and a built-in mechanism for managing secrets in a sane way. Python/TypeScript etc. modules are much more flexible than Terraform modules, and really help with building large chunks of reusable infrastructure. The extra programmability can be useful, though you need to be extra-careful of side-effects. You get more power, but you also get some extra work.

    But for most people deploying a bit - or even quite a lot - of cloud infrastructure, Terraform is honestly just easier. It’s usually some fairly simple declarative config with some values passed from one resource to another, and a small amount of variation that might require some limited programmability. Which is exactly what Terraform targets with HCL. It’s clear to me that Pulumi sees this too, since they introduced the YAML syntax later on. But IMO HCL > YAML for declarative config.


  • Do we have to bring this up again? It’s just boring.

    systemd is here and it isn’t going anywhere soon. It’s an improvement over SysV, but the core init system is arguably less well-designed than some of the other options that were on the table 10 years ago when its adoption started. The systemd userspace ecosystem has significantly stifled development of alternatives that provide equivalent functionality, which has led to less experimentation and innovation in those areas. In many cases those systemd add-on services provide less functionality than what they have replaced, but are adopted simply because they are part of the systemd ecosystem. The core unit file format is verbose and somewhat awkward, and the *ctl utilities are messy and sometimes unfriendly.

    Like most Red Hat-originated software written in the last 15 years, it valiantly attempts to solve real problems with Linux, and mostly achieves that, but there are enough corner cases and short-sighted design decisions that it ends up being mediocre and somewhat annoying.

    Personally I hope that someone comes along and takes the lessons learned and rewrites it, much like Pulseaudio has been replaced by Pipewire. Perhaps if someone decides it needs rewriting in Rust?


  • The WiFi card is probably a Realtek 8852AE, which has become very common in laptops since 2021. Unfortunately Realtek driver support tends to lag quite a bit.

    If you want to run Ubuntu Desktop 22.04, then you’re probably best off waiting a few weeks for the Ubuntu Desktop 22.04.4 point release. It’s due sometime this month. It will boot and install an “HWE” (Hardware Enablement) kernel and drivers based on the kernel from Ubuntu 23.04, and therefore should work out of the box with your WiFi card.

    While it’s possible to upgrade an existing Ubuntu 22.04 installation with the latest HWE kernel, doing it by downloading the relevant packages on another machine and moving them across using a USB stick is going to be somewhat frustrating if you’ve not done it before. You’ll certainly learn a few things, but it may not be an enjoyable experience. I’m a grizzled Linux veteran, and I’m pretty sure I’d end up forgetting to download one or more packages and having to swap back and forth between machines.

    In the meantime, I would just continue to use Ubuntu 23.04. In fact, if it were me, I would probably just stick with 23.04, then upgrade to 23.10 and subsequently 24.04 when they become available. What you do once you’re on the 24.04 LTS release is up to you. By that time, other distros will probably work out of the box too.



  • Apple users have been sending text messages interchangeably between their phones and computers/tablets for years.

    As have Android users. Microsoft Phone Link/Your Phone Companion and KDE Connect have supported this for years on their respective PC platforms. The Phone Link Android app is even preinstalled on Samsung devices. There’s a teensy bit of setup but nothing complicated. KDE Connect even supports stuff like using the phone as a touchpad, remote keyboard, or media/presentation controller.

    If your PC is a Chromebook then you don’t even need these. If you sign into the phone and Chromebook with the same Google account, the integration just works, much as it does on Apple devices.

    Most of your arguments can be boiled down to “everything is really slick if you use an all-Apple ecosystem”. Which is fine, but the same can be said about Android - if you use an all-Google ecosystem with Pixels, Chromebooks and Google Workspace then most, if not all of your complaints about Android go away. Pixel Android is more consistent and less buggy than most vendor versions of Android. Integration with Chromebooks works out of the box. Google Workspace MDM is simple and straightforward, and you don’t really need to buy a separate MDM solution.

    The difference is that Android at least makes a decent effort to cater for a heterogeneous ecosystem. With Apple, if you’re not entirely onboard with an all-Apple ecosystem then it starts getting messy quickly.


  • At least for me, there is a big difference between naming things at home and naming things for work.

    Work “pet” machines get systematic names based on function, location, ownership and/or serial/asset numbers. There aren’t very many of them these days. If they are “cattle” then they get random names, and their build is ephemeral. If they go wrong or need an upgrade, they get rebuilt and their replacement build gets a new random name. Whether they are pets or cattle, the hostnames are secondary to tags and other metadata, and in most cases the tags are used to identify the machines in the first instance, because tags are far more flexible and descriptive than a hostname.

    At home, where the number of machines is limited, I know all of them like the back of my hand, and it’s mostly just me touching them, so whimsical names are where it’s at.


  • marmarama@lemmy.world to Selfhosted@lemmy.world · "What is your machine naming scheme?" · 1 year ago

    Ungulates. Because who doesn’t like a hoofed animal?

    My client machines are even-toed ungulates (order Artiodactyla) and my servers/IoT machines are odd-toed (order Perissodactyla). I’m typing this on Gazelle. My router is called Quagga, both after the extinct zebra subspecies and the routing protocol software (I don’t use it any more but hey, it’s a router).

    Biological taxonomy is a great source of a huge number of systematic (and colloquial) names.


  • I could well be wrong about the AAC passthrough, and I should have hedged that statement with “allegedly” as I’ve not tested it myself.

    To your other point though, I disagree - there are plenty of ways you could pass through an unchanged AAC bitstream, but still mix in other sounds when required. For example, having the sender duck the original bitstream out temporarily and send a mixed replacement bitstream while the other sound is playing. Or (and this would only work if you control the firmware on the receiver, but if you’re using Apple headphones with an Apple device, that’s not a problem) sending multiple bitstreams to the receiver and letting the receiver mix them.
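
    As a toy illustration of the first option (all names here are hypothetical, not anything Apple actually ships): the sender passes the original encoded frames through untouched, and only swaps in mixed, re-encoded frames while the overlay sound is active.

    ```python
    def frames_to_send(music_frames, mixed_frames, overlay_active):
        """Toy sketch of the 'duck and substitute' idea: forward the original
        (already-encoded) frames untouched, and only while an overlay sound is
        playing, substitute frames the sender decoded, mixed and re-encoded."""
        for original, mixed, overlay in zip(music_frames, mixed_frames, overlay_active):
            yield mixed if overlay else original

    # e.g. frames 2-3 carry a notification mixed over the music
    print(list(frames_to_send(["m0", "m1", "m2", "m3"],
                              ["x0", "x1", "x2", "x3"],
                              [False, False, True, True])))
    # -> ['m0', 'm1', 'x2', 'x3']
    ```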


  • I can only comment on my experience with my own equipment and ears, but in my experience, 990Kbps LDAC is noticeably more transparent than 256Kbps AAC for Bluetooth audio.

    I can fairly reliably guess whether or not I remembered to switch my Sony XM4s out of multipoint mode the last time I used them (when in multipoint pairing mode LDAC is not supported and 256Kbps AAC is usually what gets negotiated). The difference is small, but over a few minutes of listening, the sonic signature when it’s using AAC is just a little bit “off” and my ears don’t like it as much.

    Could I ABX the difference using the usual ABX setup with short samples of music I’m not familiar with? Probably not. Can I tell the difference over an extended period using music I know well, and that I often listen to uncompressed? Yes, pretty easily.

    LDAC is not a particularly sophisticated codec, but it doesn’t have to be when it has a 990Kbps bitrate. It’s also possible that the FDK-AAC codec that I think both Pipewire and Android use for real-time AAC encoding is not the best tuned for 256Kbps CBR. AIUI in 256Kbps CBR mode, FDK-AAC has a hard low-pass filter at 17KHz, and I can still hear above 17KHz.
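
    If you want to sanity-check that last claim on your own ears and equipment, here’s a quick, hypothetical self-test: generate a short 18 kHz tone (just above the alleged 17 kHz cutoff) and see whether you can hear it at all.

    ```python
    import wave
    import numpy as np

    # Write a 3-second 18 kHz sine at moderate level to a mono 16-bit WAV.
    rate, secs, freq = 44100, 3, 18000
    t = np.arange(rate * secs) / rate
    samples = (0.3 * np.sin(2 * np.pi * freq * t) * 32767).astype(np.int16)

    with wave.open("tone_18khz.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)      # 16-bit samples
        f.setframerate(rate)
        f.writeframes(samples.tobytes())
    ```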


  • Yeah, I agree.

    I bought them for their noise cancelling primarily, and they’re excellent at that, but otherwise they’re not great. The un-EQed frequency response is terrible for headphones in their price range: flabby, wildly over-exaggerated bass and no mids at all. Running without EQ I can barely hear lyrics - every singer sounds like they’re mumbling underwater. I’ve had $20 IEMs with better tonal balance. They respond well to EQ but the on-board EQ doesn’t have enough frequency bands to even come close to fixing them. Wavelet on Android doing EQ duty makes them listenable. Even when you do EQ them properly, they still sound a bit dull and lifeless.
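
    For anyone wondering what “EQ duty” amounts to, a single parametric band looks roughly like this (standard RBJ Audio EQ Cookbook peaking filter; the frequency, gain and Q values here are made-up examples, not my actual correction curve):

    ```python
    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(fs, f0, gain_db, q):
        """One parametric EQ band (RBJ Audio EQ Cookbook peaking filter).
        Returns normalized (b, a) biquad coefficients."""
        a_lin = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
        a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
        return b / a[0], a / a[0]

    # e.g. pull ~6 dB out of a bass hump around 120 Hz
    b, a = peaking_eq(fs=44100, f0=120, gain_db=-6, q=0.9)
    audio = np.random.randn(44100)       # stand-in for real samples
    corrected = lfilter(b, a, audio)
    ```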

    No idea how they got so much praise when they were launched. The power of marketing budgets I guess. For a while I was gaslighting myself thinking I had a faulty pair or maybe there was something going wrong with my hearing, but having heard another pair, and doing comparisons with my other headphones - most of which are far cheaper - I realised that no, they’re just not very good as headphones.


  • It is worse than uncompressed, but 990Kbps LDAC is the closest codec to totally transparent I’ve heard for Bluetooth audio. AptX HD is nearly as good to my ears, and is better than 660Kbps LDAC. The differences are very small though, especially when compared with the differences on the analog side, e.g. the amp, and particularly the headphone design.

    Apple side-steps the problem by, at least when you’re listening to Apple Music, simply sending the AAC stream as-is to the headphones and having them decode the audio. I don’t know why that isn’t a more common approach.

    I’m still somewhat bemused that we’re talking about Bluetooth codecs at all. It surely can’t be that difficult technically to get 1.5Mbps of actual throughput on Bluetooth and simply send raw 16-bit/44.1kHz PCM. 2.4GHz WiFi is capable of hundreds of times that speed. Bluetooth has been stuck at the same speeds for decades.
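
    The back-of-envelope maths for that claim:

    ```python
    # Raw 16-bit/44.1 kHz stereo PCM needs about 1.4 Mbps, so ~1.5 Mbps of
    # real Bluetooth throughput would cover it with headroom to spare.
    bits_per_sample, sample_rate, channels = 16, 44_100, 2
    bitrate_bps = bits_per_sample * sample_rate * channels
    print(f"{bitrate_bps / 1_000_000:.4f} Mbps")   # 1.4112 Mbps
    ```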


    I have a Radsone ES100 Bluetooth DAC/headphone amp, which supports LDAC and multipoint, and doesn’t compromise the LDAC bitrate when multipoint is enabled. You can even leave it plugged in as a USB DAC and still use multipoint BT with LDAC, and it switches smoothly between sources depending on which device most recently started playing a stream.

    I was distinctly underwhelmed by the BT implementation when I got my Sony XM4s; it’s kinda weak by comparison.




  • Converted-to-Bluetooth Stadia controller.

    It’s actually a really nice controller. The ergonomics are great for my big meaty hands, it’s got some weight to it, and it feels really solidly built. The heft means the vibration really has some kick to it. The battery life is really good too - it was specced for having Wi-Fi on all the time, so now that it’s only running a little Bluetooth LE radio, the battery seems massive. Even when it runs down, the charge rate is quick - full in about half an hour, and then good to go for weeks. Again, probably because it was specced for Wi-Fi, the radio circuitry is way above average and the range is stupid - I can control a Steam Deck from two rooms away, through two solid brick walls, something none of my other controllers can do.

    The sticks are accurate and don’t drift, the buttons are pretty good, and the D-pad is a bit stiff but perfectly serviceable. My one significant complaint is that the springback on the triggers is way too light, which makes it hard to be subtle with them - a little annoying for driving games.

    Still, if you see one at a sensible price, they’re a steal.


  • Nvidia drivers have (slightly) more timely support for the latest cards, and more mature support for non-3D uses of the GPU, especially scientific computing. To a large extent they are the same code as the Windows drivers, and that has positives in terms of breadth and maturity of support.

    For everything else, the AMD drivers are better. Because they are a separate codebase from the Windows drivers, and are part of Mesa, the de-facto Linux GPU driver stack, they integrate much better into the overall Linux experience, especially around support for Wayland. Unless you have an absolutely bleeding-edge card, they “just work” more often than the Nvidia drivers. If you like doing serious tinkering on your Linux system, then the AMD drivers being fully integrated and having the source available is a major win. Also, it used to be that the Nvidia drivers did a much better job of squeezing performance out of the hardware, but today there’s very little in it, and the AMD drivers might even be a little more efficient.

    I’ve got both AMD and Nvidia GPUs currently in different machines, and I much prefer the Linux experience with AMD. I don’t think I’ll be buying another Nvidia GPU unless the driver situation changes significantly.

    FWIW I don’t stream so I can’t comment on the exact situation, but I have used the video encode hardware on AMD cards via VAAPI and it was competent and much faster than x264/x265 on the CPU. I think OBS has a plugin to use VAAPI (which is the “standard” Linux video decode/encode acceleration interface that everyone but Nvidia supports).
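
    For reference, a VAAPI-accelerated encode with ffmpeg looks roughly like this (a sketch only, not the OBS plugin setup - it assumes an ffmpeg build with VAAPI support and that the render node is /dev/dri/renderD128, which may differ on your system):

    ```python
    import subprocess

    # Transcode a file using the GPU's VAAPI H.264 encoder instead of x264 on the CPU.
    subprocess.run([
        "ffmpeg",
        "-vaapi_device", "/dev/dri/renderD128",
        "-i", "input.mkv",
        "-vf", "format=nv12,hwupload",   # convert and upload frames to the GPU
        "-c:v", "h264_vaapi",
        "output.mp4",
    ], check=True)
    ```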