• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • The problem with this question is your friends: if whatever you decide on isn’t something your friends have or are willing to get, then it isn’t useful to you. Signal offers probably the best mix of adoption and security, but it’s missing a few notable features; for example, the iOS client has no way to back up or restore your messages. I’m a big fan of Matrix, which is very extensible and has good security, but in a sensitive situation, like living under an authoritarian government, it wouldn’t be my choice. All the messages are stored on the server, and while they are encrypted, it’s still not what I would use for a chat I never want to see in court.



  • SirEDCaLot@lemmy.fmhy.ml to Selfhosted@lemmy.world · Synology vs DIY

    Honestly, I think you’ll be happy either way. Synology is very, very good at some things, and its software makes it easy and approachable to spin up a lot of private-cloud-type stuff without much technical messing around. That said, you will get more hardware and performance for your dollar with a PC server. You can go the DIY route, or, if you don’t mind a little more power consumption and want more performance, buy a used Dell PowerEdge on eBay. The real value you get from Synology is their software: their photo app is very wife-friendly, and I don’t think you’ll find any serious restrictions, since you get full root SSH access into the box.

    So I guess my suggestion would be to evaluate the photo management in TrueNAS versus Synology. You can spin up a TrueNAS virtual machine on your desktop and play with it if you want. The only other gotcha is that if you want Plex to do transcoding, you definitely want the PC, because you can throw in a GPU and accelerate that a lot.

    //edit- one other thing to mention is backups. Synology has GREAT backup software, and it’s free. Active Backup for Business will back up your desktop/laptop, versioned and deduplicated, very efficiently. And Hyper Backup will back up your Synology itself (or some parts of it) to the cloud, optionally with client-side encryption. I suggest Wasabi as the backend for that; it’s only like $7/TB/mo. Or just get another Synology, put it at the house of someone you know, and you have an instant offsite backup with no recurring cost.



  • From the bottom up…

    Whatever you say asshole.
    A moron like you has no idea on how arguments should work.
    Your self righteous infographic is just arrogant.
    I know how to argue far better than you do.
    I get in many arguments and I almost always win them.
    You talk about disagreement, but your pyramid only works when both people are arguing in good faith.
    You say that attacking the central point of an argument is the most effective, but often the stated central point is not the central point at all, especially with emotion based positions. For example, a more conservative person arguing against liberal changes will state specific objections to these changes, but arguing those objections is futile if the real underlying objection is simple fear of change.


    Jokes aside, this pyramid is right on the money.


  • I love this whole cyberdeck thing.

    I remember back in the early 2000s, there was a lot more innovation in portable devices. There were gadgets that sort of resembled modern smartphones, just clunkier (the iPaq); ones with keyboards below the screen (BlackBerry); ones with slide-out keyboards (HTC and others); ones that flipped open like laptops but could fit in your pocket (the HP Jornada); etc.

    Somewhere along the line all that innovation went out the window and now every single phone or gadget looks more or less exactly the same. Like take the top 10 or 15 smartphones, debrand them, and put them in a box, and 99% of people couldn’t tell the hardware apart.

    You would think there would be a market for some level of variation, even just one company making its phone 5 mm thicker so the battery lasts three days. But we don’t even see that.

    Foldable screens seem to be spurring a little innovation, so I have hopes. But until then, I would love to see some of these cyberdeck designs put into production. I would happily pay a couple hundred bucks for a Raspberry Pi equivalent of a Jornada 720 (as long as the keyboard is touch-typeable like the old one).





  • While it has its benefits, is it suitable for vehicles, particularly their safety systems? It isn’t clear to me, as it is a double-edged sword.

    Perhaps, but if you are developing a tech that can save lives, doesn’t it make sense to put that out in more cars faster?

    I would be angry that such a modern car with any form of self driving doesn’t have emergency braking. Though, that would require additional sensors…

    Tesla does this with cameras whether you pay for FSD or not. It can also detect if you’re near an object and slam on the gas instead of the brake, and it will cancel that out. These are options you can turn off if you don’t want them.

    I’d also be angry that L2 systems were allowed in that environment in the first place, but as you say, it is ultimately the driver’s fault.

    I’m saying- imagine if the car has L2 self driving, and the driver had that feature turned off. The human was driving the car. The human didn’t react quickly enough to prevent hitting your loved one, but the computer would have.
    Most of the conversation around FSD type tech revolves around what happens when it does something wrong that the human would have done right. But as the tech improves, we will get to the point where the tech makes fewer mistakes than the human. And then this conversation reverses- rather than ‘why did the human let the machine do something bad’ it becomes ‘why did the machine let the human do something bad’.

    I would hope that the manufacturer would make it difficult to use L2 outside of motorway driving.

    Why? Tesla’s FSD beta L2 is great. It’s not perfect, but it does a very good job for most parts of driving on surface streets.

    I would prefer they had no self driving rather than be under the mistaken impression the car could drive for them in the current configuration. The limitations of self driving (in any car) are often not clear to a lot of people and can vary greatly.

    This is valid. I think the name ‘full self driving’ is somewhat problematic. I think it will get to the point of actually being fully self-driving, and I think it will get there soon (in the next year or two). But they’ve been using that term for several years now, and the first few versions of ‘FSD’ especially were anything but. And before they started with driver monitoring, there were a bunch of people who bought ‘FSD’ and trusted it a lot more than they should have.

    If Tesla offers a halfway option for less money, would you not expect the consumer to take the cheapest option? If they have an accident, it is more likely someone else is injured, so why pay more to improve the self driving when it doesn’t affect them?

    That’s not how their pricing works. The safety features are always there. The hardware is always there. It’s just a function of what software you get. And if you don’t buy FSD when you buy the car, you can buy it later and it will be unlocked over the air.
    What you get is extra functionality. There is no ‘my car ran over a little kid on a bike because I didn’t pay for the extra safety package’. It’s ‘my car won’t drive itself because I didn’t pay for that, I just get a smart cruise control’.

    Tesla is the only company I know of steadfastly refusing to use any other sensor types, and the only reason I see is price.

    Price, yes, and the difficulty of integrating different data sets. On their higher-end cars they’ve re-introduced a high-resolution radar unit; I haven’t seen much on how that’s being used, though.
    Tesla’s basic answer is that they can get where they need to be with cameras alone because their software is better than everyone else’s. For any other automaker that doesn’t have Tesla’s AI systems, LiDAR is important.

    Another concern is that any Tesla incidents, however rare, could do huge damage to people’s perception of self driving.

    This already happens whether the computer is driving or not. Lots of people don’t understand Teslas and think that if you buy one, it’ll drive you into a brick wall and then catch on fire while you’re locked inside. Bad journalists will always put out bad journalism. That’s not a reason to stop tech progress, though.

    If Tesla is much cheaper than LiDAR-equipped vehicles, will this kill a better/safer product, à la Betamax?

    Right now FSD isn’t a main selling point for most drivers. I’d argue that what might kill others is not that Tesla’s system is cheaper, but that it works better and more of the time. Ford and GM both have a self driving system, but it only works on certain highways that have been mapped with centimeter-level LiDAR ahead of time. Tesla has a system they’re trying to make general purpose, so it can drive on any road. So if the Tesla system takes you driveway-to-driveway and the competition takes you onramp-to-offramp, the Tesla system is more flexible and thus more valuable regardless of the purchase price.

    Do you pick your airline based on the plane they fly and its safety record, or the price of the ticket, being confident all aviation is held to rigorous safety standards? As has been seen recently with a certain submarine, safety measures should not be taken lightly.

    I agree standards should apply; that’s why Tesla isn’t L3+ certified, even though on the highway I really think it’s ready for it.


  • Not sure of the exact details. I heard they were sampling 10 bits per pixel, but a bunch of their release notes talked about photon-count detection back when they switched to that system.
    Given that the HW3 cameras started out being used just to generate RGB images, I suspect the current iteration works by pulling RAW-format frames, interpreting them as a photon-count grid, and from there detecting edges and geometry with the occupancy network.

    I’ve not seen much published by Tesla on the subject. I suspect they’re keeping most of their research hush-hush to get a leg up on the competition. They share everything regarding EV tech because they want to push the industry in that direction, but I think they see FSD as their secret sauce: they might sell hardware kits, but they won’t let others too far under the hood.


  • In our town, a Tesla shot through red traffic lights near our local school, barely missing a child crossing the road. The driver was looking at their lap (presumably at their phone). I looked online, and apparently Autopilot doesn’t work with traffic lights, but FSD does?

    There are a few versions of this, and several hardware generations with different capabilities. The early Tesla Autopilot had no recognition of stop signs; it was literally just ‘cruise control that keeps you in your lane’. FSD for sure does recognize stop signs, traffic lights, etc., and reacts correctly to them. I BELIEVE the current iteration of Traffic-Aware Cruise Control (what you get if you don’t pay extra for FSD or Enhanced Autopilot) will stop for traffic lights, but I could be wrong on that. I know it detects pedestrians, but its detection isn’t nearly as advanced as FSD’s.

    I will give you that, in theory, the time-of-flight data from a LiDAR pulse will give you a more reliable point cloud than anything you’d get from cameras. But I also know Tesla is doing things with cameras that border on black magic. They gave up on getting images out of the cameras and are now just using the raw photon-count data from the sensor, and once trained, the AI can apparently detect edges with only a few photons of difference between pixels (below the noise floor). And I can say from experience that a few times I’ve been in blackout rainstorms where even with full wipers I can barely see anything, and the FSD visualization doesn’t skip a beat; it sees other cars before I do.

    Would you still feel the same about Tesla if your car injured/killed someone or if someone you care about was injured/killed by a Tesla?

    As a Level 2 system, the Tesla is not capable of injuring or killing someone. The driver is responsible for that.

    But I’d ask- if a Tesla saw YOUR loved one in the road, and it would have reacted but it wasn’t in FSD mode and the human driver reacted too slowly, how would you feel about that? I say this not to be contrarian, but because we really are approaching the point where the car has better situational awareness than the human.

    If we can put extra sensors in and it objectively makes it safer why don’t we? Self driving cars are a luxury.

    For the reason above with the loved one. If you can use cameras and make a system that costs the manufacturer $3,000/car and is 50 times safer than a human, or use LiDAR for a system that costs $10,000/car and is 100 times safer than a human, which is safer?
    The answer is the cameras, because they will be on more cars and thus deliver more overall safety.
    I understand the thinking that ‘Elon cheaped out; Tesla FSD is a hack system on shitty hardware that uses clever programming to work around a cut-rate sensor suite’. But I’d also argue: if they can get similar performance out of a camera and put it on more cars, doesn’t that do more to improve safety overall?

    In the example above, if the car didn’t have the self-driving package because the owner couldn’t afford it, wouldn’t you prefer that a decent, better-than-human self-driving system was on the car anyway?
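    The fleet-wide arithmetic behind that claim can be made explicit. A toy calculation (every figure below, including the adoption numbers, is invented purely for illustration, not a real Tesla or LiDAR statistic):

    ```python
    # Toy model: fleet-wide safety depends on both per-car safety and adoption.
    # All numbers are illustrative, not real industry figures.
    baseline_crash_rate = 1.0  # crashes per car per unit time (normalized)

    def crashes_averted(cars_equipped: int, safety_multiplier: float) -> float:
        """Crashes averted versus unassisted human drivers, across the fleet."""
        human_crashes = cars_equipped * baseline_crash_rate
        assisted_crashes = cars_equipped * baseline_crash_rate / safety_multiplier
        return human_crashes - assisted_crashes

    # The cheaper camera system ends up on far more cars than the pricier LiDAR one.
    camera = crashes_averted(cars_equipped=1_000_000, safety_multiplier=50)   # $3,000/car
    lidar = crashes_averted(cars_equipped=300_000, safety_multiplier=100)     # $10,000/car

    print(f"camera: {camera:,.0f} crashes averted")  # 980,000
    print(f"lidar:  {lidar:,.0f} crashes averted")   # 297,000
    ```

    Under these made-up assumptions, the 'worse' per-car system averts over three times as many crashes, because adoption dominates.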


  • I don’t have the paper; my info comes mainly from various interviews with the people involved. Elon, of course, and Andrej Karpathy (he was in charge of their AI program for some time).

    They apparently used to use feature detection and object recognition on RGB images, then gave up on that (generating coherent RGB images just adds latency, and object recognition was too inflexible); they now feed raw photon-count data from the sensor directly into the neural nets that generate the 3D model. Once trained, this apparently can do some insane stuff, like pulling edge data out from below the noise floor.
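    The 'below the noise floor' part is less magic than it sounds; it's the standard statistics of combining many noisy observations, where noise shrinks by 1/sqrt(N). A toy NumPy sketch (all values are mine for illustration; this is not Tesla's pipeline):

    ```python
    import numpy as np

    # Toy demo: a 2-photon edge buried under noise of sigma = 10 is invisible
    # in any single frame, but averaging many frames shrinks the noise.
    # Every number here is illustrative only.
    rng = np.random.default_rng(0)

    width = 64
    edge = np.where(np.arange(width) < width // 2, 0.0, 2.0)  # 2-photon step
    noise_sigma = 10.0  # per-pixel noise, 5x the edge amplitude

    single_frame = edge + rng.normal(0, noise_sigma, width)
    stacked = (edge + rng.normal(0, noise_sigma, (1000, width))).mean(axis=0)

    def step_contrast(frame):
        """Right-half mean minus left-half mean: estimates the edge height."""
        return frame[width // 2:].mean() - frame[:width // 2].mean()

    print(step_contrast(single_frame))  # unreliable: noise on this estimate is ~2.5
    print(step_contrast(stacked))       # close to the true step of 2.0
    ```

    A learned model doing this over video frames is far more sophisticated, but the underlying reason sub-noise edges are recoverable at all is the same averaging effect.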

    This may be of interest– it’s also from 2 years ago, before Tesla switched to occupancy networks everywhere. I’d say that’s a pretty good equivalent of a LiDAR scan…


  • Or maybe power grids are teetering because utilities raked in profit for the last two decades by ignoring upgrades that would obviously be necessary… Just a thought :)

    My utility sells $400 Wi-Fi touchscreen thermostats for like $25, the catch being that you let them turn your AC down or off when grid load peaks. A few truckloads of thermostats are cheaper than grid upgrades, so they do the thermostats and kick the can down the road some more.
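    The economics work out roughly like this; every number below is invented to show the shape of the trade-off, not my utility's actual costs:

    ```python
    # Back-of-envelope sketch of why utilities subsidize smart thermostats.
    # All figures are made-up illustrative values.
    subsidy_per_thermostat = 400 - 25  # utility eats the price difference
    kw_shed_per_home = 1.0             # AC load it can turn off at peak
    upgrade_cost_per_kw = 1_000        # cost of building equivalent peak capacity

    cost_per_kw_via_thermostats = subsidy_per_thermostat / kw_shed_per_home
    print(cost_per_kw_via_thermostats)  # 375.0
    print(upgrade_cost_per_kw)          # 1000
    # Shedding a kW of demand costs roughly a third of building a kW of supply.
    ```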




  • My point stands: drive the car.
    You’re 100% right in everything you say. It has to work 100% of the time; good enough most of the time won’t get to L3–L5 self driving.

    Camera-only isn’t authorized in most logistics operations in factories; I’m not sure what changes for a car.

    The question is not the camera; it’s what you do with the data that comes off the camera.
    The first few versions of camera-based Autopilot sucked. They were notably inferior to their radar-based equivalents, because they were running neural-network image recognition on each camera separately. The system would take a picture from one camera, say ‘that looks like a car, and it looks like it’s about 20 feet away’, and repeat this for each frame from each camera. That sort of worked OK most of the time, but it got confused a lot. It would also ignore any image it couldn’t classify, which was no good, because lots of ‘odd’ things can threaten the car. This setup would never get to L3 quality or reliability. It did tons of stupid shit all the time.

    What they do now is called an occupancy network: video from ALL the cameras is fed into one neural network that understands the geometry of the car and where the cameras are. Using multiple frames of video from multiple cameras at once, it generates a 3D model of the world around the car and identifies objects in it: what is road, what is curb and sidewalk, where other vehicles and pedestrians are (and where they are moving and likely to move). That data is fed to a planner AI that decides where the car should accelerate, brake, or turn.
    Because the occupancy network generates a 3D model, you get data equivalent to LiDAR’s (a 3D model of space) with much less cost and complexity. And because you only have one sensor type, you don’t have to do sensor fusion to resolve discrepancies between different sensors.
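    The difference between per-camera recognition and fusing everything into one shared grid can be sketched in a few lines (a toy illustration only: the grid size, camera count, and trivial voting rule are all invented, and the real occupancy network is a large learned model over raw video):

    ```python
    import numpy as np

    # Toy sketch of multi-camera fusion into one occupancy grid.
    # All sizes and rules are illustrative, not Tesla's architecture.
    GRID = (10, 10)   # bird's-eye grid of cells around the car
    N_CAMERAS = 4

    true_obstacle = np.zeros(GRID, dtype=bool)
    true_obstacle[3, 7] = True  # one object, off to the car's right

    def camera_view(truth, cam_idx):
        """Each camera covers only part of the scene (a fixed blind half)."""
        view = truth.copy()
        if cam_idx % 2 == 0:
            view[:, :5] = False  # even cameras can't see the left half
        else:
            view[:, 5:] = False  # odd cameras can't see the right half
        return view

    frames = [camera_view(true_obstacle, i) for i in range(N_CAMERAS)]

    def occupancy_from_frames(frames):
        """Stand-in for the occupancy network: fuse ALL views into one grid,
        instead of classifying objects per camera and merging afterwards."""
        return np.any(frames, axis=0)

    grid = occupancy_from_frames(frames)
    print(grid[3, 7])  # True: the fused grid keeps the obstacle even though
                       # half the cameras never saw it
    ```

    The point of the sketch is only the data flow: every camera's view goes into one function that outputs a single shared model of space, rather than per-camera detections that have to be reconciled afterwards.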

    I drive a Tesla, and I’m telling you from experience: it DOES work. The latest betas of the Full Self-Driving software are very, very good. On the highway, the computer is a better driver than me in most situations. And on local roads, it navigates near-perfectly; the only thing it sometimes has trouble with is figuring out when it’s its turn at an intersection (you have to push the gas pedal to force it to go).

    I’d say it’s easily at an L3+ level for highway driving. It’s not there yet for local roads, but it gets better with every release.