OMG fucking techbros. Yes, technology can be useful for many a thing: it can both alleviate social issues by providing legit wealth and shape society through its own shape (e.g. the interactions a social network makes possible and encourages).
It won’t, however, and it can’t, bring about utopia for us. To shape technology such that it shapes us in beneficial ways, we’d have to fucking know what we want, and at that point we wouldn’t have the issue in the first place. Society, as a superorganism, will have to understand human nature first.
Go outside. Talk to people in the real world. Use the faculties nature has given to you to fix shit: your body, your mind, both reason and instinct. Don’t pray to some technological spectre to deliver us from the evil you’re displacing onto it.
Go outside. Talk to people in the real world. Use the faculties nature has given to you […]
Look, I don’t want to pull the faculties card, so I’ll tell you a very easy thing to do: go to your city hall, or whatever public place you have with ALL THE LAWS applicable to you personally, and read them ALL. Just once, no more.
Then go outside, and find a single person who has done the same, with whom you can have even a remote chance of talking about the real and full consequences of any single law change proposal.
If you do that… congratulations, you’re better than a whole law firm with a hundred lawyers put together. And congratulate the other person for being one of the only two people in the whole world who can do it too.
For the rest of us, having an AI read all that stuff and then compare whatever we think we want with what the result of changing even a couple of words would be, much less 50 or 200 pages of amendments, is not about bringing some “utopia”… it’s about having a fighting chance of not stepping on a landmine in a quagmire at night in the middle of a tornado.
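To be concrete about what I mean by “compare”: something like the sketch below. It’s purely illustrative — `ask_llm` is a placeholder for whatever model API you’d actually point at this, and the file names are made up — but it’s the shape of the workflow: diff the current statute against the amended version, then ask the model to reason about just the changes in light of what you say you want.

```python
import difflib
from pathlib import Path


def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM API you actually use."""
    raise NotImplementedError("wire up your model of choice here")


def analyse_amendment(current_path: str, amended_path: str, goals: str) -> str:
    """Summarise what a proposed amendment actually changes, relative to goals."""
    current = Path(current_path).read_text().splitlines()
    amended = Path(amended_path).read_text().splitlines()
    # Reduce the two statute versions to just the changed passages, so the
    # model reasons about the actual edits instead of 200 pages of context.
    diff = "\n".join(difflib.unified_diff(current, amended, lineterm=""))
    prompt = (
        "Below is a diff of a proposed amendment to a statute.\n\n"
        f"{diff}\n\n"
        f"My goals: {goals}\n\n"
        "Explain, change by change, what the amendment actually does and "
        "whether it moves toward or against my goals."
    )
    return ask_llm(prompt)


# Usage (file names and goals are made-up examples):
# analyse_amendment("zoning_current.txt", "zoning_proposed.txt",
#                   "affordable housing near the new tram stops")
```

None of that requires the model to be right about everything; it just has to be better than me reading zero pages of statute, which is the actual baseline.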
Right now, we have to pray to a party, a bunch of representatives, all their staff, their lobbyists, and several law firms. I’d rather pray to a single “technological spectre” that I could turn off and on again, as many times as I wanted.
An AI able to do that kind of analysis would be an AGI. Also: garbage in, garbage out. Without knowledge of the system, you cannot know what you actually want.
Let’s take NIMBYs as an example: a municipality wants to drop parking minimums, fund public transport, and start a couple of medium-density housing/commercial developments around new tram stops in the suburbs. The goals: fix its own finances (no longer subsidising infrastructure in low-density areas with taxes from high-density land), save money for suburbanites (cars are expensive, and those tram stops are at most a short bike ride from everywhere), and generally make the municipality nicer and more liveable. Suburbia is up in arms, because suburbanites are, well, not necessarily idiots, but they don’t understand city planning.
The issue here is not one of having time to read through statutes, but a) knowing what you want, and b) trusting that decision-makers aren’t part of the “big public transport” conspiracy out to kill car manufacturers and your god-given right, as a god-damned individual, to sit in a private steel box four hours a day while commuting, without even being able to play Flappy Bird while doing it.
Even if your AI were able to see through all that and advise our NIMBYs that the new development is in their interest – why would the NIMBYs trust it any more than the politicians? Sure, the scheme would save on steel and other resources, but who’s to say the AI doesn’t want to use all that steel to produce more paperclips?
Questions to answer here revolve around trust, the atomisation of society, and alienation. AI ain’t going to help with that.