• 0 Posts
  • 42 Comments
Joined 2 years ago
Cake day: July 9th, 2023


  • Don’t get me wrong, AI has its uses, but their whole “solution for everything” mentality

    They are trying to somehow undo or redo personal computers.

    To create a non-transparent tool that replaces the need for (and thus the social possibility of) a universal machine.

    The difference between thinking robots and computers as we have them is that thinking robots take some place in the social hierarchy, while computers help anyone who has one and uses it.

    Science fiction usually portrayed artificial humans, not computers, before it actually, ahem, saw how the world turned out.

    It’s sort of a revolt of social power against intellectual power (well, some kind of it).

    Like a tantrum. People who don’t like how it really happened still want their deus ex machina, an obedient slave at that, one that can also take responsibility. Their 50-year-long shock has receded, and they now think they are about to turn this defeat into victory.

    only making it bigger and last longer which will only make it worse when it does actually pop

    I think that’s deliberate. There are a few companies that will do just fine when the bubble pops, since their real capital is their audience, while their market capitalization and technologies are secondary. The rest are just blindly chasing short-term profits.



  • Why do all these idiots behave as if they knew where the future is?

    If it’s about all the achievements they’ve read about and seen in games like Civilization, real life doesn’t quite look like that. In some sense those games, good as they are, have simplified and degraded many people’s understanding of progress, much as the Soviet school curriculum did, but in a more persuasive and pleasant way.

    There’s no tech tree. There were plenty of attempts at every breakthrough before it actually happened. Even if this “AI” is to some real future AGI what Leonardo’s machines were to the Wright brothers’ machines, there’s still no hurry to embrace it.

    If he thinks he’s looking at a 90%-complete tech tree node with powerful perks, then his profession should probably be that of a janitor. Same daily schedule, same places to mop up, you know.



  • It’s supposed to be growth relative to the things that didn’t progress, so to speak. It’s not literally growth of output; it’s that stagnation makes things lose value, and compared to them the things that are more alive “grow”. Something like that.

    Kinda like inflation. And that’s fine; that can describe a pretty sustainable society. It’s not about consuming more and more, it’s more like rotation.

    Except today’s oligopolies have a different idea: that they really have to grow, as in capture more and more of humanity’s resources. The AI bubble (or not) is their most recent approach to that.

    That’s because expectations were shaped by the 90s, when many things exploded (unfortunately, much of that was countries, along with landmines and other expendable means of destruction).

    In the 00s it was possible to create the illusion of that explosion still going on, brighter and brighter, even though it was just a continuation of what started in the 90s, and then to create a few large-scale scams (or madness pandemics, or tech fashions, whatever; the point is they weren’t the same as the years 1993–1999) with iPhones, the new Apple in general, Google, Facebook, Twitter.

    I’m not saying it was fake or worthless, it was a revolution too, but it’s not what companies have been trying to present it as since the dotcom bubble.

    So they are still trying to show that, with a kinda rough, generic, and insincere effort, a bit like sex workers in their makeup.

    And they can’t show that without this kind of expansion in breadth rather than in depth.






  • Well, from this description it’s still usable for problems too complex for plain Monte Carlo, as long as the results can be verified. It may even be efficient. But that seems narrow.

    BTW, even ethical automated combat drones. I know one word there seems out of place, but if we have an “AI” for target/trajectory/action suggestion and something more complex/expensive for verification, ultimately with a human in charge, then it’s possible to increase the efficiency of combat machines without increasing the chances of civilian casualties and friendly fire (when somebody is at least trying to avoid those).
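    A minimal sketch of that suggest-then-verify loop, under assumed placeholder names (propose_candidates, strict_verify, human_approves are hypothetical, not any real API): a cheap suggester proposes, a more expensive check filters, and a human stays in charge of the final call.

```python
import random

def propose_candidates(state, n=10):
    """Cheap, opaque suggester (stands in for the 'AI' proposing actions)."""
    return [random.uniform(0.0, 1.0) for _ in range(n)]

def strict_verify(state, candidate):
    """More expensive but trusted check; only verified candidates pass."""
    return candidate > 0.8  # placeholder acceptance criterion

def human_approves(candidate):
    """The human remains in charge of the final decision."""
    return True  # stand-in for an actual operator prompt

def choose_action(state):
    # Propose cheaply, verify expensively, and require human sign-off;
    # if nothing survives both gates, do nothing at all.
    for candidate in propose_candidates(state):
        if strict_verify(state, candidate) and human_approves(candidate):
            return candidate
    return None

print(choose_action(state=None))
```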





  • The point is to get children used to checks.

    It’s a didactic law.

    IRL, children usually grow up feeling they are, to an extent, free (except from their parents).

    This is intended so that identifying yourself on the Internet is normal by the time you’ve grown up enough for it to matter.

    But, of course, there might be some good considerations, if you’re into playing devil’s advocate. People might remember the stupid shit they were posting when they were younger, and want future generations to always be conscious of the difference between pseudonymity and anonymity, and between superficial and real anonymity. People might want to make it so that nobody has a false sense of security leading to really bad mistakes. People might want this to be the step preceding some way to fight bots.

    And even if they don’t have good considerations, they might eventually realize that the oppressive system they are building is best rebuilt into something better and used differently. It wouldn’t be the first time in history.

    It’s just that laying down your arms in the hope of that is unwise.