My favorite part of this story:

“The rocket terminated the flight after judging that the achievement of its mission would be difficult.”

“Man, this is too hard, better explode!”

  • Wrench@lemmy.world · 8 months ago

    I think you’re misusing the term “AI”.

    This would just be presets that trigger if sensors detect problems; if enough of them trip, an automated response of destroying the craft would trigger.

    That is in no way artificial intelligence. Just automated safety features.

    Just like your car deploys an airbag if its sensors detect a collision.

    • AbouBenAdhem@lemmy.world · edited · 8 months ago

      Except for this line: “The rocket terminated the flight after judging that the achievement of its mission would be difficult”.

      Either the company president being quoted or the translator seems to be implying that the system is modeling the outcome of the whole mission, not just checking if sensor readings exceed some preset threshold. They’re trying to portray it as an AI-like decision, whether that’s really the case or not.

      • Wrench@lemmy.world · 8 months ago

        It’s going to be a combination of red flags that an algorithm weighs, triggering the self-destruct if a threshold is exceeded. It probably even gives HQ a short window to override it (if comms are working).

        It’s not going to have a built in “AI” making “intelligent” decisions in a dynamic way. That would be extremely dangerous/unreliable, as well as require a shit ton of processing power.

        Stop buying into the AI bullshit. Algorithms != AI
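        The "weighted red flags plus threshold" scheme described above could be sketched roughly like this. This is a minimal illustration only; all the flag names, weights, and the threshold value are invented, not taken from any real flight-termination system.

```python
# Hypothetical weighted red-flag check -- every name and number here is
# invented for illustration, not from an actual launch vehicle.

FLAG_WEIGHTS = {
    "off_nominal_trajectory": 5,
    "engine_underperformance": 3,
    "loss_of_telemetry": 2,
}

TERMINATION_THRESHOLD = 7  # invented cutoff

def should_terminate(active_flags):
    """Sum the weights of the triggered flags; recommend termination
    if the total meets the threshold. A real system would also leave
    a window for a ground-commanded override before acting."""
    score = sum(FLAG_WEIGHTS.get(flag, 0) for flag in active_flags)
    return score >= TERMINATION_THRESHOLD
```

        Nothing here is "intelligent": it is a fixed, fully testable rule, which is exactly the point being made.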

        • idiomaddict@feddit.de · 8 months ago

          It’s not buying into AI bullshit to infer some processing and assessment from something said to have decided something. Decisions involve consideration; they’re not like instincts.

          It seems like the person saying that misspoke.

          • MartianSands@sh.itjust.works · 8 months ago

            They didn’t misspeak, they anthropomorphised. People do that all the time, and calling it an error is pedantic to the point of being incorrect.

            Also, that statement was probably in Japanese. You can’t read that kind of implication from it, even if it would have been correct to do so in English (which it wouldn’t).

            • idiomaddict@feddit.de · 8 months ago

              That’s misleadingly inaccurate if it wasn’t misspeaking; calling it a mistake was charitable (though the issue could definitely rest in translation, you’re right).

      • jimbolauski@lemm.ee · 8 months ago

        They will not put AI on flight-critical pieces for planes. It’s impossible to fully test and verify that the software will behave in a predictable fashion. Instead, the AI is used in a layer outside the critical path, and its decisions are vetted by flight-critical pieces.

        Destroying the rocket was done after flight-critical software calculated that the probability of failure was too high.

        if (notGoingToMakeIt()) goBoom();
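        That "advisory layer vetted by a deterministic critical path" architecture could be sketched like this. Everything here is a hypothetical illustration; the function names and checks are invented.

```python
# Hypothetical sketch: an unverifiable "AI" layer may *recommend*
# termination, but a simple, deterministic, fully testable
# flight-critical layer has the final say. All names are invented.

def vet_advisory(recommendation, independent_checks):
    """Flight-critical vetting: act on an advisory 'terminate'
    recommendation only if every independent, deterministic sensor
    check (booleans here, for illustration) agrees."""
    if recommendation == "terminate" and all(independent_checks):
        return "terminate"
    return "continue"

# e.g. the advisory layer flags trouble and both (invented) hard
# sensor checks confirm it, so termination is allowed through:
decision = vet_advisory("terminate", [True, True])
```

        The point of the design is that only the small, verifiable vetting function sits in the critical path, so the system stays predictable even if the advisory layer misbehaves.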