• webghost0101@sopuli.xyz · 1 day ago

      I understand what you're saying. It definitely is the ELIZA effect.

      But you are taking semantics quite far to state it's not AI because it has no "intelligence".

      I'll have you know that what we define as intelligence is entirely arbitrary, and we keep moving the goalposts as to what counts. The word "AI" was coined somewhere along the way.

        • webghost0101@sopuli.xyz · 1 day ago

          Sorry to say, but you're about as reliable as LLM chatbots when it comes to this.

          You are not researching facts; you're just making things up that sound like they make sense to you.

          Wikipedia: "It (intelligence) can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied to adaptive behaviors within an environment or context."

          When an LLM uses information found in a prompt to generate text about related subjects further down the line in the conversation, it is demonstrating the above.

          When it adheres to the system prompt by telling a user it can't do something, it's demonstrating the above.

          That's just one way humans define intelligence. Not per se the best definition in my opinion, but if we start holding opinions as if they were common sense, then we really are no different from LLMs.
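For what it's worth, the "uses information found in a prompt later in the conversation" behavior described above is just the conversation history being resent with each request; a rough sketch in Python (all names here are illustrative, not any real chatbot API):

```python
# Toy sketch of how chat context is assembled: the model itself is
# stateless, and "remembering" earlier turns is just resending them.
def build_prompt(system_prompt, history, user_message):
    # history is a list of (speaker, text) pairs from earlier turns
    lines = [f"SYSTEM: {system_prompt}"]
    for speaker, text in history:
        lines.append(f"{speaker.upper()}: {text}")
    lines.append(f"USER: {user_message}")
    lines.append("ASSISTANT:")
    return "\n".join(lines)

history = [("user", "My dog is named Rex."),
           ("assistant", "Nice to meet Rex!")]
prompt = build_prompt("You cannot give medical advice.", history,
                      "What should I feed him?")
print(prompt)
```

Whether reapplying resent context counts as "retaining knowledge" is exactly the definitional dispute in this thread.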

          • outhouseperilous@lemmy.dbzer0.com · 1 day ago

            ELIZA with an API call is intelligence, then?

            opinions

            LLMs cannot do that. Tell me your basic understanding of how the technology works.

            common sense

            What do you mean when you say this? Let's define terms here.

            • webghost0101@sopuli.xyz · 1 day ago

              ELIZA is an early artificial intelligence, and it artificially created something that could be defined as intelligent, yes. Personally I think it was not, just like I agree LLM models are not. But without global consensus on what "intelligence" is, we cannot conclude they are not.

              LLMs cannot produce opinions because they lack a subjective conscious experience.

              However, opinions are very similar to AI hallucinations, where "the entity" confidently makes a claim that is either factually wrong or not verifiable.

              What technology do you want me to explain? Machine learning, diffusion models, LLM models, or chatbots that may or may not use all of the above technologies?

              I am not sure there is a basic explanation; this is a very complex field of computer science.

              If you want, I can dig up research papers that explain some relevant parts of it, provided you promise to read them. I am, however, not going to write you a multi-page essay myself.

              Common sense (from Latin sensus communis) is "knowledge, judgement, and taste which is more or less universal and which is held more or less without reflection or argument".

              If a definition is good enough for Wikipedia, which has thousands of people auditing and checking it and is also where people go to find information, it probably counts as common sense.

              A bit off topic, but as an autistic person I note you were not capable of perceiving the word "opinion" as similar to "hallucination" in AI, just like you reject the term AI because you have your own definition of intelligence.

              I find I do this myself on occasion. If you often find people arguing with you, you may want to pay attention to whether or not semantics is the reason. Remember that the literal meaning of a word (even one less vague than "intelligence") does not always match how the word is used, and the majority of people are OK with that.

              • outhouseperilous@lemmy.dbzer0.com · 1 day ago

                could be defined as intelligent

                Okay, but what are some useful definitions for us to use here? I could argue a pencil is intelligent if I can play with terms enough.

                I'd like to have a couple, because it's such a broad topic. Give them different names.

                opinions

                The capacity to be wrong is not what matters; garbage in, garbage out. Let's focus on why it's wrong, how it gets there.

                llm models or chatbots

                Aren't all modern chatbots based on LLMs?

                subjective conscious

                Conscious. Define. Seems like it's going to come up a lot, and it's a very slippery word, repurposed from an entirely different context.

                common sense is information held uncritically

                Okay! I can work with that.

                language is fluid and messy

                Yeah, but in common use it matters. Not necessarily that words stick to their original uses, but the political implications and etymology of new uses should be scrutinized, because language does shape thought, especially for NTs.

                But I recognize that it's messy. That's why we're defining terms.

                • webghost0101@sopuli.xyz · 1 day ago

                  I am not sure there is a point to us deciding on terms, because my entire point is that there is no single agreed definition of "intelligence".

                  And of the definitions we do have, AI fits some. I gave you an example above from Wikipedia. But there are many reasonable ways one can argue that the current definitions work, regardless of whether those definitions are actually correct.

                  I really like the example of how the Turing test was considered proof that a computer can think like a human. Many computers have now passed it, and we keep having to change what we consider "thinking like a human".

                  Modern chatbots, depending on which one, tend to be a combination of different LLM models, non-LLM AI, a database, API-accessible tools, and a lot of code to bring it all together.

                  But if you're a little tech-savvy, you can just spin one up and build your own however you like.

                  Google actually has one that does not use an LLM at all but diffusion generation instead: it creates the text output similarly to how image generation creates a picture. Mind you, I don't think this is much better, but maybe combined it might be.
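The "lot of code to bring it all together" point can be sketched as a toy router: the glue decides whether a request goes to a canned tool (database/API lookup) or to the text generator. Every name below is made up for illustration; no real library or chatbot framework is being shown:

```python
# Minimal sketch of chatbot "glue code": route a message either to a
# deterministic tool or to a stand-in text generator.
def fake_llm(prompt):
    # placeholder for whatever generative model the chatbot wraps
    return f"(generated reply to: {prompt})"

TOOLS = {
    "weather": lambda city: f"Sunny in {city}",
}

def chatbot(message):
    # crude keyword routing; real systems often ask the model itself
    # which tool to call, then feed the tool result back in
    if message.startswith("weather "):
        return TOOLS["weather"](message[len("weather "):])
    return fake_llm(message)

print(chatbot("weather Oslo"))    # tool path
print(chatbot("tell me a joke"))  # generation path
```

The point of the sketch: much of what a deployed "chatbot" does is this kind of ordinary plumbing around the model, not the model itself.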

    • Valmond@lemmy.world · 1 day ago

      Of course it is AI, you know, artificial intelligence.

      Nobody said it has to be human-level, or that people don't anthropomorphize.

          • outhouseperilous@lemmy.dbzer0.com · 1 day ago

            No, it doesn't. There is no interiority, no context, no meaning, no awareness, no continuity; such a long list of things intelligence does that this simply can't. Not because it's too small, but because the fundamental method cannot, at any scale, do these things.

            There are a lot of definitions of intelligence, and these things don't fit any of them.

            • Valmond@lemmy.world · 1 day ago

              Dude, you mix up so many things that have nothing to do with intelligence. Consciousness? No. Continuity? No. Awareness (what does that even mean for you in this context)?

              Intelligence isn't about being human; it's about making rational decisions based on facts/knowledge, and even an old VCR has a tiny bit of it programmed into it.

              • outhouseperilous@lemmy.dbzer0.com · 1 day ago

                rational

                It literally cannot do that.

                Decisions

                In the same way a fistful of dice can make decisions, sure.

                facts

                If it's programmed to run a script to do a Google search and cite the first paragraph of Wikipedia, sure. That function is basically ELIZA with an API call.

                knowledge

                Okay, I'm sketchy on what this actually means, but for every answer I can think of (none of which I'm strongly committed to): still no.

                It's a bullshit machine. Like recognizes like, but it can't do anything else. If you think it's intelligent, that's because you are not.

                Edit: And I'm really disappointed. I kind of always wanted a computer friend. I would adore the opportunity to midwife whole new forms of intelligence. That sounds really fucking cool. It's the kind of thing I dreamed of as a kid, and this shit being sold as my childhood aspirations is blackpilling as fuck. I think the widespread acceptance of the bullshit sales pitch, and the fact that it means we're less likely to get the real thing, has led me to a lot of much more anti-human opinions than I used to have.
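The "basically ELIZA with an API call" quip above is easy to make concrete: the classic ELIZA program is nothing but an ordered list of pattern/response rules applied to the surface text, with no model of meaning. A rough sketch (the rules are simplified examples, not Weizenbaum's actual script):

```python
import re

# ELIZA-style chat: ordered (pattern, response-template) rules.
# The first matching pattern wins; the catch-all rule matches anything.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Please tell me more."),
]

def eliza(message):
    for pattern, template in RULES:
        m = re.match(pattern, message.lower())
        if m:
            # echo captured fragments back into a canned template
            return template.format(*m.groups())

print(eliza("I need a holiday"))  # -> Why do you need a holiday?
```

Bolting a web-search call onto a rule like these changes what the template echoes back, not whether anything is understood.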

                • Valmond@lemmy.world · 1 day ago

                  A computer cannot make a rational decision?

                  It's literally the only thing it does.

                  You're throwing around a lot of assumptions, IMO. One seems to be that intelligence is some sort of special human-only thing, like friendship or I don't know what. It is of course not. Neither is it human or conscious, nor does it have feelings, of course.

                  Also, you can learn a lot from AI, like you can learn a lot from the internet (PCs hooked together, nothing more), with the difference that AI nowadays emulates a human to a certain degree.

                  But you should work on your anger issues, dude; no need to get riled up like that. Not a good way to start the weekend, IMO.

                  Well, cheers anyway!

                  • outhouseperilous@lemmy.dbzer0.com · 1 day ago

                    computer=rational

                    Traditional code, yes, for some definitions of rational. This technology is the way to make it not be that.

                    throwing around a lot of assumptions

                    No, I understand things. I know the idea is foreign to you, but I do have some relevant domain knowledge. I have actually looked at the underlying technology; I have a basic understanding of math, computer science, and philosophy of mind, and any of the three, separately, exposes this as bullshit.

                    you can learn a lot from "ai"!

                    You can learn a lot from the Bible, reading tea leaves, or listening to your friend's schizophrenic uncle when he's off his meds and into your friend's mushrooms, too.

                    Edit: I would genuinely love to argue philosophy and "what is intelligence", but none of the advocates of this technology are smart enough to even try to understand what that is, much less articulate and argue the concept.

                    It's all just "nuh uh! It's totally my friend! You haters just don't understand!" Like a sicker, dumber version of the arguments I had about NFTs five years ago. Fuck, I'm sick of being earnest. I get more coherent responses, and feel less like I'm shouting into the void, when I just think of the dumbest shit I can possibly say and post that.

              • MystikIncarnate@lemmy.ca · 1 day ago

                Nope. There's no cognition, no cognitive functions at all in LLMs. They are incapable of understanding actions, reactions, consequences, and outcomes.

                Literally all it's doing is giving you an assortment of words that vaguely correlate with indicators that scored highly for the symbols (ideas/intents) contained in the prompt you entered.

                Literally that's fucking it.

                You're not "talking with an AI"; you're interacting with an LLM that is an amalgam of the collective responses to every inquiry, statement, reply, response, question, etc. that is accessible on the public Internet. It's a dilution of the "intelligence" that can be derived from what everyone on the Internet has ever said, and what that cacophony of mixed messages, on average, would reply with.

                The reason LLMs have gotten better is that they've absorbed more data than previous attempts, and some of the outlying extremist messages have been carefully pruned from the training data, so the resulting AI trends toward the median person's predicted reply, versus everyone's voice being weighted evenly.

                It only seems like "AI" because the responses are derived from real, legitimate human replies that were posted somewhere on the Internet at some point in time.
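The "average predicted reply" mechanism described above can be illustrated with a toy next-word counter. A real transformer is vastly more sophisticated (it conditions on long contexts, not just the previous word), but the framing of emitting a statistically probable continuation is the same:

```python
from collections import Counter, defaultdict

# Toy next-word model: count which word follows which in a corpus,
# then always emit the most common follower.
corpus = "the cat sat on the mat the cat ran".split()
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(word):
    # pick the statistically most likely continuation seen in training
    return followers[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" follows "the" most often in this corpus
```

Nothing in the counts "understands" cats or mats; the output is shaped entirely by what the training text happened to contain, which is the point being made above.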

                  • Valmond@lemmy.world · 1 day ago

                    Not very different from the human brain, then 🤷🏼‍♀️; with the exception of qualia, we're just like that.

                    And cognitive functions are just nerves triggering other nerves and so on, just like computers' bits and instructions…