• Dizzy Devil Ducky@lemm.ee · 11 months ago

    Calling the over-glorified chatbots and LLMs like GPT or Claude AGI would be like me calling a preschool finger painting a masterwork of art, at least from my understanding of them. Though I can’t say I’m anywhere near an expert, so definitely take what I say with a major grain of salt.

    What these AI chatbots and LLMs can do is sometimes impressive, but that’s all I can say about them. Intelligence is definitely not their strong suit when, half of the time, you’ll ask for a summary of a well-known and loved TV show only for it to make up whatever sounds right.

    • ConsciousCode@beehaw.org · 11 months ago

      LLMs are not chatbots, they’re models. ChatGPT/Claude/Bard are chatbots which use LLMs as part of their implementation. I would argue in favor of the article because, while they aren’t particularly intelligent, they are general-purpose and exhibit some level of intelligence, and thus qualify as “general intelligence”. Compare this to the opposite, an expert system like a chess computer: you can’t even begin to ask a chess computer to explain what a SQL statement does; the question doesn’t even make sense. But LLMs are capable of being applied to virtually any task which can be transcribed. Even if they aren’t particularly good, compared to GPT-2, which read more like a Markov chain, they at least attempt to complete the task, and are often correct.

      • jarfil@beehaw.org · 11 months ago

        > LLMs are capable of being applied to virtually any task which can be transcribed

        Where “transcribed” means using any set of tokens, be they extracted from human-written languages, emojis, pieces of images, audio elements, spatial positions, or any other thing in existence that can be divided up and represented by tokens.

        PS: actually… why “in existence”? Why not throw some “customizable tokens” into an LLM, for it to come up with whatever meaning it fancies for them?
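
        To make the “any set of tokens” point concrete, here’s a toy sketch of my own (not any particular tokenizer): at the crudest level, every byte of any serialized medium can serve as a token ID, so text, images, and audio all reduce to token sequences.

        ```python
        def to_tokens(data: bytes) -> list[int]:
            # Byte-level tokenization: every byte becomes a token ID in [0, 255],
            # so anything that can be serialized can be fed to a sequence model.
            return list(data)

        print(to_tokens("hi 🙂".encode("utf-8")))          # text
        print(to_tokens(bytes([0x89, 0x50, 0x4E, 0x47])))  # first bytes of a PNG header
        ```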

  • CanadaPlus@lemmy.sdf.org · 11 months ago

    I wonder how many people here actually looked at the article. They’re arguing that the ability to do things a system was not specifically trained on is a more natural benchmark for the transition from traditional algorithm to intelligence than human-level performance. Honestly, it’s an interesting point; aliens would not be using human-level performance as a benchmark, so it must be subjective to us.

  • ConsciousCode@beehaw.org · 11 months ago

    Actually a really interesting article which makes me rethink my position somewhat. I guess I’ve unintentionally been promoting LLMs as AGI since GPT-3.5 - the problem is just with our definitions and how loose they are. People hear “AGI” and assume it would look and act like an AI in a movie, but if we break down the phrase, what is general intelligence if not applicability to most domains?

    This very moment I’m working on a library for creating “semantic functions”, which lets you easily use an LLM almost like a semantic processor. You say `await infer(f"List the names in this text: {text}")` and it just does it. What most of the hype has ignored about LLMs is that they are not chatbots. They are causal autoregressive models of the joint probabilities of how language evolves over time, which is to say they can be used to build chatbots, but that’s the first and least interesting application.
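
    As a rough sketch of what such a “semantic function” can look like (my own toy illustration, not the commenter’s actual library; the `complete` stub stands in for whatever LLM client you use):

    ```python
    import asyncio

    async def complete(prompt: str) -> str:
        # Stand-in for a real LLM call (OpenAI, llama.cpp, etc.); it returns
        # a canned reply here so the sketch runs end to end.
        return "Alice, Bob, Carol"

    async def infer(prompt: str) -> str:
        # A "semantic function": delegate one well-scoped task to the model
        # and hand its text back as the return value.
        return await complete(
            "Answer the following request directly, with no extra commentary.\n"
            + prompt
        )

    async def main() -> None:
        text = "Alice met Bob and Carol at the station."
        names = await infer(f"List the names in this text: {text}")
        print(names)  # "Alice, Bob, Carol"

    asyncio.run(main())
    ```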

    So yeah, I guess it’s been AGI this whole time and I just didn’t realize it because they aren’t people, and I had assumed AGI implied personhood (which it doesn’t).

    • sandriver@beehaw.org · 11 months ago

      I’m not sure how the tech is progressing, but ChatGPT was completely dysfunctional as an expert system, if the AI field still cares about those. You can adapt the Chinese Room problem to the question of whether a model actually has applicability outside of a particular domain (say, anything requiring guessing words based on probabilities, or stabilising a robot).

      Another problem is that probabilistic reasoning requires data. Just because a particular problem-solving approach is very good at guessing words based on a huge amount of data from a generalist corpus doesn’t mean it’s good at guessing in areas where data is poor. Could you comment on whether LLMs have good applicability as expert systems in, say, medicine? Especially obscure diseases, or heterogeneous neurological conditions (or both, as in bipolar disorders and schizophrenia-related disorders)?

      • ConsciousCode@beehaw.org · 11 months ago

        LLMs are not expert systems, unless you characterize them as expert systems in language, which is fair enough. My point is that they’re applicable to a wide variety of tasks, which makes them general intelligences, as opposed to an expert system which by definition can only do a handful of tasks.

        If you wanted to use an LLM as an expert system (I guess in the sense of an “expert” in that task, rather than a system which literally can’t do anything else), I would say they currently struggle with that. Bare foundation models don’t seem to have the sort of self-awareness or metacognitive capabilities that would be required to restrain them to their given task, and arguably never will, because they necessarily can only “think” on one “level”, which is the predicted text. To get that sort of ability you need cognitive architectures, of which chatbot implementations like ChatGPT are a very simple version. If you want to learn more about what I mean, the most promising idea I’ve seen is the ACE framework.

        Frameworks like this can allow the system to automatically look up an obscure disease based on the embedded distance to a particular query, so even if you give it a disease which only appears in the literature after its training cut-off date, it knows this disease exists (and is a likely candidate) by virtue of it appearing in its prompt. Something like: “You are an expert in diseases yadda yadda. The symptoms of the patient are x y z. This reminds you of these diseases: X (symptoms 1), Y (symptoms 2), etc. What is your diagnosis?” Then you could feed the answer to a critical prompting, and repeat until it reports no issues with the diagnosis. You can even make it “learn” by using LoRA, or by keeping notes it writes to itself. A sketch of the lookup-and-prompt step follows below.
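
        A minimal sketch of that retrieval-augmented prompting idea (my own illustration; the toy `embed` stands in for a real sentence-embedding model, and the disease list is invented):

        ```python
        import numpy as np

        def embed(text: str) -> np.ndarray:
            # Stand-in for a real embedding model: a deterministic toy vector,
            # just so the sketch executes without external services.
            rng = np.random.default_rng(abs(hash(text)) % 2**32)
            v = rng.standard_normal(64)
            return v / np.linalg.norm(v)

        # Hypothetical knowledge base: disease name -> symptom description.
        DISEASES = {
            "Disease X": "symptoms 1",
            "Disease Y": "symptoms 2",
        }
        DISEASE_VECS = {name: embed(desc) for name, desc in DISEASES.items()}

        def nearest(query: str, k: int = 2) -> list[str]:
            # Rank diseases by cosine similarity (unit vectors, so the dot
            # product is the cosine).
            q = embed(query)
            return sorted(DISEASES, key=lambda n: -float(q @ DISEASE_VECS[n]))[:k]

        def build_prompt(symptoms: str) -> str:
            candidates = ", ".join(f"{n} ({DISEASES[n]})" for n in nearest(symptoms))
            return (
                "You are an expert in diseases. "
                f"The symptoms of the patient are: {symptoms}. "
                f"This reminds you of these diseases: {candidates}. "
                "What is your diagnosis?"
            )

        # Feed this to the LLM, then critique its answer in a second prompt.
        print(build_prompt("x y z"))
        ```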

        As for poorer data distributions, the magic of large language models (before which we just had “language models”) is that we’ve found that the larger we make them, and the more (high-quality) data we feed them, the more intelligent and general they become. For instance, training them on multiple languages other than English somehow allows them to make more robust generalizations even just within English. There are a few papers I can recall which talk about a “phase transition” during training where beforehand the model seems to be literally memorizing its corpus, and afterwards (to anthropomorphize a bit) it suddenly “gets” it and that memorization is compressed into generalized understanding.

        This is why LLMs are applicable to more than just what they’ve been taught - you can, for example, give them rules to follow within the conversation which they’ve never seen before, and they are able to maintain that higher-order abstraction because of that rich generalization. This is also a major reason open-source models, particularly quantizations and distillations, are so successful: the models they’re based on did the hard work of extracting higher-order semantic/geometric relations, and now making the model smaller has minimal impact on performance. (A toy example of what quantization means is sketched below.)
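
        A minimal sketch of quantization under simple assumptions (absolute-max int8 quantization of a weight vector; real schemes are more sophisticated, but the principle of shrinking weights with small reconstruction error is the same):

        ```python
        import numpy as np

        def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
            # Map float32 weights onto 255 integer levels; keep one scale factor.
            scale = float(np.abs(w).max()) / 127.0
            q = np.round(w / scale).astype(np.int8)
            return q, scale

        def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
            return q.astype(np.float32) * scale

        w = np.random.default_rng(0).standard_normal(1000).astype(np.float32)
        q, scale = quantize_int8(w)
        err = np.abs(w - dequantize(q, scale)).max()
        print(f"4x smaller (float32 -> int8), max round-trip error {err:.4f}")
        ```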

      • Fidelity9373@artemis.camp · 11 months ago

        You may say that jokingly, but at some point, if the tech keeps improving, that may be the only way the world continues to exist without destabilizing. OpenAI already says* that their end goal is a world powered by a form of universal basic income, with AI doing most jobs. Having the AI be paid on task completion and distributing that accumulated wealth, minus a portion to cover maintenance, would be one method of doing so.

        *That said, the words of a potential megacorporation aren’t really to be trusted, and the whole thing would have massive issues of “how do you distribute the money” and “what am I giving up in terms of personal safety and privacy”. Having to make an account with a specific AI company and provide all your governmental identification to receive those funds, for example, would be terrible.

    • h3ndrik@feddit.de · 11 months ago

      (Wow. That’s really a bad article. And even though the author managed to ramble on for quite some pages, they somehow completely failed to address the interesting and well-discussed arguments.)

      [Edit: I disagree -strongly- with the article]

      We’ve discussed this in June 2022, after the Google engineer Blake Lemoine claimed his company’s artificial intelligence chatbot LaMDA was a self-aware person. We’ve discussed both intelligence and consciousness.

      And my -personal- impression is: if you use ChatGPT for the first time, it blows you away. It’s been a ‘living in the future’ moment for me. And I see how you’d write an excited article about it. But once you’ve used it for a few days, you’ll see every 6th-grade teacher can distinguish whether homework assignments were done by a sentient being or an LLM. And ChatGPT isn’t really useful for too many tasks. Drafting things, coming up with creative ideas, or giving something the final touch, yes. But definitely limited, and not something ‘general’. I’d say it does some of my tasks so badly, it’s going to be years before we can talk about ‘general’ intelligence.

    • Even_Adder@lemmy.dbzer0.com · 11 months ago

      Have you seen this paper?

      Abstract:

      While large language models (LLMs) have demonstrated impressive performance on a range of decision-making tasks, they rely on simple acting processes and fall short of broad deployment as autonomous agents. We introduce LATS (Language Agent Tree Search), a general framework that synergizes the capabilities of LLMs in planning, acting, and reasoning. Drawing inspiration from Monte Carlo tree search in model-based reinforcement learning, LATS employs LLMs as agents, value functions, and optimizers, repurposing their latent strengths for enhanced decision-making. What is crucial in this method is the use of an environment for external feedback, which offers a more deliberate and adaptive problem-solving mechanism that moves beyond the limitations of existing techniques. Our experimental evaluation across diverse domains, such as programming, HotPotQA, and WebShop, illustrates the applicability of LATS for both reasoning and acting. In particular, LATS achieves 94.4% for programming on HumanEval with GPT-4 and an average score of 75.9 for web browsing on WebShop with GPT-3.5, demonstrating the effectiveness and generality of our method.

      Graphs: [the paper’s result graphs, not reproduced here]
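
      Loosely sketching the paper’s core idea (my own drastic simplification, not the LATS authors’ implementation): one model both proposes candidate actions and scores the resulting states, and a search keeps expanding the best-scoring branch. The stub functions below stand in for real LLM calls:

      ```python
      def propose(state: str) -> list[str]:
          # Stand-in for an LLM prompted to suggest next actions for `state`.
          return [state + "a", state + "b"]

      def value(state: str) -> float:
          # Stand-in for an LLM prompted to rate how promising `state` looks.
          return state.count("a") / max(len(state), 1)

      def tree_search(state: str, depth: int = 3) -> str:
          # Greedy best-first expansion: score every proposal, follow the best.
          for _ in range(depth):
              state = max(propose(state), key=value)
          return state

      print(tree_search(""))  # "aaa" with these toy stubs
      ```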

      I think we can’t really get the most out of current LLMs because of how much they cost to run. Once we can get speeds up and costs down, they’ll be able to do more impressive things.

      https://www.youtube.com/watch?v=Zlgkzjndpak

      https://www.youtube.com/watch?v=NfGcWGaO1E4

    • webghost0101@sopuli.xyz · 11 months ago

      My standard for AGI is that it’s able to do a low-level human work-from-home job.

      If it needs me to pre-chew and check every single step, then it can still be a smart tool, but it’s definitely not intelligent.

    • ConsciousCode@beehaw.org · 11 months ago

      “You can use them for all kinds of tasks” - so would you say they’re generally intelligent? As in they aren’t an expert system?

  • The Doctor@beehaw.org · 11 months ago

    Oh, for fuck’s sake… no. It isn’t. And I find myself pondering whether or not the article’s authors are themselves sapient.

    • khalic@beehaw.org · 11 months ago

      I kind of regret learning ML sometimes. Being one of the 10 people per km² who understand how it works is so annoying. It’s just a fancy mirror, ffs; stop making weird faces at it, you baboons!

      • jarfil@beehaw.org · 11 months ago

        Do you really understand how it works? What would you call a neural network with mirror neurons primed to react to certain stimuli patterns as the network gets trained… a mirror, or a baboon?

          • jarfil@beehaw.org · 11 months ago

            What do you call a neuron “that reacts both when a particular action is performed and when it is only observed”? Current LLMs are made exclusively of mirror neurons, since their output (what they perform) is the same action as their input (what they observe).
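
            (A toy sketch of the autoregressive loop being referred to, with an invented bigram table in place of a trained model: each emitted token is the same kind of object as the observed tokens, and is folded straight back into the input.)

            ```python
            # Toy "language model": a fixed bigram table instead of learned weights.
            BIGRAMS = {"the": "cat", "cat": "sat", "sat": "down"}

            def step(context: list[str]) -> str:
                # One forward pass: observe a token sequence, perform one more token.
                return BIGRAMS.get(context[-1], "<eos>")

            context = ["the"]
            for _ in range(4):
                context.append(step(context))  # what was performed is now observed
            print(context)  # ['the', 'cat', 'sat', 'down', '<eos>']
            ```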

            • EthicalAI@beehaw.org · 11 months ago

              I can’t even parse what you mean when you say their input is the same as their output; that would imply they don’t transform their input, which would defeat their purpose. This is nonsense.