• bbb@sh.itjust.works

    It’s interesting that you point to https://en.wikipedia.org/wiki/Hard_problem_of_consciousness when the term was coined by David Chalmers, who published “Could a Large Language Model be Conscious?”. From the abstract:

    I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.

    So are we all just arguing about how likely it is, or are you arguing that current AI systems are definitely not conscious? If the latter, what do you think about the not-too-distant future ones?

    But a neuroscientist will tell you it’s not simple at all. It’s not info in, info out.

    The system is changed, biologically, by the input.

    The same input given twice will result in a different output the 2nd time.

    And the 3rd. And how frequently the input is given, or its temporal relation to other stimuli, will also change its output.

    I thought online learning was possible with current LLMs, just not worth the cost. I mean, you can at least fine tune offline based on previous outputs and feedback, e.g. RLHF. I feel like maybe neither should count, but can’t say why exactly. Not many end users bother with fine tuning anymore because there are usually more effective alternatives like RAG.
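    Roughly what I mean by fine tuning offline on previous outputs and feedback, as a toy sketch: filter logged interactions by user rating and hand the good ones to some fine-tuning routine. This is a crude stand-in for an actual RLHF pipeline, and Interaction and train_on are names I made up, not any real API:

    ```python
    # Crude offline "learning from feedback": keep only interactions the
    # user rated positively, then run a supervised fine-tune on them.
    # A stand-in for RLHF-style pipelines, not the real thing.
    from dataclasses import dataclass

    @dataclass
    class Interaction:
        prompt: str
        response: str
        feedback: int  # e.g. +1 thumbs up, -1 thumbs down

    def build_finetune_set(log: list[Interaction]) -> list[tuple[str, str]]:
        # Keep only the well-received outputs and treat them as training targets.
        return [(i.prompt, i.response) for i in log if i.feedback > 0]

    def train_on(pairs: list[tuple[str, str]]) -> None:
        """Placeholder for whatever fine-tuning routine you'd actually use (LoRA, full FT, etc.)."""
        raise NotImplementedError
    ```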

    What do you think about agentic systems, i.e. running an LLM in a loop with a scratchpad and tools? They just write their “memories” into text files, but if you consider those text files part of the system, then the input does technically change the system. Of course, you could argue that doesn’t count because it’s no different to changing the input. So to count, it would have to store neuralese or a LoRA or something?
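    To make the scratchpad loop concrete, here’s a toy sketch. call_llm and memories.txt are placeholders I made up, not a real API; the point is just that the model’s own output is appended to a file that gets fed back in next time:

    ```python
    # Toy agent loop: replies are appended to a text file, and that file is
    # read back in on the next turn. The "system" (frozen weights + growing
    # scratchpad) is different every time, even though the weights never change.
    from pathlib import Path

    SCRATCHPAD = Path("memories.txt")  # hypothetical memory file

    def call_llm(prompt: str) -> str:
        """Placeholder for whatever model endpoint you actually use."""
        raise NotImplementedError

    def agent_step(user_input: str) -> str:
        # Read back everything the agent has written so far...
        memories = SCRATCHPAD.read_text() if SCRATCHPAD.exists() else ""
        # ...and feed it in alongside the new input.
        reply = call_llm(f"Memories so far:\n{memories}\nUser: {user_input}")
        # Appending changes what the next call sees.
        with SCRATCHPAD.open("a") as f:
            f.write(reply + "\n")
        return reply
    ```

    Whether that counts as the system changing, or just as a longer input, is exactly the question.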

    • daannii@lemmy.worldOP

      Agentic systems are definitely more sophisticated, but still just directed programming.

      Humans do not learn like machines learn.

      I’ve already explained that the exact same input, put in twice into a human, will not result in the same exact output.

      But it would for a model where nothing has changed.

      I also gave links on the binding problem and the biopsychology of personality, and on how traits change how information is processed in humans.

      I didn’t even go into neural noise or brain oscillations, but those are whole other factors in how information is processed.

      Computers don’t have any of that. They don’t actually perceive or understand anything.

      This is why a human can come up with new solutions to problems.

      And apply unrelated things to new problems.

      We can think outside the box without producing more nonsense than useful outputs.

      Machines produce mostly nonsense when parameters are relaxed.

      Also, Chalmers is saying he thinks that, potentially, in the future someone could create artificial intelligence, and it may in part use LLMs.

      That’s just him having an open mind about it.

      I don’t share his sentiments. But I admit I’m open to changing my mind if I see some very convincing evidence that works with current knowledge and theories of neuroscience.

      Because I’m not convinced that something is sentient just because “it looks real” or “sounds like a person”.

      It has to function in ways that would lead to evolution outside of human intervention and control, with systems that would create a sense of self and understanding.

      Mathematical formulas cannot do either of those things.

      A program directed by code a human put in, cannot do those things.

      It’s like CGI. It can look very realistic. But it’s not actually a real person.

      Even when motion capture is used, it’s still just a program mimicking human movements because someone (a human) told it to.