• daannii@lemmy.worldOP

    Agentic systems are definitely more sophisticated, but they’re still just directed programming.

    Humans do not learn like machines learn.

    I’ve already explained that the exact same input, given twice to a human, will not produce the exact same output.

    But it would for a model whose weights and settings haven’t changed.
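    To illustrate what I mean, here’s a minimal sketch (a toy stand-in for a model, not any real one; it assumes fixed weights and greedy decoding, i.e. no sampling):

```python
import numpy as np

# Toy "model": fixed weights mapping an input vector to next-token scores.
rng = np.random.default_rng(0)           # fixed seed -> weights never change
W = rng.normal(size=(4, 5))              # hypothetical weight matrix

def greedy_step(x):
    """Pick the highest-scoring token; no randomness involved."""
    logits = x @ W
    return int(np.argmax(logits))

x = np.array([0.1, -0.3, 0.7, 0.2])      # the "exact same input"
print(greedy_step(x) == greedy_step(x))  # True: identical output every time
```

    Feed the same input in a thousand times and, as long as nothing in the model has changed, you get the same output a thousand times.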

    I also gave links on the binding problem and the biopsychology of personality, and on how traits change the way humans process information.

    I didn’t even go into neural noise or brain oscillations, but those are a whole other factor in how information is processed.

    Computers don’t have any of that. They don’t actually perceive or understand anything.

    This is why a human can produce novel solutions to problems.

    We can apply unrelated things to new problems.

    We can think outside the box without producing more nonsense than useful outputs.

    Machines produce mostly nonsense when their sampling parameters are relaxed.
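    Here’s a rough sketch of what I mean by relaxed parameters, using temperature sampling over made-up token scores (a toy illustration, not any particular model):

```python
import numpy as np

def sample(logits, temperature, rng):
    """Soften the distribution with temperature, then sample one token."""
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [3.0, 1.0, 0.5, 0.1]             # index 0 is the one "sensible" token
rng = np.random.default_rng(0)

for t in (0.2, 5.0):
    picks = [sample(logits, t, rng) for _ in range(1000)]
    frac = picks.count(0) / len(picks)
    print(f"temperature={t}: sensible token chosen {frac:.0%} of the time")
```

    With a low temperature it almost always picks the sensible token; relax it and the low-scoring junk starts to dominate.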

    Also, Chalmers is saying he thinks that, potentially, someone in the future could create artificial intelligence that may, in part, use LLMs.

    That’s just him having an open mind about it.

    I don’t share his sentiments. But I admit I’m open to changing my mind if I see some very convincing evidence that is consistent with current knowledge and theories in neuroscience.

    Because I’m not convinced that something is sentient just because “it looks real” or “sounds like a person”.

    It has to function in ways that would let it evolve outside of human intervention and control, with systems that would create a sense of self and understanding.

    Mathematical formulas cannot do either of those things.

    A program directed by code a human put in cannot do those things.

    It’s like CGI. It can look very realistic, but it’s not actually a real person.

    Even when motion capture is used, it’s still just a program mimicking human movements because someone (a human) told it to.