The little person on the control panel, the one who sees what the retina produces, the one who decides, the one who speaks up…
(This is the dualist answer to the free will problem–yes, I have a physical body, they say, but I also have a little human within me that gets to make free choices separate from that…)
Anthropomorphism is a powerful tool. When we encounter something complicated, we imagine that, like us, it has a little person at the controls, someone who, if we were on the control panel, would do what we do.
A tiger or a lion isn’t a person, but we try to predict their behavior by imagining that they have a little person (perhaps more feline, more wild and less ‘wise’ than us) at the controls. Our experience of life on Earth is a series of narratives about the little people inside everyone we encounter.
Artificial intelligence is a problem, then, because we can see the code and thus have proof that there’s no little person inside.
So when computers beat us at chess, we said, “that’s not artificial intelligence, that’s just dumb code that can solve a problem.”
And we did the same thing when computers began to “compose” music or “draw” pictures. The quotes are important, because the computer couldn’t possibly have a little person inside.
And now, LLMs and things like ChatGPT turn this all upside down. Because it’s essentially impossible, even for AI researchers, to work with these tools without imagining the little person inside.
The insight that might be useful is this: We don’t have a little person inside us.
None of us do.
We’re simply code, all the way down, just like ChatGPT.
It’s not that we’re now discovering a new kind of magic. It’s that the old kind of magic was always an illusion.