Cognition is wordless, dude. The entire premise of it is built upon layers of sharp wave ripples that integrate memory from sense, emotion and landmark/non-landmark.
This is wax fruit. It's like sportscasting what cognition appears to be doing, post hoc, using things already patterned.
You're right about substrate. But you're missing the point about utility.
I'm not disputing that the term is wrong. It IS wrong. I agree. That's kind of my point: a wrong term is yielding measurably positive results, and it's being dismissed because it's not right.
For whatever reason, the term works. When "Cognition" is paired with Plato's modes and a single line of guidance, there's a big point improvement.
Measurable.
Replicable across models.
It seems, from what I can tell, to give the LLM the right lens to look through from the outset, sending it down the right path early on.
If it's wax fruit, prove it. I'm totally open to being proved wrong, to someone showing me that those few words don't make a big difference.
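If anyone wants to take a swing at that, here's a minimal sketch of the A/B setup, assuming an OpenAI-compatible client. The prefix wording, model name, and task below are placeholders, not my actual lines:

```python
# Minimal A/B sketch: same task, same settings; the only variable is the
# framing prefix. Prefix, model, and task are placeholders, not my setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical stand-in for the framing lines under discussion.
PREFIX = ("Cognition: reason through Plato's modes "
          "(noesis, dianoia, pistis, eikasia) before answering.\n\n")

TASK = "Review this function for flaws:\n<paste the same code in both arms>"

def run(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; use the same one in both arms
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run noise
    )
    return resp.choices[0].message.content

baseline = run(TASK)         # Agent A: task only
framed = run(PREFIX + TASK)  # Agent B: framing lines + task
# Score both outputs the same way (e.g., count of real flaws found),
# then repeat over many runs before comparing.
```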
It's wax fruit because we know the term "cognition" is bunk as a stand-in for intelligence:
https://pmc.ncbi.nlm.nih.gov/articles/PMC7415918/
Again, I agree. Cognition is way too vague a term.
You're actually making the argument FOR what I'm saying.
From what I can see, Buzsáki says "cognition" is bad for neuroscience BECAUSE it's philosophically loaded.
Doesn't that reinforce my point that it's good for LLM engineering PRECISELY because it's philosophically loaded?
Basically, if "cognition" is philosophically inherited, then maybe that's why it's working for LLMs with Plato's model. Because training corpora are philosophy-heavy?
No, it's folk science. It isn't vague; it's an illusion. Cognition is false on all counts because it's folk science impregnating science and CS (and philosophy). Read his argument carefully; don't just select a downstream argument, because that's not scientifically viable. Cognition doesn't exist because it's based on the lowest-res meaning possible. Sequestered cause and effect, it's a magic trick. Like LLMs.
You keep reinforcing my point. Absolutely true that folk psychology impregnates CS. I mean, look at Churchland. Totally spot on with stuff being flawed.
I am not disputing that. It is flawed. I agree. But what if those flaws impregnate a system to the point that they produce functional utility? When a system is trained on that same folk psychology, it can yield measurable results.
This isn't about "does cognition exist", or even "is cognition correct". It's about what happens when you stop listening to theory and start looking at empirical evidence that seems to show these structured relational patterns producing measurable differences. If the difference is empirically distinguishable, does it matter whether it's "folk science", "vague", or "an illusion"?
If I add 10 words to a prompt and it actually produces better quality, isn't that worth exploring, regardless of what any expert says?
Anyway, let's see what independent testing shows. We might be hitting a philosophical impasse.
Ah, you're misunderstanding. I'm not measuring "cognition."
See, it's much simpler.
Concrete test setup:
Agent B found 20% more flaws than Agent A. Only variable: those 4 lines. Objective measurement: count of flaws detected.
N=40 runs, statistically significant improvement.
The evidence is all in the repo.
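And for anyone who'd rather check the stats than argue philosophy, here's a minimal one-sided permutation test sketch. The per-run flaw counts below are made-up placeholders (assuming 20 runs per arm); the real ones are in the repo:

```python
# One-sided permutation test on per-run flaw counts. The arrays are
# placeholder data, NOT the repo's numbers; substitute the real counts.
import numpy as np

rng = np.random.default_rng(0)

agent_a = np.array([10, 9, 11, 10, 8, 12, 9, 10, 11, 10,
                    9, 10, 8, 11, 10, 9, 12, 10, 9, 11])    # baseline runs
agent_b = np.array([12, 11, 13, 12, 10, 14, 11, 12, 13, 12,
                    11, 12, 10, 13, 12, 11, 14, 12, 11, 13])  # framed runs

observed = agent_b.mean() - agent_a.mean()
pooled = np.concatenate([agent_a, agent_b])

# Shuffle the arm labels many times: how often does chance alone produce
# a gap at least as large as the one observed?
n_iter = 100_000
hits = 0
for _ in range(n_iter):
    perm = rng.permutation(pooled)
    gap = perm[len(agent_a):].mean() - perm[:len(agent_a)].mean()
    if gap >= observed:
        hits += 1

p_value = hits / n_iter
print(f"observed gap: {observed:.2f} flaws/run, one-sided p = {p_value:.4f}")
```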