Great article, and I 100% agree about the incentives for goalpost-moving and for not running the Randi challenge (or the current one) well. However, I think it's also important to explore, stress-test (the structure of the elusiveness may itself be testable), and publicize MPI as a possible account of how psi works: strong and real in meaningful, closed, bounded systems, and weak in the lab or in prize settings. Because if MPI is correct, then this neutral challenge, even if properly and rigorously run and requiring a signal strong enough to satisfy skeptics, may simply not be winnable - and may ironically end up bolstering the "psi isn't real at all" perspective unless the MPI view is well understood. (The same applies to every attempt to conclusively and rigorously demonstrate a psi signal, including social-rv, the telepathy tapes, university studies, etc.) That said, a null result might not leave things much worse off than they already are, so the attempt could still be worthwhile. It might also be interesting to brainstorm whether some kind of MPI-friendly challenge is possible (though I don't know what that would look like). If MPI is right, it could also have big implications for studies of energy healing and even for the replication crisis in mainstream psychology and some medical experiments.
Some notes on this from chat:
Yeah, MPI would absolutely say: this kind of challenge is more likely to arm skeptics than to disarm them.
1. What MPI predicts will happen in a prize-style test
From MPI’s viewpoint:
Strict, repeatable, signal-like tests are exactly where psi-effects are expected to decline.
Lucadou shows that if the system is governed by non-local (entanglement-like) correlations, then in strict replications the effect size should fall roughly like 1/√n, and meta-analyses do in fact show this pattern (a toy numerical sketch of what this decline implies for a prize test is given at the end of this section). The authors of one big PK meta-analysis interpreted that decline as: “there is no real PK effect, it’s just artefact / selective reporting.”
Proof‑oriented experiments are called out as a dead end.
Lucadou explicitly concludes that, given this pattern, “it seems not useful to continue the research strategy of ‘proof‑oriented’ experiments, because a strict replication is the best recipe to destroy the effect.”
A high‑profile, high‑stakes $2M challenge is the archetype of a proof‑oriented experiment.
Heavy observation and documentation suppress autonomy and uniqueness.
MPI generalizes the “uncertainty relation”:
Effect‑size of a psi phenomenon × quality of documentation < constant.
And in the poltergeist/RSPK work, Lucadou says a system “can only behave as it pleases as long as one does not observe it with great care. A predetermined system loses its autonomy and because of that its ability to be unique as well.”
A prize test maximizes careful observation and pre‑determination.
So, from MPI’s own math and case analyses, the most likely outcome of a carefully monitored, endlessly replicable, high‑documentation psi challenge is:
Small or null effects in the official outcome variables,
with any entanglement‑like correlations either suppressed or displaced into places no one is looking.
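To make the decline claim concrete, here is a minimal toy sketch (my own illustration with an arbitrary constant, not Lucadou's actual derivation): if the per-trial effect size falls like c/√n as a challenge adds trials, the cumulative z-score for the official outcome variable stays flat instead of accumulating, so no number of extra runs pushes the result past a skeptic-proof threshold.

```python
# Toy illustration (not Lucadou's model): an effect size that declines like
# c / sqrt(n) keeps the cumulative z-score (~ effect_size * sqrt(n) for a
# binomial-style guessing task) pinned at a constant, so adding trials never
# yields a decisive headline statistic.
import math

c = 0.5  # hypothetical constant; the exact value is arbitrary

for n in [100, 1_000, 10_000, 100_000]:
    effect_size = c / math.sqrt(n)   # MPI-style decline with trial count
    z = effect_size * math.sqrt(n)   # rough cumulative z-score
    print(f"n={n:>7,}  effect size={effect_size:.4f}  z ~ {z:.2f}")
```

On this assumption, quadrupling the trial count halves the per-trial effect and leaves the overall statistic exactly where it was.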
2. How skeptics will naturally use that
Now combine that with the social side:
Skeptics already interpret the decline of effect size with more trials as evidence that “there is no real effect.”
MPI says: that decline is exactly what you expect if psi is non‑local and cannot be used to send a signal (NT‑axiom);
skeptics say: that decline shows it’s all noise and bias.
A big, neutral, well‑designed $2M challenge that yields:
No clear success → becomes the new flagship “proof” that psi doesn’t exist.
Ambiguous or marginal success → is easy to reframe as: “look, once they tightened controls, the big spooky effects evaporated,” which fits the same narrative.
And because MPI predicts that proof‑oriented, heavily controlled replications are the worst possible context to look for robust psi, you’ve essentially:
Set up an experiment in a regime where your own theory says the effect should be minimal,
and then handed the resulting nulls to critics as headline evidence.
So yes: from an MPI perspective, a “neutral,” high‑stakes challenge is not just unlikely to prove psi—it is structurally set up to generate exactly the kind of clean null result skeptics can brandish as ammunition.
If you wanted to avoid that trap, MPI would push you toward correlation‑matrix style studies, rich real‑world systems, and designs that acknowledge elusiveness up front, rather than putting all the rhetorical weight on a single, proof‑oriented showdown.
3. Are Cardeña’s results in tension with MPI?
They would be in tension if Cardeña had shown something like:
Large, stable effect sizes that don’t fall with more trials,
Easily repeatable on demand,
Working fine under maximally rigid, high‑documentation, high‑replication conditions,
Suggesting you really can treat psi as a reliable info channel.
That would contradict the MPI decline law and the non‑transmission (NT) axiom.
But what he actually shows is:
Effects are small;
Often strongest in free‑response / altered‑state / meaning‑rich settings and for selected, psi‑conducive participants;
Replication is imperfect and context‑dependent; psi cannot be replicated “on demand”;
Forced‑choice and large‑n micro‑PK paradigms show tiny effects that shrink with run length—exactly the kind of 1/√n decline MPI predicts.
From an MPI standpoint, that’s not counter‑proof; it’s empirical texture that fits the model:
There is something genuine enough to show up across many studies and paradigms.
Yet it stubbornly refuses to turn into a big, clean, industrial‑grade signal.
In fact, Lucadou’s whole move is: given this sort of meta‑analytic “yes, but small, and weirdly elusive” evidence, how do we model it in a way that (a) respects physics (no paradox machines) and (b) matches the observed decline / elusiveness patterns?
4. Paradox avoidance as a foundational motivation in MPI
In one of Lucadou’s core expositions, MPI is introduced like this:
“The MPI starts from the basic assumption that nature does not allow (intervention) paradoxes.”
He then immediately turns this into the “two fundamental laws of parapsychology”:
Psi-phenomena are non-local correlations in psycho-physical systems, induced by the pragmatic information that creates the system (organizational closure).
Any attempt to use those non-local correlations as a signal (a classical causal effect) makes the correlation vanish or change.
Those two “laws” are basically:
You can have correlations, but you don’t get to use them to build paradox-inducing machines.
From there, MPI derives things like:
The decline law: effect size must fall roughly like const/√n as you add more trials, so you can’t drive Z-scores arbitrarily high and weaponize psi as a reliable channel.
The uncertainty-like relation between effect size and documentation quality (QDcrit · E < OC), so huge, spectacular effects can’t coexist with arbitrarily strong objectification (a toy numerical sketch of this trade-off follows at the end of this section).
All of that is directly motivated by: if you could lock in a stable, controllable, high-powered psi signal, you could build time-loop / intervention paradox setups; therefore you can’t.
So yes: paradox avoidance is right at the root of MPI’s structure, not an afterthought.
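For a rough feel of what that QDcrit · E < OC constraint does, here is a small sketch with made-up numbers (the labels, constant, and QD scale are my assumptions, not values from Lucadou): as documentation quality rises, the ceiling on the permitted effect size drops.

```python
# Toy illustration of the MPI-style trade-off QD * E < OC. The constant and
# the QD values are hypothetical, chosen only to show the shape of the
# constraint: better documentation => smaller maximum effect size.
OC = 1.0  # hypothetical "organizational closure" constant

def max_effect_size(qd: float, oc: float = OC) -> float:
    """Largest effect size E still compatible with QD * E < OC."""
    return oc / qd

for label, qd in [("casual private demo", 1.0),
                  ("filmed lab session", 5.0),
                  ("prize-grade protocol", 50.0)]:
    print(f"{label:<22} QD={qd:>5.1f}  max E ~ {max_effect_size(qd):.3f}")
```

The point is only the direction of the inequality, not the particular numbers.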
5. Isn’t that just an unprovable cop‑out / moving goalposts?
Short answer: it definitely sounds like that to skeptics, and that criticism isn’t crazy.
How MPI proponents usually respond, briefly:
“If psi is non-local correlation rather than a signal, then signal-style tests will systematically damp or distort it.”
That’s a structural claim, not a post‑hoc excuse.
Still testable, just in a different way.
MPI is falsifiable at the level of patterns, for example:
If someone showed a big, stable psi effect that holds up under intense documentation and endless direct replications, that would contradict MPI’s predicted decline/uncertainty behavior.
If effect sizes didn’t tend to drop with more trials and stricter controls, that would also count against it.
So it’s not “anything goes,” but the tests are about where and how effects appear, not just “is there a giant repeatable lab signal?”
Honesty about limits vs. PR strategy.
MPI basically says:
“You can’t have everything at once: big effects, perfect control, maximum documentation, and infinite repeatability. Something has to give.”
Skeptics interpret that as evasion; MPI treats it as part of the phenomenon’s nature, much like how trying to pin down a quantum system too precisely changes what you see.
MPI’s answer is: “We’re not dodging tests; we’re saying the kind of tests you want are mismatched to the phenomenon. Test the structure of the elusiveness itself.”
6. Does MPI say psi doesn’t exist outside the lab?
No — if anything, MPI takes “real world” psi more seriously than lab psi.
It starts from spontaneous and meaningful cases (RSPK/poltergeists, counseling situations, synchronicities, etc.) and treats those as the primary data.
Lab effects are viewed as small, constrained side-effects of the same underlying thing, not the main arena.
Important nuance: MPI doesn’t prove psi is real; it assumes that the body of anomalous case material is worth modeling, and then proposes a framework (entanglement-like correlations in systems) to explain why lab proof is so hard.
7. Can “skill-based” psi work in closed, meaningful real-world situations?
In MPI terms: yes, that’s exactly where you’d expect it to work best, if it exists at all.
When a system (person + relationship + context + tools) is meaningfully unified and not under hostile/forensic scrutiny, MPI says entanglement-like correlations can be relatively strong and “skill-like.”
The moment you push that into a public, high-stakes, heavily monitored “prove it” setting, you are forcing it into a role (reliable signal on demand) that MPI explicitly says it cannot stably occupy (NT-axiom).
So MPI absolutely allows for “real-world competence” that doesn’t survive being turned into a televised exam.
8. Why a private skeptic demo is fine under MPI
Imagine this scenario:
One skilled psi practitioner
Two hard‑line skeptics
Private setting, maybe low to moderate documentation, small number of trials
From an MPI angle this can easily be a high‑E, high‑D, moderate‑G, low‑N situation:
High novelty (E): unusual, emotionally loaded encounter for everyone.
High involvement/dimensionality (D): people’s attention, expectations, status, possible worldview threat – all bound up in one interaction.
Moderate documentation (G): maybe notes or some simple recording, but not full forensic, multi‑lab replication.
Low N: a few runs, not thousands.
Plug that into Lucadou’s effect‑size logic and you actually expect that sizeable effects can appear there.
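A purely hypothetical toy, combining the two bounds already quoted (effect ≤ const/√N from the decline law and effect ≤ OC/QD from the documentation relation) with invented constants, shows the same contrast numerically: a low-N, moderately documented private encounter leaves a large ceiling for the effect, while a prize-grade protocol squeezes it toward zero.

```python
# Purely illustrative: combine the decline bound (C / sqrt(N)) and the
# documentation bound (OC / QD) with invented constants to contrast the
# private-demo regime with a prize-style regime. Not Lucadou's actual formula.
import math

C, OC = 2.0, 1.0  # hypothetical constants

def effect_ceiling(n_trials: int, qd: float) -> float:
    """Smaller of the decline bound and the documentation bound."""
    return min(C / math.sqrt(n_trials), OC / qd)

print("private demo   :", round(effect_ceiling(n_trials=10, qd=2.0), 3))       # few runs, modest records
print("prize protocol :", round(effect_ceiling(n_trials=10_000, qd=50.0), 3))  # many runs, forensic records
```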
We already know from poltergeist/RSPK cases that quite dramatic psi‑like phenomena sometimes occur in front of many “respectable, reliable and independent witnesses,” including very skeptical ones; what kills them is not skepticism per se, but the later shift into heavy control and suppression.
It only really bites MPI if the following happens:
The same person, with the same two skeptics (or replacements),
Under very strict, repeatable, highly documented conditions,
Continues to produce large, stable, on‑demand effects over and over,
And the skeptics can then use those effects as a reliable information channel (“tell me the face of this card, again and again, forever”).
That would begin to look like a signal, not just an entanglement‑style correlation, and that would clash with MPI’s NT axiom and decline/displacement predictions.
But a strong one‑off (or a few‑off) private demonstration—even if it totally convinces those two skeptics on a personal level—doesn’t violate MPI at all.
So, in one line:
MPI has no problem with “wow, I just saw something I can’t explain, even though I’m a skeptic.” It only forbids turning that “wow” into a perfectly reliable telegraph.
https://ejhong.substack.com/p/model-of-pragmatic-information-faq
Thanks Eugene!
Full rebuttal incoming... stay tuned!