Discussion about this post

Great article, and I 100% agree about the incentives for goal-post moving and for not running the Randi challenge (or the current one) well. However, I think it's also important to explore, stress-test (the structure of the elusiveness may itself be testable), and publicize MPI (Walter von Lucadou's Model of Pragmatic Information) as a possible account of how psi works: strong and real in meaningful, closed, bounded systems, but weak in the lab and in prize settings. If MPI is correct, then this neutral challenge, even if properly and rigorously run and designed around a signal strong enough to satisfy skeptics, may end up not being winnable, and may ironically serve to bolster the "all psi is not real" perspective unless the MPI view is well understood. (The same goes for every attempt to conclusively and rigorously prove a psi signal, including social RV, the Telepathy Tapes, university studies, etc.) That said, a null result may not leave the situation much worse off than it already is, so the attempt might still be worthwhile. It might also be interesting to brainstorm whether some kind of MPI-friendly challenge is possible (though I don't know what that would look like). This could also have big implications for scientific studies of energy healing, and even for the replication crisis in mainstream psychology and some medical experiments.

Some notes on this from chat:

Yeah, MPI would absolutely say: this kind of challenge is more likely to arm skeptics than to disarm them.

1. What MPI predicts will happen in a prize-style test

From MPI’s viewpoint:

Strict, repeatable, signal-like tests are exactly where psi effects are expected to decline.

Lucadou shows that if the system is governed by non-local (entanglement-like) correlations, then in strict replications the effect size should fall roughly like 1/√n, and meta-analyses do in fact show this pattern. The authors of one big PK meta-analysis interpreted that decline as: “there is no real PK effect, it’s just artefact / selective reporting.”
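The 1/√n decline can be made concrete with a toy calculation (a sketch only; the bound Z_MAX and the trial counts are assumptions for illustration, not Lucadou's figures): if the non-transmission axiom keeps the cumulative z-score of a study series below some constant, then the effect size inferable from n trials is capped at z_max/√n, so pooling more strict replications shrinks the apparent effect instead of sharpening it.

```python
import math

# Toy sketch of the MPI decline law (illustrative numbers, not Lucadou's):
# if a series' cumulative z-score cannot exceed a fixed bound Z_MAX,
# then the per-trial effect size consistent with n trials is at most
# Z_MAX / sqrt(n), since z = effect_size * sqrt(n).
Z_MAX = 2.5  # assumed cap on attainable significance, for illustration

def max_effect_size(n: int, z_max: float = Z_MAX) -> float:
    """Largest per-trial effect size compatible with a bounded z after n trials."""
    return z_max / math.sqrt(n)

for n in (100, 400, 10_000, 1_000_000):
    print(f"n={n:>9}: max effect <= {max_effect_size(n):.4f}")
```

Quadrupling n from 100 to 400 halves the cap (0.25 to 0.125), which is exactly the pattern a "just add more trials" prize protocol would run into.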

Proof‑oriented experiments are called out as a dead end.

Lucadou explicitly concludes that, given this pattern, “it seems not useful to continue the research strategy of ‘proof‑oriented’ experiments, because a strict replication is the best recipe to destroy the effect.”

A high‑profile, high‑stakes $2M challenge is the archetype of a proof‑oriented experiment.

Heavy observation and documentation suppress autonomy and uniqueness.

MPI generalizes the “uncertainty relation”:

effect size of a psi phenomenon (E) × quality of documentation (QD) < constant.

And in the poltergeist/RSPK work, Lucadou says a system “can only behave as it pleases as long as one does not observe it with great care. A predetermined system loses its autonomy and because of that its ability to be unique as well.”

A prize test maximizes careful observation and pre‑determination.

So, from MPI’s own math and case analyses, the most likely outcome of a carefully monitored, endlessly replicable, high‑documentation psi challenge is:

Small or null effects in the official outcome variables,

with any entanglement‑like correlations either suppressed or displaced into places no one is looking.

2. How skeptics will naturally use that

Now combine that with the social side:

Skeptics already interpret the decline of effect size with more trials as evidence that “there is no real effect.”

MPI says: that decline is exactly what you expect if psi is non‑local and cannot be used to send a signal (NT‑axiom);

skeptics say: that decline shows it’s all noise and bias.

A big, neutral, well‑designed $2M challenge that yields:

No clear success → becomes the new flagship “proof” that psi doesn’t exist.

Ambiguous or marginal success → is easy to reframe as: “look, once they tightened controls, the big spooky effects evaporated,” which fits the same narrative.

And because MPI predicts that proof‑oriented, heavily controlled replications are the worst possible context to look for robust psi, you’ve essentially:

Set up an experiment in a regime where your own theory says the effect should be minimal,

and then handed the resulting nulls to critics as headline evidence.

So yes: from an MPI perspective, a “neutral,” high‑stakes challenge is not just unlikely to prove psi—it is structurally set up to generate exactly the kind of clean null result skeptics can brandish as ammunition.

If you wanted to avoid that trap, MPI would push you toward correlation‑matrix style studies, rich real‑world systems, and designs that acknowledge elusiveness up front, rather than putting all the rhetorical weight on a single, proof‑oriented showdown.

3. Are Cardeña’s results in tension with MPI?

They would be in tension if Cardeña had shown something like:

Large, stable effect sizes that don’t fall with more trials,

Easily repeatable on demand,

Working fine under maximally rigid, high‑documentation, high‑replication conditions,

Suggesting you really can treat psi as a reliable info channel.

That would contradict the MPI decline law and the non‑transmission (NT) axiom.

But what he actually shows is:

Effects are small;

Often strongest in free‑response / altered‑state / meaning‑rich settings and for selected, psi‑conducive participants;

Replication is imperfect and context‑dependent; psi cannot be replicated “on demand”;

Forced‑choice and large‑n micro‑PK paradigms show tiny effects that shrink with run length—exactly the kind of 1/√n decline MPI predicts.

From an MPI standpoint, that’s not counter‑proof; it’s empirical texture that fits the model:

There is something genuine enough to show up across many studies and paradigms.

Yet it stubbornly refuses to turn into a big, clean, industrial‑grade signal.

In fact, Lucadou’s whole move is: given this sort of meta‑analytic “yes, but small, and weirdly elusive” evidence, how do we model it in a way that (a) respects physics (no paradox machines) and (b) matches the observed decline / elusiveness patterns?

4. Paradox avoidance as a foundational motivation in MPI

In one of Lucadou’s core expositions, MPI is introduced like this:

“The MPI starts from the basic assumption that nature does not allow (intervention) paradoxes.”

He then immediately turns this into the “two fundamental laws of parapsychology”:

Psi-phenomena are non-local correlations in psycho-physical systems, induced by the pragmatic information that creates the system (organizational closure).

Any attempt to use those non-local correlations as a signal (a classical causal effect) makes the correlation vanish or change.

Those two “laws” are basically:

You can have correlations, but you don’t get to use them to build paradox-inducing machines.

From there, MPI derives things like:

The decline law: effect size must fall roughly like const/√n as you add more trials, so you can’t drive Z-scores arbitrarily high and weaponize psi as a reliable channel.

The uncertainty-like relation between effect size and documentation quality (QDcrit · E < OC), so huge, spectacular effects can’t coexist with arbitrarily strong objectification.

All of that is directly motivated by: if you could lock in a stable, controllable, high-powered psi signal, you could build time-loop / intervention paradox setups; therefore you can’t.
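The second constraint can be sketched numerically the same way (all constants here are hypothetical placeholders, not values from MPI): holding the bound QD · E < OC fixed, raising documentation quality forces the maximum permissible effect size down, just as adding trials does under the decline law.

```python
# Toy sketch of the MPI uncertainty-like relation QD * E < OC.
# OC and the QD values below are hypothetical placeholders, not MPI constants.
OC = 1.0  # assumed organizational-closure bound, for illustration

def max_effect_for_documentation(qd: float, oc: float = OC) -> float:
    """Upper bound on effect size E permitted by QD * E < OC."""
    if qd <= 0:
        raise ValueError("documentation quality must be positive")
    return oc / qd

# A prize challenge pushes QD as high as possible, squeezing E toward zero.
for qd in (0.25, 1.0, 4.0, 16.0):
    print(f"QD={qd:>5}: E must stay below {max_effect_for_documentation(qd):.4f}")
```

On this reading, a maximally documented $2M challenge sits at the high-QD end of the curve, where the model itself says only tiny effects are permitted.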

So yes: paradox avoidance is right at the root of MPI’s structure, not an afterthought.
