(Not to be confused with Mirror, Mirror – Star Trek: The Original Series, first airdate October 6, 1967.)
In what follows, the goal is to define a Reflective Path Ontology leading to a Reflective Path Integral formulation.
(“Reflective” is defined below. Alternatives: sum over reflective histories, sum over histories and futures.)
From Physics of the Impossible: A Scientific Exploration Into the World of Phasers, Force Fields, Teleportation, and Time Travel, by Michio Kaku:
T-reversal by itself violates the laws of quantum mechanics, but the full CPT-reversed universe is allowed. This means that a universe in which left and right are reversed, matter turns into antimatter, and time runs backward is a fully acceptable universe obeying the laws of physics! (Ironically, we cannot communicate with such a CPT-reversed world. If time runs backward on their planet, it means that everything we tell them by radio will be part of their future, so they would forget everything we told them as soon as we spoke to them. So even though the CPT-reversed universe is allowed under the laws of physics, we cannot talk to any CPT-reversed alien by radio.)
What follows is a very sketchy draft, unlikely to be believed. Hopefully, better versions to follow.
This is the main ontological thesis: For a retrocausal reality, there would be a mirror (parallel) universe that is CPT-reversed from our universe – a universe which (stochastically) influences but does not appear in ours.
numbers, literals (‘Y’, ‘N’), …
variables: italics x, X, …
logical variables: _ (‘anonymous’), _X, _Y, …
particles: small letters a,b, …, x, …
locations: cap letters A,B, …, S, …
sets of locations: cap greek letters Δ, Σ, Ξ, …
path of particle x: path(x, state and/or locations)
path integral of x going from S to some D ∈ Δ:
path_integral(x, S, Ξ, Δ) returns a D ∈ Δ randomly selected according to the Feynman probability calculus of a particle x going from S to some location in Δ via some location in Ξ. If Δ is a singleton (a set with one element), the destination is already fixed, so the call instead returns a randomly selected via-location in Ξ.
stochastic unification of _X with respect to a distribution:
σ_unify(_X, D) results in a single unification chosen stochastically from a population of possibilities:
A logical variable _X is unified with a single element x[i], with probability p[i] given by the distribution D (a set with probabilities assigned to its elements).
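The two operators above can be sketched in Python. This is a toy, discrete version under stated assumptions: locations are plain labels, the action is a user-supplied function, and the names sigma_unify / path_integral merely mirror the σ_unify and path_integral notation above.

```python
import cmath
import random

def sigma_unify(distribution, rng=random):
    """Stochastic unification: bind a logical variable to one element
    x[i] of the distribution with probability p[i].
    `distribution` is a dict {element: probability}."""
    elements = list(distribution)
    weights = [distribution[x] for x in elements]
    return rng.choices(elements, weights=weights, k=1)[0]

def path_integral(action, S, Xi, Delta, rng=random):
    """Toy discrete path integral: a particle leaves S, passes through
    some via-location in Xi, and lands at some D in Delta.  For each D,
    sum the amplitudes exp(i * action(S, V, D)) over all V in Xi, square
    the modulus to get a Feynman weight, then stochastically unify the
    destination against that weight distribution."""
    weights = {}
    for D in Delta:
        amp = sum(cmath.exp(1j * action(S, V, D)) for V in Xi)
        weights[D] = abs(amp) ** 2
    total = sum(weights.values())
    return sigma_unify({D: w / total for D, w in weights.items()}, rng)
```

Here σ_unify is the primitive: path_integral builds a Feynman weight for each destination by summing amplitudes over the via-locations in Ξ, then delegates the stochastic choice to σ_unify.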
For a particle x beginning at A and terminating at B, blue paths are summed at B, orange paths are summed at -A. (Is there a possibility that the orange summation at -A can stochastically influence which decision x makes at A about which path to pursue? And vice versa.) The blue x and the orange -x never meet, though: x and -x go along their respective paths in separate worlds. (The retrocausality in this scenario means that although there is a “sum over histories” and a “sum over futures”, only one path is selected: the one path that was already chosen probabilistically in the past based on its future. See below.)
The EPR experiment
There is a source S that simultaneously emits two entangled particles a and b, which travel from the emitter S to detectors A (in one direction) and B (in the opposite direction) respectively. (In the orange world, there are the counterparts to S, A, B: -S, -A, -B.) There are not two but four RFPs to consider: path(a), path(-a), path(b), path(-b). path(a) and path(b) are in our time perspective, path(-a) and path(-b) in the CPT-reversed perspective: orange particles going from -A and -B arrive at -S at the same time. (In the orange world, -a and -b are absorbed by -S.)
The example used here is from Huw Price’s Time’s Arrow and Archimedes’ Point: New Directions for the Physics of Time (beginning pg. 213) about what happens on a planet called Ypiaria (“Pronounced, of course, ‘E-P-aria’.”)
The scenario here is that there is a pair of twins a and b who depart from S and travel to A and B respectively. At each place A and B, there is an interrogator who asks them respectively a question.
Only one question could be asked, chosen at random from a list of three:
(1) Are you a murderer?
(2) Are you a thief?
(3) Have you committed adultery?
The assumption is that each twin is truthful. The interrogators recorded all questionings of all twin pairs.
The records came to be analyzed by the psychologist Alexander Graham Doppelganger.
He found that
(D-1) When each member of a pair of twins was asked the same question, both always gave the same answer; and that
(D-2) When each member of a pair of twins was asked a different question, they gave the same answer on close to 25 percent of such occasions.
It may not be immediately apparent that these results are in any way incompatible.
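A quick Monte Carlo sketch (hypothetical code, using the Y/N coding and question numbering above) makes the incompatibility concrete. If each pair of twins shares a hidden answer list (S(1), S(2), S(3)), same questions always agree, reproducing (D-1); but the different-question agreement rate then has a classical floor of 1/3 (attained when the list always holds two identical answers and one different one), which contradicts the observed 25 percent of (D-2).

```python
import random

def simulate(n_pairs, triples, rng):
    """Each pair of twins shares a hidden answer list (S1, S2, S3),
    drawn uniformly from `triples`; each interrogator independently
    picks one of the three questions.  Returns the agreement rates for
    same-question and different-question interrogations."""
    same_agree = same_total = diff_agree = diff_total = 0
    for _ in range(n_pairs):
        answers = rng.choice(triples)          # shared hidden variables
        qa, qb = rng.randrange(3), rng.randrange(3)
        agree = answers[qa] == answers[qb]
        if qa == qb:
            same_total += 1; same_agree += agree
        else:
            diff_total += 1; diff_agree += agree
    return same_agree / same_total, diff_agree / diff_total

rng = random.Random(0)
all_triples = [(a, b, c) for a in 'YN' for b in 'YN' for c in 'YN']
mixed_triples = [t for t in all_triples if len(set(t)) == 2]  # e.g. ('Y','Y','N')

same, diff = simulate(100_000, all_triples, rng)    # uniform hidden variables
print(same, diff)    # same-question agreement is exactly 1.0; diff is roughly 1/2

same, diff = simulate(100_000, mixed_triples, rng)  # the adversarial minimum
print(same, diff)    # diff is roughly 1/3 -- the classical floor, still above 0.25
```

The design choice is deliberate: no hidden answer list can push the different-question agreement below 1/3, so the 25 percent statistic cannot come from pre-set answers alone.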
What follows in Price’s Ypiaria story is how Doppelganger reasoned this out. (This parallels the statistics of a real EPR experiment.) Below is how it could work out in a Reflective Path Integral (RPI) formulation.
Let S(1) = ‘Y’ or ‘N’, S(2) = ‘Y’ or ‘N’, S(3) = ‘Y’ or ‘N’ (corresponding to “Yes” or “No” responses).
a with hidden variables (_S1,_S2,_S3):_Qa is sent from S to A; b with hidden variables (_S1,_S2,_S3):_Qb is sent from S to B. This is represented as two paths:
There are four paths to consider for a particle x that goes from the emitter E to the screen Σ (to a screen location S ∈ Σ), where there is a panel with two slits, Slit(1) and Slit(2), between the emitter and the screen:
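For the ordinary (blue-world) half of those four paths, a toy two-slit calculation in Python looks as follows. The wavenumber k and the geometry are invented for illustration; the two CPT-reversed paths are only noted in a comment, since in the picture above they carry the conjugate phases and give the same weight.

```python
import cmath

def slit_amplitude(E, slit, S, k=2.0):
    """Free-particle phase exp(i*k*L) for the leg E -> slit -> S,
    where L is the total path length.  Positions are 2-D points (x, y)."""
    leg = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    return cmath.exp(1j * k * (leg(E, slit) + leg(slit, S)))

def intensity(E, slits, S):
    """Sum the amplitudes over the two slits and square the modulus.
    (In the reflective picture, the orange paths -x run the reverse legs
    with conjugate phases, yielding the same |A1 + A2|**2 weight.)"""
    amp = sum(slit_amplitude(E, slit, S) for slit in slits)
    return abs(amp) ** 2

E = (0.0, 0.0)
slits = [(5.0, 1.0), (5.0, -1.0)]     # Slit(1) and Slit(2)
for y in (0.0, 0.5, 1.0, 1.5):        # sample screen locations S in Sigma
    print(y, intensity(E, slits, (10.0, y)))
```

At the symmetric point (y = 0) the two path lengths are equal, the amplitudes add in phase, and the intensity peaks at 4; off-axis, the phase difference between the slit paths produces the familiar interference fringes.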
Need to go back and clean up the two experiment sections.
Two stochastic operators were introduced: path_integral and σ_unify. Both operate in the forward and backward worlds.
Can reflective path integration with stochastic logical unification be the ingredients for a retrocausal reality?
The full picture of the cosmos is this: There is the whole collection of RFPs, the constituents of an RPI reality – our perspective and a CPT-reversed perspective – but we can only (fully) experience “half” of it. The CPT-reversed world is an orange ghost that influences our world, but we cannot talk to it.
Codicalists are interested in the codical aspects of a subject, and in which notables of a subject had or have a codical perspective – in particular for subjects generally considered outside computer science / programming languages. Which notable of a subject could be said to exemplify an interesting codicalism to some extent? In philosophy, Charles Sanders Peirce could be one example.
This note is meant to be the first in an intermittent series on codical notables, this one on Friedrich Kittler.
Friedrich A. Kittler (June 12, 1943 – October 18, 2011) was a literary scholar and a media theorist. His works relate to media, technology, and the military. […] In 1976, Kittler received his doctorate in philosophy after a thesis on the poet Conrad Ferdinand Meyer.
[Kittler] sees in writing literature, in writing programs, and in burning structures into silicon chips a complete continuum: “As we know and simply do not say, no human being writes anymore. […] Today, human writing runs through inscriptions burnt into silicon by electronic lithography […]. The last historic act of writing may thus have been in the late seventies when a team of Intel engineers [plotted] the hardware architecture of their first integrated microprocessor.”
Kittler is in between philosophy, the history of music, computer science, and media theory.
Kittler did not just write histories of media and computing, but argued that we need to understand old media in order to understand contemporary digital culture. Critics have branded this approach “media archaeology” – digging through the ruins of past media cultures in order to grasp the new. But Kittler was also an active tinkerer with machines and code: program, his motto seemed to be, or otherwise you will be programmed by someone from Silicon Valley.
If we had heeded the lessons of Kittler’s interdisciplinary approach, we might have got students to read Homer and Pynchon (two of his favorite authors) as well as programming manuals.
Talks at a conference, “The Path Integral for Gravity”, last November included “An asymptotically safe point of view on the gravitational path integral”. Couple that with “If gravity is asymptotically safe — that is, if the theory is well behaved at high energies — then that restricts the number of fundamental particles that can exist. This constraint puts asymptotically safe gravity at odds with some of the pursued approaches to grand unification. For example, the simplest version of supersymmetry — a long-popular theory that predicts a sister particle for each known particle — is not asymptotically safe. The simplest version of supersymmetry has meanwhile been ruled out by experiments at the LHC, as have a few other proposed extensions of the Standard Model. But had physicists studied the asymptotic behavior in advance, they could have concluded that these ideas were not promising.”
So there’s another alternative – asymptotically safe gravitational path integral – that could at least suggest SUSY will never show. But who knows.
Any approach that does succeed in interpreting the Feynman path integral realistically – i.e., that makes sense of the idea that the system actually follows just one of the possible histories that make up the path integral – is likely to be retrocausal. Dispelling the Quantum Spooks – a Clue that Einstein Missed?
(Huw Price, Ken Wharton)
(If one takes retrocausality seriously: There is only one path ever taken by a particle, but that one path was already chosen probabilistically in the past based on its future.)
The path-integral method seems to be the most suitable for the quantization of gravity. Quantum gravity and path integrals
S. W. Hawking
Phys. Rev. D 18, 1747 – Published 15 September 1978
A code theory (codicalist) perspective could take a different tack. The world is not a simulation, but a synthesis, as in synthetic or programmable matter. Synthetic biology is one example. Along the same lines, synthetic intelligence is the alternative to artificial intelligence:
Synthetic Intelligence: Beyond Artificial Intelligence and Robotics
Craig A. Lindley
“[Synthetic intelligence is] an alternative to artificial intelligence in which intelligence is pursued in a bottom-up way from systems of molecular and cellular elements, designed and fabricated from the molecular level and up. This paradigm no longer emphasizes the definition of representation and the logic of cognitive operations. Rather, it emphasizes the design of self-replicating, self-assembling and self-organizing biomolecular elements capable of generating cognizing systems as larger scale assemblies, analogous to the neurobiological system manifesting human cognition.”
So the simulation hypothesis would be replaced with the synthesis hypothesis:
“We are living in a synthesis — not a simulation.”
Instead of ‘Nature is a book written in the language of mathematics’, we should say: ‘Nature is a book written in the syntax of mathematics, but with the semantics of physics’.
— Jeremy Butterfield
Alternatively, “We are synthetic entities, not simulated entities.” To be diabolical, one can think of, instead of a superprogrammer creating a simulation, the superprogrammer using a matter compiler / 3D or 4D printer (cf. 3D bioprinting) to create the output. To me, Bostrom’s simulation hypothesis is backwards, as is Tegmark’s mathematical universe hypothesis. It would be better to concoct stories where we are not simulations but matter assemblies, as in SF molecular-assembler (matter compiler) stories.
Other names for the synthesis hypothesis: the synthetic matter hypothesis, the matter assembly hypothesis.
Fictionalism, according to some philosophers, can be seen (to varying degrees) in almost any domain: physical, cosmological, micrological, historical, moral, mathematical. Related to fictionalism: constructivism, quasi-realism. For the code theorist, these are potentially aspects of languages of the linguistic-material synthesis.