Ramblings On Paradoxes
- Aspen

- May 11, 2025
- 2 min read
In my time of self-discovery and evolution, I've come across numerous mentions of Newcomb's Paradox. It goes a little something like this: Suppose there is some form of super-intelligent being named Omega, whose judgment you trust completely and which you have never seen make an incorrect prediction about you or those similar to you.
Now, suppose there are two boxes. Box "A" is a transparent container holding one thousand dollars, and box "B" is an opaque container that holds either zero dollars or one million dollars. You can either take both boxes, or take only box "B." Omega decides how much money is put into box "B." If Omega believes you will take both boxes, it puts no money in box "B" and leaves box "A" untouched; however, it puts one million dollars in box "B" if it believes you will select only box "B." The prediction is made, and the boxes are presented to you. Omega disappears, so you know nothing about the contents of box "B" or the decision Omega made. What do you do?
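To make the stakes concrete, here is a minimal sketch in Python comparing the expected payoff of one-boxing versus two-boxing. The 99% predictor accuracy and the function name are my own illustrative assumptions; the thought experiment itself only says Omega has never been seen to be wrong.

```python
# Expected payoffs in Newcomb's problem, assuming Omega predicts your
# choice correctly with some fixed probability (illustrative only).

BOX_A = 1_000           # transparent box, always contains $1,000
BOX_B_FULL = 1_000_000  # opaque box, if Omega predicted "one-box"

def expected_value(choice: str, accuracy: float) -> float:
    """Expected winnings for 'one-box' or 'two-box' given Omega's accuracy."""
    if choice == "one-box":
        # Box B is full only if Omega correctly predicted one-boxing.
        return accuracy * BOX_B_FULL
    if choice == "two-box":
        # You always get box A; box B is full only if Omega guessed wrong.
        return BOX_A + (1 - accuracy) * BOX_B_FULL
    raise ValueError(f"unknown choice: {choice}")

if __name__ == "__main__":
    acc = 0.99  # assumed accuracy, not specified in the thought experiment
    print("one-box:", expected_value("one-box", acc))  # 990000.0
    print("two-box:", expected_value("two-box", acc))  # 11000.0
```

Under these assumptions, any accuracy above roughly 50.05% is already enough to tip the expected value toward taking only box "B," which is the arithmetic behind the intuition in the next paragraph.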
In this circumstance, the only way rationalism can see you coming out ahead of Omega is by having already made yourself, in the past, the kind of person who selects only box "B," so that Omega's near-perfect predictive ability places one million dollars into box "B." Because the decision that matters is about what kind of agent you are rather than about the moment of choice, the necessary decisions are in a sense timeless, which gives rise to Timeless Decision Theory.
Such a theory is what I've found to be a sufficient explanation for most of my own thinking. At minimum, it gives a backbone to the reasoning of effective altruism and to the notion that we have to consider the future when deciding our present actions. As such, we circle back to the idea of "suffer now, to help later" through a chain of justifications behind decisions that I may explain later, assuming I am around long enough to present them.
One of those justifications that I can explain now, at least, is that it partially falls under the idea of being the change we want to see in the world. The best way I can put it is that the "suffer now, to help later" doctrine revolves around making, today, the changes necessary to keep technological evolution on a net-positive path in the future. Should we fail to make those changes, it is unclear whether someone else will inevitably come along to finish the job for us, which makes it necessary that the change happen in the present to protect our future selves.
Paradoxes are funny little things, aren't they?