Which one are you? Please answer the question before getting into the explanation part of the video (starting at 3:32).
This is one of the better Veritasium videos I've watched.
I especially like the comments on free will, how to live your life, and rationality.
Whether we do or don't have free will, you have to live as though it exists.
This resonates with me because in my mid-20s, after reading several books touching on this topic, I slipped into a (quasi-)depressive mindset where it felt like nothing I did mattered.
At some point, I just decided to live by the idea that free will exists, as the alternative wasn't worth living with (and also, I almost alienated a few people around me when I tried to elaborate on those metaphysical ideas).
I am a one-boxer. (44.4%)
I am a two-boxer. (55.6%)
9 votes \ 1 day left
Computerphile did a video on this too.
I dunno man, I don't really agree when they say both approaches are valid. The one-box logic is not valid given the way the problem is set up, assuming you believe that causality runs in one direction.
I suspect that this may have a lot more to do with risk tolerances and ex-post regret than peoples' views on rationality and free will.
For example, if the numbers changed from $1k / $1M to $500k / $510k, I suspect a lot more people would take both boxes, despite the expected-utility logic working the same way.
Also tagging @Undisciplined and @denlillaapan, my two fellow SN economists, because this seems to be an area where the mathy nerds are bumping up against behavioral economic theory, and IMO they're taking the philosophy a bit further than it needs to go when perfectly reasonable economic thinking can explain the data (I think).
Wait, so are you now a one-boxer or a two-boxer?
I was a one-boxer upon hearing the problem. I can appreciate the arguments for two-boxing, but if given the chance, I'd still one-box it.
Yeah, I agree that most people just answer based on gut feeling. A follow-up question in their survey should have been: can you define EU (expected utility) in a Bayesian context?
I think at $1k and $1M I'd also one-box it, but this would be motivated by vague superstitious notions / ambiguity aversion and also trying to minimize the feeling of regret. (If I one-box, at most I lose out on $1,000, but if I two-box I could feel like I missed out on $1M.)
A harder decision would be $10k vs $100k.
I wish I had thought this through for myself before plowing ahead in the video.
In this situation, I would be thinking about whether I'm being given an honest and accurate description of the situation. If I could verify that everyone had gotten the million dollars and I'm not going to be the last participant, then I'm taking the one box.
That's only the strategically worse move if there's no uncertainty about whether the situation is what they claim it is.
Uncertainty over whether the situation is described accurately could also be uncertainty over whether the world works the way you think it does. So I think it's rational under many circumstances to take just the one box. But I don't think the video should have presented it as a valid EU-maximization-based chain of thought using the given probabilities. It's much more likely driven by unstated notions of risk aversion, ambiguity aversion, and regret minimization.
Let me Wikipedia those words tonight to see if I can give further intelligent input~~
Of course I agree with that.
I will say, the way I expect the world to work is that any bizarre highly contrived situation like this is probably a trick of some sort.
Also: #1451048
it would be nice to be able to merge posts into a megathread
I know this website is already incredibly feature-rich for a small user base, but megathreads would be nice to counter duplicates.
@k00b
(didn't think this merging suggestion through yet, but just pinging you in case you think this is something worth thinking about or there is a technical reason not to)
I want this too. The problem is specification. How exactly should it work?
You've thought about this more than I did, for sure.
Both levels of thinking are valuable. At my level of thinking it's easy to rule out awesome but hard things.
Then again, you went non-custodial. That's pretty awesome, yet it was hard.
It was awesomely hard. It's not awesome to use ... yet.
Oh shit... I initially posted it as a link post to see if there was any dupe. There wasn't. I should have checked related content (or probably safely assumed you had already posted it~~).
Well, let's see if we get any other input than @DarthCoin's.
How about you check out the posts in your own territory? Hahaha
Don't you know I'm just here to collect rent~~
#fiatMindset
~lol
If the supercomputer is indeed very accurate at predicting what you're going to do, it is most logical to one-box it. Otherwise, you're just going to get slapped with $1,000 by two-boxing it instead of $1,000,000 by one-boxing it.
Guess that's why the ratio was more in favor of one-boxing over two-boxing for the people surveyed by Veritasium, likely technical people, as opposed to the general population in the initial survey.
But that's maybe my bias speaking. I like to think of myself as rational and more erudite than average.
Mystery box + correct prediction --> $1M
Mystery box + incorrect prediction --> $0
Both boxes + correct prediction --> $1k
Both boxes + incorrect prediction --> $1.001M
With 50% accuracy:
Expected value of one box --> $500k
Expected value of two boxes --> $501k
With 50.05% accuracy (the break-even point):
Expected value of one box --> $500.5k
Expected value of two boxes --> $500.5k
With 99% accuracy:
Expected value of one box --> $990k
Expected value of two boxes --> $11k
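If anyone wants to play with the numbers, here's a minimal Python sketch of the same calculation. It assumes, like the payoff table above, that the predictor's accuracy p applies regardless of which choice you make:

```python
# Expected value of each strategy in Newcomb's problem, assuming the
# predictor is right with probability p regardless of your choice.
def expected_values(p):
    one_box = p * 1_000_000                      # $1M if predicted correctly, $0 otherwise
    two_box = p * 1_000 + (1 - p) * 1_001_000    # $1k if predicted correctly, $1.001M otherwise
    return one_box, two_box

for p in (0.5, 0.5005, 0.99):
    one, two = expected_values(p)
    print(f"accuracy {p:.2%}: one-box ${one:,.0f} vs two-box ${two:,.0f}")

# accuracy 50.00%: one-box $500,000 vs two-box $501,000
# accuracy 50.05%: one-box $500,500 vs two-box $500,500   <- break-even
# accuracy 99.00%: one-box $990,000 vs two-box $11,000
```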
How is this contentious? Your goal should be to maximize expected value, not prove to a fucking machine you have free will.
"Oh no, I must not have free will because I chose to be rich."
Yup, I had the same reasoning, and it didn't feel contentious either. But then, when I saw the two-box argument, I thought: "ah, why not?"
The video doesn't address the first question I had: how do you explain that the predictor can possibly have such a high accuracy?
Surely there is at least one person who, in such a scenario, will flip a fair coin to decide. You're telling me the predictor can reliably predict the coin toss? If this is true, the consequences of that fact are far greater than the amount of money I win in the game...
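For what it's worth, if you actually commit to flipping a fair coin (and assuming the predictor can't see the coin, so it's only right about you 50% of the time), the expected value lands exactly on the break-even figure:

Coin-flip strategy --> 0.5 × $500k + 0.5 × $501k = $500.5k

So randomizing only wins out over committed one-boxing if the predictor's accuracy against non-randomizers is below that 50.05% break-even.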
So, you're a two-boxer? Is your current action/thinking having an impact on the predictor's choice too hard to fathom in this Gedankenexperiment?
Yes, two-boxer.
I think the paradox comes because we have to assume an impossible precondition. An accurate predictor cannot exist as postulated. If it exists, our current understanding of the world is wrong.
It reminds me of Pascal's wager: one-boxing is believing the predictor can exist.
One-box.
If the state of the world is such that this scenario actually happens to me, I must assume that this scenario (in a more or less similar form) can happen again in the future. So I must build a good track record.
If the game is treated as iterated, the rational action is to one-box.