I WISH that's how Minecraft did it! But Minecraft specifically does not do this. Once chunks are generated, they are stored on disk as full voxels, from bedrock to sky. When the Minecraft version or generator function changes, you can see the transition line between old and new chunks (it used to be abrupt; now it's smoothed a bit). Large worlds take up gigabytes, or even terabytes, of space, even if most of it is untouched wild terrain.
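Back-of-envelope, as a sketch: assuming modern 16×16×384-block chunks and roughly one byte per block (real region files are palette-compressed, so on-disk sizes run smaller):

```python
# Rough raw voxel storage for explored Minecraft terrain.
# Assumptions: modern 16x16x384 chunks, ~1 byte per block;
# real region files are compressed, so on-disk size is lower.
BLOCKS_PER_CHUNK = 16 * 16 * 384           # 98,304 blocks per chunk
explored_area = 10_000 * 10_000            # a 10k x 10k block region
chunks = explored_area // (16 * 16)        # 390,625 chunks
raw_bytes = chunks * BLOCKS_PER_CHUNK
print(f"{raw_bytes / 2**30:.1f} GiB raw")  # ~35.8 GiB before compression
```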
Yeah, I was simplifying the description a bit relative to the actual technical underpinnings of Minecraft specifically.
Also, yes, it's arguably how Minecraft should have done it: versioning the seed function to ensure backwards compatibility with old worlds.
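A minimal sketch of that idea (all names hypothetical): each chunk records the generator version that first produced it, so unedited terrain can always be re-derived from the seed instead of being stored as voxels, and only player edits would need persisting:

```python
# Sketch of a "versioned seed function" world -- all names hypothetical.
from typing import Callable, Dict, List, Tuple

Chunk = List[str]  # placeholder for a real voxel structure

def gen_v1(seed: int, cx: int, cz: int) -> Chunk:
    return [f"old terrain for ({cx},{cz}) from seed {seed}"]

def gen_v2(seed: int, cx: int, cz: int) -> Chunk:
    return [f"new terrain for ({cx},{cz}) from seed {seed}"]

GENERATORS: Dict[int, Callable[[int, int, int], Chunk]] = {1: gen_v1, 2: gen_v2}

class World:
    def __init__(self, seed: int, current_version: int = 2):
        self.seed = seed
        self.current_version = current_version
        # per-chunk metadata only: which generator version made it
        self.chunk_versions: Dict[Tuple[int, int], int] = {}

    def load_chunk(self, cx: int, cz: int) -> Chunk:
        # a chunk stays pinned to the version that first generated it,
        # so later generator changes never shift existing terrain
        version = self.chunk_versions.setdefault((cx, cz), self.current_version)
        return GENERATORS[version](self.seed, cx, cz)

world = World(seed=42)
print(world.load_chunk(0, 0))  # re-derived on demand, never stored as voxels
```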
I disagree with the rest of your thesis too. You are saying that, in principle, the state and dynamics of the world could be described by a generator function: you input (x, y, z, t) and it returns what is happening at that place and time, without needing to reference or calculate the rest of the world. Or it would, IF NOT for free will. Like how, if I asked you "what is the millionth Fibonacci number", you could use a closed-form Fibonacci formula to calculate the millionth number directly, without needing to do a million intermediate additions.
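For example (a sketch; it uses the exact fast-doubling identities rather than Binet's closed form, since floating-point Binet loses exactness long before the millionth term):

```python
# Fast-doubling Fibonacci: O(log n) steps, no million intermediate sums.
def fib(n: int) -> int:
    def fd(k: int) -> tuple:
        # returns (F(k), F(k+1))
        if k == 0:
            return (0, 1)
        a, b = fd(k >> 1)
        c = a * (2 * b - a)      # F(2m)   where m = k // 2
        d = a * a + b * b        # F(2m+1)
        return (d, c + d) if k & 1 else (c, d)
    return fd(n)[0]

print(fib(1_000_000) % 10)  # last digit of the millionth Fibonacci number
```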
But what if I asked you "what is the millionth iterated SHA-256 hash of ''"? The hash function is perfectly deterministic, there is no quantum woo involved, and definitely no free will. And yet you could not answer me without calculating every single hash in between. Or, for a physical example: a double pendulum is an extremely simple system, yet you could not predict its state at time t, even knowing its starting parameters exactly, without calculating its dynamics for all the time in between.
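A sketch of that hash chain (assuming the convention of feeding the raw digest back in; hex would work too):

```python
# The n-th link of a SHA-256 hash chain: perfectly deterministic, yet
# there is no known shortcut -- every link must be computed in order.
# (Feeding the raw digest back in is an assumed convention here.)
import hashlib

def hash_chain(n: int, data: bytes = b"") -> str:
    for _ in range(n):
        data = hashlib.sha256(data).digest()
    return data.hex()

print(hash_chain(1_000_000))  # a million sequential hashes, no skipping
```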
This is my position: humans are purely physical systems, and there is no need to invoke magical outside supernatural influence. Physics does not behave differently, switching between "particle" and "wave", depending on whether a human is involved; that is a common misconception in popularized science. To determine what choice a human will make, knowing the starting positions of all the particles in the light cone is sufficient. However, you would not, in general, be able to predict the final configuration of a system without calculating every single intermediate state in between. Free will does exist in this sense: while making a decision, it is impossible for you to tell whether your momentary mental state is part of the greater physical universe or embedded in some calculation about that universe.
Except that for both Bell's theorem and the recent Wigner's friend variation, superdeterminism is one of only three possible ways to resolve the paradox, so the notion that free will exists is very much not physically clear at all, given the most recent experimental results.
Also, you seem to have misunderstood my point.
I'm saying that tracking non-deterministic state changes is easier with discrete data than with continuous data. So if the universe we are in is one that was designed, the design detail that interacted-with quanta resolve from continuous to discrete behavior at the point of interaction strongly lends itself to rejecting superdeterminism.
There's no advantage to switching from continuous to discrete tracking at the point of interaction if interactions are entirely deterministic, and inconsistency between the two introduces unnecessary and unexpected side effects.
The quantum-eraser behavior is pretty clearly in line with a lazy optimization at work, so this conversion is apparently expensive or undesirable enough that it needs to be optimized away when possible.
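As a loose software analogy only (names hypothetical, not a physics claim), lazy resolution looks like this: nothing is computed until something reads the value, and once read, it stays consistent:

```python
# Loose analogy: a lazily resolved value stays undetermined until
# something reads it, then collapses to a concrete result that all
# later reads agree with. Not a model of actual quantum mechanics.
import random
from typing import Callable, Optional

class Lazy:
    def __init__(self, sample: Callable[[], int]):
        self._sample = sample              # how to resolve, if ever asked
        self._value: Optional[int] = None  # nothing computed up front

    def measure(self) -> int:
        if self._value is None:            # resolve only on first access
            self._value = self._sample()
        return self._value                 # later reads match the first

photon = Lazy(lambda: random.choice([0, 1]))
print(photon.measure(), photon.measure())  # resolves once, then consistent
```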
Modeling a continuous universe (in line with general relativity) at macro scales but switching to discrete at micro scales could be advantageous for both deterministic and non-deterministic simulated systems running on discrete hardware. But switching from one to the other exclusively around measurements and interactions, rather than discretizing uniformly across the board, would be a very bizarre design decision, no?
The general difficulty of calculating certain deterministic functions, which you bring up, is a non sequitur to my point, unless you can make the case that converting from continuous to discrete at the point of measurement/interaction would be advantageous to an entirely deterministic system, given that multi-body interactions still seem to occur continuously, without issue, prior to measurement.