[-] person594@feddit.de 3 points 11 months ago

Kind of a tangent at this point, but there is a very good reason that couldn't be the case: according to the shell theorem, no point in the interior of a spherical shell of matter experiences any net gravitational force -- wherever you are inside the shell, the forces cancel exactly.

Otherwise, though, the metric expansion of space is different from typical movement, and it isn't correct to say that things are being pushed or pulled. Rather, the distance between every pair of points in the universe increases over time, with the rate of increase proportional to the points' current distance. Importantly, for very distant points, the distance can increase faster than the speed of light, which would be disallowed in any model that describes the expansion in terms of objects moving in a traditional sense.
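A minimal sketch of that proportionality (Hubble's law, v = H0 * d), assuming a value of H0 of roughly 70 km/s/Mpc -- the exact constant is an assumption here, not something from the comment above:

```python
H0 = 70.0          # assumed Hubble constant, km/s per megaparsec
c = 299_792.458    # speed of light, km/s

def recession_speed(distance_mpc):
    """Recession speed due to metric expansion, in km/s."""
    return H0 * distance_mpc

# The "Hubble radius" is the distance at which recession reaches c:
hubble_radius = c / H0   # a few thousand Mpc

# A galaxy twice that far recedes at twice the speed of light.
# That's allowed: nothing is moving *through* space that fast --
# the distance itself is growing.
print(recession_speed(2 * hubble_radius) / c)  # roughly 2.0
```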

[-] person594@feddit.de 4 points 11 months ago

That isn't really the case; while many neural network implementations make nondeterministic optimizations, floating point arithmetic is in principle entirely deterministic, and it isn't too hard to get a neural network to run deterministically if needed. That makes them perfectly usable for lossless compression, which is what is done in this article.
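To illustrate the point with a toy sketch (the loop below is an arbitrary stand-in for a network's fixed sequence of float ops, not anything from the article): running the same float computation twice yields bit-identical results, which is exactly what lossless neural compression relies on.

```python
import struct

def run_computation():
    """An arbitrary fixed sequence of floating-point operations."""
    x = 0.1
    for _ in range(1000):
        x = x * 1.000001 + 0.0001
    return x

a, b = run_computation(), run_computation()

# Compare the raw IEEE 754 bit patterns, not approximate values:
bits = lambda f: struct.pack('<d', f)
print(bits(a) == bits(b))  # True
```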

[-] person594@feddit.de 0 points 1 year ago

Let's just outlaw racism too while we're at it!

[-] person594@feddit.de 3 points 1 year ago

On a torus, you can have up to seven mutually adjacent regions. See https://upload.wikimedia.org/wikipedia/commons/3/37/Projection_color_torus.png

[-] person594@feddit.de 1 points 1 year ago

Unix time is just the number of seconds since January 1, 1970 (UTC), isn't it? How is that base 10, or any other base? If anything, you might argue it's base 2, since computers generally store integers in binary, but the definition is base-independent afaik.
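To make that concrete, here's the same (arbitrarily chosen) Unix timestamp rendered in a few bases -- the underlying value is just an integer count of seconds, with no base of its own:

```python
t = 1_700_000_000  # an arbitrary example Unix timestamp

print(format(t, 'd'))  # base 10: 1700000000
print(format(t, 'x'))  # base 16: 6553f100
print(format(t, 'b'))  # base 2 rendering of the same count
```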

[-] person594@feddit.de 0 points 1 year ago* (last edited 1 year ago)

So to be honest, 90% of the time the base of the logarithm doesn't really matter as long as we are consistent. The main property we use logarithms for is that log_b(xy) = log_b(x) + log_b(y), and this holds for any base b. In fact, the change-of-base formula tells us that we can get from one base to another just by multiplying by a constant (log_a(x) = log_b(x) * 1/log_b(a)), and so there is a strong desire to pick one canonical "logarithm" function, and just take care of any base silliness by multiplying your final result by a scaling factor if needed.
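Checking the change-of-base identity numerically (the specific values of x, a, and b below are arbitrary examples):

```python
import math

# log_a(x) = log_b(x) / log_b(a), for any valid bases a and b
x, a, b = 1000.0, 2.0, 10.0

lhs = math.log(x, a)                   # log base 2 of 1000
rhs = math.log(x, b) / math.log(a, b)  # same thing via base-10 logs

print(abs(lhs - rhs) < 1e-9)  # True: the two agree up to rounding
```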

Given that, the natural logarithm is quite "natural" because it is the inverse of the exponential function, exp(x) = e^x. The exponential function itself is quite natural as it is the unique function f such that f(0) = 1 and f'(x) = f(x). Really, I would argue that the function exp(x) is the fundamentally important mathematical object -- the natural logarithm is important because it is that function's inverse, and the number e just happens to be the value of exp(1).
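A quick numerical sanity check of that defining property, f(0) = 1 and f'(x) = f(x), using a central finite difference (the step size and test point are arbitrary choices):

```python
import math

h = 1e-6   # finite-difference step
x = 1.7    # arbitrary test point

# Central-difference estimate of the derivative of exp at x:
derivative = (math.exp(x + h) - math.exp(x - h)) / (2 * h)

print(math.exp(0))                           # 1.0
print(abs(derivative - math.exp(x)) < 1e-4)  # True: f'(x) ~ f(x)
```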
