Aceticon

joined 1 week ago
[–] Aceticon@lemmy.dbzer0.com -3 points 2 days ago* (last edited 2 days ago) (9 children)

They're deemed "lossless" because there are no data losses - the word actually comes from the broader domain of data handling, specifically Compression were for certain things - like images, audio and video - there are compression algorithms that lose some information (lossy) and those which don't (lossless), for example JPEG vs PNG.

However, data integrity is not at all what your average "audiophile" is talking about when they say there are audio losses. So when commenting on what a non-techie "audiophile" wrote, people here used that "losslessness" from the data domain to make claims in a context which is broader than merely the area where the problem of data integrity applies, and where it's insufficient to disprove the claims of said "audiophile".

[–] Aceticon@lemmy.dbzer0.com 11 points 2 days ago (2 children)

I think it's a general thing with highly capable people in expert, highly intellectual domains: eventually you kinda figure out what Socrates actually meant by "All I know is that I know nothing".

[–] Aceticon@lemmy.dbzer0.com 0 points 2 days ago* (last edited 2 days ago) (11 children)

My point being that, unlike the misunderstanding (or maybe just mis-explanation) of many here, even a digital audio format which is technically named "lossless" still has losses compared to the analog original, and there is no way around it (you can reduce the losses with a higher sampling rate and more bits per sample, but never eliminate them, because the conversion to digital is a quantization of an infinite-precision input).

"Losslessness" in a digital audio stream is about the integrity of the digital data itself, not about the digital audio stream being a perfect reproduction of the original soundwaves. With my mobile phone I can produce at home a 16 bit PCM @ 44.7 kHz (same quality as a CD) recording of the ambient sounds and if I store it as an uncompressed raw PCM file (or a Wav file, which is the same data plus some headers for ease of use) it's technically deemed "lossless" whilst being a shit reproduction of the ambient sounds at my place because the capture process distorted the signal (shitty shit small microphone) and lost information (the quantization by the ADC in the mobile phone, even if it's a good one, which is doubtful).

So maybe, just maybe, some "audiophiles" do notice the difference. I don't really know for sure, but I certainly won't dismiss their point about the imperfect results of the end-to-end process with the argument that, because after digitalization the digital audio data has been kept stored in a lossless format like FLAC or even raw PCM, the whole thing is lossless.

One of my backgrounds is Digital Systems in Electronics Engineering, which means I also got to learn (way back in the days of CDs) how the whole process works end to end and why. So most of the comments here claiming that the full end-to-end audio capture and reproduction process (which is what a non-techie "audiophile" would be commenting about) is not lossy because the digital audio data handling is "lossless" just sound to me like the Dunning-Kruger Effect in action.

People here are being confidently incorrect about the confident incorrectness of some guy on the Internet, which is pretty ironic.

PS: Note that with high enough sampling rates and bits per sample you can make it so precise that the quantization error is smaller than the actual noise in the original analog input, which is de facto equivalent to no losses in the amplitude domain, and so far into the high frequencies in the time domain that no human could possibly hear it. If the resulting data is stored in a lossless format, you could then claim that the end-to-end process is lossless (well, ish - the capture of the audio into an analog signal itself has distortions and introduces errors, as does the reproduction at the other end). But that's something quite different from claiming that merely because the audio data is stored in a "lossless" format it yields a signal as good as the original.
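Back-of-the-envelope numbers for that PS, using the standard 6.02N + 1.76 dB rule of thumb for N-bit linear PCM (the analog noise figures are assumptions for illustration, not measurements):

```python
# Quantization SNR for ideal N-bit linear PCM: ~6.02*N + 1.76 dB.
def quantization_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(bits, "bits ->", round(quantization_snr_db(bits), 1), "dB")

# A decent analog capture chain might manage ~70-100 dB of SNR (assumed
# figure), so at 24 bits (~146 dB) the quantization error already sits far
# below the analog noise floor - de facto no loss in the amplitude domain.
```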

[–] Aceticon@lemmy.dbzer0.com 3 points 2 days ago (13 children)

Strictly speaking, as soon as an analog signal is quantized into digital samples there is loss, both in the amplitude domain (a value of infinite precision is turned into a value that must fit in a specific number of bits, hence of finite precision) and in the time domain (digitalization samples the analog input at specific time intervals, whilst the analog input itself is a continuous wave).

That said, whether that is noticeable if the sampling rate and bits per sample are high enough is a whole different thing.

Ultra high frequency sounds might be missing or mangled at a 44.1 kHz sampling rate (a pretty standard one, used in CDs), but that should only be noticeable to people who can hear sounds above 22.05 kHz (who are rare, since people usually only hear sounds up to around 20 kHz, and the older the person the worse it gets). And maybe a sharp ear can spot the error in sampling at 24 bits, even though it's minuscule (1/2^24 of the sampling range, assuming the sampling has a linear distribution of values), but it's quite unlikely.
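The numbers behind that, as a quick sketch:

```python
rate = 44100                 # CD sampling rate, in Hz
nyquist = rate / 2           # highest frequency representable at that rate
print("Nyquist limit:", nyquist, "Hz")  # 22050.0 - above typical human hearing (~20 kHz)

step = 1 / 2**24             # 24-bit quantization step on a linear scale
print("24-bit step:", step)  # ~6e-8 of full scale - minuscule indeed
```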

That said, some kinds of trickery and processing used to make "more sound" (in the sense of how most people perceive the sound quality, rather than strictly measured in Physics terms) fit in fewer bits or fewer samples per second in a way that most people don't notice might still be noticeable to some people.

Remember, most of what we use now is anchored in work done way back when every byte counted, so a lot of the choices were dictated by things like "fit an LP as unencoded audio - quite literally plain PCM, same as in WAV files - on the available data space of a CD". It's not going to be ultra high quality, fit for the people at the upper end of human sound perception.
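The byte-counting behind that choice, roughly (assuming the classic 74-minute Red Book playing time):

```python
# Uncompressed CD audio: 2 channels x 2 bytes per sample x 44100 samples/s.
bytes_per_second = 2 * 2 * 44100
minutes = 74                            # assumed Red Book playing time
total = bytes_per_second * 60 * minutes
print(total / 1e6, "MB")                # ~783 MB - roughly a CD's data capacity
```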

All this to say that FLAC-encoded audio files do have losses versus analog - not because of the encoding itself, but because Analog to Digital conversion is by its own nature a process where precision is lost, even if done without any extra audio or data handling that might distort the audio samples even further. Plus, generally the whole thing is done at sampling rates and data precisions fit for the average human, rather than for people at the upper end of the sound perception range.

[–] Aceticon@lemmy.dbzer0.com 32 points 2 days ago

Studies have shown that something as simple as being tall makes people more likely to be looked to as leaders.

[–] Aceticon@lemmy.dbzer0.com 2 points 2 days ago

Make nuke mad enough and nuke blows off.

I'm pretty sure the few survivors in the resulting wasteland would pretty quickly get bored of making Non Credible Defense jokes about the waves of cockroaches trying to take over the World from humans.

Best not argue with nuke.

[–] Aceticon@lemmy.dbzer0.com 25 points 2 days ago (1 children)

Yeah, but the way things are going soon it will be cheaper to buy a B-52 to live in than a house.

[–] Aceticon@lemmy.dbzer0.com 2 points 3 days ago* (last edited 3 days ago)

"Your qualifications trump my own claims of expertise and your argument ravaged my deeply held little-more-than-political-slogan beliefs and I'm psychologically unable to handle it so I'm going to attack your style of writing, make broad claims about your personality and block you to stop the mental tension that what you wrote causes in my mind"

[–] Aceticon@lemmy.dbzer0.com 1 points 3 days ago* (last edited 3 days ago)

Most of the time in my career that I spent designing and deploying algorithms was in Equity Derivatives, and a lot of that work wasn't even for Market Traded instruments like Options, but actually OTCs, which are Marked To Model - so all a bit more advanced than what you think I should be studying.

Also, part of my background is Physics and another part is Systems Analysis, so I both understand the Maths that go into making models and the other parts of that process, including the human element (such as how the definition of the inputs and outputs, and even the labelling of a model as "working" or "not working, needs to be redone", is what shapes what the model produces).

One could say I'm intimately familiar with how the sausages are made. And we're not talking about the predictive kind of stuff, which is harder for humans to control (because the Market itself serves as reference for a model's quality, and if it fails to predict the Market too much it gets thrown out), but the kind of stuff for which there is no Market and everything is based on how the Traders feel the model should behave in certain conditions - which is a lot more like the kind of situation in which Algorithms are made for companies like Healthcare Insurers.

I can understand that if your background is in predictive modelling you would think that models are genuine attempts at modelling reality (hence isolating the makers of the model from the blame for what the model does). But what we're talking about here is NOT predictive modelling; it's something else altogether - an automation of the maximizing of certain results whilst minimizing certain risks. In that kind of situation the model/algorithm is entirely an expression of the will of humans: from the very start they defined its goals (minimizing payout, including via the Courts) and made a very specific choice of elements for it to take into account (for example, the history of the Health Insurance Company having its decisions taken to Court and losing, so that it can minimize the risk of having to pay out too much), thus shaping its goals and, to a great extent, how it can reach those goals. Further, once confronted with the results, they approved the model for use.

The technology here isn't an attempt at reproducing reality so as to predict it (though it does have elements of that, in that they're trying to minimize the risk of having to pay lots of money from losing in Court, hence there will be some statistical "predicting" of the likelihood of people taking them to court and winning, probably based on the victim's characteristics and situation). It's just an automation of a particularly sociopathic human decision process (i.e. a person trying to unfairly and even illegally deny people payment whilst taking into account the possibility of that backfiring). In this case, what the Algorithm does, and even to a large extent how it does it, is defined by what the decision makers want it to do, as is which ways of doing it are acceptable - thus the decision makers are entirely to blame for what it does.

Or if you want it in plain language: if I was making an AI robot to get people out of my way, whilst choosing that it would have no limits to the amount of force it could use and giving it blade arms, any deaths it caused would be on me - having chosen the goal, the means and the limits, as well as accepting the bloody results from testing the robot and deploying it anyway, the blame for actually using such an autonomous device would've been mine.

People in this case might not have been killed by blades, and the software wasn't put into a dedicated physical robotic body, but the people who decided to create and deploy an automated sociopathic decider - whose limits they defined, and which they knew would result in deaths - are still at fault for the consequences of the decisions of that automated agent of theirs.

[–] Aceticon@lemmy.dbzer0.com 1 points 3 days ago* (last edited 3 days ago) (4 children)

The individual on one side is indeed powerless (or at least so it seemed, until Luigi showed everybody that things aren't quite like that).

However, on the other side there are individuals too, and they are not powerless: they have in fact chosen to set up the system to make everybody else powerless in order to take advantage of it, and then deflect the blame to "the rules", "the law" or "the algorithms", when those things are really just a second-degree expression of the will of said powerful individuals.

(And as somebody who worked in making and using Algorithms in places like Finance: algorithms are very much crafted to encode how humans think they should work. Unless we're talking about things done by scientists to reproduce natural processes, algorithms - AI or otherwise - are not some kind of technical embodiment of natural laws; rather, they're crafted to produce the results which people want them to produce, via the formulas used in them if not AI, or via what's chosen for the training set if AI.)
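To make that concrete, here's a deliberately toy sketch - every name, weight and number is invented for illustration, not any real insurer's model - showing that such an "algorithm" is nothing but the objective its makers wrote down:

```python
# Toy illustration only: a "claims model" is just an objective chosen by people.
def decision(claim_cost: float, court_loss_risk: float,
             payout_weight: float, lawsuit_weight: float) -> str:
    # The formula IS the policy: the weights encode what the makers want.
    expected_cost_of_denial = court_loss_risk * lawsuit_weight * claim_cost
    expected_cost_of_paying = payout_weight * claim_cost
    return "deny" if expected_cost_of_denial < expected_cost_of_paying else "pay"

# Same claim, different human-chosen weights -> different "algorithmic" outcome.
print(decision(10_000, court_loss_risk=0.1, payout_weight=1.0, lawsuit_weight=3.0))   # deny
print(decision(10_000, court_loss_risk=0.1, payout_weight=1.0, lawsuit_weight=20.0))  # pay
```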

My point is not about the point itself that you made, but about the language you used: by going on and on about "the algorithm" you are using the very propaganda of the very people who make all other individuals powerless, the propaganda that deflects blame away from those decision makers. That's the part I disagree with, not the point you were making.

PS: If your point was however that even the decision makers themselves are powerless because of The Algorithm, then I totally disagree with it (and, as I've said, I've been part of creating Algorithms in an industry which is a heavy user of models, so I'm quite familiar with how those things are made to produce certain results, rather than the results being the natural outcome of encoding some kind of natural laws) and think that's total bullshit.

[–] Aceticon@lemmy.dbzer0.com 1 points 3 days ago* (last edited 3 days ago) (6 children)

Oh, it's way worse than merely the algorithms.

You see, the algorithms are trained or designed according to the choices of people; the ones put in place and used, out of the various possibilities, are the ones people chose to put in place and use; and even after their nasty (sometimes deadly) effects on others have been observed, they are kept in use by people.

The Algorithm isn't a force of nature or an entity with its own will; it's an agent of people, and in a company where the people creating the algorithms are paid to follow other people's orders about how it should be, the people for whom the Algorithm is an agent are the decision makers.

Deflecting the blame with technocratic excuses (such as "it's the Algorithm") is a very old and often used Neoliberal swindle - really just a Tech variant of rule-makers blaming problems on "the rules", as if there were nothing they could do about it, when they themselves had a say in the design of those rules and knew exactly what they would lead to.

[–] Aceticon@lemmy.dbzer0.com 3 points 4 days ago* (last edited 4 days ago)

Maximum profit for Healthcare companies comes from people getting chronically sick as early as possible and remaining in that state (so, alive and uncured) for as long as possible.

As it so happens, American food quality (in terms of nutrition) is horrible, the regulatory environment for approving substances for contact with humans and even for human consumption is appalling (it follows the "accepted until proven dangerous" principle, rather than the precautionary principle followed in Europe), and pretty much anything goes when it comes to car pollution. So people end up with cardiovascular diseases and/or type II diabetes and/or all manner of cancers of the digestive and respiratory tracts quite early, and all the Healthcare sector needs to do is keep them alive as long as possible to extract the maximum amount of money from them.
