this post was submitted on 11 Jun 2025
383 points (98.2% liked)

Privacy


Everyone talks about how evil browser fingerprinting is, and it is, but I don't get why people are only blaming the companies doing it and not putting equal blame on browsers for letting it happen.

Go to Am I Unique and look at the kind of data browsers let JavaScript access unconditionally with no user prompting. Here's a selection of ridiculous ones that pretty much no website needs:

  • Your operating system (Isn't the whole damn point of the internet that it's platform independent?)
  • Your CPU architecture (JS runs in the most virtual of virtual environments; why the hell does it need to know what processor you have?)
  • Your JS interpreter's version and build ID
  • List of plugins you have installed
  • List of extensions you have installed
  • Your accelerometer and gyroscope (so any website can figure out what you're doing by analyzing how you move your phone, i.e. running vs walking vs driving vs standing still)
  • Your magnetic field sensor AKA the phone's compass (so websites can figure out which direction you're facing)
  • Your proximity sensor
  • Your keyboard layout
  • How your mouse moves every moment it's in the webpage window, including how far you scroll, what bit of text you hovered on or selected, both left and right clicks, etc.
  • Everything you type on your keyboard when the window is active. You don't need to be typing into a text box or anything; a site can set a general event listener for keystrokes just like it can for the mouse (a minimal sketch follows below).
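
To make those last two bullets concrete, here is a minimal sketch of how little ceremony this takes. The `collect()` sink is hypothetical; everything else is the standard DOM API, and none of it triggers a permission prompt (iOS Safari's gating of motion events being the notable exception).

```javascript
// Hypothetical sink; a real tracker would batch these up and upload them.
const collect = (sample) => console.log(sample);

// Every keystroke while the window has focus; no text box required:
document.addEventListener('keydown', (e) => collect({ type: 'key', key: e.key }));

// Every mouse movement, scroll, selection, and right click:
document.addEventListener('mousemove', (e) => collect({ type: 'move', x: e.clientX, y: e.clientY }));
document.addEventListener('scroll', () => collect({ type: 'scroll', y: window.scrollY }));
document.addEventListener('mouseup', () => collect({ type: 'select', text: String(document.getSelection()) }));
document.addEventListener('contextmenu', () => collect({ type: 'rightclick' }));

// Motion sensors: fires with no prompt on most mobile browsers
// (iOS Safari now gates this behind DeviceMotionEvent.requestPermission()).
window.addEventListener('devicemotion', (e) => collect({ type: 'motion', accel: e.accelerationIncludingGravity }));
```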

If you're wondering how sensors are used to fingerprint you, I think it has to do with manufacturing imperfections that skew their readings in unique ways for each device. But websites could just as easily record those sensors outright without you knowing; it's not a lot of data, all things considered, so you likely wouldn't notice.

Also, canvas and WebGL rendering differences are each more than enough to 100% identify your browser instance. Not a bit of effort was put into making their results more consistent, I guess.
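
For the curious, a minimal sketch of the canvas half: render something, read the pixels back, hash them. Tiny differences in GPU, driver, installed fonts, and anti-aliasing make the hash close to unique per device, and again there is no prompt anywhere.

```javascript
// Minimal canvas-fingerprint sketch; no permission prompt is involved.
async function canvasFingerprint() {
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  ctx.textBaseline = 'top';
  ctx.font = '14px Arial';
  ctx.fillStyle = '#f60';
  ctx.fillRect(0, 0, 100, 30);
  ctx.fillStyle = '#069';
  ctx.fillText('fingerprint me 😃', 2, 2); // the emoji pulls in the system font stack

  // The rendered pixels vary subtly per GPU/driver/font stack; hash the result.
  const bytes = new TextEncoder().encode(canvas.toDataURL());
  const digest = await crypto.subtle.digest('SHA-256', bytes);
  return [...new Uint8Array(digest)].map((b) => b.toString(16).padStart(2, '0')).join('');
}

canvasFingerprint().then(console.log); // stable across visits, near-unique per device
```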

All of these are accessible to any website by default. Actually, there's not even a way to turn most of these off. WHY?! All of these are niche features that only a tiny fraction of websites need. Browser companies know that fingerprinting is a problem and have done nothing about it. Not even Firefox.

Why is the web, where you're by far the most likely to execute malicious code, not built on zero-trust policies? Let me allow the functionality I need on a per-site basis.

Fuck everything about modern websites.

top 50 comments
[–] Ulrich@feddit.org 4 points 20 hours ago (2 children)

I don't get why people are only blaming the companies doing it and not putting equal blame on browsers for letting it happen

What do you expect browsers to do? They can stop telegraphing some of this information, but then the websites won't render properly (they use this information to display the website properly), and your fingerprint would just be even more unique.

Pretty much every browser outside of Chrome and Edge has implemented some sort of fingerprinting mitigation.

[–] lambalicious@lemmy.sdf.org 5 points 12 hours ago (1 children)

They can stop telegraphing some of this information, but then the websites won’t render properly (they use this information to display the website properly),

Pretty much none of the information is necessary to ever render a site properly.

OS and CPU architecture? Irrelevant to whether you are sending a JPG or PNG background. Nearly irrelevant to whether you are using a vertical or horizontal screen (and browsers advertise that info separately anyway; it's even part of CSS media queries).
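
For reference, the orientation case really is already declarative; a minimal CSS sketch (image paths hypothetical):

```css
/* Orientation handled in pure CSS: no script, no extra fingerprint surface. */
@media (orientation: portrait) {
  body { background-image: url("bg-portrait.png"); }
}
@media (orientation: landscape) {
  body { background-image: url("bg-landscape.png"); }
}
```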

Accelerometer and gyroscope? The only reason those could ever be needed for rendering is if the user were moving so incredibly fast that red pixels on their screen would become green due to Doppler shift. And at any time between 2025 and 2999, if you have someone moving that fast, you have worse problems than a site not rendering adequately.

Keyboard layout? If the rendering of a site depends on whether I'm pressing "g" vs "j" while it loads, that's quite stupid anyway, because it boldly assumes the app focus is on the page.

Proximity sensor? Again: absolutely useless unless the rendering environment is moving at incredibly high speed (at which point the sensor is probably reading data wrong anyway).

[–] Ulrich@feddit.org -1 points 11 hours ago

That's incorrect. Different sites have different needs, and browser devs can't pre-program which of the billion sites needs what.

[–] jnod4@lemmy.ca 2 points 14 hours ago (1 children)

Half of that shit ain't needed (I used a browser back in the old days before a gyroscope was even invented for any computational device except the Apollo rockets)

[–] Ulrich@feddit.org 0 points 13 hours ago* (last edited 9 hours ago)

Believe it or not, websites don't work the same way they did "in the old days". But yes, a lot of it is not needed for many websites. Many of them do require it to function properly, though, and if devs don't make it available, no one will use them.

[–] mesitoispro@ttrpg.network 3 points 1 day ago* (last edited 1 day ago)

Thanks for bringing attention to this.

I think a major issue with problems like these is bad designers making bad decisions to justify their existence. They never learned that "less is more" and will add things without thinking about why, just to show that they can.

[–] hansolo@lemmy.today 56 points 2 days ago (3 children)

There are two separate universes here.

Devs and tech companies care only for UX, convenience, and reduced friction to use any service. They would put their granny's home address and SSN in the headers if it made a page load 10ms faster. Their incentives are all short-sighted to hit the next goal to outcompete other devs/companies and ship their end of history killer app that will solve all problems - and that will still get bloated and enshittified within 18 months.

Then there's us, a subset of rational people educated about how much data gets transmitted, who are horrified by the general state of being online, and are hard to impress when it comes to more than just saying "privacy!" when promoting anything at all.

IMO, we have to DIY and cobble together so much of our own protection that we're closer to artists who live a strange life few people understand; it seems weird from the outside, but we love it for the peace of mind. And that's not enough of us to be an appreciable market segment that moves the needle on any product worth real money.

[–] skarn@discuss.tchncs.de 26 points 2 days ago (3 children)

They would put their granny's home address and SSN in the headers if it made a page load 10ms faster.

Have they ever considered that pages would load faster if they didn't include 20MB of JavaScript?

[–] GnuLinuxDude@lemmy.ml 23 points 2 days ago

Just yesterday I was on a news website. I wanted to support it and the author of the piece, so I opened a clean session of Firefox: no extensions or blocking of any kind.

The "initial" payload (i.e. after I lost patience approximately 30s after initial page load and decided to call a number) was 14.79MB transferred. But the traffic never stopped. In the network view you could see the browser continually running ad auctions and about every 15s the ads on the page would cycle. The combination of auctions and ads on my screen kept that tab fully occupied at 25-40% of my CPU. Firefox self-reported the tab as taking over 400MB of RAM.

This was so egregious that I had to run one simple test. I set my DNS on my desktop to my PiHole and re-ran my experiment.

The initial payload went from 14.79MB down to 4.00MB (much of which was fonts and oversized preview images for other articles), and the page took a quarter of the RAM and almost no CPU.

Modern web is dogshit.

This was the website in question. https://www.thenation.com/article/politics/welcomefest-dispatch-centrism-abundance/

[–] raltoid@lemmy.world 5 points 2 days ago

Yes, but the manager with a shitty MBA doesn't care about the company's overall performance, as long as their department looks good on paper. And they figured that would be easier by pulling in four different external libraries and letting another department figure out the rest.

[–] RhondaSandTits@lemmy.sdf.org 3 points 1 day ago

we're closer to artists who live a strange life few people understand; it seems weird from the outside

Wow! That's a great way to put it!

Now I understand why my neighbors look at me like I'm one of the guys performing this act:

https://www.youtube.com/watch?v=byz7JCf5thM

[–] ztwhixsemhwldvka@lemmy.world 12 points 2 days ago

IMO, we have to DIY and cobble together so much of our own protection that we're closer to artists who live a strange life few people understand; it seems weird from the outside, but we love it for the peace of mind.

That's beautiful

[–] who@feddit.org 76 points 2 days ago* (last edited 2 days ago) (2 children)

Web developers are complicit in browser fingerprinting, by insisting that sites require JavaScript (or WASM).

All of us are complicit in browser fingerprinting, because we tolerate this script dependence.

IMHO, a web site being allowed to execute arbitrary code on visitors' hardware should be an anomaly. The vast majority of them could be built to deliver the same information without requiring that inherently dangerous permission.

[–] HiddenLayer555@lemmy.ml 53 points 2 days ago* (last edited 2 days ago) (6 children)

One of the biggest reasons websites need to run JS is submitting form data to a server. Like this website.

But old forums did all this without JS by just using the HTML form's native submit functionality. The issue is that it causes a full page refresh, meaning you lose any other unsubmitted forms, and you can get those annoying "submit form data again?" popups. So every website writes code to submit everything asynchronously.
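
That asynchronous boilerplate looks roughly the same everywhere; a minimal sketch (the /api/comment endpoint is hypothetical):

```javascript
// Intercept the native submit, send the data with fetch(), keep the page alive.
document.querySelector('form').addEventListener('submit', async (e) => {
  e.preventDefault(); // stop the full-page refresh
  const response = await fetch('/api/comment', {
    method: 'POST',
    body: new FormData(e.target), // serialize the form's fields
  });
  if (response.ok) {
    e.target.reset(); // the rest of the page, including other half-filled forms, survives
  }
});
```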

Another major reason for using JS is dropdown menus and panels. You need to either write code to listen for the click and reveal/hide it as needed, or you have to do weird CSS tricks that are usually inferior in UX to a JavaScript implementation, or you have to bastardize the form dropdown selector into your general purpose dropdown.

These shouldn't be things you need to implement yourself using a Turing complete programming language. These should be natively implemented in the browser and accessible through HTML.

Remember when the only way to play videos on websites was with Flash or Java applets? But then video playback got natively implemented into HTML and now it's way easier and doesn't even require JS.
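
For comparison, the post-Flash version of video playback is a single tag (file name hypothetical):

```html
<!-- Native playback: no Flash, no Java applet, no JS. -->
<video src="talk.mp4" controls width="640"></video>
```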

If browsers did the same for asynchronous form submission and dropdown menus, it would get rid of 80% of websites' need to run JS. Including this one.

But obviously they want you to run JS so they won't do that.

[–] lambalicious@lemmy.sdf.org 3 points 12 hours ago

One of the biggest reasons websites need to run JS is submitting form data to a server. Like this website.

No. Forms function perfectly well without JS, thanks to action=.

Now, whether you want to get "desktop app" fancy with forms and pretend you're a "first-class desktop citizen", that's a skill issue. But submitting form data, by itself, has not required JS since at least 1979. Maybe earlier.
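
A sketch of what that looks like (the action URL is hypothetical):

```html
<!-- Submits with zero JavaScript; the browser handles everything. -->
<form action="/post_comment" method="post">
  <textarea name="body" required></textarea>
  <button type="submit">Post</button>
</form>
```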

[–] ulterno@programming.dev 2 points 18 hours ago

Another major reason for using JS is dropdown menus and panels. You need to either write code to listen for the click and reveal/hide it as needed, or you have to do weird CSS tricks that are usually inferior in UX to a JavaScript implementation, or you have to bastardize the form dropdown selector into your general purpose dropdown.

Look for the text "HTML's got expandable sections baked in".

I had actually given up on expandable sections for my website (because I didn't want any more JS than the dark/light switcher I made) until I found this.
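
For anyone else who missed it, that's the native `<details>`/`<summary>` element:

```html
<!-- A native expandable section: no JS, no CSS tricks. -->
<details>
  <summary>Click to expand</summary>
  <p>Hidden until the summary above is clicked.</p>
</details>
```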

[–] who@feddit.org 23 points 2 days ago* (last edited 1 day ago) (3 children)

Let's be careful how we phrase things here. JavaScript form submission and navigation are choices, not needs.

Also, progressive enhancement / graceful degradation exists. When competent developers (or bosses) want script effects on our sites, we can include them and make the sites continue to function with scripts disabled. It might require more work, but it is absolutely possible.

Framing the script-based approaches to these things as if they were needs contributes to the problem, IMHO.

(I am referring to the vast majority of web sites, of course, not special-purpose web applications like games.)

[–] ulterno@programming.dev 1 points 18 hours ago* (last edited 18 hours ago)

Navigation is a need.
It can be done without JS

You don't even need to be competent. Being obsessed works too.

The problem is, a manager will just get an intern with 0 web experience and hand them WordPress.

[–] Thorned_Rose@sh.itjust.works 2 points 1 day ago

I'm an ex web dev/designer and still maintain some websites for non-profits. I think you underestimate the human stupidity factor. I already field an infuriating number of stupid questions and problems that people have caused themselves by not following the simplest of directions. Do I like JavaScript? No. Do I wish I could completely ditch it? 100%. But people are stupid, and without it I would be handling even more customer support than I am now. The average person expects a website to act a certain way, and without that they lose their minds.

[–] Jakeroxs@sh.itjust.works 4 points 1 day ago

Homie, the web platform I've been tasked with helping keep running and updated is ASP.NET Web Forms, and the lack of asynchronous updating is pretty annoying to work with in ticketing software.

[–] warm@kbin.earth 15 points 2 days ago

I miss websites with simple text, links and graphics. We could navigate them perfectly fine without any JS, any dropdowns or whatever. They just displayed the information you came for, nothing extra.

[–] Zagorath@aussie.zone 9 points 2 days ago

I agree with you about dropdown menus being something that could/should be natively available to HTML, but I'm less convinced about form submission. Sure, if we assume everything is happy path it's a great idea, but a system needs to be robust enough to handle a variety of cases. Maybe you want to redirect a user to a log-on page if they get back a 401, or present an explanation if they get a 403. A 5XX should usually display some sort of error message to the user. A 201 probably needs to add an element into the page, while a 200 might do nothing, or might alter something on the page.

With the huge range of possible paths and desired effects, it pretty quickly becomes apparent that designing an HTML & CSS–only spec that can meet the needs is infeasible. There's definitely a case to be made that JavaScript has become too powerful and can do too many potentially dangerous or privacy-invading things. And maybe a new range of permissions could be considered to limit a lot of that at a more fundamental level. But what we're talking about here with the form submission stuff is the real bare-bones basic stuff JavaScript was designed to make easier—alter the contents of web pages on the fly in response to user actions. And it's really, really good at that.
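
A sketch of that branching (endpoint and helper names hypothetical), which is a few lines in JS but hard to picture as a declarative HTML attribute:

```javascript
// Per-site response handling; '/api/items', '/login', showMessage(),
// and addItemToList() are all hypothetical.
async function submitItem(data) {
  const res = await fetch('/api/items', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });
  if (res.status === 401) {
    window.location.href = '/login';   // not signed in: send to the log-on page
  } else if (res.status === 403) {
    showMessage('You lack permission to do that.');
  } else if (res.status === 201) {
    addItemToList(await res.json());   // insert the newly created element into the page
  } else if (res.status >= 500) {
    showMessage('Server error, please try again later.');
  }
}
```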

[–] SCmSTR@lemmy.blahaj.zone 6 points 2 days ago* (last edited 2 days ago)

I use NoScript and VERY MUCH block as much stuff as possible by default, and don't allow anything if I don't want to or don't feel like it. I have been doing so since forever because of literal constant bullshit.

[–] dessalines@lemmy.ml 43 points 2 days ago (2 children)

100% agree. Browsers don't need to, and shouldn't, report all the JavaScript attributes that make us unique, especially things like canvas.

You can test this out here, but nowadays it's rare for any out-of-the-box browser to be anonymous.

https://www.amiunique.org/fingerprint

[–] Tower@lemmy.zip 28 points 2 days ago (1 children)

Agreed. Why the fuck is a website allowed to know my battery status or connection strength?

[–] rimu@piefed.social 10 points 2 days ago (2 children)

The web app could switch to lower res images (etc) if your connection is weak. Or if your battery is low it might switch out YouTube embeds for clickable images instead.
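
Both are real, promptless APIs today; a minimal sketch (navigator.connection is Chromium-only, and Firefox removed the Battery Status API from web content over exactly these fingerprinting concerns):

```javascript
// Network Information API: no prompt, just read it.
if (navigator.connection) {
  console.log(navigator.connection.effectiveType); // e.g. "4g" or "2g"
}

// Battery Status API: also promptless where it still exists.
if (navigator.getBattery) {
  navigator.getBattery().then((b) => console.log(b.level, b.charging));
}
```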

[–] lambalicious@lemmy.sdf.org 1 points 12 hours ago

There's no need to report any sort of even remotely precise value, then; just report "low" or "high". Also, it's bold of you to assume that just because I'm plugged into the wall I want to be served 400MB of extra JavaScript and MPEG4 instead of one CSS file and a simple PNG.

[–] trolololol@lemmy.world 22 points 2 days ago

No one does this, and even if they did, I'd rather not share my info, because I'd rather make that decision myself.

Bad reason to get spied on.

[–] Grapho@lemmy.ml 16 points 2 days ago

What the fuck. When I thought these were already comical amounts of data points they just kept going and going and going.

[–] adespoton@lemmy.ca 37 points 2 days ago (3 children)

This is why using a local web proxy is a good idea; it can standardize those responses (or randomize them) no matter what you’re actually using.

Personally, I keep JavaScript disabled by default specifically because of this, and turn on those features per-site. So if a website has a script that requires the accelerometer for what it does, that script gets to use it. Other sites keep asking for it? I suppress the requests on that site, and if it fails to operate (throws one of those "disable your ad blocker" or "you have JS disabled" errors), I just stop going to the site.

I’ve found that with everything disabled by default, browsing the web is generally a pleasant experience… until it isn’t.

This of course requires using a JS management extension. What I’d really like to see is a browser that defaults to everything disabled, and if a site requests something, have the browser ask for permission to turn on the feature for that particular script, showing the URL for the script and describing what the code does that needs the permission. This seems like an obvious use for locally run AI models.

[–] MangoPenguin@lemmy.blahaj.zone 2 points 1 day ago (1 children)

This is why using a local web proxy is a good idea

Do you have one you've used that I can look at for this?

[–] adespoton@lemmy.ca 2 points 1 day ago

The one I use is part of a hardware UTM, but I also use Lockdown VPN on iOS, and https://pi-hole.net/ in a container on my LAN, and then VPN all my devices to my home network when I’m not at home.

[–] kionite231@lemmy.ca 11 points 2 days ago

Let's hope Ladybird implements something like that.

[–] kipo@lemm.ee 3 points 1 day ago (1 children)

This of course requires using a JS management extension.

What's a good extension for this? What do you use?

[–] adespoton@lemmy.ca 4 points 1 day ago (1 children)

Depends on the browser/OS.

My go-to for general browsing is Firefox with uBlock Origin and NoScript, which I also use in Edge; I have a few browsers that are still using uMatrix, and I have a proxy filter that strips calls to .js URLs by default except for specifically allowed URLs.

[–] Zagorath@aussie.zone 32 points 2 days ago (3 children)
  • Your operating system
  • Your CPU architecture

Agree. No reason they should have this.

  • Your JS interpreter's version and build ID

I can see a reasonable argument for this being allowed. Feature detection should make this unnecessary, but it doesn't seem to be fully supported yet.
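
For contrast, feature detection asks about one capability at a time instead of handing over a whole version string; a minimal sketch:

```javascript
// Feature detection: ask "can you do X?" rather than "which build are you?"
if ('IntersectionObserver' in window) {
  // lazy-load with the modern observer API
} else {
  // fall back to scroll-position checks
}
```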

  • Plugins & Extensions

This is clearly a break of the browser sandbox and should require explicit permission at the very least (if not be blocked outright...I'm curious what the legitimate uses for these would be).

  • Accelerometer and gyroscope & magnetic field sensor

Should probably be tied to location permission, for the sake of a simple UX.

  • Proximity sensor

Definitely potential legitimate reasons for this, but it shouldn't be by default.

  • Keyboard layout

As someone who uses a non-QWERTY (and non-QWERTY-based) layout, this is one I have quite a stake in. The bottom line is that even without directly being able to obtain this, a site can very easily indirectly obtain it anyway, thanks to the difference between event.code and event.key. And that difference is important, because there are some cases where it's better to use one or the other. A browser-based game, for example, probably wants to use event.code so the user can move around based on where WASD would be on a QWERTY keyboard, even though as a Dvorak user, for me that would be <AOE. But keyboard shortcuts like J and K for "next"/"previous" item should usually use event.key.
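
The difference fits in one listener (values shown for a Dvorak layout):

```javascript
document.addEventListener('keydown', (e) => {
  // e.code is the physical position: the key at QWERTY's "W" spot reports
  // "KeyW" on every layout, which is what WASD-style movement wants.
  // e.key is the produced character: on Dvorak that same press yields ",",
  // which is what J/K-style shortcuts want.
  console.log(e.code, e.key);
});
```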

There could/should be a browser setting somewhere, or an extension, that can hide this from sites. But it is far too useful, relative to its fingerprinting value, to restrict for ordinary users.

how sensors are used to fingerprint you, I think it has to do with manufacturing imperfections that skew their readings in unique ways

It's also simple presence detection. "You have a proximity sensor" is a result not every browser will have, so it helps narrow down a specific browser.

[–] Technoguyfication@sh.itjust.works 6 points 1 day ago* (last edited 1 day ago) (3 children)

Operating system and CPU architecture are useful for sites to serve the correct binaries when a user is downloading an application. I know you could just give them all the options, but the average idiot has no idea what the difference between ARM and x86 is, or whether they have a 64 bit system. Hell, I wouldn’t even trust some users to accurately tell me what operating system they’re using.
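
A rough sketch of how a download page typically does this (file names hypothetical); note how coarse the checks can be:

```javascript
// Pick a default download from the user agent string.
// Android UAs also contain "Linux", so check Android first.
function defaultDownload() {
  const ua = navigator.userAgent;
  if (/Windows/i.test(ua)) return 'app-setup-x64.exe';
  if (/Mac/i.test(ua))     return 'app.dmg';
  if (/Android/i.test(ua)) return 'app.apk';
  if (/Linux/i.test(ua))   return 'app-amd64.deb';
  return null; // unknown: show the full list and let the user pick
}
```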

[–] Dreaming_Novaling@lemmy.zip 3 points 14 hours ago* (last edited 14 hours ago)

This was the only one I could see a good reason to track. I immediately thought of all the grandparents and tech-illiterate people who'd probably implode if they had to pick .exe vs .deb vs .dmg/.app (I actually had to look up what macOS uses...) vs etc. And don't even try to have them guess Intel vs AMD.

Automatically guessing the operating system saves us tech people from having to figure out they downloaded a file for a completely different OS.

[–] lambalicious@lemmy.sdf.org 1 points 12 hours ago

Operating system and CPU architecture are useful for sites to serve the correct binaries when a user is downloading an application.

Barely. You could trim down the data to incredibly low granularity ("OS: Windows", "CPU: Intel Desktop") and you'd still get the exact same binary as 99% of the people 99% of the time, anyway.

[–] Zagorath@aussie.zone 1 points 1 day ago (1 children)

Oh yes, that's a very good point. It seems like such a fundamental use case that you could almost justify it being available without a permission.

[–] lambalicious@lemmy.sdf.org -1 points 12 hours ago (1 children)

No. It should be made available with a permission, because not every site out there is going to offer you binaries to download. 1% of the web """requiring""" this does not justify 99% of the web being able to violate that privacy.

[–] Zagorath@aussie.zone 3 points 12 hours ago (1 children)

Reread the comment you replied to. Not one word of it was in there accidentally.

[–] lambalicious@lemmy.sdf.org 1 points 9 hours ago

Good catch. Still, that doesn't make it true: it's not such a "fundamental use case" that it would even require the capability. The browser already reports the usable information in the user agent (even within that 1%, you rarely need more specificity than "Windows" on "Desktop Intel").

[–] autonomoususer@lemmy.world 7 points 2 days ago* (last edited 2 days ago)

Most of those crying about this are likely still stuck on the easy stuff, trapped in WhatsApp, Discord, and iOS. Try starting there.
