I'm inconsolable over the fact that somebody managed to get me excited about an RGB fan
I liked this (older) overview article for more context on DDR6/LPDDR6: https://hothardware.com/news/jedec-ddr6-lpddr6-revealed
Especially:
[...] Where DDR4 memory officially topped out around 4266 MT/s and the fastest LPDDR5X available is around 10 Gbps, LPDDR6 is going to start with a 10.667 Gbps per-pin data rate, and is expected to scale to 14.4 Gbps.
Okay, starting out where we are now might not sound that impressive, but keep in mind that typically a new memory technology actually starts well behind the fastest memory speeds of the current tech. The earliest DDR3 memory supported 800 MT/s transfer rates, while DDR2 memory available for enthusiasts was clocked at 1333 and even 1600. Similarly, when DDR4 debuted it was the sluggish DDR4-1866, while DDR3 had been hitting 2133, 2400, and even higher speeds for some time.
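For a rough feel of what those per-pin numbers mean, peak bandwidth is just per-pin rate × bus width ÷ 8. The 64-bit width below is purely an illustrative assumption on my part (real LPDDR6 channels are narrower and get ganged together), not something from the article:

```python
def peak_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s: per-pin rate (Gbps) * width (bits) / 8."""
    return per_pin_gbps * bus_width_bits / 8

# Per-pin rates quoted above; the 64-bit bus width is an assumed example, not a spec.
for label, gbps in [("LPDDR5X ~10", 10.0), ("LPDDR6 launch 10.667", 10.667), ("LPDDR6 target 14.4", 14.4)]:
    print(f"{label} Gbps/pin -> ~{peak_bandwidth_gbs(gbps, 64):.0f} GB/s over an assumed 64-bit bus")
```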
It's not really the important takeaway, but I'm kind of surprised it's coming from the AMA tbh
It covers the breadth of problems pretty well, but I feel compelled to point out that there are a few places where things are misrepresented in this post, e.g.:
Newegg selling the ASUS ROG Astral GeForce RTX 5090 for $3,359 (MSRP: $1,999)
eBay Germany offering the same ASUS ROG Astral RTX 5090 for €3,349,95 (MSRP: €2,229)
The MSRP for a 5090 is $2k, but the MSRP for the 5090 Astral -- a top-end card being used for overclocking world records -- is $2.8k. I couldn't quickly find the European MSRP but my money's on it being more than 2.2k euro.
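Back-of-the-envelope on the US listing (the ~$2.8k Astral MSRP is from memory, so treat it as approximate):

```python
listing = 3359        # Newegg price quoted in the post
base_msrp = 1999      # Nvidia's MSRP for a baseline RTX 5090
astral_msrp = 2800    # approximate MSRP for the ROG Astral variant (my recollection, not official)

print(f"markup vs base 5090 MSRP: +{(listing - base_msrp) / base_msrp:.0%}")      # ~+68%
print(f"markup vs Astral MSRP:    +{(listing - astral_msrp) / astral_msrp:.0%}")  # ~+20%
```

Still marked up, but by a much less dramatic margin than the post implies.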
If you’re a creator, CUDA and NVENC are pretty much indispensable, or editing and exporting videos in Adobe Premiere or DaVinci Resolve will take you a lot longer[3]. Same for live streaming, as using NVENC in OBS offloads video rendering to the GPU for smooth frame rates while streaming high-quality video.
NVENC isn't much of a moat right now, as Intel's and AMD's encoders are roughly comparable in quality these days (including in Intel's iGPUs!). There are cases where NVENC might do something specific better (like 4:2:2 support for prosumer/professional use cases) or have better software support in a specific program, but for common use cases like streaming/recording gameplay the alternatives should be roughly equivalent for most users.
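As a rough illustration of how interchangeable the hardware encoders are for basic recording, here's the same ffmpeg invocation with each vendor's H.264 encoder swapped in. The encoder names are ffmpeg's; which ones actually work depends on your GPU and ffmpeg build, and the input/bitrate values are just placeholders rather than tuned settings:

```python
import shlex

# ffmpeg's hardware H.264 encoders; availability depends on your GPU and how ffmpeg was built.
encoders = {
    "Nvidia (NVENC)": "h264_nvenc",
    "Intel (Quick Sync)": "h264_qsv",
    "AMD (AMF)": "h264_amf",
}

# Illustrative re-encode of a gameplay capture; only the encoder name changes between vendors.
for vendor, codec in encoders.items():
    cmd = ["ffmpeg", "-i", "gameplay.mkv", "-c:v", codec, "-b:v", "8M", f"out_{codec}.mp4"]
    print(f"{vendor}: {shlex.join(cmd)}")
```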
as recently as May 2025 and I wasn’t surprised to find even RTX 40 series are still very much overpriced
Production apparently stopped on these for several months leading up to the 50-series launch; it seems unreasonable to harshly judge the pricing of a product that hasn't had new stock for an extended period of time (of course, you can then judge either the decision to stop production or the still-elevated pricing of the 50 series).
DLSS is, and always was, snake oil
I personally find this take crazy given that DLSS2+ / FSR4+, when biased toward quality, average visual quality comparable to native for most users in most situations -- and that was already true of DLSS2 in 2023, not even DLSS3, let alone DLSS4 (which is markedly better on average). I don't really care how a frame is generated if it looks good enough (and doesn't come with other notable downsides like latency). This almost feels like complaining about screen space reflections being "fake" reflections. Like yeah, it's fake, but if the average player experience is consistently better with it than without it, then what does it matter?
Increasingly complex manufacturing nodes are getting expensive as all fuck. If it's more cost-efficient to spend some of that die area on specialized cores that do high-quality upscaling, rather than using all of it for native rendering, then that's fine by me. I don't think dismissing DLSS (and its equivalents like FSR and XeSS) as "snake oil" is the right takeaway. If the options are (1) spend $X on a card that outputs 60 FPS natively or (2) spend $X on a card that outputs upscaled 80 FPS at quality good enough that I can't tell it's not native, then sign me the fuck up for option #2. People who are less fussy about static image quality and more invested in smoothness can be perfectly happy with 100 FPS and marginally worse image quality. Not everyone is as sweaty about static image quality as some of us in the enthusiast crowd are.
There are some fair points here about RT (though I find exclusively using path tracing for RT performance testing a little disingenuous given the performance gap), but if RT performance is the main complaint, then why is the sub-heading "DLSS is, and always was, snake oil"?
obligatory: disagreeing with some of the author's points is not the same as saying "Nvidia is great"
Do you not see any value in engaging with views you don't personally agree with? I don't think agreeing with a post is a good barometer for whether it's post-worthy
If the GN tests accurately map to whatever the Navy's using, the difference in most games isn't that significant despite the suboptimal cooling, and if they're usually just playing TF2 and Halo 2 (as per the article) then even 50% of full performance should still be plenty.
last-generation Alienware (Dell) machines
I noticed that too; it might be a case of military procurement delay ¯\_(ツ)_/¯
Well, yes
The MoD sees embracing gamer culture as a way of attracting and retaining young people, particularly for roles in cyber defence and technology-focused positions. The UK government launched a recruitment plan this year to fast-track gamers into cyber defence roles.
Five nodes in four years is being stretched awfully thin lol
Probably not representative of the wider userbase, but it might be a halfway-accurate proxy for the "PC enthusiast" crowd
a sharp increase from its 21% share in 2024
I didn't realize it would be this high tbh. Is a lot of that being driven by domestic usage that I just don't hear about over here?
Wonder if the type of yoghurt influences it? It looks pretty humid over in the UK right now