He's not exactly comparing software to Netscape or Windows 3.11, though; he's comparing version N of some software to version N-1 or N-2 and noticing that it's getting worse from release to release. Given the rate of new releases, the complexity shouldn't be increasing that rapidly between releases, so I'm not convinced that's the cause per se. I have to agree with the conclusion of the article: testing was more rigorous in the past than it is now, both because there was less surface area to test back then and because time-to-market pressure was lower thanks to the longer windows between releases.
I assume you never worked in testing. Back in the day, we used to cram testing into a weekend because the developers were late with their coding. There was no test automation, so we spent that whole weekend on the most basic functionality, barely getting through testing that the app even started and a few of its most basic functions. There was almost never any time for regression testing, and old functions broke all the time. It wasn't uncommon for us to ship a bug fix in one version, only to reintroduce the same bug in the next release.
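To make the regression-testing point concrete, here's a minimal sketch of the kind of automated check that was missing back then: once a bug is fixed, a small test pins the fix so the same bug can't quietly come back in the next release. The function and the bug here are hypothetical, purely for illustration.

```python
# Minimal sketch of an automated regression test (hypothetical example).
# Once a bug is fixed, a test like this runs on every release, so the
# same bug can't be silently reintroduced later.

def parse_price(text: str) -> float:
    """Hypothetical function that once crashed on padded input."""
    return float(text.strip())

def test_parse_price_handles_whitespace() -> None:
    # Pins the (hypothetical) fix: parse_price("  4.20 ") raised
    # ValueError before .strip() was added.
    assert parse_price("  4.20 ") == 4.20

if __name__ == "__main__":
    test_parse_price_handles_whitespace()
    print("regression test passed")
```

Run it under pytest or as a plain script; the point is simply that the check outlives the weekend and runs again on the next release.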
No, but I do work as a developer, and we work pretty closely with the testers. Not back in the day, though; I'm not that old, but I've been around long enough to know that even in the current era of software development, the quality and duration of testing vary quite a bit from company to company. Some companies really don't care and their testing is token at best; others, like where I currently work, are quite obsessed with quality and dedicate a lot of time and people to testing a release before it goes out. Of course there are still bugs from time to time, but a lot are found and fixed during testing.
Previous companies I've worked at with not-so-great testing were more consumer-facing, whereas the current one is B2B with a lot of enterprise customers, so maybe companies just put as much or as little effort into testing as they think their target audience is willing to put up with.
It's less about what the audience will put up with and more about fulfilling requirements. I know several software houses that tailor and vary their testing directly as a result of how a project is classified: the higher the required rigor, the higher the cost. There are customers, such as governments, that spare no expense on certification.