this post was submitted on 15 Sep 2024
892 points (98.1% liked)

Technology

[–] melroy@kbin.melroy.org 11 points 3 months ago (2 children)

No sh*t, this is what I predicted from day one.

[–] RagingRobot@lemmy.world 3 points 3 months ago (1 children)

We should have looked to melroy

[–] melroy@kbin.melroy.org 2 points 3 months ago (1 children)

Thank you! That is indeed a valid point. I was hoping more people would come up with this remark. Do you have any other questions, or predictions you would like to hear? So that we don't get "surprises" in the field of technology again?

[–] Eheran@lemmy.world 3 points 3 months ago (1 children)

Please hit me with some predictions :D

[–] melroy@kbin.melroy.org 3 points 3 months ago

Sure!

  • More and more (AI) spyware / malware is getting injected into projects and operating systems, without the user's consent: mobile phones, laptops, desktop PCs, smart devices, etc. This comes from companies, but also from governments (no, not just China, but also the US and the EU).
  • The AI bubble itself will burst for "normal users" and for most companies, which won't benefit from AI / LLMs as much as they thought they would. This will become apparent only after several years, once the highly skilled developers have left the companies and you are left with software engineers using AI tools that generate wrong code. The damage LLMs (like AI code generation) are doing, and will continue to do, in the upcoming years is very hard to see, but it won't be nice. And no, we are not suddenly getting AGI.
  • More research and effort will be put into alternative computers, like computers based on biology, using living cells. After all, nature is so much more efficient than our current technologies. This could fix the energy-demand issues we now see with AI.
  • Biological computers will then also create huge moral issues. After all, how do we know the cells are not becoming aware? How do we know they won't feel pain, or that the cells don't feel trapped? We humans don't even know how consciousness and self-awareness really work.
  • Users & companies will want to get back in control some 5 to 15 years from now. So there could be a big move back from "cloud" to on-prem. You are already seeing this now with the fediverse.
  • The internet is becoming too centralized and controlled by governments: blocking public DNS IPs, overruling them. The only answer would be to create a much more decentralized internet alternative, maybe 20 or 30 years from now (so we can still talk with each other about problems in our governments, for example). The current internet is just too fragile, and the root of the problem is already DNS. Meaning you basically need to start from scratch.
  • In 80 years, Windows might only be used by corporate businesses. Most people might only use Android or some Linux-based distro. This mainly depends on how fast we change our education process, so young people learn about alternatives, and on schools no longer promoting and forcing people to use only Microsoft products. If schools don't change, then we have a huge problem and this prediction won't hold.
  • Google will be split into multiple companies.
  • Microsoft might be split later as well into multiple companies, but only much later, after Google.
  • ... Should I continue or stop here?

@Eheran@lemmy.world @RagingRobot@lemmy.world

#it #software #ai #predictions

[–] Eheran@lemmy.world -3 points 3 months ago (2 children)

So you predicted that security flaws in software are not going to vanish with AI?

[–] sugar_in_your_tea@sh.itjust.works 2 points 3 months ago* (last edited 3 months ago) (1 children)

All software has bugs. I prefer the human-generated bugs, they're much easier to diagnose and solve.

[–] melroy@kbin.melroy.org 2 points 3 months ago

My point exactly. Now you have genAI code written by an AI that doesn't know what it is doing, instructed by a developer who doesn't understand the programming language, reviewed by a co-worker who doesn't know what is going on. It's madness I tell you!

[–] melroy@kbin.melroy.org 2 points 3 months ago

I predicted that introducing AI to software engineers (especially juniors) would result in overall worse code, since apparently people don't feel responsible for genAI code, while I believe the responsibility still lies fully with the humans who deliver that code. On top of that, most devs don't do good code reviews in general (often due to lack of time or... a skill issue). And now we have AI-generated code that is too easily accepted, on top of reviewers who blindly approve it. And no unit tests or integration tests. And then we have the current situation. No wonder this happened. If you are in software engineering, you know exactly what I'm talking about, especially if you work at a larger company.
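To make that last point concrete, here is a hypothetical sketch (mine, not from any real incident): a plausible-looking helper with the divisor subtly wrong, exactly the kind of bug that slips through a blind review but that a single unit test catches before it ships.

```python
# Hypothetical example: compute the mean of the positive values in a list.
# The "buggy" version looks reasonable at a glance, which is why a
# reviewer skimming a genAI diff might approve it.

def mean_positive_buggy(values):
    positives = [v for v in values if v > 0]
    return sum(positives) / len(values)  # bug: divides by the wrong count

def mean_positive(values):
    positives = [v for v in values if v > 0]
    return sum(positives) / len(positives)  # correct: divide by count of positives

def test_mean_positive():
    # mean of [2, 4] should be 3.0; the buggy version returns 2.0
    assert mean_positive([2, 4, -6]) == 3.0
    assert mean_positive_buggy([2, 4, -6]) != 3.0

test_mean_positive()
```

One trivial test, and the bug never reaches production. That's the gap I mean when I say code gets merged with no tests at all.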