this post was submitted on 23 Mar 2025
45 points (97.9% liked)

Technology


A tech news sub for communists

founded 3 years ago
[–] amemorablename@lemmygrad.ml 8 points 3 months ago (6 children)

I don't doubt it can be harmful, but I can't help wondering whether the problem in education is the LLMs themselves or the educational system being broken in general. LLMs didn't exist when I was in college/university, but the only time I can remember even being tempted in the general direction of cheating was an online statistics class (one of the few fully online classes I took) that I had trouble focusing on or understanding at all. (At the time, I didn't realize I probably struggled more with some classes than others because of ADHD/executive-functioning problems.) Even then I never seriously contemplated cheating; I just had one test where I tried to guess answers by intuition as an experiment. Unsurprisingly, that went badly, and I had to study harder for the rest of the class to make up for it.

Maybe I was just too goody-two-shoes to ever consider it seriously, I don't know. But it wasn't really something I even saw as an option. Notably, I also genuinely enjoyed learning when a subject interested me, and I usually preferred classes built around projects rather than rote memorization or long research papers.

I don't have data on it off-hand, so maybe I'm talking out of my backside, but it seems to me that if people treat assignments as metrics and expectations to meet rather than as the learning itself, they're more likely to look for ways to game the system, whether for an edge, to win approval, or to avoid rejection. So although I can easily believe AI is making things worse in the short term, I have to wonder why people go for it in the first place and what can be done about the root motivations for cheating.

[–] AlbigensianGhoul@lemmygrad.ml 10 points 3 months ago (2 children)

My main gripe isn't so much with cheating as with how easy LLMs make it to avoid learning anything fundamental, only to get stuck when things get more advanced. This is very common in programming, where intro-level students can now pass easily by relying on tools like Copilot, but get absolutely destroyed once they reach more advanced courses.

With LLMs, kids don't feel like learning, studying, or developing critical-thinking skills in fundamentals classes because they are constantly spoonfed ready-made answers, and so they are woefully unprepared later on. As with most "successful" inventions of the smartphone age, it turns humans into passive observers and consumers rather than engaged actors with the skills to investigate. I am genuinely worried about what sort of professionals and scientists we are forming today.

[–] amemorablename@lemmygrad.ml 1 points 3 months ago (1 children)

Oh I see, I think I read "cheating" from the other poster and went off of that. That is a fair point, and I dislike how LLMs are being pushed as a solution rather than a tool. To make a rough comparison: a hammer is a tool that makes some tasks easier or more doable, but you still have to physically swing it with human dexterity to drive the nail in. With an LLM, you can ask it for answers or to write things for you, and it will, even if the output is nonsense. And there's a big "not knowing what you don't know" problem baked in: if you have the skills and knowledge to recognize when it's feeding you BS, you probably don't need it, but if you don't, you're more apt to think you do need it, while also lacking the skills and knowledge to debug or fact-check what it gives you. So the people who would get the most immediate use out of it are also putting a lot of trust in something that is nowhere near a reliable tutor or subject-matter expert.

[–] CountryBreakfast@lemmygrad.ml 6 points 3 months ago

I don't differentiate cheating from avoiding learning. Students get caught and fess up all the time, so it's possibly even more pervasive than I believe.

I teach things like global political economy and ethnic studies. If a sizable portion of students are using generative AI to fudge their way through these topics, then we couldn't be more fucked. We can only get more fascist from here.

And AI just compounds other issues, like a political climate where people basically piss their pants because heaven forbid someone ask them to read 30 pages about enslavement. Not only do they not want to read, but they are fairly often quite racist and anti-intellectual. How do you address that when everything you can ask them to do to improve their engagement can be fudged?
