submitted 5 months ago by Ozone6363@lemmy.world to c/chatgpt@lemmy.world

It's so frustrating.

Even very basic things like "Summarize this video transcript" on GPTs built specifically for that purpose.

First off, it can't even read text files anymore. It straight up "cannot access documents". No idea why; sometimes it will act like it can, but it becomes obvious it's hallucinating or only read part of the document.

So okay, paste the info in. GPT will start giving you a detailed summary, then just skip over like 40 fucking percent of the middle and resume summarizing at the end.

I mean honestly, I'm hardly asking it to do complex shit.

I have absolutely no idea what led to this decline, but it's become so bad it's hardly even worth messing with anymore. Such an absolute shame.

[-] TropicalDingdong@lemmy.world 24 points 5 months ago

This is 100% consistent with my experience. It's been clear that they are nerfing it on the back-end to deal with copyrighted material, illegal shit, etc. (which I also think is bullshit, but I accept it's debatable).

Beyond that, however, I think they are also down-scoping queries from 4 to 3.5 or other variants of '4'. I think this is a cost-saving measure. It's absolutely clear, however, that 4 is not what 4 was. The biggest issue I have with this is the question of "What am I buying with a call to a given OpenAI product?" What exactly am I buying if they are rearranging the deck chairs under the hood?

I did some tests basically asking GPT-4 to do some extremely complicated coding and analytics tasks. In the early days it performed excellently. These days it's a struggle to get it to do basic tasks. The issue is not that I can't get it to the solution; it's that it costs me more time and calls to do so.

I think we're all still holding our breath for the 'upgrade', but I don't think it's going to come from OpenAI. I need a product that gives me consistent performance and isn't going to change on me.

[-] Uranium3006@kbin.social 9 points 5 months ago

Local AI is the way. It's just that current models aren't GPT-4 quality yet, and you'd probably need 1 TB of VRAM to run them.

[-] hperrin@lemmy.world 4 points 5 months ago

Surprisingly, there’s a way to run Llama 3 70b on 4GB of VRAM.

https://huggingface.co/blog/lyogavin/llama3-airllm
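For anyone curious how that's even possible: the linked post's trick is to hold only one transformer layer in VRAM at a time instead of the whole model. A toy sketch of the idea in plain Python (all names here are illustrative stand-ins, not AirLLM's actual API, and the "layers" are just toy functions):

```python
# Layer-by-layer offloading, sketched: load one layer, run the
# activations through it, drop it, then load the next. Peak memory
# stays around one layer's worth instead of all N layers.

def layer_on_disk(i):
    """Pretend each transformer layer lives on disk; 'loading' it
    returns a callable (here a toy affine transform standing in
    for a real forward pass)."""
    return lambda x: 2 * x + i

def run_with_offloading(num_layers, x):
    for i in range(num_layers):
        layer = layer_on_disk(i)  # load exactly one layer into "memory"
        x = layer(x)              # forward pass through that layer
        del layer                 # free it before loading the next
    return x

print(run_with_offloading(3, 1))  # ((1*2+0)*2+1)*2+2 = 12
```

The obvious cost is that every token now pays disk/transfer latency for every layer, which is why this makes 70B *runnable* on 4GB, not fast.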

[-] theterrasque@infosec.pub 1 points 5 months ago

Llama 3 70B is pretty good, and you can run that on 2x 3090s. Not cheap, but doable.

You could also use something like RunPod to test it out cheaply.

[-] DaseinPickle@leminal.space 6 points 5 months ago

Could it be that so many people are using it that they don't have the capacity anymore? This technology does require a crazy amount of resources to work.

[-] Player2@lemm.ee 7 points 5 months ago

Then they should increase prices or have tighter usage limits instead of a quiet downgrade. Customers getting less while paying for the same thing is a scam.

[-] RedditWanderer@lemmy.world 5 points 5 months ago

This has always been it. Unless there is a new breakthrough, adding more data has diminishing returns and costs an enormous amount of energy.

They had to convince everyone they were worth 10 trillion dollars and needed to be part of the energy infrastructure of the future before it all fell apart. With everyone using it, I have no doubt they have to reduce the "depth" of it.

[-] Rolando@lemmy.world 1 points 5 months ago

The funny/tragic thing is there are several decades worth of AI/NLP research that they could call on, but they seem intent on kludging and reinventing things instead.

[-] Ozone6363@lemmy.world 3 points 5 months ago

No idea man, but it was so incredibly useful before, and now it isn't even worth fucking with.

I don't understand how they fucked it up this hard.

[-] AmbiguousProps@lemmy.today 2 points 5 months ago

Yes, but they're also trying to increase profitability, likely thanks to Microsoft.

[-] nothingcorporate@lemmy.today 4 points 5 months ago

You are not wrong: https://arstechnica.com/information-technology/2023/07/is-chatgpt-getting-worse-over-time-study-claims-yes-but-others-arent-sure/ and also https://duckduckgo.com/?q=chat+gpt+4+getting+worse

The more LLMs get exposed to data, the more they get exposed to wrong data. There's also a vicious-cycle problem: once LLMs spit out bad information, that bad information gets incorporated into LLMs' new data sets, which makes them more wrong, and so on.

[-] kromem@lemmy.world 2 points 5 months ago* (last edited 5 months ago)

There was just a post on HN about how GPT-4o is best at long context. Try that.

this post was submitted on 14 May 2024
44 points (86.7% liked)
