I agree. AI is a polarizing topic, and opinions in general tend to be exaggerated. There are several articles highlighting that people used some chatbots for therapy or companionship and the AI told them to end their lives. Or that people abuse their virtual companions in roleplay and this reinforces negative thoughts.
In the media I see those articles far more often than an in-depth discussion of how to actually make AI useful... I don't think I've ever read such an article.
I don't think computer science people are the problem. They rarely think in binary; their work is more about maths and finding ways to handle data and information. It often requires out-of-the-box thinking, creativity, balancing things and compromise (depending on the exact field). I think they do understand.
None of this is that simple, at least in my opinion. AI can be viewed as a tool. It can be used for good, evil and everything in between, and it can also be applied correctly or incorrectly. It can be the right tool for a task, or not so much.
I'm a bit hesitant to recommend it to someone who isn't well. If he were mentally healthy, I'd say yes, go try it and see if it helps. But that has nothing to do with AI specifically. The same applies to seeking advice on Reddit, or self-diagnosing via TikTok. It can help, but it can also lead you astray.
I'd say AI is probably the better option and I'd use it if it were me.
Of course, doing inference costs money. So it's either free and complicated, or paid and somewhat easy if you choose the right service. Unfortunately, AI is hyped, so there are hundreds of services, and I really don't know whether there's one that stands out and can be recommended over the others.
I don't think there is that much harm in telling people. I also tell people I like chatbots and think they're useful. I usually don't go into detail in real-life conversations. But I've also done roleplay and talked to it about random stuff and I think it is nice. Some people don't understand because all they've seen is ChatGPT and how it can re-phrase emails. And roleplay or a virtual companion is really something different.
(I've also seen people overestimate ChatGPT. They ask important factual questions, let it summarize complicated stuff, let it explain a scientific paper to them. And that's a bit dangerous. The output always looks professional. But sometimes it's riddled with inaccurate information, sometimes plain wrong. And that'd be bad if you mistook it for an expert or confused it with an actual therapist. As long as you're aware, I think it's alright and you can judge whether that's okay. And I'm sure ChatGPT and AI will get better, hallucinate less and research will come up with ways to control factuality and creativity.)
Do you have any new better-than-Llama2-70B models you've tried recently?
I haven't tried anything new in a while because of code I've changed in Oobabooga and issues with the mainline Linux kernel and Nvidia. I basically have to learn git to a much better level and manage my own branch for my mods. I tried koboldcpp, but I didn't care to install the actual Nvidia CUDA toolkit because Nvidia breaks everything they touch.
Hehe. I recently spent $5 on OpenRouter and tried a few models, from 7B to 70B and even one with a hundred-and-something billion parameters. They definitely get more intelligent. But I've concluded that I'm okay within the 7B to 33B range, at least for my use case. I tested creative storywriting and dialogue in a near-future setting where AI and androids permeate human society. I wasn't that impressed. The larger models still made some of the same mistakes and struggled with the spatial positions of the characters, and the random pacing of the plot points didn't really get better.
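For anyone curious how cheap it is to poke at different model sizes: OpenRouter speaks an OpenAI-style chat completions API, so comparing models is mostly a matter of swapping the model string. A minimal sketch, assuming an `OPENROUTER_API_KEY` environment variable and example model IDs that may have changed since (check their model list):

```python
# Sketch: comparing model sizes through OpenRouter's OpenAI-compatible
# chat completions endpoint. Model IDs below are examples/assumptions.
import os
import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ["OPENROUTER_API_KEY"]  # assumed to be set beforehand

models = [
    "mistralai/mistral-7b-instruct",  # ~7B
    "meta-llama/llama-2-70b-chat",    # ~70B
]

prompt = "Write a short scene: an android barista chats with a tired commuter."

for model in models:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 300,
            "temperature": 0.8,
        },
        timeout=120,
    )
    resp.raise_for_status()
    text = resp.json()["choices"][0]["message"]["content"]
    print(f"--- {model} ---\n{text}\n")
```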
This wasn't a scientific test whatsoever; I just took random available models, some fine-tuned for similar purposes, some not, and clicked my way through the list. So your mileage may vary. Perhaps they're much better at factual knowledge or reasoning. I've read a few comments from people who, for example, like chatting with the Llama(-2) base model at 65B/70B parameters and say it's way better than the 13B fine-tunes I usually use.
And I also wasn't that impressed with OpenRouter itself. It makes things easy and has some 'magic' that adds the correct prompt formatting for all the different instruct formats. But I still had models entangle themselves in repetition loops or play dumb until I disabled the automatic settings and once again went hunting for the optimal prompt format and settings myself.
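To illustrate what "different instruct formats" means, here's a rough sketch of two common templates, Llama-2-chat style and Alpaca style. The exact tokens vary between fine-tunes, so treat this as an illustration rather than a reference and always check the model card:

```python
# Sketch of two common instruct formats, to show why a one-size-fits-all
# "automatic" prompt template can trip a model up.

def llama2_chat_prompt(system: str, user: str) -> str:
    # Llama-2 chat style: system prompt wrapped in <<SYS>>, user turn in [INST] tags.
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def alpaca_prompt(instruction: str) -> str:
    # Alpaca style: plain headers, response section left open for the model.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(llama2_chat_prompt("You are a creative writing assistant.",
                         "Continue the scene in the android cafe."))
print(alpaca_prompt("Continue the scene in the android cafe."))
```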
So I'm back to KoboldCpp. I'm familiar with its UI and all the settings. I think the CUDA toolkit in the Debian repositories is somewhat alright, but I've deleted it because it takes up too much space and my old GPU with 2GB of VRAM is useless anyway. We've certainly all had our 'fun' with the proprietary Nvidia stuff.
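For completeness, this is roughly what talking to a locally running KoboldCpp instance over its HTTP API looks like. The port, field names and sampler values below are assumptions based on the defaults I remember, so verify them against your KoboldCpp version's own API docs:

```python
# Rough sketch: generation request against a local KoboldCpp server.
# Port, endpoint and sampler fields are assumptions -- check your version.
import requests

payload = {
    "prompt": "### Instruction:\nContinue the story.\n\n### Response:\n",
    "max_length": 200,   # tokens to generate
    "temperature": 0.7,
    "rep_pen": 1.1,      # repetition penalty
    "top_p": 0.9,
}

resp = requests.post("http://localhost:5001/api/v1/generate",
                     json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```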