Try LibreTranslate. It is a "language model," but it is not a "large" language model along the lines of something like ChatGPT. I am not sure what the training process entails, but it will run on a 10 year old dual core CPU without GPU acceleration. You can test it at libretranslate.org, or you can install it on your own machine if you have Python by running `pip install libretranslate`. I run LibreTranslate on a meager VPS (the CPU I just described) and the headroom left over from running Mastodon is enough to handle the translation queries locally, without making any remote requests.
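If you self-host it, the server exposes a simple HTTP API at `/translate`. A minimal sketch in Python, assuming a local instance on the default port 5000 (the host URL and language codes here are just placeholder values):

```python
import json
import urllib.request

def build_payload(text, source="en", target="es"):
    """Build the JSON body LibreTranslate's /translate endpoint expects."""
    return {"q": text, "source": source, "target": target, "format": "text"}

def translate(text, source="en", target="es", host="http://127.0.0.1:5000"):
    """POST the text to a local LibreTranslate instance (assumed running)."""
    req = urllib.request.Request(
        host + "/translate",
        data=json.dumps(build_payload(text, source, target)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The server replies with JSON containing a "translatedText" field.
        return json.load(resp)["translatedText"]
```

Nothing here phones home: the request only goes to whatever host you point it at.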
i knew this would happen. everything's AI now, so everything must be bad. translation utilities have used machine learning methods for years, long before gpts got useful. translation doesn't have nearly the power requirements of generative stuff. also, for the record, it's not the usage of generative ai that's wasteful, it's building the model. the models are already built.
but anyway, it sounds like you want to run something on your own machine. there are tools for this. some of them are engine-specific, others do video capture. RPG Maker and various visual novel engines, for example, have tons of them, where you just run the game using the tool and it replaces all the text. there are similar things for emulators.
is this something that's desirable, or do you want to use the game more as a language learning guide, so that the original text stays up? in that case the OCR path is probably better, using something like Translumo or UGT (Universal Game Translator). they will show the translation in a separate window.
> also, for the record, it's not the usage of generative ai that's wasteful, it's building the model.
What happens when you stop refining the model?
if you're facebook, what happens when you stop refining the model is that a power plant somewhere explodes because multiple tens of megawatts drop off the grid in a millisecond causing the turbines to overspeed like hell. true story.
also, i forgot i left that part in, that was a draft sentence. what i was going to write was that we're not in this situation because of individuals using machine learning models to translate text. we're in this situation due to massive corporations dumping metric fuckawatts of power into always-on systems that train insanely large models made to scale to millions of users at the same time.
translation models are not that kind of ML and have never been.
running your own model is fine. continuing to fund the deluded e/accs that are trying to build a machine god is not.
> where you just run the game using the tool and it replaces all the text. there are similar things for emulators.
These are typically locally running LLMs that do the translating as you go along, then cache it on the device. That's why there's a delay before the text is replaced; the delay is shorter on higher-end machines that can process it more quickly.
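The caching part is the key to why this stays cheap: each unique string only pays the translation cost once. A toy sketch of that idea (the `translate_fn` here is a hypothetical stand-in for whatever engine a given tool actually uses):

```python
class TranslationCache:
    """Memoize translations so each unique string is translated only once."""

    def __init__(self, translate_fn):
        self.translate_fn = translate_fn  # stand-in for the real engine
        self.cache = {}

    def get(self, text):
        # First sighting is slow (runs the model); repeats are instant.
        if text not in self.cache:
            self.cache[text] = self.translate_fn(text)
        return self.cache[text]
```

Dialogue in games repeats constantly (menus, item names, stock phrases), so after a short warm-up most lookups hit the cache and never touch the model at all.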
well, not typically llms. these tools have been around longer than the term.
besides, i don't see why it matters? energy-wise, the problem isn't the tech, it's the immense scale it's deployed on in order to be instantly available to millions of people. running a translator locally is unlikely to show up on your electric bill if you play any games on your computer.
Stuff like DeepL and Google Translate use AI but not LLMs. They use much less power from what I can gather. For example, Firefox lets you perform this kind of translation locally on your computer, without using a cloud service like Google Translate, and it's still pretty fast.
I'm not sure it isn't AI, but DeepL has been around for a while; it's at least older than this modern concept of "AI," so I think it's a different kind of model. I like it a lot.
If you're talking about one of those plugins/mods, then the AI translation is happening locally on your machine. The increase in processing happens on your hardware, not in a warehouse somewhere.