this post was submitted on 14 Jul 2025
52 points (96.4% liked)

No Stupid Questions

all 19 comments
[–] yogurt@lemmy.world 1 points 4 hours ago

https://vincmazet.github.io/bip/filtering/fourier.html

There are ways to encode images that make it easier to isolate differences in cropping, resolution, and rotation. It's like how, if you wanted to search for color-filtered images, you could just throw out the color and compare them in black and white.
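
The "throw out the color" trick can be sketched in a few lines. This is a toy illustration in pure Python (the nested-list image format and function names are my own, not from any library): reduce both images to luminance first, so a color filter barely changes the comparison.

```python
# Reduce images to luminance before comparing, so color filters no
# longer matter. Images here are nested lists of (R, G, B) tuples.

def to_grayscale(pixels):
    """Convert rows of (R, G, B) tuples to rows of rounded luminance values."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in pixels]

def mean_abs_diff(a, b):
    """Average absolute difference between two equal-size grayscale images."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    return sum(abs(x - y) for x, y in zip(flat_a, flat_b)) / len(flat_a)

original = [[(200, 40, 40), (10, 10, 10)],
            [(10, 10, 10), (200, 40, 40)]]
# Same image with a heavy red-shift "filter" applied
filtered = [[(255, 20, 20), (30, 5, 5)],
            [(30, 5, 5), (255, 20, 20)]]

print(mean_abs_diff(to_grayscale(original), to_grayscale(filtered)))  # 2.0
```

Despite large per-channel differences, the luminance difference stays tiny, which is the whole point of comparing in black and white.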

[–] Sleepkever@lemmy.zip 5 points 8 hours ago

Looking up similar images and searching for crops are computer vision topics, not large-language-model (basically a text predictor) or image-generation AI topics.

Image hashing has been around for quite a while now, and there are crop-resistant image hashing libraries readily available, like this one: https://pypi.org/project/ImageHash/

It basically looks for defining features in images and stores those in an efficiently searchable way, probably in a traditional database. As long as two signatures are close enough, or a partial match in the case of a crop, it's a similar image.
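
As a minimal sketch of the idea behind such hashes, here is a from-scratch toy average hash ("aHash"): shrink the image, compare each pixel to the mean, and keep the resulting bits as the signature. The nearest-neighbour shrink and names are my simplifications; ImageHash's perceptual and crop-resistant hashes are more robust than this.

```python
# Toy average hash: shrink, threshold against the mean, store bits.

def average_hash(gray, hash_size=8):
    """gray: 2D list of grayscale values; returns a 64-bit string."""
    h, w = len(gray), len(gray[0])
    # Nearest-neighbour shrink to hash_size x hash_size
    small = [[gray[y * h // hash_size][x * w // hash_size]
              for x in range(hash_size)]
             for y in range(hash_size)]
    mean = sum(sum(row) for row in small) / hash_size ** 2
    return ''.join('1' if px > mean else '0'
                   for row in small for px in row)

def hamming(h1, h2):
    """Number of differing bits; a small distance means similar images."""
    return sum(a != b for a, b in zip(h1, h2))

# Two 16x16 gradients, the second uniformly brightened
img = [[x * 16 for x in range(16)] for _ in range(16)]
brighter = [[min(255, px + 10) for px in row] for row in img]
print(hamming(average_hash(img), average_hash(brighter)))  # 0
```

Because the threshold is the image's own mean, a uniform brightness shift leaves the hash unchanged, so the Hamming distance is zero.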

[–] over_clox@lemmy.world 31 points 15 hours ago (1 children)

JPEG works in 8x8 pixel blocks, and back in the day, most JPEG images weren't all that big. Each 8x8 pixel block (64 pixels per block) could easily and quickly be processed as if it were a single pixel.

So if you had a 1024x768 JPEG, the fast-scanning technique would only need to scan the 128x96 grid of blocks; there's no need to process every single pixel.

Of course the results could never be perfectly accurate, but most images are unique enough that this would be more than sufficient for fast scanning.
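
That block-level shortcut can be sketched as a block-averaging pass. Note this is my own illustrative version in pure Python, averaging decoded pixels; a real fast scan over JPEG data could read each block's DC coefficient directly instead of averaging.

```python
# Treat each 8x8 tile as one "pixel" by averaging it, so a 1024x768
# image collapses to a 128x96 grid before any comparison happens.

def block_means(gray, block=8):
    """Average each block x block tile of a 2D grayscale list."""
    h, w = len(gray), len(gray[0])
    return [[sum(gray[y + dy][x + dx]
                 for dy in range(block) for dx in range(block)) // block ** 2
             for x in range(0, w, block)]
            for y in range(0, h, block)]

gray = [[(x + y) % 256 for x in range(1024)] for y in range(768)]
grid = block_means(gray)
print(len(grid[0]), len(grid))  # 128 96
```

Comparing two 128x96 grids touches 64x fewer values than comparing the full images, which is where the speedup comes from.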

[–] bathing_in_bismuth@sh.itjust.works 8 points 15 hours ago (1 children)

Okay, I'm not entirely a layman but also not exactly an expert: if a maximally pixelated Photoshop image produced the same signature as the detailed comparison, it would match? And if that's the case, I imagine all the human input and behavioral data would only improve the algorithm?

[–] over_clox@lemmy.world 8 points 15 hours ago (1 children)

Looking past the days of old, while also dismissing modern artificial intelligence, the same techniques would still work if you just processed the thumbnails of the images, which, for simplicity's sake, might as well be 1/8-scale images, if not even lower resolution.

[–] bathing_in_bismuth@sh.itjust.works 3 points 15 hours ago (1 children)

That makes sense. I've seen it produce some amazing results but also some painfully hard-to-make mistakes. Kinda neat; imagine going by that mindset, making the most of what you have, without a never-ending hell of redundant dependencies for even the most basic function/feature?!

[–] brucethemoose@lemmy.world 3 points 14 hours ago* (last edited 14 hours ago)

making the most with what you have

That was, indeed, the motto of ML research for a long time. Just hacking out more efficient approaches.

It's people like Altman who introduced the idea of not innovating and just scaling up what you already have. Hence many in the research community know he's full of it.

[–] Nemo@slrpnk.net 17 points 15 hours ago (2 children)

They had the AI models of those days.

[–] bathing_in_bismuth@sh.itjust.works 4 points 15 hours ago (3 children)

That's cool, I didn't know AI models were a thing in those days. Are they comparable (maybe more crude?) to today's tech? Like, did they use machine learning? As far as I remember, there wasn't much dedicated AI-accelerator hardware back then. Maybe a beefy GPU for neural-network purposes? Interesting though.

[–] Zwuzelmaus@feddit.org 15 points 15 hours ago (1 children)

Models were a thing even some 30 or 40 years ago. Processing power makes most of the difference today: it allows larger models and quicker results.

[–] bathing_in_bismuth@sh.itjust.works 6 points 15 hours ago (3 children)

I didn't know that. Are you somewhat informed about the history of these models? I'd love to hear it from you instead of from a random crypto bro's LLM summary. Thanks!

[–] brucethemoose@lemmy.world 11 points 14 hours ago* (last edited 14 hours ago) (1 children)

Machine learning has been a field for years, as others said, yeah, but Wikipedia would be a better expansion of the topic. In a nutshell, it's largely about predicting outputs based on trained input examples.

It doesn't have to be text. For example, astronomers use it to find certain kinds of objects in raw data feeds. Object recognition (identifying things in pictures with little bounding boxes) is an old art at this point. Series-prediction models are a thing, and LanguageTool uses a tiny model to detect commonly confused words for grammar checking. And yes, image hashing is another, though it's not entirely machine learning based. IDK what TinEye does in their backend, but there are some more "oldschool" approaches using traditional programming techniques, generating signatures for images that can be efficiently compared in a huge database.

You've probably run ML models in photo editors, your TV, your phone (voice recognition), desktop video players or something else without even knowing it. They're tools.

Separately, image-similarity metrics (like LPIPS or SSIM) that measure the difference between two images as a number (where, say, 1 would be a perfect match and 0 totally unrelated) are common components in machine learning pipelines. Not all of them are machine learning based themselves: SSIM is classic signal processing, while a few, like LPIPS (which runs images through a trained network) and VMAF (which Netflix developed for video), are learned.
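
A hypothetical toy metric in that spirit, just to show the "two images in, one number out" interface shape (this is plain normalized MSE of my own construction, not SSIM or LPIPS, which weigh luminance, contrast, and structure far more carefully):

```python
# Map mean squared error between two grayscale images onto [0, 1],
# where 1.0 means a perfect match and values near 0 mean very different.

def similarity(a, b, max_val=255):
    """a, b: equal-size 2D grayscale lists; returns a float in [0, 1]."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    mse = sum((x - y) ** 2 for x, y in zip(flat_a, flat_b)) / len(flat_a)
    return 1.0 - mse / max_val ** 2

img = [[100, 120], [140, 160]]
print(similarity(img, img))                      # 1.0 (identical)
print(similarity(img, [[0, 0], [0, 0]]) < 0.8)   # True (very different)
```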

Text embedding models do the same with text. They are ML models.

LLMs (as we know them: models designed to predict the next 'word' in a block of text, one at a time) in particular have an interesting history, going back to (if I remember the name correctly) BERT in Google's labs. There were also tiny LLMs people ran on personal GPUs before ChatGPT was ever a thing, like the infamous Pygmalion 6B roleplaying bot, a finetune of GPT-J 6B. They were primitive and dumb, but it felt like witchcraft back then (before AI-bro marketers poisoned the well).

[–] Nemo@slrpnk.net 6 points 12 hours ago

As a transmillennial student of AI/ML: great write-up.

[–] Zwuzelmaus@feddit.org 4 points 12 hours ago

I don't remember too much tbh, just that we heard about the theory at university and tried out some of the mathematical methods. They were tiresome ;)

Today I would recommend starting your studies with the Wikipedia pages on Markov models and machine learning.

[–] howrar@lemmy.ca 2 points 11 hours ago

Yann LeCun gave us convolutional neural networks (CNNs) in 1998. These are the models used for pretty much all specialized computer vision tasks even today. TinEye came into existence ten years later, in 2008. I can't tell you whether they used CNNs, but CNNs were certainly available.

[–] brucethemoose@lemmy.world 6 points 14 hours ago* (last edited 11 hours ago)

Oh, and to answer this specifically: Nvidia hardware has been used in ML research forever. It goes back to 2008 and the desktop GTX 280/CUDA 1.0, maybe earlier.

Most "AI accelerators" are basically the same thing these days: overgrown desktop GPUs. They have pixel shaders, ROPs, video encoders and everything, with the one partial exception being the AMD MI300X and beyond (which are missing ROPs).

CPUs were used, too. In fact, Intel made specific server SKUs for giant AI users like Facebook. See: https://www.servethehome.com/facebook-introduces-next-gen-cooper-lake-intel-xeon-platforms/

[–] cecilkorik@lemmy.ca 6 points 14 hours ago

We didn’t call them AI because they weren’t (and aren’t) intelligent. But marketing companies eventually realized there were trillions of dollars to be made convincing people they were intelligent, so they created models explicitly designed to be convincing: to persuade you that they are intelligent, that they can have genuine conversations like a real human, that they create real art like a real human, and that they totally aren’t just empty-headedly mimicking thousands of years of human conversation and art. Then they immediately used those models to convince people that the models themselves were intelligent (and many other things besides). Given that marketing and advertising literally exist to convince people of things and have become exceedingly good at it, it’s really a brilliant business move, and it seems to be working great for them.

[–] Feyd@programming.dev 8 points 14 hours ago

What you're looking for is the history of "computer vision"