Perhaps it would help a bit, I don't know. Even if it does, it would help far less than having the sharer actually write something and tell the reader the focus of the picture.
I'll give you a personal albeit real example of that. I posted this picture on Mastodon some time ago:
A machine learning model could theoretically say something like: "there's a tabby cat in the picture, one semi-abstract acrylic painting, and one figurative oil painting; both paintings rest on a white wall"... except that most of those details don't matter. What matters is what the cat is doing towards the viewer.
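That kind of flat, inventory-style caption is roughly what you get from an off-the-shelf captioning model. A minimal sketch, assuming the Hugging Face transformers library and the BLIP base checkpoint (the model choice and file name are placeholders for illustration, not anything any Mastodon instance actually runs):

```python
# Minimal image-captioning sketch with a generic off-the-shelf model.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("cat_and_paintings.jpg").convert("RGB")  # hypothetical file
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)

# Typically yields something like "a cat sitting on a chair": accurate,
# but silent about what the cat is doing towards the viewer.
print(processor.decode(output_ids[0], skip_special_tokens=True))
```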
Contrast that sort of output with the translated version of the alt text that I provided: A playful tabby cat, leaning against the back of a chair, looking at the viewer. Her head, upper thorax, and paws are visible. One paw is holding the back of the chair; the other paw is in the air, in an "I got you!" movement towards the viewer. It's completely different, and when I wrote it, I hoped that both blind and non-blind users could get something out of the picture that they wouldn't get without the alt text.
And it's the same deal with other Mastodon posters, not just me. This system - where the user is expected to provide alt text - works well, IMO.
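Mechanically, the system is simple too: over the Mastodon API, the alt text is just the description field on the media attachment, set when the image is uploaded and before the status referencing it is posted. A rough sketch using the Mastodon.py client library, with the instance URL, token, and file name as placeholders:

```python
# Sketch: uploading an image with alt text, then posting a status with it.
from mastodon import Mastodon

api = Mastodon(api_base_url="https://example.social",  # placeholder instance
               access_token="YOUR_TOKEN")               # placeholder token

media = api.media_post(
    "cat_and_paintings.jpg",  # hypothetical file
    description=(
        "A playful tabby cat, leaning against the back of a chair, "
        "looking at the viewer. One paw is holding the back of the chair; "
        "the other paw is in the air, in an 'I got you!' movement."
    ),
)
api.status_post("She got me.", media_ids=[media])
```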