Say all you want about hallucinations, but AI will never be able to outperform humans at bullshitting, so sales and marketing is safe.
"It's popular so it must be good/true" is not a compelling argument. I certainly wouldn't take it on faith just because it has remained largely unquestioned by marketers.
The closest research I'm familiar with showed the opposite, but it was specifically related to the real estate market so I wouldn't assume it applies broadly to, say, groceries or consumer goods. I couldn't find anything supporting this idea from a quick search of papers. Again, if there's supporting research on this (particularly recent research), I would really like to see it.
If there is any research from the last 50 years suggesting this actually works, I'd love to see it.
I haven't seen this movie in like 25 years, but I still read this in Marisa Tomei's voice.
Wait, isn't it the other way around? You should arrive in NY earlier than you left London, since NY is 5 hours behind London. So if you leave at 8:30 and arrive 1.5 hours later, it should only be 5AM when you arrive.
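Just to sanity-check the timezone math (the 8:30 departure and 1.5-hour flight come from the numbers above; the date is arbitrary):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Hypothetical departure: 08:30 local time in London (date chosen arbitrarily).
depart_london = datetime(2024, 6, 1, 8, 30, tzinfo=ZoneInfo("Europe/London"))

flight_time = timedelta(hours=1, minutes=30)   # the 1.5-hour flight from the comment above

arrive = depart_london + flight_time           # same instant, still in London time
arrive_ny = arrive.astimezone(ZoneInfo("America/New_York"))

print(depart_london.strftime("%H:%M %Z"))  # 08:30 BST
print(arrive_ny.strftime("%H:%M %Z"))      # 05:00 EDT -- earlier on the clock than departure
```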
You might need a third breakfast before your elevenses in that case.
Jerboa is solid, but it's not feature-rich. Not great for media browsing. It's still my main client since I use Lemmy mostly for text, not images or videos.
Eternity and Voyager are worth looking at, too.
Interesting read, thanks! I'll finish it later, but this bit already stood out:
Without access to gender, the ML algorithm over-predicts women to default compared to their true default rate, while the rate for men is accurate. Adding gender to the ML algorithm corrects for this and the gap in prediction accuracy for men and women who default diminishes.
We find that the MTEs are biased, significantly favoring White-associated names in 85.1% of cases and female-associated names in only 11.1% of cases
If you're planning to use LLMs for anything along these lines, you should filter out irrelevant details like names before any evaluation step. Honestly, humans should do the same, but it's impractical. This is, ironically, something LLMs are very well suited for.
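For illustration, here's a minimal sketch of that kind of pre-filtering. It's just a regex pass over plain-text resumes; the field names and patterns are placeholders rather than any real screening tool, and a serious version would use proper NER (or another LLM pass) to catch names buried in free text:

```python
import re

# Sketch of a pre-filtering step: strip obvious identity fields from a
# plain-text resume before it reaches whatever model does the evaluation.
# The field list and patterns are placeholders, not a production anonymizer.

IDENTITY_FIELDS = re.compile(
    r"^(?:name|email|phone|address|pronouns)\s*:.*$",
    re.IGNORECASE | re.MULTILINE,
)
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(resume_text: str) -> str:
    """Blank out labeled identity fields and stray email addresses."""
    text = IDENTITY_FIELDS.sub("[REDACTED]", resume_text)
    return EMAIL.sub("[REDACTED]", text)

sample = """Name: Jane Doe
Email: jane.doe@example.com
Experience: 5 years of backend development
Education: B.S. in Computer Science
"""

print(redact(sample))
# The Name/Email lines become [REDACTED]; experience and education are untouched.
```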
Of course, that doesn't mean off-the-shelf tools are actually doing that, and there are other potential issues as well, such as biases around cities, schools, or any non-personal info on a resume that might correlate with race/gender/etc.
I think there's great potential for LLMs to reduce bias compared to humans, but half-assed implementations are currently the norm, so be careful.
Being factually incorrect about literally everything you said changes nothing? Okay.
More importantly, humans are capable of abstract thought. Your whole argument is specious. If you find yourself lacking the context to understand these numbers, you can easily seek context. A good starting place would be the actual paper, which is linked in OP's article. For the lazy: https://www.nature.com/articles/s41598-020-61146-4
It's 14,000 to 75,000, not millions.
Microplastics are in the range of one micrometer to five millimeters, not nanometers.
And you can't tell when something is active/focused or not because every goddamn app and web site wants to use its own "design language". Wish I had a dollar for every time I saw two options, one light-gray and one dark-gray, with no way to know whether dark or light was supposed to mean "active".
I miss old-school Mac OS when consistency was king. But even Mac OS abandoned consistency about 25 years ago. I'd say the introduction of "brushed metal" was the beginning of the end, and IIRC that was late 90s. I am old and grumpy.