this post was submitted on 28 Jul 2023
The wording of every single article has such an anti-AI slant, and I've felt the propaganda really working this past half year. Still nobody cares about advertising companies, but LLMs are the devil.
Existing datasets still exist. The bigger focus is in crossing modalities and refining content.
Why is the negative focus always on the tech and not the political system that actually makes it a possible negative for people?
I swear, most of the people with heavy opinions don't even know half of how the machines work or what they are doing.
Probably because LLMs threaten to (and have already started to) shittify a truly incredible number of things like journalism, customer service, books, scriptwriting, etc., all in the name of increased profits for a tiny few.
again, the issue isn't the technology, but the system that forces every technological development into functioning "in the name of increased profits for a tiny few."
that has been an issue for the fifty years prior to LLMs, and will continue to be the main issue after.
removing LLMs or other AI will not fix the issue. why is it constantly framed as if it would?
we should be demanding the system adjust for the productivity increases we've already seen, as well as for what we expect in the near future. the system should make every advancement a boon for the general populace, not the obscenely wealthy few.
even the fears of propaganda. the wealthy can already afford to manipulate public discourse beyond the general public's ability to keep up. the bigger issue is in plain sight, but is still being largely ignored for the slant that "AI is the problem."
Yep, the problem was never LLMs, but billionaires and the rich. The problem has always been the rich, for thousands of years, and yet they have been immensely successful at deflecting attacks onto other groups for those thousands of years. They will claim it's Chinese immigrants, or blacks, or Mexicans, or gays, or trans people. Now LLMs and AI are the new boogeyman.
We should be talking about UBI, not LLMs.
It’s a capitalism problem not an AI or copyright problem.
Sure, but let's say you try to solve this problem. What's the first thing you think a coordinated group could do: get sensible regulations about AI, or overthrow global capitalism? It's framed the way it is because, unless you want to revolt, that's the framework we're going to have to use to deal with it. I suppose we could always do nothing about AI specifically and focus on just overthrowing capitalism, but during that time lots of harm will come to lots of workers because of AI use. I don't think anticapitalism has reached a critical mass (we need this for any real system-wide attacks on and alternatives to capitalism), so I think dealing with this AI problem and trying to let everyone else know how it's really a capitalism thing would do more to build support and avert harm to workers. I hate that it's like that too, but those choices are basically the real options we have moving forward, from my POV.
You tell me what "sensible regulations about AI" are that don't hurt small artists and creators while centralizing the major players and enriching copyright-hoarding, copyright-maximalist corporations. (Seriously, this isn't bait. I've been racking my brain on the issue for months, because the only serious proposals so far involve expanding already far-too-broad copyright to cover things like training, or granting artists more rights to their work during their lifetime, something that will only hurt small artists.) We desperately need more fair use, not less. The only "sensible regulation" we should and could be talking about is some form of UBI. That's it.
UBI is a bandaid that doesn't solve the core issues of production under capitalism: the people with capital still control production, still make more money than everyone else, and still have more money and power to use influencing the politicians who write the laws surrounding UBI. And expecting me to solve the AI problem in a comment section is like me asking you to implement UBI in a way that landlords don't just jack up rent or businesses don't inflate prices with more cash and demand floating around. Also, what's your plan for when the level of UBI legislated, or the planned increases in UBI, is no longer sufficient to pay for housing, food, and other necessities? What do you do to counter the fact that the capitalists still have more access to politicians and media empires they can use to discredit and remove UBI?
UBI is a bandaid, sure. But bandaids actually help; "sensible AI regulations" (a nothing phrase that will most likely materialize as yet another expansion of copyright) will actively make things worse. UBI is achievable, and can be expanded on once it's enacted. You establish protections and regulations that actually help people, and dare the opposition to ever try to take them away, instead of carrying water for copyright maximalists along the way.
Exactly. We need to break apart copyright with a crowbar. It's a broken system that only benefits the rich, and AI has the opportunity to turn the entire system into a pile of unenforceable garbage.
Why does legislation or regulation surrounding AI necessarily have to be copyright maximalism, but UBI regulations are somehow, in some unspecified way, going to be strong enough to prevent lobbying from the people who still control the means of production? Your argument gets to use the magic regulations that don't get challenged or changed, but my argument is stuck with the one mainstream idea that has people worried?
Because those are the only "sensible AI regulations" seriously being talked about. Tell me any other actual regulatory schemes being proposed that aren't, and I'll be happy to talk about those, and likely support them. I'm not getting the hostility, btw. FWIW this (getting stronger consumer protection laws passed) is literally my job; I'm going to go out on a limb here and say we probably agree on more than we disagree, based on your comment history. Obviously UBI won't be enough (will never be enough) to oust capitalists from having an outsized influence on policy, but what I don't support at all are regulations that would further entrench the corporate IP holders and tech companies that would actually benefit from the copyright-maximalist proposals currently being bandied about by the fear-mongering anti-generative-AI discourse.
Fundamentally we’re not going to copyright our way out of the externalities AI brings with it.
My argument is not limited to the only regulations currently being talked about any more than your argument is limited by the types of UBI currently being talked about, because I'm not hearing any talk on UBI.
Friend, you haven't proposed ANYTHING except "sensible" regulations. I've asked you to elaborate on what those might be, but I'm cautioning against the regulatory schemes currently being proposed and considered. Maybe I'm too in the weeds on this issue (again, literally my job), but I can tell you the only proposals actually being discussed right now are copyright-maximalist.
Because the Western world has no framework for intellectual property other than copyright claims, and the people who write the laws and discuss the solutions are the people already at the top who have property to protect. If copyright in its entirety disappeared tomorrow with nothing in its stead, the very first people to lose, and lose the most, would be the small artists, because there would be nothing to stop people or companies with massive resource pools from just steamrolling over them. In fact, I can't see that move (eliminating all copyright) doing anything but accelerating a monopoly forming around whichever company can steal and distribute art the best. My best guess says that if you try to solve that problem, you'll find solutions that will at least alleviate the AI problem for smaller creators.

But again, I don't know what you're expecting of me. If I had the regulation all figured out, I probably wouldn't just be jawing about it on Lemmy, and you yourself said you cannot figure it out and it's related to your field. Because right now, like you said, more copyright will slowly strangle creators, and like I just said, no copyright will let them be steamrolled by anyone who can sell or distribute their work for a lower price than they can.

UBI makes sure the artist doesn't die. Great, honestly, better than just letting people drop. But if all of an artist's work can simply be stolen for profit by those who didn't make it, and they have to subsist on the payments everyone gets, then there's really no such thing as an artist in terms of vocation anymore, nor would there be enough opportunities to profit from your own artistic work. I don't know the solution any more than you do, but to me UBI isn't good enough, because I believe being an artist is labor worth being paid for. I think we need something targeted at the AI problem, not something reactionary, and not necessarily something targeting the AI itself; perhaps targeting the relationship between the people who create art, the people who distribute it, and the people who profit from it, since often those aren't all the same person.
You're making a lot of jumps here. Copyright need not be eliminated entirely; that won't happen anytime soon. But fair use exists for a reason, and generative art is, if anything, a fair use machine more than a plagiarism one. Everything is a remix, and artists making art using AI tools deserve to be able to monetize that work. I, too, believe artists deserve to get paid. I've worked personally helping small artists and creators navigate those waters, writing license agreements and helping them argue fair use to create and monetize their work.

Expanding copyright terms further than they already are won't help those artists get paid. It will further centralize the major corporations and big tech. It's a siren song, a distraction from the real issue: that AI represents a far bigger threat than JUST to artists and writers. There is major economic upheaval right around the corner. There will be massive job displacement. Rather than further enriching corporations for misplaced short-term gain, we must consider how we can allow artists and creators and everyone else with a creative spark to continue to exist and create art without further building up these legacy structures, which are already fraying at the seams and no longer fulfilling their original purpose, having long since been co-opted by the Disneys of the world.

We need more fair use, not less. UBI of some form is the answer, as far as I can tell. We need to start providing people the freedom to work on what they want without the crushing pressure of imminent destitution if we hope to have artists making art. I say again: copyright will fundamentally not be the solution to the externalities of AI. We need more creative, more radical solutions. https://abovethelaw.com/legal-innovation-center/2023/07/21/stop-rushing-to-copyright-as-a-tool-to-solve-the-problems-of-ai/
I still have yet to argue in favor of more copyright; I don't know why you're still arguing against something we both agreed was bad. But this is the first time in a while I've heard UBI called radical in the face of capitalism. I don't think it gets at the root of the problem of capitalism, or of the smaller AI-versus-artist problem, as I've explained previously; the relations of production and profit don't budge an inch. I've also never said I oppose UBI. In fact, I said if it helps keep people fed and sheltered, that's great. I just see it as a stopgap 'solution' that will buy time and comfort until the market adapts to recapture that money, since it can expect every citizen to get it every month, and I don't trust a free market not to ruin that. Furthermore, specifically on AI versus artists, the consequences are already happening, and the push at a political level for UBI is lower than when Andrew Yang was running. Since the political level is where UBI would have to come from, I don't like the idea of not trying anything else until then.
Andrew Yang is a neoliberal hack, and I promise you UBI is about as radical as we can hope for that's still attainable in the current policy landscape. Anything more radical and we're talking actual get-out-your-guillotines revolution. Fair enough, though. I would suggest that the way you talk about addressing AI as it relates to artists dances around copyright-land more than it directly and broadly focuses on labor, which IMO is the best way to tackle AI externalities. What we should be doing is building bridges between artists and the rest of labor, finding ways out of the cyberpunk dystopia not by relying on intellectual property but by building stronger social programs that lift us all up.
Simple: make UBI inflation-adjusted. Problem solved.
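For what that could mean concretely, here is a minimal sketch in Python of indexing a payment to a consumer price index. The baseline amount, the CPI figures, and the annual adjustment cadence are all made-up illustration values, not anyone's actual proposal.

```python
# Hypothetical illustration: scale a UBI payment by the change in a
# consumer price index (CPI) since the year the payment was set.
# All numbers below are invented for the example.

def adjust_ubi(base_payment: float, base_cpi: float, current_cpi: float) -> float:
    """Return the payment scaled by cumulative CPI growth since the base year."""
    return base_payment * (current_cpi / base_cpi)

# Example: a $1,000/month payment set when CPI was 280.0, recalculated
# after CPI rises to 294.0 (5% cumulative inflation) -> $1,050/month.
print(round(adjust_ubi(1000.00, base_cpi=280.0, current_cpi=294.0), 2))
```

The point is only that the indexing arithmetic is trivial to specify; the hard part, as the rest of the thread notes, is getting it legislated and keeping it funded.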
There's no such thing as "sensible regulations" for AI. AI is a technological advantage. Any time you regulate that advantage, other groups that don't have those regulations will fuck you over. Even if you start talking about regulations, the corpos will take over and fuck you over with regulations that only hurt the little guy.
Hell, even without regulations, we're already seeing this on the open-source vs. capitalism front. Google admitted that it lost some advantages because of open-source AI tools, and now these fucking cunts are trying to hold on to their technology as tightly as possible. This is technology that needs to be free and open-source, and we're going to see a fierce battle with multi-billion-dollar capitalist corporations clawing back whatever technological gains OSS acquired, until you're forced to spend hundreds or thousands of dollars to use a goddamn chess bot.
GPLv3 is key here, and we need to force these fuckers into permanent copyleft licenses that they can't revoke. OpenAI is not open, StabilityAI is not the future, and Google is not your friend.
Isn't forcing a copyleft license exactly the kind of regulation that would be sensible, though? So why wouldn't regulations and legislation work, if that's your solution too?
There's never been a bill with the words "copyleft" or "GNU Public License" in it at all, and thanks to corpo lobbyists, there probably never will be. We have to be realistic here, and the only realistic option is to encourage as much protected open-source software on the subject as possible.
This isn’t a technological issue, it’s a human one
I totally agree with everything you said, and I know that it will never ever happen. Power is used to get more power. Those in power will never give it up, only seek more. They intentionally frame the narrative to make the more ignorant among us believe that the tech is the issue rather than the people that own the tech.
The only way out of this loop is for the working class to rise up and murder these cunts en masse
Viva la revolucion!
I completely agree with you. AI should be seen as a great thing, but we all know that the society we live in will not pass those benefits on to the average person; in fact, it'll probably be used to make life worse. From a leftist perspective it's very easy to see this, but from the normie position, at least in the US, people aren't thinking about how our society slants AI towards being evil and scary, they just think AI is evil and scary. Again, I completely agree with what you've said; it's just important to remember how reactionary the average person is.
Exactly. I work in AI (although not the LLM kind, just applying smaller computer vision models), and my belief is that AI can be a great liberator for humanity if we have the right political and economic apparatus. The question is what that apparatus is. Some will say it's an inherent feature of capitalism, but that's not terribly specific, nor does it explain the relatively high wealth equality that existed briefly during the middle of the 20th century in America. I think some historical context is important here.
Historical Precedent
During the Industrial Revolution, we had unprecedented growth in average labor productivity due to automation. From a naïve perspective, we might expect increasing labor productivity to result in improved quality of life and fewer working hours, i.e., the spoils of that productivity being felt by all.
But what we saw instead was that workers lived in squalor and abject poverty, while the mega-rich captured those productivity gains and became stupidly wealthy.
Many people at the time took note of this and sought to answer this question: why, in an era of greater-than-ever labor productivity, is there still so much poverty? Clearly all that extra wealth is going somewhere, and if it's not going to the working class, then it's evidently going to the top.
One economist and philosopher, Henry George, wrote a book exploring this very question, Progress and Poverty. His answer, in short, was rent-seeking:
Rent-seeking takes many forms. To list a few examples:
George's argument, essentially, was that the privatization of the economic rents borne of god-given things — be it land, minerals, or ideas — allowed the rich and powerful to extract all that new wealth and funnel it into their own portfolios. George was not the only one to blame these factors as the primary drivers of sky-high inequality; Nobel-prize winning economist Joseph Stiglitz has stated:
George's proposed remedies were a series of taxes and reforms to return the economic rents of those god-given things to society at large. These include:
such as in the Norwegian model:
(continued)
Present Day
Okay, so that's enough about the past. What about now?
Well, monopolization of land and housing via the housing crisis has done tremendous harm:
And that is just one form of rent-seeking. Imagine the collective toll of externalities (e.g., the climate crisis), monopolistic/oligopolistic markets such as energy and communications, monopolization of valuable intellectual property, etc.
So I would tend to say that — unless we change our policies to eliminate the housing crisis, properly price in externalities, eliminate monopolies, encourage the growth of free and open IP (e.g., free and open-source software, open research, etc.), and provide critical public goods/services such as healthcare and education and public transit — we are on a trajectory for AI to be Gilded Age 2: Electric Boogaloo. AI merely represents yet another source of productivity growth, and its economic spoils will continue to be captured by the already-wealthy.
I say this as someone who works as an AI and machine learning research engineer: AI alone will not fix our problems; it must be paired with major policy reform so that the economic spoils of progress are felt by all, not just the rich.
Joseph Stiglitz, in the same essay I referred to earlier, has this to say:
It is a completely understandable stance in the face of the economic model, though. Your argument could be fitted to explain why firearms shouldn't be regulated at all: it isn't the technology, so we should allow the sale of actual machine guns (outside of weird loopholes) and grenade launchers.
The reality is that the technology is targeted by the people affected by it because we have no hope of changing the broader system, which exists to serve a handful of parasitic non-working vampires at the top of our societies.
Edit: not to suggest that I'm against AI and LLMs. I want my fully automated luxury communism and I want it now. However, I get why people are turning against this stuff. They've been fucked six ways from Sunday and they know how this is going to end for them.
Plus, a huge amount of AI doomerism is being pushed by the entrenched monied AI players, like OpenAI and Meta, in order to use a captured government to regulate potential competition out of existence.
Technology is but a tool. It cannot tell you how to use it. If it's in the hands of a writer, it's a helpful sounding board. If it's in the hands of a Netflix producer, it's an anti-labor tool. We need to protect people's livelihoods.
Journalism and customer service can't possibly get worse than they already are.
Books and movies are not at risk - there will always be lots of people willing to write good content for both, and the best content will be published. And "the best" will be a hybrid of humans and AI working together - which is what has some people in that industry so scared. Just like factory workers were scared when machines entered that industry.
It's an irrational fear - there are still factory workers today. Probably more than ever. And there will still be human writers - it's an industry that will never go away.
If, however, you refuse to work with AI... then yeah, you're fucked. Pretty soon you'll be unemployable and nobody will publish your work, which is why the movie publishers aren't going to budge. They recognise a day is coming when they can't sell movies and TV shows that were made exclusively by humans, and they are never going to sign a contract locking them into a dead-end path.
You make it sound like quality is what will increase with human/AI partnership. What will realistically increase is the expected rate of output. Why can't you deliver a book every year with an AI ghostwriting? People who work slowly or meticulously will be phased out by those who can quickly throw together a collage of their own words and guided AI filler. It's amazing and futuristic. It can be very useful and inspirational. But I do not share your optimism that it will make creative industries better. It will allow a single person to put together a script that would've taken a team... but the better content will now be drowning in this sea. Unfortunately, I expect an equivalent of the media explosion that happened when the reality TV format became an acceptable thing, eventually leading to shows half filmed on phones. The end result will be double the Marvel movies every year.
Yah, I think it's fairly obvious that people are both fascinated and scared by the tech, and also acknowledge that under a different economic structure it would be extremely beneficial for everyone and not just the very few. I think it's more annoying that people like you assume that everyone is some sort of diet Luddite when they're just trying to see how the tool has the potential to disrupt many, many jobs, and probably not in a good way. And don't give me that tired comparison to the industrial revolution, because it's a complete false equivalence.
I am so tired of techno-fetishist AI bros complaining every single time any of the many ways in which AI will devastate and rot our daily lives is brought up.
"It's not the tech! It's the economic system!"
As if they're different things? Who is building the tech? Who is pouring billions into the tech? Who is protecting the tech from proper regulation, smartass? I don't see any worker coops using AI.
"You don't even know how it works!"
Just a thought-terminating cliché to try to avoid any discussion or criticism of your precious little word generators. No one needs to know how a thing works to know its effects. The effects are observable reality.
Also, nobody cares about advertising companies? What the hell are you on about?
they are different things. it's not exclusively large companies working on and understanding the technology. there's a fantastic open-source community, and a lot of users of their creations.
would destroying the open-source community help prevent big tech from taking over? that battle has already been lost and needs correction. crying about the evil of A.I. doesn't actually solve anything. "proper" regulation is also relative. we need entirely new paradigms for understanding things like "I.P." which aren't based on a century of lobbying from companies like disney, etc.
and yes, understanding how something works is important for actually understanding its effects, especially when a lot of tosh is spewed by media sites that only care to say whatever gets people to engage.
i'd say only a fraction of what i see as vaguely directed anger towards anything A.I. is actually aimed at areas with severe and important breaches of public trust and safety, and i think the advertising industry should be the absolute focal point of the danger of A.I.
Are you also arguing against every other technology that has had their benefits hoarded by the rich?
It's mostly large companies. Some models are open source (of which only some are also community-driven), but the mainstream ones are the ones being entirely funded by, legally protected by, and pushed onto everything by capitalist oligarchs.
What other options do you have? I'm sick and tired of people like you seeing workers lose their jobs, seeing real people used like meat puppets by the internet, seeing so many artists risking their livelihoods, seeing that we'll have to lose faith in everything we see and read because it could be unrecognizably falsified, and CLAIMING you care about it, only to complain every single time any regulation or way to control this is proposed, because you either don't actually care and are just saying it for rhetoric, or you do care but only to the point that you can still use your precious little toys restriction-free. Just overthrow the entire economic system of all countries on earth, otherwise don't do anything, let all those people burn! Do you realize how absurd you sound?
It's sociopathic. I don't say it as an insult, I say it applying the definition of the word: a complete lack of empathy and care for your fellow human beings, viewing an immaterial piece of technology, nothing but a thoughtless word generator, as inherently worth more than the livelihoods of millions. I'm absolutely sick of it. And then you have the audacity to try to seem like the reasonable ones when arguing about this, knowing that if you had your way so many would suffer. Framing it as anti-capitalism, knowing that if you had your way you'd pave the way for the oligarchs to make so many more billions off of that suffering.
it's like you just ignored my main points.
get rid of the A.I. = the problem is still the problem. it has been, especially for the past 50 years; any non-A.I. advancement continues the trend in the exact same way. you solved nothing.
get rid of the actual problem = you did it! now all of technology is a good thing instead of a bad thing.
false information? already a problem without A.I., always has been. media control, paid propagandists, etc. if anything, A.I. might encourage the general population to learn what critical thought is. it's still just as bad if you get rid of A.I.
" CLAIMING you care about it, only to complain every single time any regulation or way to control this is proposed, because you either don’t actually care and are just saying it for rhetoric" think this is called a strawman. i have advocated for particular A.I. tools to get much more regulation for over 5-10 years. how long have you been addressing the issue?
you have given no argument against A.I. currently that doesn't boil down to "the actual problem is unsolvable, so get rid of all automation and technology!" when addressed.
which again, solves nothing, and doesn't improve anything.
should i tie your opinions to the actual result of your actions?
say you succeed. A.I. is gone. nothing has changed. inequality is still getting worse and everything is terrible. congratulations! you managed to prevent countless scientific discoveries that could help countless people. congrats, the blind and deaf lose their potential assistants. the physically challenged lose potential house-helpers. etc.
on top of that, we lose the biggest argument for socializing the economy going forward, through massive automation that can't be ignored or denied while we demand a fair economy.
for some reason i expect i'm wasting my time trying to convince you, as your argument seems more emotionally motivated than rationalized.
What are you on about? Who's talking about "completely getting rid of AI"? And you accuse me of strawmanning? I didn't even argue that it should be stopped. I argued that every single time anyone tries or suggests doing anything to curtail these things, people like you jump out to vehemently defend your precious programs from regulation or even just criticism, because we should either completely destroy capitalism or not do anything at all; there is no in-between, there is nothing we can do to help anyone if it's not that.
Except there is. There are plenty of things that can be done to help the common people besides telling them "well, just tough it out until we someday magically change the fundamentals of the economic system of the entire world, nerd." It would just involve restricting what these things can do. And you don't want that. That's fine, but own up to it. Trying to project this image that you really do care about helping, while not being willing to help at all unless it's via an incredibly improbable miracle, pisses me off.
For someone who accuses others of not understanding how AI works, to then say something like this is absurd. I hope you're being intellectually dishonest and not just that naive. There is absolutely no comparison between a paid propagandist and the indistinguishable replicas of real things you could fabricate with AI.
People are already abusing voice actors by sampling them and making covers with their voices, without their permission and certainly without paying them. We can already make amateur videos of a person speaking to pair up with the generated audio. In a few years, when the technology inevitably gets better, I will be able to perfectly fabricate a video that can ruin someone's life with a few clicks. If this process is sophisticated enough, there will be minimal points of failure and almost nothing to investigate when trying to figure out whether the video is false. No evidence will ever mean anything; it could all be fabricated. If you don't see how this is considerably worse than ANYTHING we have right now for falsifying information, then there is nothing I can say to ever convince you. "Oh, but if nothing can be demonstrably true anymore, the masses will learn critical thought!" Sure.
This is what I mean. You people lack any kind of nuance. You can only work in this "all or nothing" thinking. No "anti-AI" person wants to fully and completely destroy every single machine and program powered by artificial intelligence, jesus christ. It's almost like it's an incredibly versatile tool with many uses, good and bad. It's almost like we should, call me an irrational emotional snowflake if you want, put regulations in place so the bad uses are heavily restricted, so we can live with this incredible technology without feeling constantly under threat, because we are using it responsibly.
Instead, what you propose is: don't you dare limit anything, open the floodgates, and let's instead change the economic system so that the harmful uses don't also destroy people economically. Except the changes you want not only don't fix some of the problems that unregulated, free-for-all AI use brings, they also go against the interests of every single person with power in this system, so they have an incredibly minuscule chance of ever being close to happening, much less happening peacefully. I'd be okay with that as your ultimate goal, but if you're not willing to compromise on something that could minimize the harm this is doing in the meantime, even if it isn't a perfect solution, why shouldn't I assume you just don't care? What reason are you giving me not to believe that you simply prefer seeing the advancement of technology over the security of your fellow humans, and are just saying this as an excuse to keep it that way?
Right, because that's the way to socialize the economy: by having a really good argument. I'm sure it will convince the people who have unmeasurable amounts of wealth and power precisely because the economy is not socialized. It will be so convincing that they will willingly give all of that up.
then what the fuck are you even arguing? i never said "we should do NO regulation!" my criticism was against blaming A.I. for things that aren't problems created by A.I.
i said "you have given no argument against A.I. currently that doesn’t boil down to “the actual problem is unsolvable, so get rid of all automation and technology!” when addressed."
because you haven't made a cohesive point towards anything i've specifically said this entire fucking time.
are you just instigating debate for... a completely unrelated thing to anything i said in the first place? you just wanted to be argumentative and pissy?
i was addressing the general anti-A.I. stance that is heavily pushed in media right now, which is generally unfounded and unreasonable.
I.E. addressing op's article with "Existing datasets still exist. The bigger focus is in crossing modalities and refining content." i'm saying there is a lot of UNREASONABLE flak towards A.I. you freaked out at that? who's the one with no nuance?
your entire response structure is just... for the sake of creating your own argument instead of actually addressing my main concern: the unreasonable bias and push against the general concept of A.I. as a whole.
i'm not continuing with you because you are just making your own argument and being aggressive.
I never said "we can't have any regulation"
i even specifically said " i have advocated for particular A.I. tools to get much more regulation for over 5-10 years. how long have you been addressing the issue?"
jesus christ you are just an angry accusatory ball of sloppy opinions.
maybe try a conversation next time instead of aggressively wasting people's time.
So, like, everything? I've talked about the problems and shit AI will cause us, and your only response, consistently, was "Yeahhhh, but if we completely ban my precious AI then we won't have all of its nice things! Better to wait until capitalism is magically solved." Then the problems that weren't economics-related you handwaved away. Please illuminate me on a problem you think is real, caused by AI, and that you would be willing to regulate within the bounds of our current system.
I've argued against your mentality and the mentality of the people that always show up on posts concerned about AI to defend it. I've even responded to specific phrases you said. What else do you even want?
If you don't want people to feel threatened by AI, maybe be willing to fix its threats? Maybe don't just go to something people find concerning and say "we can't dare to do anything about this!" and try to reframe it as some sort of tragic prevention by the ignorant masses?
That means nothing to me. Those are just words. Your actions have been vehemently defending AI and trying to convince me that curtailing it is pointless, while also trying to appear concerned about its threats. Those are completely contradictory positions that you held now, and that's what I pointed out. I don't care what you have been doing for years.
When have I ever obligated you to respond to me? I've never hidden that this topic, and people who think like you, make me angry. If you didn't want to deal with it, you could have just ignored me.
It does make me angry. It makes me angry to see so many people threatened so unfairly. It makes me angry to see people not care about that and prefer the further sophistication of a thoughtless algorithm over lives, but at least those people are honest. Dishonesty is what makes me livid: those who have the same attitude but know they'd sound awful if they said it straightforwardly, so they try to find a hip and cool thing to parade as. It's totally anti-capitalist to not want to stop capitalist corporations from abusing workers to replace them, bro, trust me.