prototype_g2


First of all, I'm not sure this is the best community for this, so if you think there is a more suitable one, please inform me.

So I've been looking for manufacturers that sell computers with Linux out of the box, and I remembered hearing about Tuxedo Computers. Some people seem to really like them, but I've also heard some people complaining about them.

And so I've come here to ask this community: what are your experiences with this vendor? Is there somewhere else I should look? Thanks in advance.

[–] prototype_g2@lemmy.ml 13 points 2 months ago (1 children)

Not if you are part of the AI-bros club. There is a reason marketing agencies insist on using the term Artificial Intelligence.

Unfortunately, this is not common knowledge, as experts and marketing agencies explain Machine Learning to the masses by saying "it looks at the data and learns from it, like a human would", which, combined with the name Artificial Intelligence and other terms like Neural Networks and Machine Learning, can make someone think these things are actually intelligent.

Furthermore, we humans can see humanity where there is none. We can see faces where there are no faces; we can empathize with things that aren't even alive. So when this thing shows up that is capable of producing somewhat coherent text, people are quick to anthropomorphize the machine. On top of that, we are very language-focused: if someone is really good with the language they speak, they are usually seen as more intelligent.

And finally, never underestimate tech illiteracy.

[–] prototype_g2@lemmy.ml 2 points 2 months ago (1 children)

That is true. Take, for example, movies. Film studios with big budgets are usually very risk-averse, simply because the cost of failure is so high. So they have to make sure they can turn a profit. But how can you make sure any given thing will be profitable? Well, that is a prediction, and to predict anything, you need data to base that prediction on. Predictions are based on past events. And so they make sequel after sequel. They make things that have been proven to work. New things, by virtue of being new, don't have tons of data (past examples) to base good predictions on, so studios avoid them. This results in the homogenization of art. Homogenization induced by Capital, as Capital only sees value in profit, and thus, for Capital, only predictably profitable art is given the resources to flourish.

Machine Learning generated images are the epitome of this. All output is based on previous input. The machine is built not to deviate too much from its training data (that is what minimizing the loss function means), and thus it struggles with anything it has little data on, like original ideas.
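To make that concrete, here is a minimal sketch (illustrative only, not any particular image model) of what "minimizing a loss function" means: training is just nudging parameters so the outputs deviate as little as possible from the training data.

```python
# Minimal sketch: gradient descent tunes a model so its outputs deviate
# as little as possible from the training data (here, a plain linear
# model with a mean-squared-error loss). Numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))             # training inputs
y = X @ np.array([1.5, -2.0, 0.5])        # training targets
w = np.zeros(3)                           # model parameters

for _ in range(500):
    pred = X @ w                          # model output
    grad = 2 * X.T @ (pred - y) / len(y)  # gradient of the MSE loss
    w -= 0.1 * grad                       # step toward lower loss

# The optimizer only ever rewards staying close to what was already
# seen; deviation from the training data is, by construction, a higher loss.
```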

> I think that what we’re likely to see are parallel worlds of art. The first and biggest being the homogenous, public and commercial one which we’re seeing now but with more of it produced by machines, and the other a more intimate, private and personal one that we discover by tuning back into our real lives and recognising art that has been made by others who are doing the same.

That's kind of already a thing, just without the AI. As in the example above, Capital wants predictable profit, so only the most widely appealing, proven-to-be-profitable art gets significant budgets. Creative and unique ideas are just too risky, and are therefore relegated to the indie space, where, should any ever become successful, Capital is willing to help... under the condition that it gets most of the money (think, for example, of how Spotify takes most of the revenue made by the songs it distributes).


By "Capital" I mean those who own things necessary to produce value.

[–] prototype_g2@lemmy.ml 1 points 2 months ago

You have yet to refute the deduction-based argument:

1. If you use a machine to think for you, you will stop thinking.
2. Not thinking leads to a degradation of thinking skills.
3. Therefore, using a machine to think for you will lead to a degradation of thinking skills.

This is not inductive reasoning, like a study, where you look at data and induce a conclusion. This is pure deductive reasoning. Refute it.

> That’s a lot of non-scientific blogs to talk about the non-scientific study I pointed out. Still no objective evidence.

They are a bunch of blogs by people sharing how, after using AI for extended periods of time, their ability to solve problems degraded because they stopped exercising and sharpening their cognitive skills.

So what would satisfy your need for objective evidence? What would I need to show you for you to change your mind? How would a satisfactory study be conducted?

> I didn’t say much about the “hominem” but I think you’re defining Microsoft?

"Defining Microsoft"... I didn't define Microsoft?

Did you mean "defend"? What do you mean, "defend"? Again, ad hominem: instead of substantiating why you say the document doesn't count, you attack the ones who made it.


For all your dismissals, you have yet to refute the argument all these people make:

1. If you use a machine to think for you, you will stop thinking.
2. Not thinking leads to a degradation of thinking skills.
3. Therefore, using a machine to think for you will lead to a degradation of thinking skills.

All you have to do is refute this argument, and then it will be up to me to defend myself. Refute the argument. It's deductive reasoning.

[–] prototype_g2@lemmy.ml 0 points 2 months ago (2 children)

The classic Ad Hominem. Instead of actually refuting the arguments, you instead attack the ones making them.

So, tell me, which part of "As Bainbridge [7] noted, a key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise." is affected by the conflict of interest with the company? This is a note made by Bainbridge. The argument is as follows:

1. If you use a machine to think for you, you will stop thinking.
2. Not thinking leads to a degradation of thinking skills.
3. Therefore, using a machine to think for you will lead to a degradation of thinking skills.

It is not too hard to see that if you stop doing something for a while, your skill at that thing will degrade over time. Part of getting better is learning from your own mistakes. The AI will rob you of those learning experiences.

What is the problem with the second quote? It is not an opinion; it is an observation.

Others have noticed this already:

https://www.darrenhorrocks.co.uk/why-copilot-making-programmers-worse-at-programming/

https://www.youtube.com/watch?v=8DdEoJVZpqA

https://nmn.gl/blog/ai-illiterate-programmers

https://www.youtube.com/watch?v=cQNyYx2fZXw


This, of course, only happens if you use the AI to think for you.

[–] prototype_g2@lemmy.ml 0 points 2 months ago (4 children)

Microsoft did a study on this, and they found that those who made heavy use of AI tools said they felt dumber:

"Such consternation is not unfounded. Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved. As Bainbridge [7] noted, a key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise."

Cognitive ability is like a muscle. If it is not used regularly, it will decay.

The study also found that it made people less creative:

"users with access to GenAI tools produce a less diverse set of outcomes for the same task, compared to those without. This tendency for convergence reflects a lack of personal, contextualised, critical and reflective judgement of AI output and thus can be interpreted as a deterioration of critical thinking."

LINK

[–] prototype_g2@lemmy.ml 2 points 2 months ago (1 children)

Well, it depends on your definition of "best". But if we go by total score, my "best" post is this one and my "best" comment is this one.

[–] prototype_g2@lemmy.ml 0 points 2 months ago (2 children)

> This is a matter of coding a good enough neuron simulation, running it on a powerful enough computer, with a brain scan we would somehow have to get - and I feel like the brain scan is the part that is farthest off from reality.

So... Sci-Fi technology that does not exist. You think the "neurons" in the Neural Networks of today are actually neuron simulations? Not by a long shot! They are not even trying to be. "Neuron" in this context means "thing that holds a number from 0 to 1". That is it. There is nothing else.
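For concreteness, here is roughly all an artificial "neuron" amounts to (a minimal sketch; the function name, weights and inputs are made up for illustration):

```python
# An artificial "neuron": a weighted sum of inputs squashed into (0, 1).
# No biology is being simulated; the numbers below are arbitrary examples.
import math

def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid keeps the output in (0, 1)

print(neuron([0.2, 0.7], [0.5, -1.3], 0.1))  # just a number, nothing more
```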

> That’s an unnecessary insult - I’m not advocating for that, I’m stating it’s theoretically possible according to our knowledge, and would be an example of a computer surpassing a human in art creation. Whether the simulation is a person with rights or not would be a hell of a discussion indeed.

Sorry about the insulting tone.

> I do also want to clarify that I’m not claiming the current model architectures will scale to that, or that it will happen within my lifetime. It just seems ridiculous for people to claim that “AI will never be better than a human”, because that’s a ridiculous claim to have about what is, to our current understanding, just a computation problem.

That is the reason why I hate the term "AI". You never know whether the person using it means "Machine Learning Technologies we have today" or "Potential technology which might exist in the future".

> And if humans, with our evolved fleshy brains that do all kinds of other things can make art, it’s ridiculous to claim that a specially designed powerful computation unit cannot surpass that.

Yeah... you know not every problem is computable, right? The halting problem is the classic example.
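For reference, the standard argument goes like the sketch below; `halts` is a hypothetical oracle that provably cannot exist, and the stub is only there to show the contradiction.

```python
# Turing's diagonal argument, as a sketch. Assume a hypothetical oracle
# halts(f) that returns True iff f() eventually stops. No such function
# can actually be implemented.

def halts(f):
    ...  # hypothetical oracle, NOT implementable

def paradox():
    if halts(paradox):  # if the oracle predicts "paradox halts"...
        while True:     # ...loop forever, contradicting the oracle
            pass
    # ...otherwise halt immediately, again contradicting the oracle
```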

Also, I'm not interested in discussing Sci-Fi future tech. At that point we might as well be talking about unicorns, since it is theoretically possible for future us to genetically modify an equine and give it a horn on the forehead.


Also, why would you want such a machine anyways?

[–] prototype_g2@lemmy.ml 0 points 2 months ago (4 children)

> It’s not a matter of if “AI” can outperform humans, it’s a matter of if humanity will survive to see that and how long it might take.

You are not judging what is here. The tech you speak of, the kind that will surpass humans, does not exist. You are making up a Sci-Fi fantasy and acting like it is real. You could say it may perhaps, at some point, exist. At that point we might as well start talking about all sorts of other technically possible Sci-Fi technology which does not exist beyond fictional media.

Also, would simulating a human and then forcing them to work non-stop count as slavery? It would. You are advocating for the creation of synthetic slaves... But we should save moral judgement for when that technology is actually on the horizon.

AI is a bad term because when people hear it they start imagining things that don't exist, and start operating in the imaginary rather than in what is actually here. Because what is here cannot go beyond what is already there; that is the nature of minimizing the loss function.

[–] prototype_g2@lemmy.ml -1 points 2 months ago (1 children)

> Just because a drawn picture won once means squat

True, a sample of one means nothing, statistically speaking.

> AI can be used alongside drawing

Why would I want a function to draw for me if I'm trying to draw myself? In what step of the process would it make sense to use it?

> for references for instance

AI is notorious for getting wrong exactly the details someone would pick a reference image for. Linkie

> It’s a tool like any other

No, they are not "a tool like any other". I do not understand how you could see going from drawing on a piece of paper to drawing much the same way on a screen as equivalent to an autocomplete function operated by typing words into one or two prompt boxes and adjusting a bunch of knobs.


Also, just out of curiosity, do you know what "back propagation" is, in the context of Machine Learning? And "neuron" and "learning"?
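(For anyone reading along, here is a minimal sketch of what those terms actually name; the numbers are made up for illustration.)

```python
# Backpropagation for a single sigmoid neuron with a squared-error loss:
# "learning" is just nudging weights in the direction that reduces error.
import math

x, target = 0.5, 1.0   # one training example (made-up numbers)
w, b = 0.3, 0.0        # the neuron's parameters

for _ in range(1000):
    out = 1 / (1 + math.exp(-(w * x + b)))  # forward pass (the "neuron")
    d_out = 2 * (out - target)              # dLoss/dOut
    d_z = d_out * out * (1 - out)           # chain rule through the sigmoid
    w -= 0.5 * d_z * x                      # gradients flow "back" to the
    b -= 0.5 * d_z                          # weights: back propagation

print(out)  # creeps toward the target: error minimization, not thought
```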

[–] prototype_g2@lemmy.ml 1 points 3 months ago (1 children)

Desiring that the people who make art not starve to death is too much to ask now? We live under Capitalism! It's money or death.

[–] prototype_g2@lemmy.ml 3 points 3 months ago (2 children)

> That’s not how AI works

How does it work then? I see lots of people claiming to know how it works... only to have nothing more than a superficial understanding of how the training actually works.

> How is access limited and at the same time you are bullying everyday Joes who are actually using it?

Ah yes, because people in third-world countries earning $1 an hour or less to label the training data for the image generators can 100% afford the $10/month subscription or a PC to run it locally.

> Delete all software and turn off your computer or be a hypocrite.

How so?

> The stuff they use for training is free for any artist to train on.

The fact that you think AI training and humans looking at things are the same thing tells me you know neither how humans make art nor how machines train.

> You don’t own the definition of art and nobody you will encounter in a post of any sort is even doing it for major profit.

  1. True. However, this argument should not be about semantics;
  2. I got news for ya.

> You don’t own the definition of art.

This is not about definitions; I won't spend time arguing semantics with you. Also, why restate yourself?

> AI is for everyone, but is made for the rich to get richer, like literally everything else you see or do online

Without social development, all forms of technological development will do nothing but allow for greater forms of torment.
