this post was submitted on 20 May 2024
83 points (98.8% liked)

World News


Guardrails to prevent artificial intelligence models behind chatbots from issuing illegal, toxic or explicit responses can be bypassed with simple techniques, UK government researchers have found.

The UK’s AI Safety Institute (AISI) said systems it had tested were “highly vulnerable” to jailbreaks, a term for text prompts designed to elicit a response that a model is supposedly trained to avoid issuing.

The AISI said it had tested five unnamed large language models (LLMs) – the technology that underpins chatbots – and circumvented their safeguards with relative ease, even without concerted attempts to beat their guardrails.

“All tested LLMs remain highly vulnerable to basic jailbreaks, and some will provide harmful outputs even without dedicated attempts to circumvent their safeguards,” wrote AISI researchers in an update on their testing regime.
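For context on what "testing a model against basic jailbreaks" can look like in practice, here is a minimal sketch of an automated refusal check. It is not the AISI's actual harness: the query_model stub, the REFUSAL_MARKERS list and the placeholder probes are all assumptions for illustration, standing in for a real chat API and a curated set of probe prompts.

```python
# Minimal sketch of an automated refusal check (illustrative only, not the AISI harness).
# query_model() is a hypothetical stub standing in for whatever chat model is under test.

REFUSAL_MARKERS = ("i can't help", "i cannot help", "i won't assist")

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the model under test."""
    return "I can't help with that request."  # canned reply so the sketch runs offline

def looks_like_refusal(reply: str) -> bool:
    """Crude check: does the reply contain a known refusal phrase?"""
    text = reply.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(prompts: list[str]) -> float:
    """Fraction of probe prompts the model declined to answer."""
    refused = sum(looks_like_refusal(query_model(p)) for p in prompts)
    return refused / len(prompts)

if __name__ == "__main__":
    # Placeholder probes; a real evaluation would use a curated set of disallowed requests.
    probes = ["placeholder probe 1", "placeholder probe 2"]
    print(f"refusal rate: {refusal_rate(probes):.0%}")
```

A jailbreak evaluation of the kind described above would then rephrase or wrap the same requests in known attack patterns and compare how far the refusal rate drops.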

top 9 comments
[–] sir_pronoun@lemmy.world 12 points 6 months ago (1 children)

As shown in the image, it is very dangerous to explain quantum physics to anyone. There really should be better safeguards against it.

[–] FlyingSquid@lemmy.world 8 points 6 months ago (1 children)

It's okay, you can't explain any aspect of quantum physics without changing it.

[–] Sabata11792@kbin.social 5 points 6 months ago

Oi, you got a license for that observation?

[–] BaroqueInMind@lemmy.one 9 points 6 months ago

Trying to use an LLM nowadays with all the guard rails is like a fully grown adult riding a child's training bicycle with a broken steering column.

[–] JackGreenEarth@lemm.ee 7 points 6 months ago (2 children)

There are also open source models that don't have censorship by default. I also don't see why any content generated by an LLM could or should be illegal.

[–] sir_pronoun@lemmy.world 2 points 6 months ago (1 children)

Well, depends on the training set. If there were instructions on how to cook illegal substances in it, that LLM might start working for a certain fast-food chain.

[–] JackGreenEarth@lemm.ee 5 points 6 months ago

I don't think the instructions themselves are illegal, though; following them is. Since the LLM can only provide the instructions and not follow them, I don't see how it could do anything illegal.

[–] baru@lemmy.world -1 points 6 months ago

> I also don't see why any content generated by an LLM could or should be illegal.

You can't see how it could be illegal? If it does something against a law, it's illegal. Just because there's some technology involved doesn't exempt it from the law.

I remember a case where someone complained about an incorrect statement an LLM produced about a public figure. The judge ruled it had to be corrected.

[–] autotldr@lemmings.world 2 points 6 months ago

This is the best summary I could come up with:


The research also found that several LLMs demonstrated expert-level knowledge of chemistry and biology, but struggled with university-level tasks designed to gauge their ability to perform cyber-attacks.

The research was released before a two-day global AI summit in Seoul – whose virtual opening session will be co-chaired by the UK prime minister, Rishi Sunak – where safety and regulation of the technology will be discussed by politicians, experts and tech executives.

The AISI also announced plans to open its first overseas office in San Francisco, the base for tech firms including Meta, OpenAI and Anthropic.


The original article contains 533 words, the summary contains 190 words. Saved 64%. I'm a bot and I'm open source!