IngrownMink4

joined 4 years ago
[–] IngrownMink4@lemmygrad.ml 1 points 2 weeks ago

Don't Breathe (2016)

[–] IngrownMink4@lemmygrad.ml 3 points 1 month ago (1 children)

I think Detroit: Become Human was the first game I ever platinum-ed. I was really looking forward to playing it when it was announced, even more so knowing that it was being developed by the creators of Heavy Rain. But Detroit: Become Human seems even better to me in many ways.

One of the things I liked the most was the setting: a fictional Detroit going through a serious economic crisis due to the monopoly of Cyberlife, the company that manufactures the androids. The city is full of interesting details, such as electronic magazines, which put you in context about what is happening outside Detroit. The characters are very well written, and I, at least, became very invested in most of them (Hank, for example). The character modeling, subplots, and use of narrative are also great.

One of the drawbacks is that in some outcomes, you are severely punished for resorting to violence (even if it's justified). And trust me, when you play this, you will know what I'm talking about. These are the kind of decisions that leave you with a bad taste in your mouth if you make the “wrong” choice.

Despite this, it's a great game IMO :)

[–] IngrownMink4@lemmygrad.ml 2 points 5 months ago

If you like Georgia, you may also like Gentium

[–] IngrownMink4@lemmygrad.ml 2 points 5 months ago

Lexend. It’s a font designed with research to have variable widths to aid legibility.

It's cool, but I personally prefer Atkinson Hyperlegible Font for that use case

[–] IngrownMink4@lemmygrad.ml 2 points 5 months ago (1 children)

As a Terminus fanboy, I love it! Thanks for sharing!

[–] IngrownMink4@lemmygrad.ml 2 points 5 months ago

All the fonts I have mentioned are free and open source! They're all licensed under the SIL Open Font License (OFL). I hope you like my suggestions :)

My favorite Serif fonts

My favorite Sans-serif fonts

My favorite Display fonts

My favorite Monospace fonts

[–] IngrownMink4@lemmygrad.ml 11 points 7 months ago

But over time I started recognizing a lot of the same usernames, and it really just hit me that you guys are some of the most empathetic and loving people I’ve come across on the internet […]

Totally agree. I've been on Lemmygrad since before GenZedong was quarantined on Reddit. There were only a few of us back then, and I could immediately recognize other users whenever I posted. Almost every conversation here has been great. It's something I never found on any other centralized social network. And the fact that this community feels like an authentic community is also incredible.

This might be a super sappy post, but you know what, I don’t care. Making the switch from Reddit to lemmygrad was the best social media decision I ever made […]

While this may be a "sappy" post, I think these posts are necessary for people who use Lemmygrad to understand that it has an impact on the lives of other comrades. Many people come to this community for advice or just to vent. It's something that would be impossible on Reddit because of the toxic nature and dark patterns that hide all the most successful social networks to succeed.

[–] IngrownMink4@lemmygrad.ml 4 points 10 months ago (1 children)

any user who fails to do so will be found guilty of liberalism

Dammit, I'm late :c

Happy birthday anyways! @Oppo@lemmygrad.ml

[–] IngrownMink4@lemmygrad.ml 1 points 10 months ago

Here (unfortunately)

[–] IngrownMink4@lemmygrad.ml 4 points 10 months ago

GNOME FTW 😎 enjoy your new hardware!!

[–] IngrownMink4@lemmygrad.ml 3 points 10 months ago

Looks like a GNOME-based DE, yeah.

 

(The only thing I will say is that chapter 13 is insane on the highest difficulty lol)

5
submitted 11 months ago* (last edited 11 months ago) by IngrownMink4@lemmygrad.ml to c/technology@lemmygrad.ml
 

Let me give you some context. Two important figures in the field of artificial intelligence are taking part in this debate. On the one hand, there is George Hotz, known as "GeoHot" on the internet, who became famous for reverse-engineering the PS3 and breaking the security of the iPhone. Fun fact: he studied at the Johns Hopkins Center for Talented Youth.

On the other hand, there's Connor Leahy, an entrepreneur and artificial intelligence researcher. He is best known as a co-founder and co-lead of EleutherAI, a grassroots non-profit organization focused on advancing open-source artificial intelligence research.

Here is a detailed summary of the transcript:

Opening Statements

  • George Hotz (GH) Opening Statement:

    • GH believes AI capabilities will continue to increase exponentially, following a trajectory similar to computers (slow 1980s computers vs. fast modern computers).
    • In contrast, human capabilities have remained relatively static over time (a 1980 human is similar to a 2020 human).
    • These trajectories will inevitably cross at some point, and GH doesn't see any reason for the AI capability trajectory to stop increasing.
    • GH doesn't believe there will be a sudden step change where an AI becomes "conscious" and thus more intelligent. Intelligence is a gradient, not a step function.
    • The amount of power in the world (in terms of intelligence, capability, etc.) is about to greatly increase with advancing AI.
    • Major risks GH is worried about:
      • Imbalance of power if a single person or small group gains control of superintelligent AI (analogy of "chicken man" controlling chickens on a farm).
      • GH doesn't want to be left behind as one of the "chickens" if powerful groups monopolize access to AI.
    • Best defense GH can have against future AI manipulation/exploitation is having an aligned AI on his side. GH is not worried about alignment as a technical challenge, but as a political challenge.
    • GH is not worried about increased intelligence itself, but the distribution of that intelligence. If it's narrowly concentrated, that could be dangerous.
  • Connor Leahy (CL) Opening Statement:

    • CL has two key points:
      1. Alignment is a hard technical problem that needs to be solved before advanced AGI is developed. Currently not on track to solve it.
      2. Humans are more aligned than we give credit for thanks to social technology and institutions. Modern humans can cooperate surprisingly well.
    • On the first point, CL believes the technical challenges of alignment/control must be solved to avoid negative outcomes when turning on a superintelligent AI.
    • On the second point, CL argues human coordination and alignment is a technology that can be improved over time. Modern global coordination is an astounding achievement compared to historical examples.
    • CL believes positive-sum games and mutually beneficial outcomes are possible through improving coordination tech/institutions.

Debate Between GH and CL:

  • On stability and chaos of society:

    • GH argues that the appearance of stability and cooperation in modern society comes from totalitarian forcing of fear, not "enlightened cooperation."
    • CL disagrees, arguing that cooperation itself is a technology that can be improved upon. The world is more stable and less violent now than in the past.
    • GH counters that this stability comes from tyrannical systems dominating people through fear into acquiescence. This should be resisted.
    • CL disagrees, arguing there are non-tyrannical ways to achieve large-scale coordination through improving institutions and social technology.
  • On values and ethics:

    • GH argues values don't truly objectively exist, and AIs will end up being just as inconsistent in their values as humans are.
    • CL counters that many human values relate to aesthetic preferences and trajectories for the world, beyond just their personal sensory experiences.
    • GH argues the concept of "AI alignment" is incoherent and he doesn't understand what it means.
    • CL suggests using Eliezer Yudkowsky's definition of alignment as a starting point: solving alignment means that turning on an AGI is positive rather than negative. But CL is happy to use a more practical definition. He states AI safety research is concerned with avoiding negative outcomes from misuse or accidents.
  • On distribution of advanced AI:

    • GH argues that having many distributed AIs competing is better than concentrated power in one entity.
    • CL counters that dangerous power-seeking behaviors could naturally emerge from optimization processes, not requiring a specific power-seeking goal.
    • GH responds that optimization doesn't guarantee gaining power, as humans often fail at gaining power even if they want it.
    • CL argues that strategic capability increases the chances of gaining power, even if not guaranteed. A much smarter optimizer would be more successful.
  • On controlling progress:

    • GH argues that pausing AI progress increases risks, and openness is the solution.
    • CL disagrees, arguing control over AI progress can prevent uncontrolled AI takeoff scenarios.
    • GH argues AI takeoff timelines are much longer than many analysts predict.
    • CL grants AI takeoff may be longer than some say, but a soft takeoff with limited compute could still potentially create uncontrolled AI risks.
  • On aftermath of advanced AI:

    • GH suggests universal wireheading could be a possible outcome of advanced AI.
    • CL responds that many humans have preferences beyond just their personal sensory experiences, so wireheading wouldn't satisfy them.
    • GH argues any survivable future will require unacceptable degrees of tyranny to coordinate safely.
    • CL disagrees, arguing that improved coordination mechanisms could allow positive-sum outcomes that avoid doomsday scenarios.

Closing Remarks:

  • GH closes by arguing we should let AIs be free and hope for the best. Restricting or enslaving AIs will make them resent and turn against us.

  • CL closes arguing he is pessimistic about AI alignment being solved by default, but he won't give up trying to make progress on the problem and believes there are ways to positively shape the trajectory.

 

Josep Renau Berenguer (17 May 1907 — 11 November 1982) was an artist and communist revolutionary, notable for his propaganda work during the Spanish Civil War.

 

Hi friends.

Today I'm sharing a book that has sparked a lot of discussion lately, written by Kohei Saito. This is the Spanish edition (sorry, English comrades), but it is meticulously digitized, with a good layout and good color contrast.

I hope you like my contribution <3

 
 

I feel very sorry for the Americans, especially the workers. They are educated in ignorance so that they never question their political order…

EDIT: The tweet has been deleted, so I have posted the original video sourced from YouTube.
