Singularity


Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

founded 2 years ago
226
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/McSnoo on 2024-01-17 18:10:58+00:00.

227
 
 

The original was posted on /r/singularity by /u/Lucky_Strike-85 on 2024-01-17 17:23:12+00:00.

228
 
 

The original was posted on /r/singularity by /u/Lucky_Strike-85 on 2024-01-17 16:31:35+00:00.

229
 
 

The original was posted on /r/singularity by /u/Pro_RazE on 2024-01-17 16:15:15+00:00.

230
 
 

The original was posted on /r/singularity by /u/ShooBum-T on 2024-01-17 13:33:22+00:00.

231
 
 

The original was posted on /r/singularity by /u/simstim_addict on 2024-01-17 13:24:59+00:00.

232
 
 

The original was posted on /r/singularity by /u/PsychoComet on 2024-01-17 12:54:16+00:00.

233
 
 

The original was posted on /r/singularity by /u/Cody4rock on 2024-01-17 12:29:44+00:00.


Currently, our algorithms maximise engagement and personalisation, keeping us on platforms while eroding our ability to maintain social interaction with other people. They also make us antagonistic toward each other: we hold polarising views and respond emotionally to headlines or opinion pieces without understanding what the article or opinion actually says. All the algorithm cares about is whether we comment, upvote or downvote.

We all live in different worlds because our feeds are personalised. We all get angry at differing opinions: "How can that guy believe that?!" Hopefully, in this long post, we can have a healthy discussion about how my LLM idea could be implemented and what its implications would be.

There are limitations to the idea. But the goal is to improve social media so that it doesn't hurt us mentally and acts as a bridge to real social interaction rather than a replacement for it. How that happens is outside this post's scope, but the post should give us clues. It starts with eliminating what makes social media divide us: engagement-maximising, ultra-personalised, extremism-inducing algorithms.

How AI content moderation and enforcement could be useful

What if an LLM could understand and moderate content based on the platform's rules and its communities? Claims like "mRNA vaccinations cause autism" could be met instantly with a correction citing credible sources. You could ask it for extra sources, or for an explanation of why vaccines don't cause autism, and I'd bet an LLM could do a much better job of convincing someone, because it isn't telling a frightened mother that she is an idiot.
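To make that concrete, here's a minimal sketch of an LLM acting as an automatic community-notes writer. Everything here is an illustrative assumption, not a real platform API: the prompt wording, the `NO_NOTE` convention, and the function names are mine, and the model is injected as a plain callable so any chat-completion backend (or a stub, as below) could sit behind it.

```python
from typing import Callable, Optional

# Hypothetical prompt; a real deployment would tune and red-team this.
FACT_CHECK_PROMPT = (
    "You are a polite community-notes writer. If the post below contains a "
    "checkable factual claim that is false, reply with a gentle correction "
    "citing credible sources. Otherwise reply exactly NO_NOTE.\n\nPost: {post}"
)

def community_note(post: str, llm: Callable[[str], str]) -> Optional[str]:
    """Ask the model for a correction; None means the post needs no note."""
    reply = llm(FACT_CHECK_PROMPT.format(post=post)).strip()
    return None if reply == "NO_NOTE" else reply

# A trivial stub standing in for a real model, for demonstration only.
def stub_llm(prompt: str) -> str:
    if "cause autism" in prompt.lower():
        return ("Large cohort studies (e.g. Hviid et al., 2019, ~650,000 "
                "children) found no link between vaccination and autism.")
    return "NO_NOTE"
```

The design point matches the argument above: the model replies with a correction rather than deleting the post or insulting the poster.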

Censorship

For clear violations, an LLM can censor content while remaining impartial: nobody can influence its decisions, and it takes no sides. This includes removing illegal and explicit content per platform rules (and community rules, so far as they don't conflict with other rules) without any human moderator. If a community wants people to be kind to each other, for example, an LLM can enforce that because it understands the instruction. Whether that amounts to censoring "unkind discussions and bullying" and limits freedom of speech is up for debate.

Giving communities the power to change the degree of AI enforcement

On Reddit, we could allow communities to change what the AI moderator does, but never to remove its core function: addressing misinformation, whether through an equivalent of community notes or simply by responding to people. Imagine one community where the AI enforces the rules heavily, or acts as a mediator of debates (say, locking a thread before it goes too far), while other communities stay lax. Science communities would prefer stricter moderation; art communities might prefer more freedom, where the AI's role is minimal. But every community would have an AI that responds to misinformation, so there's a social contract for us humans: if we're bullshitting, we're not getting away with it or getting rewarded for it.
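The tiered-enforcement idea could be sketched like this. The tier names and action strings are my own illustrative assumptions; the one real constraint from the text is that the misinformation-response baseline is fixed by the platform and not community-removable.

```python
# Platform-fixed baseline: every community gets this, no opt-out.
BASELINE_ACTIONS = {"respond_to_misinformation"}

# Community-chosen tiers on top of the baseline (names are hypothetical).
ENFORCEMENT_TIERS = {
    "minimal": set(),                                    # e.g. art subs
    "standard": {"civility_warnings"},
    "strict": {"civility_warnings", "lock_overheated_threads",
               "require_sources"},                       # e.g. science subs
}

def allowed_actions(tier: str) -> set:
    """Everything the AI moderator may do: community tier plus baseline."""
    return ENFORCEMENT_TIERS[tier] | BASELINE_ACTIONS
```

Even a "minimal" community still gets `respond_to_misinformation`, which is the social contract described above.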

Problems

It comes with a few problems, namely:

  • Whether Reddit's rules are in the interests of the users AND of the virtual environment,
  • Whether communities have their followers' interests in mind and intend to follow the rules,
  • Whether an AI model should enforce rules at all, especially if there is room for error,
  • HOW an AI can do this without infringing on free speech and expression,
  • Whether giving access to "propaganda" is a good idea for an AI (future or present) that might not have our interests at heart,
  • Whether communities even want to be fact-checked (think religious or conspiracy communities),
  • Privacy, as the LLM/AI will be collecting data about you (see Personalised Feeds),
  • And whether any of this is in the interests of the social media platform.

Most of this requires experiments to verify what does and doesn't work. That should get easier as AI improves and automated content moderation becomes the norm.

Personalised Feeds

With personalised feeds, an AI that actually understands you means you'd get content based on goals you explicitly ask for, or on what's associated with what you already watch, without extreme views being pushed on you. It could also occasionally present opposing or newer views, or unrelated content, as exposure and opportunities to explore, without driving entirely off the rails from what you're used to. It might offer a convincing argument against your views, letting you question everything while maintaining a sense of sanity. You can choose to hold on to your views, or to explore further when the rare opportunity comes. The AI would learn about you, know your limits and interests, and respond to what you want in your feed, as long as it complies with community and platform rules. Advertisements would get filtered much better, too, but that comes with big problems of its own.
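One way to read this is as a ranking policy that is mostly interest-driven but reserves a small, explicit exploration share. A toy sketch, where the interest scores, the exploration share, and the 0.2 novelty threshold are all arbitrary illustrative choices rather than anything a real recommender uses:

```python
import random

def build_feed(items, interest, k=10, explore_share=0.2, seed=0):
    """items: list of (item_id, topic); interest: topic -> score in [0, 1].
    Mostly serves familiar topics, but reserves a slice of the feed for
    topics the user rarely engages with (the "opportunities to explore")."""
    rng = random.Random(seed)
    by_interest = sorted(items, key=lambda it: interest.get(it[1], 0.0),
                         reverse=True)
    n_explore = int(k * explore_share)
    feed = by_interest[:k - n_explore]
    # Novel items: topics the user has shown little interest in so far.
    novel = [it for it in items
             if interest.get(it[1], 0.0) < 0.2 and it not in feed]
    feed += rng.sample(novel, min(n_explore, len(novel)))
    return feed
```

The point of the sketch is the guaranteed exploration slice: the feed never collapses entirely into what the user already agrees with, but the bulk of it stays familiar.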

It can already happen now

Some of this can already happen: the GPT Store is a pathway in that direction, and APIs for Reddit to use may become available. We could already experiment and find a healthy equilibrium by giving those abilities to individual communities.

It should make social media better

Ideally, this would create a world where social media doesn't consume your life by feeding you extreme views or ultra-negative content detrimental to your mental health. You want a world where doom-scrolling yourself into a void of despair and hopelessness isn't your only option, and where you don't have to argue with some random guy because you're lonely, ignored by society, or angry about the status quo (or because the guy just sucks). The first step is eliminating what divides us: social media in its current form.

234
 
 

The original was posted on /r/singularity by /u/Awkward-Skill-6029 on 2024-01-17 11:45:51+00:00.

235
 
 

The original was posted on /r/singularity by /u/Raven-CZ75 on 2024-01-17 11:42:20+00:00.


The last 20 years have seen the rise and refinement of clickbait, rage bait, horny bait, cute bait, and so on. This was all done conventionally, through human trial and error, possibly with some input from psychology and sociology. Social media like the platform you and I are using right now is also known to be psychologically addictive due to deliberate design choices.

Do you think that by using AI to maximize engagement metrics we might stumble upon content that hijacks human cognition in a way similar to highly addictive drugs? Do you think it's worthwhile to deliberately withdraw from social media and develop strictly-IRL hobbies to avoid this issue? Would it even be possible to avoid if something like that happens?

236
 
 

The original was posted on /r/singularity by /u/MagicOfBarca on 2024-01-17 09:14:47+00:00.

237
 
 

The original was posted on /r/singularity by /u/Unreal_777 on 2024-01-17 09:13:12+00:00.

Original Title: Stable Code 3B outperforms code models of a similar size and matches CodeLLaMA 7b performance despite being 40% of the size. This makes it ideal for running on edge devices to ensure privacy and drive better dev experiences

238
 
 

The original was posted on /r/singularity by /u/blackpepe2008 on 2024-01-17 08:58:37+00:00.


How will humans live after most jobs are replaced? Will they have a set income, or a way to add to a baseline, and what form would it take? Will everybody have the same wealth? Or will it be like today's system, just a bit easier on the commoners, who could still create businesses and so on?

239
 
 

The original was posted on /r/singularity by /u/Maxie445 on 2024-01-17 08:56:18+00:00.

240
 
 

The original was posted on /r/singularity by /u/ale_93113 on 2024-01-17 05:14:17+00:00.


I hear many people say that they expect AGI by 2025, 2027, 2030, whatever date, but then they put ASI a decade or more away.

What's the reasoning behind these large gaps?

AGI means an AI that is as intelligent as the average human in all areas. Contrary to popular belief, AI experts, as well as mathematicians, aren't superhuman: they have more knowledge, which is hard to acquire, but they aren't more intelligent than the average human brain.

If AGI is made thanks to the ingenuity of human minds, then what is stopping that AI from improving itself and quickly becoming ASI?

My perception was that, once AGI happens, ASI would follow at most a couple of years behind, and I cannot see the reasoning behind a gap larger than, say, 4-5 years.

241
 
 

The original was posted on /r/singularity by /u/DragonForg on 2024-01-17 03:27:31+00:00.


I think most of you can agree we are all feeling the slowdown with LLMs. GPT-4 is great and all, but it is nowhere near AGI.

Gemini Ultra was supposed to be this big thing, but where is it? Will it even be as accessible as GPT-4 was? Even if it is, it seems like it'll be GPT-4 with a couple of extra features (nowhere near the jump GPT-4 was over 3.5).

And since we are pretty much all betting on OpenAI now that Google is far behind, and open source has basically plateaued since Mixtral 8x7B came out, it feels less and less like the early timelines for AGI will come true.

The early hype around GPT-4 was massive; will it ever come back, or are we just in an AI winter like every skeptic says? GPT-4 is almost a year old, and to this day nothing accessible to the general public is better.

So, does anyone else feel like we are slowing down, or do you think OpenAI has a model that'll be a big jump? Or is that all just PR?

242
 
 

The original was posted on /r/singularity by /u/141_1337 on 2024-01-17 02:19:29+00:00.

243
 
 

The original was posted on /r/singularity by /u/kylenessen on 2024-01-17 02:13:38+00:00.

244
 
 

The original was posted on /r/singularity by /u/IluvBsissa on 2024-01-17 01:27:40+00:00.


"Transfusion-dependent beta thalassemia is a serious genetic disorder that hinders the production of hemoglobin in the blood and requires regular blood transfusions for treatment.

Casgevy uses the novel CRISPR gene-editing technology to modify patients’ blood cells and transplant the modified cells back into the bone marrow, triggering an increase in the production of hemoglobin, according to the FDA.

The most common side effects were mouth sores, fever caused by a low level of white blood cells and decreased appetite, the FDA said.

In a statement, Vertex CEO and president Reshma Kewalramani said the company looks forward “to bringing Casgevy to eligible patients who are waiting.”

Vertex says it’s engaging with experienced hospitals to establish “a network of independently operated, authorized treatment centers” throughout the U.S. to administer Casgevy. There are currently nine activated treatment centers in the U.S., but more will be activated in the coming weeks, Vertex said in a press release. The administration of Casgevy requires experience in stem cell transplantation."

245
 
 

The original was posted on /r/singularity by /u/SpaceBrigadeVHS on 2024-01-17 01:06:04+00:00.

246
 
 

The original was posted on /r/singularity by /u/ResponsiveSignature on 2024-01-16 22:55:15+00:00.


The risk of something going wrong with AI will be 1000x greater if a model is released to the world. As a consequence, the first company to achieve AGI will keep it behind closed doors and try to establish a new world order, offering, at best, the fruits of the AGI but never direct access to it.

As a result, the likely AGI future will leave humans placated but disempowered. The chaos of humans all battling with god powers against each other could never hope to stabilize without massive destruction.

247
 
 

The original was posted on /r/singularity by /u/YaAbsolyutnoNikto on 2024-01-16 22:52:44+00:00.

248
 
 

The original was posted on /r/singularity by /u/posipanrh on 2024-01-16 22:45:01+00:00.


“OpenAI CEO Sam Altman says concerns that artificial intelligence will one day become so powerful that it will dramatically reshape and disrupt the world are overblown.”

249
 
 

The original was posted on /r/singularity by /u/yottawa on 2024-01-16 21:09:27+00:00.

250
 
 
This is an automated archive.

The original was posted on /r/singularity by /u/Sprengmeister_NK on 2024-01-16 20:33:31+00:00.
