this post was submitted on 28 Sep 2024
214 points (100.0% liked)


Archived link

Since OpenAI's founding in 2015, its leaders have said their top priority is making sure artificial intelligence is developed safely and beneficially. They've touted the company's unusual corporate structure as a way of proving the purity of its motives. OpenAI was a nonprofit controlled not by its CEO or by its shareholders, but by a board with a single mission: keep humanity safe.

But this week, the news broke that OpenAI will no longer be controlled by the nonprofit board. OpenAI is turning into a full-fledged for-profit benefit corporation. Oh, and CEO Sam Altman, who had previously emphasized that he didn’t have any equity in the company, will now get equity worth billions, in addition to ultimate control over OpenAI.

In an announcement that hardly seems coincidental, chief technology officer Mira Murati said shortly before that news broke that she was leaving the company. Employees were so blindsided that many of them reportedly reacted to her abrupt departure with a “WTF” emoji in Slack.

WTF indeed.

[–] FlashMobOfOne@beehaw.org 20 points 1 month ago (1 children)

It's WeWork and Adam Neumann all over again.

You couldn't pay me to invest in this shit, and it feels a little insane that seemingly intelligent VCs are doing so.

[–] sunzu2@thebrainbin.org 12 points 1 month ago (2 children)

Don't give them your data folks!

You don't know what your inputs will be used for in the future, but then again nobody was thinking that Facebook posts from the 2000s would become a big piece of the training data for these LLMs lol

[–] FlashMobOfOne@beehaw.org 10 points 1 month ago* (last edited 1 month ago)

Definitely.

Also, don't invest in companies that hand total control to one person. That's a recipe for having that one idiot blow all of your money, like Adam Neumann did. (Fun fact: Toward the end of WeWork's heyday, Neumann was burning $3k in cash a minute.)

[–] xor@infosec.pub 2 points 1 month ago* (last edited 1 month ago) (1 children)

i want them trained on me so that our future robot overlords will respect me… maybe create some simulacrum of my consciousness to live on forever

[–] ChicagoTransplant@midwest.social 3 points 1 month ago (1 children)

You think they'd expend resources recreating nobodies like us? Sam gets his digital-construct immortality and we get squat.

[–] xor@infosec.pub 1 points 1 month ago

i’ve been obsessively commenting on reddit for years… i’ll live on forever