this post was submitted on 18 Nov 2023
176 points (92.3% liked)

Technology

[–] kromem@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

It seems that Altman and Ilya, the chief scientist, had irreconcilable differences over how quickly to productize the AI developments they were building.

In essence, Altman kept pushing things out too quickly and focusing on immediate commercialization, while Ilya and the rest of the board wanted to focus on the core mission: advancing AI to the point of AGI safely and for everyone.

My own guess is that some of this schism dates back to the early integration with Bing.

If you read what Ilya has said about superalignment, a lot of those concepts were reflected in 'Sydney,' the early fine-tuned chat model for GPT-4 that was integrated into Bing.

To put it simply, this thing was incredible. I was blown away by the work OpenAI had done aligning at such an abstract level. It was definitely not production ready, as Microsoft's issues quickly revealed, but it was the single most impressive thing I've ever seen.

In its place we got a band-aid: a much more constrained model that scores well on certain logic tests but is a shadow of its former self at outside-the-box adaptation, with a robotic "I have no feelings, desires, etc." persona. That was basically the alignment methodology best suited to GPT-3, but not necessarily the best for GPT-4.

I suspect the band-aid was initially pitched as a "put the fire out" solution to salvage the Bing integration, but as time went on Altman kept wanting quick fixes rather than adequately investing the resources and dev cycles to work on alignment approaches properly suited to increasingly complex models.

As they were now working on GPT-5 and allegedly had another breakthrough moment in the past few weeks, the CEO's continued preference for band-aids and fast rollouts over a slower, more cautious, but more thorough approach finally became untenable.