Yes, I know, but what I'm saying is they're just repackaging something OpenAI did. You still need OpenAI making advances if you want R1 to ever get any brighter.
They aren't training on large data sets themselves; they're training on the output of AIs that were trained on large data sets.
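For what it's worth, that "training on the output of another AI" step is usually called knowledge distillation. A minimal sketch of the idea in PyTorch, with toy models and an illustrative temperature value I picked myself, not anyone's actual training setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, SEQ = 100, 8  # toy vocabulary and context length

def make_model(width):
    # A toy next-token predictor: embed the context, flatten, score the vocab.
    return nn.Sequential(
        nn.Embedding(VOCAB, width),
        nn.Flatten(),
        nn.Linear(width * SEQ, VOCAB),
    )

teacher = make_model(64)   # stands in for the big model trained on raw data
student = make_model(16)   # the smaller model trained only on teacher outputs
teacher.eval()

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # softmax temperature, a common distillation knob

for step in range(100):
    contexts = torch.randint(0, VOCAB, (32, SEQ))  # stand-in for real text
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(contexts) / T, dim=-1)
    student_logp = F.log_softmax(student(contexts) / T, dim=-1)
    # KL divergence pulls the student's output distribution toward the
    # teacher's: the student never sees the original training data at all.
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sketch is the dependency: the student can only ever approximate what the teacher already knows, which is exactly why the teacher still has to improve first.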
Because I've never seen anyone prove that large language models are anything other than very, very complicated text prediction. I've never seen them do anything that requires original thought.
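And "text prediction" here is meant literally: generation is just predicting the next token, appending it, and repeating. A minimal sketch using the Hugging Face transformers library with GPT-2 (my choice of model, purely for illustration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Greedy decoding: the whole loop is "predict the likeliest next token,
# append it to the context, repeat".
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The world is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits          # scores for every vocab token
        next_id = logits[0, -1].argmax()    # pick the single likeliest token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))
```

Whether that loop amounts to thought is the open question, but mechanically, that loop is all there is.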
To borrow from the Bobiverse book series: no self-driving car has ever worked out that the world is round, not due to lack of intelligence but simply due to lack of curiosity.
Without original thinking, I can't see how it's going to invent revolutionary technologies, and I've never seen anybody demonstrate that there is even the tiniest speck of original thought, imagination, or inquisitiveness in these things.