this post was submitted on 23 Nov 2024
Buttcoin
[–] self@awful.systems 14 points 1 day ago* (last edited 1 day ago) (2 children)

If you remember early bitcoin, some people would say it’s money, some people would say it’s gold. Some people would say it’s this blockchain … The way that I look at Bittensor is as the World Wide Web of AI.

it’s really rude of you to find and quote a paragraph designed to force me to take four shots in rapid succession in my ongoing crypto/AI drinking game!

How does Bittensor work? “When you have a question, you send it out to the network. Miners whose models are suited to answer your question will process it and send back a proposed answer.” The “miners” are rewarded with TAO tokens.
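The quoted flow, minus the token mysticism, amounts to something like this (a hypothetical sketch; none of these names or mechanisms come from Bittensor's actual code, and the "pick a winner" step is exactly the part nobody can define):

```python
import random

class Miner:
    def __init__(self, name):
        self.name = name
        self.tao_balance = 0

    def answer(self, question):
        # each miner runs a model "suited to answer your question"
        return f"{self.name}'s answer to {question!r}"

def route_question(question, miners):
    # "you send it out to the network": collect proposed answers
    answers = {m.name: m.answer(question) for m in miners}
    # somebody, somehow, decides which answer was "correct"...
    winner = random.choice(miners)
    # ...and that miner is "rewarded with TAO tokens"
    winner.tao_balance += 1
    return answers, winner.name
```

The `random.choice` stand-in for the reward decision is doing a lot of load-bearing work here, which is rather the point of the thread below.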

“what do you mean oracle problem? our new thing’s nothing but oracles, we just have to figure out a way to know they’re telling the truth!”

Bittensor is enormously proud to be decentralized, because that’s a concept that totally makes sense with AI models, right? “There is no greater story than people’s relentless and dogged endeavor to overcome repressive regimes,” starts Bittensor’s introduction page.

meme stock cults and crypto scams both should maybe consider keeping pseudo-leftist jargon out of their fucking mouths

e: also, Bittensor? really?

[–] self@awful.systems 16 points 1 day ago* (last edited 1 day ago) (2 children)

also this is all horseshit so I know they haven’t thought this far ahead, but pushing a bit on the oracle problem, how do they think they solved these fundamental issues in their proposed design?

  • if verifying that answers are correct is up to the miners, how do they prevent the miners from just generating any old bullshit using a much less expensive method than an LLM (a Markov chain, say, or even just random characters or an empty string if nobody’s checking) and pocketing the tokens?
  • if verification is up to the requester, why would you ever mark an answer as correct? if you’re forced to pick one correct answer that gets your tokens, what’s stopping you from spinning up an adversarial miner that produces random answers and marking those as correct, ensuring you keep both your tokens and the other miners’ answers?
  • if answers are verified centrally… there’s no need for the miners or their models, just use whatever that central source of truth is.

and of course this is avoiding the elephant in the room: LLMs have no concept of truth, they just extrude plausible bullshit into a statistically likely shape. there’s no source of truth that can reliably distinguish bad LLM responses from good ones, and if you had one you’d probably be better off just using it instead of an LLM.

edit cause for some reason my brain can’t stop it with this fractally wrong shit: finally, if their plan is to just evenly distribute tokens across miners and return all answers: congrats on the “decentralized” network of /dev/urandom to string converters you weird fucks

another edit: I read the fucking spec and somehow it’s even stupider than any of the above. you can trivially just spend tokens to buy a majority of the validator slots for a subnet (which I guess in normal cultist lingo would be a subchain) and use that to kick out everyone else’s miners:

Only the top 64 validators, when ranked by their stake amount in any particular subnet, are considered to have a validator permit. Only these top 64 subnet validators with permits are considered active in the subnet.
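Per the quoted rule, permit selection is just a sort by stake with a cutoff at 64, which makes the takeover mechanical. A sketch (the 64-slot cutoff is from the spec; every name and number beyond that is invented for illustration):

```python
PERMITS = 64  # per the quoted spec: top 64 validators by stake are active

def active_validators(stakes):
    # stakes: {validator_id: staked_TAO}; rank descending, keep the top 64
    ranked = sorted(stakes, key=stakes.get, reverse=True)
    return set(ranked[:PERMITS])

# the attack: buy tokens and outbid the incumbents' stakes
incumbents = {f"validator_{i}": 100 + i for i in range(64)}
attacker = {f"sockpuppet_{i}": 1_000_000 for i in range(33)}  # a majority of slots
active = active_validators({**incumbents, **attacker})
# every sockpuppet gets a permit; 33 incumbent validators are pushed out
```

Nothing in the sort distinguishes an honest validator from a sockpuppet; stake is the only input, so slots go to whoever spent the most tokens.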

a third edit, please help, my brain is melting: what does a non-adversarial validator even look like in this architecture? we can’t fucking verify LLM outputs like I said so… is this just multiple computers doing RAG and pretending that’s a good idea? is the idea that you run some kind of unbounded training algorithm and we also live in a universe where model overfitting doesn’t exist? help I am melting

[–] YourNetworkIsHaunted@awful.systems 6 points 20 hours ago (1 children)

You call it a problem. I call it an O(1) mining algorithm.

[–] self@awful.systems 4 points 17 hours ago

I’d say we should start calling this computer science affinity fraud shit “O(0) algorithms”, but knowing the space it’ll be like 2 months before crypto twitter starts using it ironically and maybe 6 months if we’re lucky before it shows up in a whitepaper cause the affinity grifters realized it’d make mediocre engineers buy more fraudcoins

[–] dgerard@awful.systems 12 points 1 day ago (1 children)

how do they think they solved these fundamental issues in their proposed design?

number go up

[–] self@awful.systems 10 points 1 day ago (1 children)

what if we made the large language model larger? it’s weird nobody has attempted this

[–] rook@awful.systems 7 points 1 day ago

I thought the era of scaling was over. We’re in the era of ??? now. Presumably profit comes later.

[–] dgerard@awful.systems 11 points 1 day ago (1 children)

how dare you, that’s ancap jargon

[–] self@awful.systems 10 points 1 day ago

you’re right, I’m giving them way too much credit — the full thought is almost definitely “There is no greater story than people’s relentless and dogged endeavor to overcome repressive regimes and replace them with their own repressive regimes, but this time with heroin and sex tourism”