this post was submitted on 08 May 2024
1717 points (99.3% liked)
Technology
Why?? Please make this make sense. Having AI to help with coding is ideal, and probably the greatest immediate use case. The web is an open resource. Why die on this stupid hill instead of advocating for a privacy argument that actually matters?
Edit: Okay, got it. Hinder significant human progress because a company I don't like might make some more money from something I said in public, which has been a thing literally forever. You guys really lack a lot of understanding about how the world really works, huh?
Because being able to delete your data from social networks you no longer wish to participate in, or that have banned you, as long as they specifically haven't paid you for your contributions, is a privacy argument that actually matters, regardless of and independent from AI.
In regards to AI, the problem is not with AI in general but with proprietary, for-profit AI getting trained on open resources, even ones with underlying license agreements that prevent that information from being monetized.
Now this is something I can get behind. But I was talking about the decision to retaliate in the first place.
Because none of the big companies listen to the privacy argument. Or any argument, really.
AI in itself is good, amazing, even.
I have no issue with open-source, ideally GPL- or similarly licensed AI models trained on Internet data.
But involuntarily participating in training closed-source corporate AIs... no, thanks. That shit should go to the hellhole it was born in, and we should do our best to destroy it, not advocate for it.
If you care about the future of AI, OpenAI should have long been on your enemy list. They expropriated an open model, they were hypocritical enough to keep "open" in the name, and then they essentially sold themselves to Microsoft. That's not the AI future we should want.
Humanity's progress is spending cities' worth of electricity and water to ask Copilot how to use a library and have it lie back to you in natural language? Please make this make sense.
??? So why don't we get better at producing energy rather than getting scared about using a renewable resource? Fuck it, let's just go back to the printing press.
Amazing to me how stuff like this gets upvoted on a supposedly progressive platform.
We're in a capitalist system and these are for-profit companies, right? What do you think their goal is? It isn't to help you. It's to increase profits. That will probably lead to massive numbers of jobs being replaced by AI, and we will get nothing for giving them the data to train on. It's purely parasitic. You should not advocate for it.
If it's open and not-for-profit, it can maybe do good, but there's no way this will.
Why can't they increase profits by, you know, making the product better?
Do they have to make things shittier to save money and drive people away, thus having to make it even shittier?
If they make it better that may increase profits temporarily, as they draw customers away from competitors. Once you don't have any competitors then the only way to increase profits is to either decrease expenses or increase revenue. Increasing revenue is limited if you're already sucking everything you can.
And is it wrong to stop at a certain amount of profit?
Why do they always want more? I ain't that greedy.
To us? No, it isn't wrong. To them? Absolutely. You don't become a billionaire by thinking you can have enough. You don't dominate a market while thinking you don't need more.
Meta and Google have done more for open-source AI than anyone else. I think a lot of antis don't really understand how computer science works, so they imagine it's like companies collecting up physical iron and taking it into a secret room, never to be seen again.
The actual tools and math are what's important. Research on the best methods is complex and slow, but so far all these developments are being written up in papers that anyone can learn from. If people on the left weren't so performative and lazy, we could have our own AI too.
I studied computer science in university. I know how computer science works.
Why do people roll coal? Why do they vandalize electric car chargers? Why do people tie ropes across bike lanes?
Because a changing world is scary and people lash out at new things.
The coal rollers think they're fighting a valiant fight against evil corporations too. They invested their effort into being a car guy, and it doesn't feel fair that things are changing, so they want to hurt the people benefiting from the new tech.
The deeper I get into this platform, the more I realize the guise of being 'progressive, left, privacy-conscious, tech-inclined' is literally the opposite.
Good to know that as capitalism flounders, this modern Red Scare extends into tech.
You're explicitly ignoring everything everyone is saying just because you want to call everyone technocommies, lmfao.
When you say those words, do you imagine normal people reading this and not laughing?
When you say those words, do you imagine yourself as a normal person?
Hating on everything AI is trendy nowadays. Most of these people can't give you any coherent explanation for why. They just adopt the attitude of the people around them, who also don't know why.
I believe the general reasoning is something along the lines of not wanting bad corporations to profit from their content for free. So it's mostly just a matter of principle. Perhaps we need to wait for someone to train an LLM on the data freely available to everyone on Lemmy, and then we can interview it to see what's up.
Mega-corporations like Microsoft and Google are evil. Very easy explanation. Even if it were a good open-source company scraping the data to train AI models, people should be free to delete the data they input. It's pretty simple to understand.