TheWiseAlaundo

joined 1 year ago

Given all the warnings and threats that reddit admins have made, you'd think they'd appreciate lemmy being around

[–] TheWiseAlaundo@lemmy.whynotdrs.org 10 points 1 year ago* (last edited 1 year ago) (1 children)

I'm just gonna play devil's advocate here.

Before the invention of the police, communities took it upon themselves to enforce the law. Oftentimes, militia members would write directly to governors asking for arms, and would be present in their communities during public events where an armed presence might be necessary. Arrests of members of the community would happen by way of a court order first, and then a posse would be formed as a means to enact that order. Nowhere in the US Constitution does the word "police" appear, because the idea hadn't even been conceived at the time of the founding.

Comparatively, today's police have far more authority to enact violence and effect arrests than even the courts. Could a court today order a dog to maul a surrendering man? Probably not. But when the police do it, apparently, that's just the cost of doing business.

I think the lie is that we need the police and not the other way around.

Immediately, probably not. Privacy is one of those things where when you really need it, you can't get it... unless you already have it.

Also, it's not like you know the motivations of all 7 billion people on earth. If you're out in the open, it just makes it easy for the lazy to find you.

I can get behind using a VPN, a phone with GrapheneOS or CalyxOS, an ad blocker, a user-agent switcher, LibreWolf, and the like... you give up some convenience for privacy, but it's not overbearing. Tor, however, isn't exactly useful as a daily driver.
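That said, Tor is great for the occasional lookup even if you don't daily-drive it. A minimal sketch of what that looks like in code, assuming a local Tor daemon running on its default SOCKS port (9050) and requests installed with SOCKS support (`pip install requests[socks]`):

```python
# Route a one-off request through Tor's local SOCKS proxy.
# Assumes a Tor daemon is already running on its default port 9050.
import requests

TOR_PROXY = "socks5h://127.0.0.1:9050"  # socks5h resolves DNS through Tor too

def fetch_via_tor(url: str) -> str:
    """Fetch a URL over Tor; fine for occasional lookups, slow as a daily driver."""
    proxies = {"http": TOR_PROXY, "https": TOR_PROXY}
    response = requests.get(url, proxies=proxies, timeout=60)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # check.torproject.org reports whether the request actually exited via Tor
    print(fetch_via_tor("https://check.torproject.org/api/ip"))
```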

So is there a visible benefit? Hopefully not. If you're doing it right, you'll just live a normal life and not be bothered.

[–] TheWiseAlaundo@lemmy.whynotdrs.org 31 points 1 year ago (1 children)

Lol... I just read the paper, and Dr. Zhao has essentially written a research paper on why it's legally OK to use images to train AI. Hear me out...

He perturbs the 'style' of input images to corrupt image generators' ability to mimic them, and shows that even a supermajority of artists can't tell when his program, Glaze, has done it. Style is explicitly not copyrightable under US case law, so he has effectively provided evidence that what OpenAI and others do with training images is transformative, which would legally mean it falls under fair use.

No idea if this would actually get argued in court, but it certainly doesn't support the idea that these image generators are stealing actual artwork.
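For the curious, here's a toy sketch of the general idea, emphatically not Zhao's actual Glaze code: nudge the pixels within a tiny budget until a feature extractor reads a different style. The encoder below is a random stand-in for the real pretrained style model Glaze uses, and the images are placeholders.

```python
# Toy sketch of adversarial style cloaking: optimize a small perturbation
# so the encoder's features match a decoy style, while keeping the change
# bounded (and thus roughly imperceptible) in L-infinity.
import torch

encoder = torch.nn.Sequential(          # stand-in for a pretrained style encoder
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
)

def cloak(image, decoy_style_image, eps=8 / 255, steps=100, lr=0.01):
    """Return image + delta, with delta clamped to +/- eps per pixel."""
    target_feat = encoder(decoy_style_image).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(encoder(image + delta), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():           # keep the perturbation imperceptible
            delta.clamp_(-eps, eps)
    return (image + delta).clamp(0, 1).detach()

art = torch.rand(1, 3, 64, 64)          # placeholder for the artist's image
decoy = torch.rand(1, 3, 64, 64)        # placeholder for a decoy style
cloaked = cloak(art, decoy)
```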

[–] TheWiseAlaundo@lemmy.whynotdrs.org 20 points 1 year ago (1 children)

So that's the thing. People say that they'll never retire and that it sounds boring, but the reality is much different. You just find other things to do. Once you stop working for someone else, you start working for yourself... and if you're a determined individual, you'll be busier than you've ever been in your life. Just something to consider.

[–] TheWiseAlaundo@lemmy.whynotdrs.org 12 points 1 year ago* (last edited 1 year ago) (2 children)

He's not saying "AI is done, there's nothing else to do, we've hit the limit"; he's saying "bigger models don't necessarily yield better results like we initially anticipated."

Sam recently went before Congress and advocated for limiting model sizes as a means of regulation because, at the time, he believed bigger would generally mean better outputs. What we're seeing now is that if a model is too large, it will have trouble producing truthful output, which is super important to us humans.

And honestly, I don't think anyone should be shocked by this. Our own human brains have different sections that control different aspects of our lives. Why would an AI brain be different?

[–] TheWiseAlaundo@lemmy.whynotdrs.org 9 points 1 year ago (1 children)

I've gone pretty zen, checking in on Superstonk only occasionally. GameStop is crazy undervalued at current prices, even without the prospect of MOASS, so I continue to hold. I'm actually kinda happy seeing this post show up in my feed, 'cause Lemmy has entirely replaced Reddit for me.

Also, ignore the following, I'm typing this because I couldn't do the same on Reddit: AMC, WallStreetBets sucks balls, u/Spez doesn't care about Reddit people, Use Lemmy

[–] TheWiseAlaundo@lemmy.whynotdrs.org 13 points 1 year ago (1 children)

Somehow I don't think the Quest 3 is going to be a problem. The battery only lasts a couple hours, and you look dumb as hell wearing it in public. Unless the point is to look dumb as hell in public, then mission accomplished.

That's kinda why I bring up Deming and his views on the entire purpose of a quality management system. "They should just stop pretending and send their employee the bullet points"... I couldn't agree more. My bro is sending out the bullet points; AI is just formatting them so they're acceptable to his boss.

In an ideal world, there'd be someone who actually examined the business operation to determine what the benefits of doing individual performance reviews are. Instead, things at his work are done a certain way simply because that's the way they've always been done... and thus, that's what he's doing.

"I'm not asking them to change the system..." That's not really what I meant, I apologize if i phrased what I said weird. If you're evaluating a person, then they're already probably not too far to any extreme. If they were the worst employee ever, you would let them go. If they were the best employee ever, your company would be dependent on them and would suffer if they voluntarily decided to leave. Your ideal employee would, therefore, be somewhere within the norm and would need to conform to your system. An individual review exists simply to enforce this conformity, and the reality of the situation is that most employees' true output is directed more by the operational efficiency of the business than an individuals own actions. If an employee is already conforming, then the review is effectively useless.

Anyways, I'm kinda droning on, but I think the horses have already left the barn with AI. I think the next logical step for many businesses is to really evaluate what they do and why they do it at an administrative level... and this is a good thing!

[–] TheWiseAlaundo@lemmy.whynotdrs.org 8 points 1 year ago* (last edited 1 year ago) (2 children)

Regardless of what anyone says, I think this is actually a pretty good use case of the technology. The specific verbiage of a review isn't necessarily important, and ideas can still be communicated clearly if tools are used appropriately.

If you ask a tool like ChatGPT to write "a performance review for a construction worker named Bob who could improve on his finer carpentry work and who is delightful to be around because of his enthusiasm for building. Make it one page," the output can still be meaningful and communicate relevant ideas.
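As a rough sketch, that request might look like this through the OpenAI Python client (the model name here is just an illustrative assumption):

```python
# Hedged sketch of generating the review above via the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "A performance review for a construction worker named Bob who could "
    "improve on his finer carpentry work and who is delightful to be around "
    "because of his enthusiasm for building. Make it one page."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works; this one is an assumption
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```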

I'm just going to take a page from William Edwards Deming here and state that an employee is largely unable to change the system they work in, and as such, individual performance reviews have limited value. Even if an employee could change the system they work in, that should be interpreted as the organization having a single point of failure.

[–] TheWiseAlaundo@lemmy.whynotdrs.org 5 points 1 year ago (7 children)

Depends on what you do. I personally use LLMs to write preliminary code and do cheap worldbuilding for D&D. Saves me a ton of time. My brother uses one at a medium-sized business to write performance evaluations... and it's actually funny to see how his queries are set up. It's basically the employee's name, job title, and three descriptors. He can do in 20 minutes what used to take him all day.
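For illustration, a guess at what a query template like his might look like; the three-part structure (name, title, descriptors) is what he actually uses, but the wording is mine:

```python
# Hypothetical prompt template: name, job title, and three descriptors.
def review_prompt(name: str, title: str, descriptors: list[str]) -> str:
    traits = ", ".join(descriptors)
    return (
        f"Write a one-page performance review for {name}, a {title}. "
        f"Key points to cover: {traits}."
    )

print(review_prompt("Bob", "construction worker",
                    ["strong enthusiasm", "reliable attendance",
                     "finer carpentry needs work"]))
```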
