this post was submitted on 10 Aug 2023
359 points (96.6% liked)

Asklemmy

44152 readers
1281 users here now

A loosely moderated place to ask open-ended questions


If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not about using or supporting Lemmy: for context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Looking for support?

Looking for a community?

Icon by @Double_A@discuss.tchncs.de

founded 5 years ago

Just out of curiosity. I have no moral stance on it, if a tool works for you I'm definitely not judging anyone for using it. Do whatever you can to get your work done!

[–] Atramentous@lemm.ee 117 points 1 year ago (7 children)

High school history teacher here. It’s changed how I do assessments. I’ve used it to rewrite all of the multiple choice/short answer assessments that I do. Being able to quickly create different versions of an assessment has helped me limit instances of cheating, but also to quickly create modified versions for students who require that (due to IEPs or whatever).

The cool thing that I’ve been using it for is to create different types of assessments that I simply didn’t have the time or resources to create myself. For instance, I’ll have it generate a writing passage making a historical argument, but I’ll have AI make the argument inaccurate or incorrectly use evidence, etc. The students have to refute, support, or modify the passage.

Due to the risk of inaccuracies and hallucination I always 100% verify any AI generated piece that I use in class. But it’s been a game changer for me in education.

[–] Atramentous@lemm.ee 45 points 1 year ago (1 children)

I should also add that I fully inform students and administrators that I'm using AI. Whenever I use an assessment created with AI, I mark it with a little "Created with ChatGPT" tag. As a history teacher I'm a big believer in citing sources :)

[–] limeaide@lemmy.ml 12 points 1 year ago (4 children)

How has this been received?

I imagine that pretty soon using ChatGPT is going to be looked down upon like using Wikipedia as a source

[–] Atramentous@lemm.ee 14 points 1 year ago

I would never accept a student’s use of Wikipedia as a source. However, it’s a great place to go initially to get to grips with a topic quickly. Then you can start to dig into different primary and secondary sources.

ChatGPT is the same. I would never use the content it makes without verifying that content first.

[–] phillaholic@lemm.ee 23 points 1 year ago (3 children)

Is it fair to give different students different wordings of the same questions? If one wording is more confusing than another could it impact their grade?

[–] GhostlyPixel@lemmy.world 15 points 1 year ago* (last edited 1 year ago) (3 children)

I had professors do different wordings for questions throughout college, I never encountered a professor or TA that wouldn’t clarify if asked, and, generally, the amount of confusing questions evened out across all of the versions, especially over a semester. They usually aren’t doing it to trick students, they just want to make it harder for one student to look at someone else’s test.

There is a risk of it negatively impacting students, but encouraging students to ask for clarification helps a ton.

[–] CptInsane0@lemmy.world 88 points 1 year ago* (last edited 1 year ago) (1 children)

I don't have any bosses, but as a consultant, I use it a lot. Still gotta charge for the years of experience it takes to understand the output and tweak things, not the hours it takes to do the work.

[–] Thaolin@sh.itjust.works 43 points 1 year ago

Basically this. Knowing the right questions and context to get an output and then translating that into actionable code in a production environment is what I'm being paid to do. Whether copilot or GPT helps reach a conclusion or not doesn't matter. I'm paid for results.

[–] paNic 82 points 1 year ago (2 children)

A junior team member sent me an AI-generated sick note a few weeks ago. It was many, many neat and equally-sized paragraphs of badly written excuses. I would have accepted "I can't come in to work today because I feel unwell" but now I can't take this person quite so seriously any more.

[–] ante@lemmy.world 28 points 1 year ago (1 children)

Classic over explaining to cover up a lie.

I never send anything other than "I'll be out of the office today" for every PTO notice.

[–] ThatOneDudeFromOhio@lemmy.world 12 points 1 year ago (3 children)

Ask yourself why they felt the need to generate an AI sick note instead of being honest 👌

[–] some_guy@lemmy.sdf.org 13 points 1 year ago

I dunno, I'd consider it a moral failing on the part of the person who couldn't be honest and direct, even if there's a cultural issue in the workplace.

[–] flynnguy@programming.dev 68 points 1 year ago (7 children)

I had a coworker come to me with an "issue" he learned about. It was wrong, and it wasn't really an issue, and then it came out that he got it from ChatGPT and didn't really know what he was talking about, nor could he cite an actual source.

I've also played around with it and it's given me straight up wrong answers. I don't think it's really worth it.

It's just predictive text, it's not really AI.

[–] Echo71Niner@kbin.social 16 points 1 year ago (1 children)

I concur. ChatGPT is, in fact, not an AI; rather, it operates as a predictive text tool. This is the reason behind the numerous errors it tends to generate, and its lack of self-review prior to generating responses is the clearest indication that it is not an AI. You can identify instances where ChatGPT provides incorrect information, correct it, and within 5 seconds of asking again it repeats the same inaccurate information in its response.
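For what it's worth, the "predictive text" idea can be sketched in a few lines. This toy bigram model is purely illustrative (real LLMs predict tokens with neural networks trained on huge corpora, not word counts), and the tiny corpus here is made up:

```python
# Toy next-word predictor: a bigram (Markov) model over a tiny corpus.
# Only an illustration of the "predictive text" idea, not how ChatGPT works.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the food".split()

# Count which word follows each word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most likely next word after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" follows "the" most often in this corpus
```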

[–] rbhfd@lemmy.world 25 points 1 year ago (3 children)

It's definitely not artificial general intelligence, but it's for sure AI.

None of the criteria you mentioned are needed for it to be labeled as AI. Definition from Oxford Libraries:

the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

It definitely fits in this category. It is being used in ways that previously required talking to customer support or a domain expert. Yes, it makes mistakes, but so do humans. And even if talking to a human would still be better, it's still a useful AI tool, even if it's not flawless yet.

[–] Lockely@pawb.social 44 points 1 year ago (2 children)

I've played around with it for personal amusement, but the output is straight up garbage for my purposes. I'd never use it for work. Anyone entering proprietary company information into it should get a verbal shakedown by their company's information security officer, because anything you input automatically joins their training database, and you're exposing your company to liability when, not if, OpenAI suffers another data breach.

[–] lemmyvore@feddit.nl 16 points 1 year ago

The very act of sharing company information with it can land you and the company in hot water in certain industries, regardless of whether OpenAI is broken into.

[–] platypode@sh.itjust.works 43 points 1 year ago (3 children)

I've been using it a little to automate really stupid simple programming tasks. I've found it's really bad at producing feasible code for anything beyond the grasp of a first-year CS student, but there's an awful lot of dumb code that needs to be written and it's certainly easier than doing it by hand.

As long as you're very precise about what you want, you don't expect too much, and you check its work, it's a pretty useful tool.

[–] jecxjo@midwest.social 11 points 1 year ago (1 children)

I've found it useful for basically finding the example code for a 3rd party library. Basically a version of Stack Exchange that can be better or worse.

[–] Lmaydev@programming.dev 9 points 1 year ago (1 children)

I essentially use it as interactive docs. As long as what you're learning existed before 2021 it's great.
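As a sketch of what "interactive docs" means in practice: you ask for a minimal usage snippet and adapt it. Here's the kind of answer you'd hope to get back, using stdlib `argparse` as a stand-in for whatever third-party library you'd actually be asking about:

```python
# The kind of minimal usage example you'd otherwise dig out of the docs:
# a parser with one positional argument and one boolean flag.
import argparse

parser = argparse.ArgumentParser(description="Greet someone.")
parser.add_argument("name", help="who to greet")
parser.add_argument("--shout", action="store_true", help="uppercase the greeting")

# Parse a sample argv instead of sys.argv so the snippet is self-contained.
args = parser.parse_args(["world", "--shout"])
greeting = f"hello, {args.name}"
print(greeting.upper() if args.shout else greeting)  # HELLO, WORLD
```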

[–] Kilamaos@lemmy.world 9 points 1 year ago (2 children)

I don't know you, the language you use, or the way you use ChatGPT, but I'm a bit surprised at what you say. I've been using ChatGPT on a nearly daily basis for months now, and while it's not perfect, if the task isn't super complicated and is described well, after a couple of back-and-forths I usually have what I need. It works, does what is expected, without being a horrendous way to code it.

And gpt4 is even better

[–] bitsplease@lemmy.ml 28 points 1 year ago* (last edited 1 year ago) (3 children)

not chatGPT - but I tried using copilot for a month or two to speed up my work (backend engineer). Wound up unsubscribing and removing the plugin after not too long, because I found it had the opposite effect.

Basically instead of speeding my coding up, it slowed it down, because instead of my thought process being

  1. Think about the requirements
  2. Work out how best to achieve those requirements within the code I'm working on
  3. Write the code

It would be

  1. Think about the requirements
  2. Work out how best to achieve those requirements within the code I'm working on
  3. Start writing the code and wait for the auto complete
  4. Read the auto complete and decide if it does exactly what I want
  5. Do one of the following, depending on 4:
     - 5a. Use the autocomplete as-is
     - 5b. Use the autocomplete, then modify it to fix a few issues or account for a requirement it missed
     - 5c. Ignore the autocomplete and write the code yourself

idk about you, but the first set of steps just seems like a whole lot less hassle than the second set, especially since for anything that involved any business logic or internal libraries, I found myself using 5c far more often than the other two. And as a bonus, I actually fully understand all the code committed under my username, on account of actually having written it.

I will say though in the interest of fairness, there were a few instances where I was blown away with copilot's ability to figure out what I was trying to do and give a solution for it. Most of these times were when I was writing semi-complex DB queries (via Django's ORM), so if you're just writing a dead simple CRUD API without much complex business logic, you may find value in it, but for the most part, I found that it just increased cognitive overhead and time spent on my tickets
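To give a flavor of the "semi-complex DB queries" mentioned above, here's an aggregate-with-filter query. This sketch uses stdlib `sqlite3` with made-up table and column names so it runs self-contained; the Django ORM form in the comment is only an assumption about the kind of query meant:

```python
# Sketch of a "semi-complex DB query": aggregate per group, then filter
# on the aggregate. Roughly what Django's
#   Order.objects.values('customer').annotate(spend=Sum('total')).filter(spend__gt=50)
# would express. Table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
    INSERT INTO orders (customer, total) VALUES
        ('alice', 30.0), ('alice', 70.0), ('bob', 20.0);
""")

rows = conn.execute("""
    SELECT customer, SUM(total) AS spend
    FROM orders
    GROUP BY customer
    HAVING spend > 50
""").fetchall()

print(rows)  # [('alice', 100.0)]
```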

EDIT: I did use chatGPT for my peer reviews this year though and thought it worked really well for that sort of thing. I just put in what I liked about my coworkers and where I thought they could improve in simple english and it spat out very professional peer reviews in the format expected by the review form

[–] givesomefucks@lemmy.world 26 points 1 year ago (5 children)

A lot of people are going to get fucked if they are...

It's using the "startup method" where they gave away a good service for free, but they already cut back on resources when it got popular. So what you read about it being able to do six months ago, it can't do today.

Eventually they'll introduce a paid version that might be able to do what the free one did.

But if you're just blindly trusting it, you might have produced months of low-quality work without noticing.

Like the lawyers recently finding out it would just make up caselaw and reference cases. We're going to see that happen more and more as resources are cut back.

[–] redballooon@lemm.ee 23 points 1 year ago* (last edited 1 year ago)

Huh? They already introduced the paid version half a year ago, and that was the one responsible for the buzz all along. The free version was mediocre to begin with and has not gotten better.

When people complain that ChatGPT doesn’t comply to their expectations it’s usually a confusion between these two.

[–] manillaface@kbin.social 20 points 1 year ago

Like the lawyers recently finding out it would just make up caselaw and reference cases. We’re going to see that happen more and more as resources are cut back.

It’s been notorious for doing that from the very beginning though

[–] li10 11 points 1 year ago (2 children)

Anyone blindly trusting it is a grade A moron, and would’ve just found another way to fuck up whatever they were working on if ChatGPT didn’t exist.

ChatGPT is a tool, if someone doesn’t know what they’re doing with it then they are gonna break stuff, not ChatGPT.

[–] vagrantprodigy@lemmy.whynotdrs.org 25 points 1 year ago (2 children)

Some of my co-workers use it, and it's fairly obvious, usually because they are putting out even more inaccurate info than normal.

[–] Magnetar@feddit.de 9 points 1 year ago (1 children)

Not because their grammar and phrasing improved suddenly?

[–] fidodo@lemm.ee 24 points 1 year ago* (last edited 1 year ago) (3 children)

Why should anyone care? I don't go around telling people every time I use stack overflow. Gotta keep in mind gpt makes shit up half the time so I of course test and cross reference everything but it's great for narrowing your search space.

[–] akulium@feddit.de 16 points 1 year ago (8 children)

I did some programming assignments in a group of two. Every time, my partner sent me his code without further explanation and let me check his solution.

The first time, his code was really good and better than I could have come up with, but there was a small obvious mistake in there. The second time his code to do the same thing was awful and wrong. I asked him whether he used ChatGPT and he admitted it. I did the rest of the assignments alone.

I think it is fine to use ChatGPT if you know what you are doing, but if you don't know what you are doing and try to hide it with ChatGPT, then people will find out. In that case you should discuss with the people you are working with before you waste their time.

[–] HR_Pufnstuf@lemmy.world 21 points 1 year ago (4 children)

I've done so on rare occasion, but every time it made stuff up. Wanted terraform examples for specific things... and it completely invented resource types that don't exist.

[–] JoCrichton@lemmy.world 21 points 1 year ago (2 children)

Not sure how it could help me solder or find faults on PCBs.

[–] henfredemars@infosec.pub 19 points 1 year ago* (last edited 1 year ago) (8 children)

I use it to write performance reviews because in reality HR has already decided the results before the evaluations.

I'm not wasting my valuable time writing text that is then ignored. If you want a promotion, get a new job.

To be clear: I don't support this but it's the reality I live in.

[–] RagnarokOnline@reddthat.com 17 points 1 year ago (4 children)

Only used it a couple of times for work when researching some broad topics like data governance concepts.

It’s a good tool for learning because you can ask it about a subject and then ask it to explain the subject “as a metaphor to improve comprehension” and it does a pretty good job. Just make sure you use some outside resources to ensure you’re not being hallucinated all over.

My bosses use it to write their emails (ESL).

[–] Fizz@lemmy.nz 16 points 1 year ago

When I'm pissed off I use it to make my emails sound friendly.

[–] limeaide@lemmy.ml 15 points 1 year ago

My supervisor uses ChatGPT to write emails to higher-ups and it's kinda embarrassing lol. One email he's not even capitalizing or spell-checking, and the next he sends these emails that over-explain simple things and are half irrelevant.

I've used it a couple times when I can't fully put into words what I'm trying to say, but I use it more for inspiration than anything. I've also used it once or twice in my personal life for translating.

[–] CaptainPike@beehaw.org 14 points 1 year ago (5 children)

I'm a DM using ChatGPT to help me build things for my DnD campaign/world and not telling my players. Does that count? I still do most of the heavy lifting but it's nice to be able to brainstorm and get ideas bounced back. I don't exactly have friends to do that with.

[–] mojo@lemm.ee 14 points 1 year ago

Yes, although there's been a huge spike in cancer diagnosis I've been giving out since doing so. Whoops!

[–] PurpleTentacle@sh.itjust.works 13 points 1 year ago

As a language model, I have neither boss nor co-workers.

[–] CylonBunny@lemmy.world 10 points 1 year ago* (last edited 1 year ago)

I find it helpful to translate medical abbreviations to English. Our doctors tend to go overboard with abbreviations, there are lots I know but there are always a few that leave me scratching my head. ChatGPT seems really good at guessing what they mean! There are other tools I can use, but ChatGPT is faster and more convenient - I can give it context and that makes it more accurate.

[–] Carighan@lemmy.world 10 points 1 year ago (1 children)

Not at all. Had a few experiments, then we had a talk about it at work, decided fuck we're not giving these people our source code, and left it at that.

I mean in the end, all ChatGPT could reliably do was scavenge man-pages for me. Which is neat, but also a rather benign trick tbh.

[–] Haus@kbin.social 9 points 1 year ago (3 children)

Yesterday I was working on a training PowerPoint and it occurred to me that I should probably simplify the language. Had GPT convert it to 3rd-grade language, and it worked pretty well. Not perfect, but it helped.

I'm also writing an app as a hobby and, although GPT goes batshit crazy from time to time, overall it has done most of the coding grunt-work pretty well.

[–] some_guy@lemmy.sdf.org 9 points 1 year ago (3 children)

There was some issue that came up relating to network shares on a Windows domain that didn't make sense to me and a colleague. I asked GPT to describe why we were seeing whatever behavior, and it defined the scope of the feature in a way that completely demystified it for my coworker. I'm a Mac and Linux guy, so while I could loosely grasp it, it was gone from my mind shortly after. Windows domains and file sharing have always been bizarre to me.

Anyway, we didn't hide it. He gave it credit when explaining the answer to the rest of the team in a meeting. This was around the end of last year. The company since had layoffs and I'm looking for a new job, but I did have it reformat my resume and it did a great job. I've never been great at page-layout stuff, as I'm a plain text warrior.

[–] fede@lemmy.world 9 points 1 year ago

I use it for help with formal language sometimes, but I do not trust it and would never try to pass off a whole generated text as mine. I always review it and try to make it sound my own.

[–] awkwardparticle@kbin.social 9 points 1 year ago

My whole team was playing around with it, and for a few weeks it was working pretty well for a couple of things. Until the answers started to become incorrect and not useful.

[–] a_seattle_ian@lemmy.ml 9 points 1 year ago* (last edited 1 year ago) (4 children)

I'm interested in finding ways to use it, but if I'm writing code I really like the spectrum of different answers on Stack Overflow with comments on WHY they did it that way. Might use it for boring emails though.

[–] ArcticPrincess@lemmy.ml 9 points 1 year ago (1 children)

A friend of mine just used it to write a script for an Amazing Race application video. It was quite good.

How the heck did it access enough source material to be able to imitate something that specific and do it well? Are we humans that predictable?

[–] Badass_panda@lemmy.world 9 points 1 year ago (1 children)

I have very few writing tasks that don't require careful consideration, so it's not super useful in my day to day. But it can be helpful to get the ball rolling on an outline or first draft so I'm not staring at a blank sheet of paper.
