this post was submitted on 15 Jun 2023
35 points (100.0% liked)

Programming

How reliable is AI like ChatGPT in giving you code that you request?

[–] experbia@kbin.social 2 points 1 year ago* (last edited 1 year ago)

I find ChatGPT less useful for application code and more useful for generating boilerplate in the 'configuration' realm: Ansible playbooks or tasks, Nginx configs, Dockerfiles, docker-compose files, etc. Well-bounded things with an abundance of clear documentation.

I generate a lot of first-draft Dockerfiles and docker-compose files through ChatGPT now from a short description of what I want. It's always worth reviewing the output, because sometimes it just invents things that look like a Dockerfile, but it can save a lot of the boring boilerplate writing of volumes, networks, depends_on entries, and the obvious env vars you need to override.
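
The kind of first draft I'm talking about looks roughly like this - every service name, image tag, and credential here is a placeholder I made up for illustration, not something from a real project:

```yaml
# Hypothetical first-draft docker-compose file, the sort of boilerplate ChatGPT is good at.
# All names, images, and credentials are placeholders.
services:
  app:
    image: myorg/myapp:latest
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://app:app@db:5432/app
    ports:
      - "8080:8080"
    networks:
      - backend

  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=app
      - POSTGRES_DB=app
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend

volumes:
  db-data:

networks:
  backend:
```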

I do use Codeium in my VS Code instance, though. It's like a free, more ethical GitHub Copilot, and I've been really, really happy with it. Not so much for writing a whole program; I use it a lot more as a kind of super-autocomplete.

I'll go into a class, find a method that needs a change, and just type a comment like the following; it will basically spit out the authentication logic, which I then give a quick review.

// check the request authentication header against the user service to verify we're allowed to do this
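
What it fills in under that comment is roughly this shape - a sketch from memory, and every name in it (checkRequestAuth, userService, verifyToken) is invented for illustration rather than lifted from a real codebase:

```typescript
// Hypothetical sketch of the kind of completion I get back; all names are made up.

// Stand-in for the user-service client that would already exist in the real codebase.
declare const userService: {
  verifyToken(token: string): Promise<{ valid: boolean; userId?: string }>;
};

interface AuthResult {
  allowed: boolean;
  reason?: string;
}

// check the request authentication header against the user service to verify we're allowed to do this
export async function checkRequestAuth(
  headers: Record<string, string | undefined>
): Promise<AuthResult> {
  const header = headers["authorization"];
  if (!header || !header.startsWith("Bearer ")) {
    return { allowed: false, reason: "missing or malformed Authorization header" };
  }

  const verdict = await userService.verifyToken(header.slice("Bearer ".length));
  return verdict.valid
    ? { allowed: true }
    : { allowed: false, reason: "token rejected by user service" };
}
```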

It's also an amazing "static" debugger - I can highlight particularly convoluted segments of math or recursion or iteration and ask it to explain them. Then I can ask follow-up questions like "Is there any scenario in which totalFound remains at 0?" and it will tell me yes or no and why it thinks that, which is really nice. I save it for instances where I'm reasonably certain the code is correct but want to double-check. Instead of breaking out the paper and pen and reasoning it out, I can ask it for a second opinion, and if it has no doubts, my paranoid mind is put at ease a bit.
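
For a concrete (made-up) example of the kind of snippet I'll highlight and interrogate that way:

```typescript
// Made-up example of the sort of convoluted iteration I'd highlight and ask
// "is there any scenario in which totalFound remains at 0?"
function countMatches(grid: number[][], target: number): number {
  let totalFound = 0;
  // zig-zag walk: left-to-right on even rows, right-to-left on odd rows
  for (let row = 0; row < grid.length; row++) {
    const cols = grid[row].length;
    for (let step = 0; step < cols; step++) {
      const col = row % 2 === 0 ? step : cols - 1 - step;
      if (grid[row][col] === target) {
        totalFound++;
      }
    }
  }
  return totalFound;
}
```

(The answer I'd hope to get back: yes, whenever no cell equals target, or the grid is empty.)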

I've been unimpressed with the ability of any of these "AI" systems to spit out larger volumes of good code. They're more like ADHD, eager-to-please little interns. They'll spit out the first answer that comes to their mind even if it's wrong, and they fall for all kinds of well-known development pitfalls.