this post was submitted on 05 Feb 2024
202 points (84.1% liked)

Asklemmy


Ok, let's give a little bit of context. I will turn 40 in a couple of months, and I have been a C++ software developer for more than 18 years. I enjoy coding, and I enjoy writing "good" code: readable and so on.

However, for the past few months, I have become really afraid for the future of the job I love, given the progress of artificial intelligence. Very often I can't sleep at night because of this.

I fear that my job, while not completely disappearing, will become a very boring one consisting of debugging automatically generated code, or that the job will disappear altogether.

For now, I'm not using AI. I have a few colleagues who do, but I don't want to, for two reasons: one, it removes a part of the coding I like, and two, I have the feeling that using it is sawing off the branch I'm sitting on, if you see what I mean. I fear that in the near future, people not using it will be fired because management will see them as less productive...

Am I the only one feeling this way? I get the impression that all tech people are enthusiastic about AI.

[–] mozz@mbin.grits.dev 23 points 9 months ago (1 children)

I think all jobs that are pure mental labor are under threat to a certain extent from AI.

It's not really certain when real AGI is going to arrive, but it certainly seems possible that it'll be soon, and if you can pay $20/month to replace a six-figure software developer, then yes, a lot of people are in trouble. As with a lot of other revolutions like this, not all of it will be "AI replaces engineer"; some of it will be "engineer who can work with the AI and complement it to be productive replaces engineer who can't."

Of course that's cold comfort once it reaches the point that AI can do it all. If it makes you feel any better, real engineering is much more difficult than a lot of other pure-mental-labor jobs. It'll probably be one of the last to fall, after marketing, accounting, law, business strategy, and a ton of other white-collar jobs. The world will change a lot. Again, I'm not saying this will happen real soon. But it certainly could.

I think we're right up against the cold reality that a lot of the systems that currently run the world don't really care if people are taken care of and have what they need in order to live. A lot of people who aren't blessed with education and the right setup in life have been struggling really badly for quite a long time no matter how hard they work. People like you and me who made it well into adulthood just being able to go to work and that be enough to be okay are, relatively speaking, lucky in the modern world.

I would say you're right to be concerned about this stuff. I think starting to agitate for a better, more just world for all concerned is probably the best thing you can do about it. Trying to hold back the tide of change that's coming doesn't seem real doable without that part changing.

[–] taladar@sh.itjust.works 3 points 9 months ago (1 children)

It’s not really certain when real AGI is going to start to become real, but it certainly seems possible that it’ll be real soon

What makes you say that? The entire field of AI has not made any progress towards AGI since its inception, and if anything the pretty bad results from today's language models suggest that it is a long way off.

[–] mozz@mbin.grits.dev 0 points 9 months ago (1 children)

You would describe "recognizing handwritten digits some of the time" -> "GPT-4 and Midjourney" as no progress in the direction of AGI?

It hasn't reached AGI or any reasonable facsimile yet, no. But up until a few years ago something like ChatGPT seemed completely impossible; then a few big key breakthroughs happened, and now the impossible is possible. It seems by no means out of the question that a few more big breakthroughs could happen on the way to AGI, especially with as much attention and effort as is going into the field now.

[–] jacksilver@lemmy.world 3 points 9 months ago (1 children)

It's not that machine learning isn't making progress; it's just that many people speculate AGI will require a different way of looking at AI. Deep learning, while powerful, doesn't seem like it can be adapted into something that would resemble AGI.

[–] mozz@mbin.grits.dev 2 points 9 months ago* (last edited 9 months ago) (2 children)

You mean, it would take some sort of breakthrough?

(For what it's worth, my guess about how it works generally agrees with yours in terms of real sentience -- it's just that I think (a) neither one of us really knows that for sure, and (b) AGI doesn't require sentience; a sufficiently capable fakery that still has limitations can still upend the world quite a bit.)

[–] jacksilver@lemmy.world 2 points 9 months ago

Yes, and most likely more of a paradigm shift. Deep learning models are, at their core, static statistical models. The main issue isn't the statistical side but the static nature: for AGI this is a significant hurdle, because as the world evolves, or these models simply run into new circumstances, they will fail.

It's largely the reason why autonomous vehicles have sort of hit a standstill. It's the last 1% (what if an intersection is out, what if the road is poorly maintained, etc.) that is so hard for these models, as it requires "thought" and not just input/output.

LLMs have shown that large quantities of data seem to approach some sort of generalized knowledge, but researchers don't necessarily agree on that: https://arxiv.org/abs/2206.07682. So if we can't get to more emergent abilities, it's unlikely AGI is on the way. But as you said, combining and interweaving these systems may get us something close.

[–] taladar@sh.itjust.works 1 points 9 months ago

a sufficiently capable fakery which still has limitations can still upend the world quite a bit

Maybe, but we are essentially throwing petabyte-sized models and lots of compute power at it, and the results are at a level where a three-year-old would do a better job of not giving away that they don't understand what they're talking about.

Don't get me wrong, LLMs and the other recent developments in generative AI are very impressive, but it is becoming increasingly clear that the approach is only barely useful even when we throw about as many computing resources at it as we can afford, which severely limits its potential applications. And even at that level the results are still so bad that you essentially can't trust anything that comes out.

This is very far from being sufficient to fake AGI and has absolutely nothing to do with real AGI.