this post was submitted on 03 Apr 2024
960 points (99.4% liked)

Technology


A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given the fact that too many people seem to think applying an AI filter can give them access to secret visual data.

top 50 comments
[–] Stovetop@lemmy.world 212 points 8 months ago (1 children)

"Your honor, the evidence shows quite clearly that the defendent was holding a weapon with his third arm."

[–] Downcount@lemmy.world 172 points 8 months ago* (last edited 8 months ago) (23 children)

If you've ever encountered an AI hallucinating things that simply do not exist, you know how bad the idea of AI-enhanced evidence actually is.

[–] dual_sport_dork@lemmy.world 147 points 8 months ago (27 children)

No computer algorithm can accurately reconstruct data that was never there in the first place.

Ever.

This is an ironclad law, just like the speed of light and the acceleration of gravity. No new technology, no clever tricks, no buzzwords, no software will ever be able to do this.

Ever.

If the data was not there, anything created to fill it in is by its very nature not actually reality. This includes digital zoom, pixel interpolation, motion interpolation, and AI upscaling. It also preemptively includes any future technology that aims to do the same thing, regardless of what it's called.
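A quick way to convince yourself of this is to throw pixels away and try to get them back. Below is a minimal numpy/Pillow sketch (the image is random placeholder data): downscaling and then upscaling returns a plausible-looking blur, never the original.

```python
import numpy as np
from PIL import Image

# A random 64x64 grayscale "image" stands in for real detail.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Downscale to 8x8: 4096 pixels collapse into 64, so most of the data is simply gone.
small = Image.fromarray(original).resize((8, 8), Image.BILINEAR)

# Upscale back to 64x64 with interpolation.
restored = np.asarray(small.resize((64, 64), Image.BICUBIC))

# The result is a smooth guess, not the original data.
error = np.abs(original.astype(int) - restored.astype(int)).mean()
print(f"mean per-pixel error after the round trip: {error:.1f} out of 255")
```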

[–] KairuByte@lemmy.dbzer0.com 56 points 8 months ago (1 children)

One little correction: digital zoom is not something that belongs on that list. It's essentially just cropping the image. That said, I agree that "enhanced" digital zoom should be on it.

[–] ashok36@lemmy.world 21 points 8 months ago (1 children)

Digital zoom is just cropping and enlarging. You're not actually changing any of the data. There may be enhancement applied to the enlarged image afterwards but that's a separate process.

[–] dual_sport_dork@lemmy.world 36 points 8 months ago (1 children)

But the fact remains that digital zoom cannot create details that were invisible in the first place due to the distance from the camera to the subject. Modern implementations of digital zoom always use some manner of interpolation algorithm, even if it's just a simple linear blur from one pixel to the next.

The problem is not how digital zoom actually works, but how people think it works. A lot of people (i.e. [l]users, ordinary non-technical people) still labor under the impression that digital zoom somehow brings the picture "closer" to the subject and can reveal details that were not detectable in the original photo. That's a notion we need to excise from people's heads.
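Roughly, a 2x digital zoom amounts to the following (a sketch using Pillow; the file name is a placeholder, and real camera pipelines layer sharpening and denoising on top, but the principle is the same):

```python
from PIL import Image

def digital_zoom(photo: Image.Image, factor: float = 2.0) -> Image.Image:
    """Crop the centre 1/factor of the frame and blow it back up to full size.

    No new detail is created: the enlarged image is built purely by
    interpolating between pixels that were already present in the crop.
    """
    w, h = photo.size
    crop_w, crop_h = int(w / factor), int(h / factor)
    left, top = (w - crop_w) // 2, (h - crop_h) // 2
    cropped = photo.crop((left, top, left + crop_w, top + crop_h))
    # Bilinear resampling just blends neighbouring pixels to fill the gaps.
    return cropped.resize((w, h), Image.BILINEAR)

# Usage (file name is a placeholder):
# zoomed = digital_zoom(Image.open("scene.jpg"), factor=2.0)
```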

[–] jeeva@lemmy.world 20 points 8 months ago* (last edited 8 months ago) (3 children)

Hold up. Digital zoom is, in all the cases I'm currently aware of, just cropping the available data. That's not reconstruction, it's just losing data.

Otherwise, yep, I'm with you there.

[–] guyrocket@kbin.social 116 points 8 months ago (35 children)

I think we need to STOP calling it "Artificial Intelligence". IMHO that is a VERY misleading name. I do not consider guided pattern recognition to be intelligence.

[–] TurtleJoe@lemmy.world 35 points 8 months ago

A term created in order to vacuum up VC funding for spurious use cases.

[–] UsernameIsTooLon@lemmy.world 27 points 8 months ago (2 children)

It's the new "4k". Just buzzwords to get clicks.

[–] lemann@lemmy.dbzer0.com 19 points 8 months ago (1 children)

My disappointment when I realised "4k" was only 2160p 😔
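For the arithmetic behind that disappointment: consumer "4K" (UHD) is 3840 × 2160, so the "4" refers to the roughly four thousand horizontal pixels, while the older "1080p"-style names count vertical lines. A quick sketch:

```python
resolutions = {
    "1080p (Full HD)": (1920, 1080),
    "2160p (UHD, marketed as 4K)": (3840, 2160),
    "DCI 4K (cinema standard)": (4096, 2160),
}

for name, (w, h) in resolutions.items():
    print(f"{name}: {w} x {h} = {w * h / 1e6:.1f} megapixels")
```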

[–] exocortex@discuss.tchncs.de 13 points 8 months ago* (last edited 8 months ago) (7 children)

On the contrary, it's a very old buzzword!

AI should be called machine learning. Much better. If I had my way, it would be called "fancy curve fitting" henceforth.
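For anyone wondering why "fancy curve fitting" is a fair description, here is a minimal sketch of the idea using numpy. It uses a plain least-squares polynomial fit rather than a neural network, but the spirit is the same: fit parameters to minimise error on training data, then predict on new inputs.

```python
import numpy as np

# "Training data": noisy samples of an unknown function.
rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, size=x.shape)

# "Training": choose the cubic whose coefficients minimise squared error.
coeffs = np.polyfit(x, y, deg=3)

# "Inference": evaluate the fitted curve at a new input.
print(f"prediction at x=0.25: {np.polyval(coeffs, 0.25):.2f}")
```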

[–] Hamartiogonic@sopuli.xyz 19 points 8 months ago* (last edited 8 months ago)

Optical Character Recognition used to be firmly in the realm of AI until it became so common that even the post office uses it. Nowadays OCR is considered so mundane that, instead of being proper AI, it's just another application of a neural network. I guess eventually Large Language Models will be outside the scope of AI too.

[–] emptyother@programming.dev 95 points 8 months ago (14 children)

How long until we get upscalers of various sorts built into tech that shouldn't have it? For bandwidth reduction, for storage compression, or for cost savings. Can we trust what we capture with a digital camera when companies replace a low-quality image of the moon with a professionally taken picture at capture time? Can sports replays be trusted when the ball is upscaled inside the judges' screens? Cheap security cams with "enhanced night vision" might get somebody jailed.

I love the AI tech. But its future worries me.

[–] someguy3@lemmy.world 28 points 8 months ago* (last edited 8 months ago) (2 children)

Dehance! [Click click click.]

[–] Jimmycakes@lemmy.world 18 points 8 months ago (1 children)

It will run wild for the foreseeable future, until the masses stop falling for the gimmicks; then, once the bullshit AI stops making money, it will be reserved for the actual use cases where it's beneficial.

Lol, you think the masses will stop falling for the gimmicks? Just look at the state of the world.

[–] GenderNeutralBro@lemmy.sdf.org 16 points 8 months ago (7 children)

AI-based video codecs are on the way. This isn't necessarily a bad thing because it could be designed to be lossless or at least less lossy than modern codecs. But compression artifacts will likely be harder to identify as such. That's a good thing for film and TV, but a bad thing for, say, security cameras.

The devil's in the details and "AI" is way too broad a term. There are a lot of ways this could be implemented.

[–] jeeva@lemmy.world 13 points 8 months ago (4 children)

I don't think loss is what people are worried about, really - more injecting details that fit the training data but don't exist in the source.

Given the hoopla Hollywood and directors made about frame-interpolation, do you think generated frames will be any better/more popular?

[–] MudMan@fedia.io 13 points 8 months ago* (last edited 8 months ago)

Not all of those are the same thing. AI upscaling for compression in online video may not be any worse than "dumb" compression in terms of loss of data or detail, but you don't want to treat a simple upscale of an image as photographic evidence in a trial. Sports replays and Hawk-Eye technology don't really rely on upscaling; we have ways to track things in an enclosed volume very accurately now that are demonstrably more precise than a human ref looking at them. Whether that's better or worse for the game's pace and excitement is a different question.

The thing is, ML tech isn't a single thing. The tech itself can be used very rigorously. Pretty much every scientific study you see these days uses ML to compile or process images or data. That's not a problem if done correctly. The issue is that everybody assumes "generative AI" chatbots, upscalers and image processors are all that ML is, and people keep trying to apply those things directly in the dumbest possible way, thinking it's basically magic.

I'm not particularly afraid of "AI tech", but I sure am increasingly annoyed at the stupidity and greed of some of the people peddling it, criticising it and using it.

[–] Mango@lemmy.world 66 points 8 months ago (7 children)

Jesus Christ, does this even need to be pointed out!??

[–] Whirling_Cloudburst@lemmy.world 43 points 8 months ago* (last edited 8 months ago) (3 children)

Unfortunately it does need pointing out. Back when I was in college, professors would have to repeatedly tell their students that real-world forensics doesn't work like it does on NCIS. I'm not sure how much things may or may not have changed since then, but with American literacy levels being what they are, I don't suppose they have changed that much.

[–] Stopthatgirl7@lemmy.world 33 points 8 months ago (1 children)

Yes. When people were in full conspiracy mode on Twitter over Kate Middleton, someone took that grainy pic of her in a car and used AI to "enhance" it, then declared it wasn't her because her mole was gone. It got so much traction that people thought the AI-touched-up pic WAS her.

[–] lole@iusearchlinux.fyi 24 points 8 months ago (1 children)

I met a student at university at lunch last week who told me he was stressed out about a homework assignment. He had to write a report with a minimum number of words, so he pasted the text into ChatGPT and asked it how many words the text contained.

I told him that every common text editor has a word count built in, and that ChatGPT is probably not good at counting words (even though it pretends to be).

Turns out that his report was already waaaaay above the minimum word count and even needed to be shortened.

So much for the general population's understanding of AI.

I'm studying at a technical university.
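For the record, the count the student needed is a one-liner in most languages. A minimal Python sketch (plain whitespace splitting, which is roughly what editors' word counters do, though exact rules differ), with placeholder text:

```python
def word_count(text: str) -> int:
    # Split on runs of whitespace and count the pieces.
    return len(text.split())

report = "Placeholder text standing in for the student's report."
print(word_count(report))  # -> 8
```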

[–] altima_neo@lemmy.zip 19 points 8 months ago (1 children)

The layman is very stupid. They hear about all the fantastical shit AI can do and start to assume it's almighty. That's how you wind up with those lawyers who tried using ChatGPT to write a legal brief that was full of bullshit, and who didn't even bother to verify whether it was accurate.

They don't understand it; they only know that the results look good.

[–] laughterlaughter@lemmy.world 17 points 8 months ago (2 children)

There are people who still believe in astrology. So, yes.

[–] douglasg14b@lemmy.world 17 points 8 months ago (1 children)

Of course; not everyone is technologically literate enough to understand how it works.

That should be the default assumption: things should be explained so that others understand them and can make better-informed decisions.

[–] milkjug@lemmy.wildfyre.dev 61 points 8 months ago (2 children)

I’d love to see the “training data” for this model, but I can already predict it will be 99.999% footage of minorities labelled ‘criminal’.

And cops going, "Aha! Even AI thinks minorities are committing all the crime!"

[–] Neato@ttrpg.network 48 points 8 months ago (1 children)

Imagine a prosecution or law enforcement bureau that has trained an AI from scratch on specific stimuli to enhance and clarify grainy images. Even if they all were totally on the up-and-up (they aren't, ACAB), training a generative AI or similar on pictures of guns, drugs, masks, etc for years will lead to internal bias. And since AI makers pretend you can't decipher the logic (I've literally seen compositional/generative AI that shows its work), they'll never realize what it's actually doing.

So then you get innocent CCTV footage that this AI "clarifies", pattern-matching every dark blob into a gun. Black iPhone? Maybe a pistol. Black umbrella folded up at a weird angle? Clearly a rifle. And so on. I'm sure everyone else can think of far more frightening ideas, like auto-completing a face based on previously searched ones, or just plain old institutional racism bias.

[–] rob_t_firefly@lemmy.world 41 points 8 months ago

According to the evidence, the defendant clearly committed the crime with all 17 of his fingers. His lack of remorse is obvious by the fact that he's clearly smiling wider than his own face.

[–] Kolanaki@yiffit.net 32 points 8 months ago

clickity clackity

"ENHANCE"

[–] fidodo@lemmy.world 20 points 8 months ago

It's incredibly obvious when you call the current generation of AI by its full name, generative AI. It's creating data; that's what it's generating.

[–] TurtleJoe@lemmy.world 13 points 8 months ago

Everything that is labeled "AI" is made up. It's all just statistically probable guessing, made by a machine that doesn't know what it is doing.

[–] rottingleaf@lemmy.zip 27 points 8 months ago (1 children)

The fact that it made it that far is really scary.

I'm starting to think that yes, we are going to have some new middle ages before going on with all that "per aspera ad astra" space colonization stuff.

[–] Strobelt@lemmy.world 13 points 8 months ago (4 children)

Aren't we already in a kind of dark age?

People denying science, people scared of diseases and vaccination, people treating anything AI or blockchain as if it were magic, people defending power-hungry, all-promising dictators, people divided into camps each calling the other barbaric. And of course, wars based on religion.

Seems to me we're already in the dark.

[–] AnUnusualRelic@lemmy.world 22 points 8 months ago (6 children)

If they were going to go that way, why not just make it a fully AI court? It would save so much time and money.

Of course it wouldn't be very just, but then regular courts aren't either.

[–] TheBest@midwest.social 21 points 8 months ago* (last edited 8 months ago) (11 children)

This actually opens an interesting debate.

Every photo you take with your phone is post processed. Saturation can be boosted, light levels adjusted, noise removed, night mode, all without you being privy as to what's happening.

Typically people are okay with it because it makes for a better photo, but is it a true representation of the reality it tried to capture? Where do we draw the line on what counts as an AI-enhanced photo or video?

We can currently make the judgement call that a phone's camera is still a fair representation of the truth, but what about when the 4K AI-Powered Night Sight Camera does the same?

My post is only tangentially related to the original article, but I'm still curious what the common consensus is.
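To make the "every photo is post-processed" point concrete, here is a minimal sketch of the kind of adjustments a phone pipeline applies automatically. It uses Pillow on a placeholder file; real pipelines are far more elaborate and run before you ever see the image:

```python
from PIL import Image, ImageEnhance, ImageFilter

photo = Image.open("shot.jpg")  # placeholder file name

# Adjustments the person pressing the shutter never explicitly asked for:
processed = ImageEnhance.Color(photo).enhance(1.3)              # boost saturation ~30%
processed = ImageEnhance.Brightness(processed).enhance(1.1)     # lift brightness ~10%
processed = processed.filter(ImageFilter.MedianFilter(size=3))  # smooth out noise

processed.save("shot_processed.jpg")
```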

[–] GamingChairModel@lemmy.world 13 points 8 months ago (2 children)

Every photo you take with your phone is post processed.

Years ago, I remember looking at satellite photos of some city, and there was a rainbow-colored airplane trail in one of the photos. It was explained that a lot of satellites just use a black-and-white imaging sensor and take three photos in a row while rotating a red/green/blue filter over that sensor, then combine the images digitally into RGB data for a color image. For most things, the process works pretty seamlessly. But for rapidly moving objects, like white airplanes, the delay between capturing the red, green, and blue channels creates artifacts in the image that weren't present in the actual truth of the reality being recorded. Is that specific satellite method all that different from how modern camera sensors process color, through tiny physical RGB filters over specific subpixels?
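A rough numpy sketch of that capture process (placeholder arrays standing in for the three sequential exposures) shows how anything that moves between the red, green and blue exposures picks up coloured fringes that were never in the scene:

```python
import numpy as np

# Three sequential grayscale exposures through red, green and blue filters.
# (Placeholder data; in reality these come from the satellite's sensor.)
red = np.zeros((100, 100))
green = np.zeros((100, 100))
blue = np.zeros((100, 100))

# A bright object (say, a white plane) drifts a few pixels between exposures.
red[50, 40:50] = 1.0
green[50, 43:53] = 1.0
blue[50, 46:56] = 1.0

# Combine the channels into one RGB frame. Where the exposures disagree, the
# plane shows up as separated red, green and blue streaks -- an artifact of
# the capture method, not something that existed in the scene.
rgb = np.dstack([red, green, blue])
print(rgb.shape)  # (100, 100, 3)
```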

Even with conventional photography, even analog film, there are image artifacts that derive from how the photo was taken rather than from what is true of the subject. Bokeh/depth of field, motion blur, rolling shutter, and physical filters all change the resulting image in ways caused by the camera, not by the appearance of the subject. Sometimes that makes for interesting artistic effects. But it isn't truth in itself; it's evidence of some truth that needs to be filtered through an understanding of how the image was captured.

Like the Mitch Hedberg joke:

I think Bigfoot is blurry, that's the problem. It's not the photographer's fault. Bigfoot is blurry, and that's extra scary to me.

So yeah, at a certain point, for evidentiary proof in court, someone will need to establish a chain of custody showing that the image presented is derived from some reliable and truthful method of capturing what actually happened at a particular time and place. For the most part, it's simple today: I took a picture with a normal camera, and I can testify that it came out of the camera like this, without any further editing. As the chain of image creation starts to include more processing between photons hitting the sensor and a digital file being displayed on a screen or printed on paper, we'll need to remain mindful of the points where that chain can be tripped up.

[–] Voyajer@lemmy.world 16 points 8 months ago (1 children)

You'd think it would be obvious you can't submit doctored evidence and expect it to be upheld in court.

[–] rustyfish@lemmy.world 14 points 8 months ago* (last edited 8 months ago)

For example, there was a widespread conspiracy theory that Chris Rock was wearing some kind of face pad when he was slapped by Will Smith at the Academy Awards in 2022. The theory started because people started running screenshots of the slap through image upscalers, believing they could get a better look at what was happening.

Sometimes I think our ancestors shouldn't have made it out of the ocean.
