this post was submitted on 21 Aug 2023
386 points (94.1% liked)
Technology
Using AI for this is so stupid. There will be a lot of false negatives and positives.
I think it's a pretty good idea: the AI does a first pass, flags potential violations, and sends them to a human for review. It's not like they are just sending people fines directly based on the AI output.
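A minimal sketch of the flag-then-review pipeline described above. All names here (`score_image`, `human_review`, the threshold value) are hypothetical; the article doesn't publish the real system's design.

```python
# Human-in-the-loop triage: the model only *flags* photos,
# a person confirms every flag before any fine is issued.
# Names and threshold are illustrative, not from the article.

REVIEW_THRESHOLD = 0.8  # only high-confidence detections go to a reviewer

def triage(photos, score_image):
    """First pass: keep photos the model scores above the threshold."""
    return [p for p in photos if score_image(p) >= REVIEW_THRESHOLD]

def process(photos, score_image, human_review):
    """Second pass: a human confirms or rejects each flagged photo."""
    fines = []
    for photo in triage(photos, score_image):
        if human_review(photo):
            fines.append(photo)
    return fines
```

The point of the design is that the model's output is only a filter on reviewer workload; the human decision is what actually triggers a fine.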
I'm definitely a fan of better enforcement of traffic rules to improve safety, but using ML (machine learning) systems here is fraught with issues. ML systems tend to learn the human biases that were present in their training data and continue to perpetuate them. I wouldn't be shocked if these traffic systems, for example, disproportionately impacted some racial groups. And if the ML system identifies those groups more frequently, even if the human review were unbiased (unlikely), the outcome would still be biased.
It's important to see good data showing these systems are fair, before they are used in the wild. I wouldn't support a system doing this until I was confident it was unbiased.
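One concrete version of the audit this comment asks for is comparing the system's false-positive rate across demographic groups before deployment. A sketch, with entirely made-up data and group labels:

```python
# Fairness audit sketch: does the system flag innocent drivers
# from one group more often than another? Records and group
# labels below are invented for illustration.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, was_flagged, actually_violating)."""
    flagged_innocent = defaultdict(int)
    innocent = defaultdict(int)
    for group, was_flagged, violating in records:
        if not violating:  # only innocent drivers count toward FPR
            innocent[group] += 1
            if was_flagged:
                flagged_innocent[group] += 1
    return {g: flagged_innocent[g] / innocent[g] for g in innocent}

records = [
    ("A", True,  False), ("A", False, False),
    ("A", False, False), ("A", False, False),
    ("B", True,  False), ("B", True,  False),
    ("B", False, False), ("B", False, False),
]
rates = false_positive_rate_by_group(records)
# here group B's innocent drivers are flagged twice as often as group A's
```

If rates like these diverge significantly between groups, human review downstream can't fix it: reviewers only ever see what the model chose to flag.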
I think the lack of transparency, both about the data used for training and about the actual performance statistics of the model itself, is pretty worrying.
There needs to be regulations around that, because you can’t expect companies to automatically be transparent and forthcoming if they have something to gain by not being so.
This is a really important concern, thanks for bringing it up. I'd really like to know more about what they are doing in this case to try and combat that. Law enforcement in particular feels like an application where managing bias is extremely important.
I would imagine the risk of bias here is much lower than, for example, the predictive policing systems that are already in use in US police departments. Or the bias involved in ML models for making credit decisions. 🙃
Machine learning is a type of artificial intelligence. As is a pathfinding algorithm in a game.
Neural networks were some of the original AI systems dating back decades. Machine learning is a relatively new term for it.
AI is an umbrella term for anything that mimics intelligence.
There's nothing intelligent about it. It's no smarter than a chatbot or a phone's autocorrect. It's a buzzword applied to it by tech bros that want to make a bunch of money off it.
Indeed. That's why it's called artificial intelligence.
Anything that attempts to mimic intelligence is AI.
The field was established in the 50s.
Your definition of it is wrong I'm afraid.
The only people that call it that are people who don't get what AI actually is or don't want to know because they think it's the future. There is exactly nothing intelligent about it. Stop spreading tech bro bullshit, call it machine learning bc that's what it actually is. Or are you really drinking the ML kool-aid hard enough that this is your hill to die on? It's not even as intelligent as a parrot that's learned to recognize colors and materials, it's literally just a souped up cleverbot
Literally the definition, my friend. You just don't know what the term is referring to.
Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of human beings or animals. AI applications include advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Waymo), generative or creative tools (ChatGPT and AI art), and competing at the highest level in strategic games (such as chess and Go).[1]
Artificial intelligence was founded as an academic discipline in 1956.[2] The field went through multiple cycles of optimism[3][4] followed by disappointment and loss of funding,[5][6] but after 2012, when deep learning surpassed all previous AI techniques,[7] there was a vast increase in funding and interest.
The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence (the ability to solve an arbitrary problem) is among the field's long-term goals.[8] To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, probability, and economics.[b] AI also draws upon psychology, linguistics, philosophy, neuroscience and many other fields.[9]
I know exactly what AI is referring to. It's referring to a process that has no intelligence behind it. There is no "field of AI" it's a blatant misnomer, just like when they came up with "hoverboards" that still had wheels. Stop being a tech bro before you embarrass yourself and brag about your bored apes or some shit
Lol ok mate
They will though, in the future. And because it's a black-and-white camera, there will be many false positives. That's what the average driver should fear: that the police just say "yeah, everything's fine" and let the AI loose; this is just the first step toward that. I really doubt it would RELIABLY detect seat belt offenses.
See, this is why I only drive when I am drinking a 20oz coffee and eating a footlong sub. That way, when the AI accuses me of being distracted by my phone, the human it gets sent to for review will be like "oh no, he was simply balancing a sandwich on his lap while he took the lid off to blow on his coffee so it wasn't too hot, the AI must have thought the lid was a phone."
Besides, it also ensures I use a hands-free device for my phone because, face it... I don't have any free hands, I'm busy trying to find where that marinara sauce fell on my shirt when I was eating the last bite of my meatball sub. (Add pepperoni and buffalo sauce.) Have to stay legal, after all.
We have a couple of these cameras in The Netherlands.
We found it quite intrusive to look into people's cars. Therefore the computer flags photos of possible offenses, and a person verifies them.
Unfortunately, the movable camera has a huge lens, and it gets reported to a Waze-like app before they are even finished setting it up.