this post was submitted on 30 Jan 2025
10 points (100.0% liked)

Technology

A tech news sub for communists

[...] a publicly accessible ClickHouse database linked to DeepSeek, completely open and unauthenticated, exposing sensitive data. It was hosted at oauth2callback.deepseek.com:9000 and dev.deepseek.com:9000.

This database contained a significant volume of chat history, backend data and sensitive information, including log streams, API Secrets, and operational details.

More critically, the exposure allowed for full database control and potential privilege escalation within the DeepSeek environment, without any authentication or defense mechanism to the outside world.
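
To make concrete what "completely open and unauthenticated" means for a ClickHouse instance, here is a minimal sketch (hypothetical host and table name, assuming the HTTP interface is exposed the way the researchers describe): an anonymous caller can run arbitrary SQL with nothing but a plain HTTP request.

```python
import requests

# Hypothetical endpoint standing in for an exposed ClickHouse HTTP interface
# (8123 is ClickHouse's default HTTP port). No credentials are sent at all.
ENDPOINT = "http://exposed-host.example:8123/"

# ClickHouse accepts plain SQL in the request body; on an unauthenticated
# instance, each of these queries returns data to any anonymous caller.
for query in (
    "SHOW DATABASES",
    "SHOW TABLES FROM default",
    "SELECT * FROM default.log_stream LIMIT 5",  # made-up table name
):
    resp = requests.post(ENDPOINT, data=query, timeout=10)
    print(f"{query!r} -> HTTP {resp.status_code}")
    print(resp.text[:300])
```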

It seems that the Empire has decided to strike.

top 4 comments
maodun@lemmygrad.ml 5 points 4 hours ago

obviously start-ups might want to give some pause/thought to covering their asses in terms of security; it's got less to do with their actual goals/operations, so they might not be thinking about hiring people specifically for infosec. but ngl this article reads more like advertising for this company's services than some "empire gonna try to strike back" thing

itsraining@lemmygrad.ml 3 points 4 hours ago (last edited 4 hours ago)

AFAICT it's the infosec company that found the vulnerability, so of course they're bragging about it like it's the greatest thing, but the timing is still interesting: how quickly deepseek became a target.

Since I saw some interest in deepseek around here, it might be useful to know that at its current stage it has known vulnerabilities which might expose queries, so please take care.

Edit: typo

maodun@lemmygrad.ml 6 points 3 hours ago (last edited 3 hours ago)

internet-connected vulnerabilities seem relatively avoidable for the individual user if you're able to run it locally, which is a feature of deepseek that isn't available with competitor AI services. I thought that was one of the main reasons it's giving openai & chatgpt a run for their money? that you could download it yourself and, with decent enough specs, run it completely locally without an internet connection.
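
(for anyone curious, a minimal sketch of what "run it locally" looks like with one of the small distilled checkpoints published on Hugging Face; assumes the transformers/torch stack and enough RAM, and the prompt is just an example. once the weights are downloaded nothing leaves your machine; the full-size model needs far heavier hardware.)

```python
# minimal local-inference sketch; the model id is one of the small distilled
# R1 checkpoints on Hugging Face, chosen here purely for illustration
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    device_map="auto",  # uses a GPU if available, otherwise CPU
)

out = generator(
    "Explain in one paragraph why running a model locally avoids "
    "server-side data exposure.",
    max_new_tokens=200,
)
print(out[0]["generated_text"])
```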

also, the statistical models used by LLMs don't store data that could e.g. be used to steal someone's identity, so the headline and first few paragraphs of alarmist "security" concerns are misleading. at least from skimming it, they're crowing about accessing certain backends and ""highly sensitive information"" when it's like... chat logs between the devs??? of course that's sensitive info, and the devs themselves should care about securing it.

but the framing is, again, misleading and lowkey clickbait, playing on the ambiguous reading that "this program retains the chat logs you submit to feed its training datasets". the general public doesn't know how tech works; there's so much alarmism about "ai stealing my art/fanfic" because people don't understand that none of that gets stored in the algorithm/model, so it's easy to make the headline read that way to people who already think like that. ergo, it reads like an advert meant to scaremonger people who are relatively tech-illiterate

itsraining@lemmygrad.ml 3 points 3 hours ago

thank you for the analysis