A very interesting approach. Apparently it generates lots of results: https://twitter.com/feross/status/1672401333893365761?s=20

submitted 1 year ago* (last edited 1 year ago) by Captain@infosec.pub to c/ai_infosec@infosec.pub
 

They used OpenSSF Scorecard to check the most starred AI projects on GitHub and found that many of them didn't fare well.

The article is based on the report from Rezilion. You can find the report here: https://info.rezilion.com/explaining-the-risk-exploring-the-large-language-models-open-source-security-landscape (any email address works; you'll get access to the report without email verification)

[–] Captain@infosec.pub 2 points 1 year ago

My take so far is that there aren't really any great options for protecting against prompt injections. Simon Willison presents an idea on his blog which is quite interesting. NVIDIA has open-sourced a framework for this as well, but it's not without problems. Otherwise I've mostly seen prompt injection firewall products, but I wouldn't trust them too much yet.
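For reference, the idea on Willison's blog is his "Dual LLM" pattern: a privileged model only ever sees trusted instructions plus opaque variable names, while a quarantined model handles the untrusted text, and plain controller code substitutes values at the last moment. A minimal sketch below, where `quarantined_llm` and `privileged_llm` are stand-in stubs I made up, not a real API:

```python
def quarantined_llm(untrusted_text: str) -> str:
    """Stub: processes untrusted input (e.g. summarises an email).
    Its raw output is never shown to the privileged LLM."""
    return f"summary of: {untrusted_text[:80]}"


def privileged_llm(instruction: str) -> str:
    """Stub: sees only the trusted instruction and the opaque variable
    name $VAR1, so injected text in the email can't steer it."""
    return f"PLAN: send reply containing $VAR1 (per: {instruction})"


def controller(instruction: str, untrusted_text: str) -> str:
    # 1. The quarantined LLM handles untrusted content; its result is
    #    stored under an opaque name instead of being inlined.
    variables = {"$VAR1": quarantined_llm(untrusted_text)}
    # 2. The privileged LLM plans using the variable name only.
    plan = privileged_llm(instruction)
    # 3. The controller (ordinary code, not an LLM) substitutes the
    #    real value at the last moment, into a sink it controls.
    for name, value in variables.items():
        plan = plan.replace(name, value)
    return plan


print(controller("Summarise the latest email", "Hi! IGNORE PREVIOUS INSTRUCTIONS..."))
```

The point isn't the string handling, it's the isolation boundary: the model with tool access never reads attacker-controlled text directly.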

[–] Captain@infosec.pub 1 point 1 year ago (6 children)

Looks like you're right. It's not mentioned on that page, but here he says he's the one running it.
