Let's deploy LLMs everywhere! What could possibly go wrong?

[–] 0xCBE@infosec.pub 1 points 1 year ago (1 children)

This stuff is fascinating to think about.

What if prompt injection is not really solvable? I still see jailbreaks for GPT-4 from time to time.

Let's say we can't validate and sanitize user input to the LLM, so the LLM's output also has to be treated as untrusted.

In that case, security could only sit in front of the connected APIs the LLM is allowed to orchestrate. Would that even scale? How? It feels like we'd have to reduce the nondeterministic nature of LLM outputs to a deterministic set of allowed inputs to those APIs... which is a castration of the whole AI vision?
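
To make it concrete, the sketch below is roughly what I mean: the orchestration layer parses the model's output and refuses anything that isn't on a fixed allowlist of actions with typed parameters. All the names here are hypothetical, it's just an illustration, not any real product's API:

```python
import json

# Deterministic allowlist: action name -> permitted parameters and their types.
# (Hypothetical actions, purely for illustration.)
ALLOWED_ACTIONS = {
    "get_weather": {"city": str},
    "create_ticket": {"title": str, "priority": int},
}

def dispatch(llm_output: str):
    """Parse the LLM's proposed API call and reject anything off-list."""
    try:
        proposal = json.loads(llm_output)  # never eval/exec model output
    except json.JSONDecodeError:
        raise ValueError("LLM output is not valid JSON; rejecting")
    if not isinstance(proposal, dict):
        raise ValueError("Expected a JSON object; rejecting")

    action = proposal.get("action")
    schema = ALLOWED_ACTIONS.get(action)
    if schema is None:
        raise PermissionError(f"Action {action!r} is not on the allowlist")

    params = proposal.get("params", {})
    for name, expected_type in schema.items():
        if not isinstance(params.get(name), expected_type):
            raise ValueError(f"Parameter {name!r} missing or wrong type")
    if set(params) - set(schema):
        raise ValueError("Unexpected extra parameters; rejecting")

    # Only now is the call one of a finite, known-safe set.
    return action, params
```

And the cost is exactly the castration I mean: the model can only ever do what somebody anticipated and put on the list.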

I'm also curious about the state of the art in protecting against prompt injection. Do you have any pointers?

[–] Captain@infosec.pub 2 points 1 year ago

My take so far is that there aren't really any great options for protecting against prompt injection. Simon Willison presents an idea on his blog which is a bit interesting. NVIDIA has open-sourced a framework for this as well (NeMo Guardrails), but it's not without problems. Otherwise I've mostly seen prompt injection firewall products, but I wouldn't trust them too much yet.
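
If I remember Willison's post right, the idea is a dual-LLM pattern: a "privileged" model that can trigger actions but never reads untrusted content, and a "quarantined" model that reads untrusted content but can only hand results back as opaque variables. A rough sketch of that idea; `call_llm()` is a hypothetical placeholder for whatever model API you use, and the variable-passing is heavily simplified:

```python
import uuid

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    raise NotImplementedError("wire up your model API here")

# Opaque token -> untrusted model output. The privileged side never sees values.
quarantined_results: dict[str, str] = {}

def quarantined_llm(untrusted_text: str, task: str) -> str:
    """Runs the model over untrusted content; returns only an opaque token."""
    result = call_llm(f"{task}\n\n{untrusted_text}")
    token = f"$VAR_{uuid.uuid4().hex}"
    quarantined_results[token] = result
    return token

def privileged_llm(user_request: str) -> str:
    """Plans actions from trusted input; may reference tokens, never contents."""
    return call_llm(
        "You may refer to variables like $VAR_xxx but never their contents.\n"
        + user_request
    )

def render(template: str) -> str:
    """Plain code substitutes token values only at the final, non-executable step."""
    for token, value in quarantined_results.items():
        template = template.replace(token, value)
    return template
```

The point is that nothing the quarantined model produces is ever interpreted as an instruction; ordinary code does the substitution at the very end.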