pull down to refresh
132 sats \ 1 reply \ @freetx 11 Sep \ on: AI privacy concerns: How exposed is your personal data? AI
I think we haven't even begun to understand the security concerns.
"Prompt injection" is going to be a huge issue. How long until malicious webpages start putting
The problem with LLM context windows, is it can't really distinguish what are instructions from user and what is data its gathered from elsewhere. Its just a jumble of tokens.
This is the malicious usecase, but the far more common will be advertising and consensus forming prompts.
I agree with what you're saying, but I have a small addition:
The runtime for your LLM is software. Like all software that you didn't review line-by-line and actually understand what it does, it needs to run on the basis of least-privilege. 1 You do not give your AI runtime shell or filesystem access on an actual workstation while running under your (power) user. You need to either not give that access, or sandbox it in a container or VM, just like you would with any software you run in a minimally secure environment.
Now, would you give OpenAI the keys to your kingdom? Anthropic? Cursor? Did anyone actually review their code line by line?
Footnotes
-
For example: every few hours I get a warning that Google Play Store (sandboxed) is attempting to perform DCL via memory, which I don't allow for any app on my phone. Nothing is authorized to do that, and especially not Google's Spyware. ↩
reply