To be fair though, I don't think this is a fundamental weakness of AI agents, only a gap in the training. They could be trained to avoid things that humans intuitively understand to be security violations, and hardcoded safeguards can be built in as well.