LLMs are helpful, but don't use them for anything important
AI models just can't seem to stop making things up. As two recent studies point out, that proclivity underscores prior warnings not to rely on AI advice for anything that really matters.
One thing AI makes up quite often is the names of software packages.
As we noted earlier this year, Lasso Security found that large language models (LLMs), when generating sample source code, will sometimes invent names of software package dependencies that don't exist.
That's scary, because criminals could easily register a package under a name commonly produced by AI services and cram it full of malware. Then they just have to wait for a hapless developer to accept the AI's suggestion and install the poisoned package, pulling the attacker's code into their project as a dependency.
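To make the risk concrete, here is a minimal sketch of the kind of sanity check a developer could run on an AI-suggested dependency before installing it. It queries PyPI's public JSON metadata endpoint (https://pypi.org/pypi/<name>/json); the script name and command-line interface are illustrative, not from the studies discussed here.

```python
import sys
import urllib.error
import urllib.request

# PyPI's public JSON metadata endpoint; returns 404 for unregistered names.
PYPI_URL = "https://pypi.org/pypi/{name}/json"

def package_exists(name: str) -> bool:
    """Return True if a package with this name is registered on PyPI."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no such package -- possibly a hallucinated name
            return False
        raise  # any other HTTP error is a real failure, not an answer

if __name__ == "__main__":
    # Hypothetical usage: python check_pkg.py requests some-ai-suggested-pkg
    for pkg in sys.argv[1:]:
        status = "exists" if package_exists(pkg) else "NOT on PyPI -- do not install"
        print(f"{pkg}: {status}")
```

Note that this only catches names nobody has registered yet. A hallucinated name that an attacker has already squatted will pass the check, so existence alone is no proof of safety; the package's maintainer, release history, and download counts still need vetting.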
Researchers from the University of Texas at San Antonio, the University of Oklahoma, and Virginia Tech recently looked at 16 LLMs used for code generation to explore their penchant for making up package names.