It feels to me like the llm-to-app interface is both more powerful and riskier than app-to-llm, but, app-to-llm is easier to both standardize and optimize. I think it really depends on what you want to achieve.
There was a nice post on HN this Monday, #1057610, that basically argues that chatbot interfaces suck. I subscribe to that view and feel that prompt writing is inherently inefficient, but it's how the LLMs are trained: to be a chatbot, a companion.
However, like the author of that article, I believe the better application of the technology is not interactive but a background task incorporated into the process, rather than running beside it.
If you want a chatbot, the mechanism I propose will probably hinder adoption, because it requires per-app adoption. It's always cheaper to just circumvent everything and not ask for permission, but then you quickly run into shenanigans like #1052744. I'd really rather not have any unchecked capability that can do this on any of my devices, so the slower adoption is imho worth it.1
Footnotes
One of my favorite things nowadays is getting a "DCL attempted by <bad app> and prevented" message from GrapheneOS, much as I've always loved SELinux despite its complexity. It's always nice to have OS-level (and hardware) protections against naughty software. ↩