It sounds like most of the heavy lifting is mimicking OpenAI's API. Are LLMs generic enough that they're otherwise interchangeable? So long as we're using a model that isn't wrapped in a bespoke API, we can just swap models?
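i.e. something roughly like this, where only the base URL and model name change? (Minimal sketch with the OpenAI Python client; the local endpoint and model name below are just placeholders, not a specific deployment.)

```python
# Sketch: pointing the official OpenAI client at any server that mimics
# OpenAI's API. Only base_url and the model name change; the rest of the
# calling code stays the same. URL and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. a local inference server
    api_key="not-needed-locally",          # many local servers ignore the key
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```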
That's pretty cool, and I guess that's the benefit of natural language being the "raw" interface to all these models. For embeddings it seems a little trickier - the models tend to truncate input differently afaict, and the output vectors can vary in the number of dimensions.
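Something like this is what I mean (using OpenAI's own embedding models just as an example of the dimension mismatch):

```python
# Sketch: two embedding models return vectors of different lengths,
# so they aren't drop-in replacements for each other.
from openai import OpenAI

client = OpenAI()

for model in ["text-embedding-3-small", "text-embedding-3-large"]:
    vec = client.embeddings.create(model=model, input="hello world").data[0].embedding
    print(model, len(vec))  # 1536 vs 3072 dimensions

# A vector index built for one dimensionality can't hold the other,
# so switching embedding models usually means re-embedding the whole corpus.
```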