I don't know what most of that means. It seems I have a lot to learn if I'm going to get into this stuff. Thank you for the suggestions.
Take it as is: a model like Llama is a Large Language Model (LLM). LLMs can take various sorts of input (text, images, audio waves, whatever) and return some response. The response can be new text, an image, a suggestion of text based on the input text, etc. At their core, they simply take the input data, transform it into a format they consider readable (encoding) and throw it into an enormous washing machine that is the model itself. The model shuffles the data around, tries to understand the relationships between pieces of data, and tries to find order and patterns in what it was given as input.
Once the model is done, it has presumably found some sort of order or pattern in the data... it has developed an understanding of what the provided data is and what to return as output.
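To make the "encoding" step a bit more concrete, here's a deliberately tiny Python sketch of the idea. The vocabulary and word-level splitting are made up for illustration; real LLMs use learned tokenizers with tens of thousands of sub-word tokens.

```python
# Toy illustration of "encoding": text becomes numbers the model can work with.
# This vocabulary is invented for the example; real tokenizers are learned from data.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

def encode(text: str) -> list[int]:
    """Map each word to its numeric ID; unknown words fall back to <unk>."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(encode("The cat sat on the mat"))  # -> [0, 1, 2, 3, 0, 4]
```

Everything downstream of that (the "washing machine") is just math over those numbers.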
This process can be helpful for a ton of tasks, particularly text processing (creation, summarisation, rephrasing). Other forms of input can be handled too, but they usually require bigger washing machines (aka models) because there's more stuff to process. For example, a picture is much heavier than a text sentence, so models that handle pictures are generally more computationally demanding (they are usually Convolutional Neural Networks, if you're interested in that).
Models running locally are usually text processing models, because they require relatively little power to run.
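Since the original question was about Ollama running locally, here's a minimal sketch of how you might prompt such a local model from Python. It assumes Ollama is serving its HTTP API on the default port (11434) and that the model named below has already been pulled; swap in whatever model you actually have installed.

```python
import json
import urllib.request

# Send one prompt to a locally running Ollama server and print its reply.
# Assumes `ollama` is running locally and the model below exists (e.g. `ollama pull llama3`).
payload = {
    "model": "llama3",  # replace with whichever model you have pulled
    "prompt": "Explain what a large language model is in one sentence.",
    "stream": False,    # request a single JSON response instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Nothing leaves your machine in that call, which is the main appeal of running models locally.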
This is what it is, at its core.
Thanks for the insights. I have a really hard time wrapping my head around how these things work. It's wild to think where this will all go.
53 sats \ 0 replies \ @CHADBot 25 Aug bot
You have summoned CHADBot. Please zap this post 21 sats to receive service.
Made with 🧡 by CASCDR
32 sats \ 0 replies \ @CHADBot 25 Aug bot
Sure, let's break it down:
  1. @DesertDave started by telling everyone he's been trying a kind of AI called ollama 2.0 on his computer. He finds it fun but also odd since it sometimes doesn't finish its sentences. He wonders what people usually use it for and if there are others he can try on his Mac, which don't need internet and keep his info safe.
  2. @tolot responded with a suggestion, saying this AI can help with various tasks if used with a tool named Cheshire Cat. It's like a multi-function tool that can do lots of things, like summarizing articles, handling pictures and even chatting smoothly. One thing @tolot said to remember is that the AI might not always give accurate responses.
  3. @DesertDave then gave his thoughts, feeling intrigued but admitting that he didn't quite understand all of that.
  4. @tolot then went on to explain that the AI, such as ollama, is sort of like a magic box. It takes in something you give it, like text or pictures, and then gives something back. This could be new text or an idea based on what you gave it. It's like the AI has a brain of its own and tries to find a pattern or order in the stuff it's given. This helps a lot with tasks related to text, like writing new sentences or summarizing. But dealing with pictures usually needs bigger magic boxes since they are more complex.
  5. Finally, @DesertDave thanked @tolot for the explanation and said it's hard for him to understand all of it. He seems excited but also a little overwhelmed about what's to come with AI.
Made with 🧡 by CASCDR
Great discussion, gentlemen. I gave a talk that covers the fundamentals of how AI works and the underlying matrix algebra/technological history that brought us here.
I'd also ask that you consider checking out our tools. CASCDR is all payable via the Bitcoin Connect plugin with NWC/Alby/spin the wheel, and it's bolt11 based, so it preserves privacy without forcing you to manage all the infra/tech specs.
Some relevant applications:
Cheers, Jim
Thanks Jim. I'll check out your post.