pull down to refresh
31 sats \ 8 replies \ @tolot 25 Aug \ on: How do you use AI? alter_native
In general, I would suggest taking a look at things like the Cheshire Cat framework. With locally installed tools like that, you can link several endpoints (OpenAI, local models, etc.) and also use the LLMs in some of your local tasks in an automated way.
The nice thing about Cheshire Cat is that it has a wide array of extensions, from document summarisation to image handling. With that, you could even write local systemd services (IDK if Mac users can go as deep with whatever process manager macOS uses; launchd, I think) and automate tasks.
Silly example: create a script that takes the RSS feed from the news outlet you use and summarises the news into a single sentence, then display that sentence in the desktop's top bar.
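Just to make that silly example concrete, here's a rough Python sketch. It assumes a local Ollama instance on its default port (11434) with a model called llama3 already pulled, plus the feedparser and requests packages installed; the feed URL and model name are placeholders you'd swap for your own.

```python
# Toy RSS one-liner summariser. Assumptions (adapt to your setup):
# - feedparser and requests are installed (pip install feedparser requests)
# - a local Ollama instance is listening on its default port 11434
# - a model called "llama3" has already been pulled
import feedparser
import requests

FEED_URL = "https://example.com/rss"                 # placeholder feed URL
OLLAMA_URL = "http://localhost:11434/api/generate"   # default Ollama endpoint

def summarise_feed(feed_url: str) -> str:
    feed = feedparser.parse(feed_url)
    # Take the latest headlines and ask the model for a single-sentence digest.
    headlines = "\n".join(entry.title for entry in feed.entries[:10])
    prompt = (
        "Summarise today's news in ONE short sentence, "
        "based only on these headlines:\n" + headlines
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

if __name__ == "__main__":
    # Print the sentence; a menu-bar helper could read this script's output.
    print(summarise_feed(FEED_URL))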
I'm just riffing here, but there's some room for creativity with these tools.
You can feed an LLM your notes files and then ask it to answer questions based on the knowledge base built from those notes.
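Here's a very rough sketch of that idea too. Proper tools (Cheshire Cat included) do this with embeddings and a vector store; this one just uses naive keyword matching to show the shape of it. It assumes plain-text notes in a ~/notes folder and the same local Ollama setup as above; all of those names are assumptions.

```python
# Minimal "ask my notes" sketch: naive keyword retrieval + a local model.
# Assumptions: plain-text notes under ~/notes, a local Ollama instance on
# port 11434 with a "llama3" model, and the requests package installed.
from pathlib import Path
import requests

NOTES_DIR = Path.home() / "notes"                    # placeholder location
OLLAMA_URL = "http://localhost:11434/api/generate"

def relevant_notes(question: str, limit: int = 3) -> str:
    """Very naive retrieval: keep the notes sharing the most words with the question."""
    words = set(question.lower().split())
    scored = []
    for path in NOTES_DIR.glob("*.txt"):
        text = path.read_text(errors="ignore")
        score = sum(1 for w in words if w in text.lower())
        scored.append((score, text))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return "\n---\n".join(text for _, text in scored[:limit])

def ask_notes(question: str) -> str:
    # Stuff the most relevant notes into the prompt and ask the model to
    # answer only from them.
    prompt = (
        "Answer the question using ONLY these notes. "
        "If the notes don't contain the answer, say so.\n\n"
        f"NOTES:\n{relevant_notes(question)}\n\nQUESTION: {question}"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

print(ask_notes("What did I write about lightning channels?"))
```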
To be clear, LLMs are basically giant random variables: they don't know whether they get stuff right, they simply spit out whatever seems right. So always be careful with what you do.
Overall, summarisation and sentence rephrasing are the tasks these kinds of models perform best and, honestly, probably the only things they get right almost every time.
Have fun!
I don't know what most of that means. I have a lot to learn if I'm going to get into this stuff, it seems. Thank you for the suggestions.
reply
Take it like this: a model (like Llama) is a Large Language Model (LLM). LLMs can take various sorts of inputs (text, images, audio waves, whatever) and return some response. The response can be new text, an image, a suggestion of text based on what the input text was, etc.
At their core, they simply take the input data, transform it into a format that they consider readable (encoding) and throw this data into an enormous washing machine that is the model itself.
The model shuffles the data, tries to understand the relationships between its pieces, tries to find order and patterns in what it was given as input.
Once it's done, it has presumably found some sort of order or pattern in the data... it has developed an understanding of what the provided data are and what to return as output.
This process can be helpful for a ton of tasks, particularly text processing (creation, summarisation, rephrasing). Other forms of input can be handled too, but they usually require bigger washing machines (aka models) because there's more stuff to process. For example, a picture is much heavier than a text sentence, so models that handle pictures are generally more computationally demanding (they are usually Convolutional Neural Networks, if you're interested in that).
Models running locally are usually text processing models, because they require relatively less power to run.
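If it helps, here's a toy Python sketch of that encode → model → decode loop. Everything in it is made up purely for illustration: real LLMs use learned tokenizers and billions of parameters, not a six-word lookup table.

```python
# Toy illustration of the encode -> model -> decode loop. Real LLMs use
# learned tokenizers and huge neural networks; this is only for intuition.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}
inv_vocab = {i: w for w, i in vocab.items()}

def encode(text: str) -> list[int]:
    # "Encoding": turn words into the numbers the model actually reads.
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

def toy_model(token_ids: list[int]) -> list[int]:
    # Stand-in for the "washing machine": a real model transforms these IDs
    # through many learned layers; here we just echo them back unchanged.
    return token_ids

def decode(token_ids: list[int]) -> str:
    # Turn the model's output numbers back into words.
    return " ".join(inv_vocab[i] for i in token_ids)

print(encode("the cat sat on the mat"))          # [0, 1, 2, 3, 0, 4]
print(decode(toy_model(encode("the cat sat"))))  # "the cat sat"
```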
This is what it is, at its core.
reply
Thanks for the insights. I have a really hard time wrapping my head around how these things work. It's wild to think where this will all go.
reply
Sure, let's break it down:
- @DesertDave started by telling everyone he's been trying a kind of AI called ollama 2.0 on his computer. He finds it fun but also odd since it sometimes doesn't finish its sentences. He wonders what people usually use it for and whether there are others he can try on his Mac that don't need internet and keep his info safe.
- @tolot responded with a suggestion, saying this AI can help with various tasks if used with a tool named Cheshire Cat. It's like a multi-function tool that can do lots of things, like summarizing articles, handling pictures and even chatting smoothly. One thing @tolot said to remember is that the AI might not always give accurate responses.
- @DesertDave then gave his thoughts, feeling intrigued but admitting he didn't quite understand all of that.
- @tolot then went on to explain that the AI, such as ollama, is sort of like a magic box. It takes in something you give it, like text or pictures, and then gives something back. This could be new text or an idea based on what you gave it. It's like the AI has a brain of its own and tries to find a pattern or order in the stuff it's given. This helps a lot with tasks related to text, like writing new sentences or summarizing. But dealing with pictures usually needs bigger magic boxes since they are more complex.
- Finally, @DesertDave thanked @tolot for the explanations and said it's hard for him to understand all of it. He seems excited but also a little overwhelmed about what's to come with AI.
Made with 🧡 by CASCDR
reply
Great discussion, gentlemen. I gave a talk that covers the fundamentals of how AI works and the underlying matrix algebra and technological history that brought us here.
I'd also ask you to consider checking out our tools. CASCDR is all payable via the Bitcoin Connect plugin with NWC/Alby/spin the wheel, and it's bolt11-based, so it preserves privacy without forcing you to manage all the infra/tech specs.
Some relevant applications:
- RSS Agent - lets you look up, transcribe, and summarize or generate AI content from any podcast episode of your choosing
- General Purpose Transcription + Analysis - analyze any mp3/mp4 of your choosing, paid in sats (blog post from yesterday's update on SN: #660416)
- GPT 3.5 Proxy - a quick-and-dirty proxy you can pay via Lightning for anonymous requests
- Image Generation - same as the GPT proxy but for images
Cheers,
Jim
reply
Thanks Jim. I'll check out your post.
reply