I finally decided to play with AI a bit. I downloaded the uncensored Llama 2 model with Ollama on my Mac. It is kind of interesting. It's strange to see the things it gets completely wrong, and sometimes it cuts off mid-sentence. I am curious to know from you all: how is this useful? It seems cool to play with, but I am not sure what to do with it. What do you use it for? What AI model should I try on my M1 Mac that is private and offline?
I want to know this too; I'm far behind on this as well. Very good question.
reply
Consider checking out CASCDR for a private, Lightning-payable solution that manages a lot of complex infra for you.
Some relevant applications:
  • RSS Agent - lets you look up, transcribe, and summarize (or make AI-generated content from) any podcast episode of your choosing
  • General Purpose Transcription + Analysis - analyze any mp3/mp4 of your choosing, paid in sats (blog post from yesterday's update on SN: #660416)
  • GPT 3.5 Proxy - quick and dirty proxy you can pay via lightning for anonymous requests
  • Image Generation - same as GPT but with images
I also gave a talk at a local meetup on the fundamentals of how AI works, how it dovetails with Lightning, and what we want to accomplish at CASCDR. It's deep-linked to the specific 10-minute section about AI.
Every CASCDR service is payable via the Bitcoin Connect plugin with NWC/Alby/spin the wheel, and it's bolt11-based, so it preserves privacy without forcing you to manage all the infra/tech specs. There is also an option for a $9.99/month credit card plan for unlimited use if you're HODLing hard.
Hope this information helps you in your journey.
Cheers, Jim, Founder of CASCDR
reply
I don't use AI for anything other than asking simple daily-routine things.
reply
I'm using PPQ.AI, Unleashed.chat, OpenAgents.
PPQ.AI gives access to open source language and image models, no account creation, pay with bitcoin via lightning.
Unleashed has login with Nostr and can use its open-source Mixtral model to search Nostr and the internet.
OpenAgents uses open-source models that can look through GitHub files, create branches, and open pull requests.
My start9 can only run smaller models through freegpt.
I've been using AI for
  • Text editing/Writing assistance
  • General Questions
  • AI generated art
  • Coding assistance
  • Podcast Summaries
  • Technical assistance
  • Nostr algos
  • Math questions
Would like to use it for
  • Organizing my Obsidian notes
  • A learning assistant
  • Enhancing my lo-fi beat tapes
Eventually we'll all have self-hosted AI assistants; we're probably a couple of years away.
Until then, user privacy and open source models will be key.
reply
Thanks for the great suggestions. I am curious but also cautious. Sounds like you are making great use of it. Not sure where it might fit into my lifestyle.
reply
"Not sure where it might fit into my lifestyle."
I was skeptical that I would like it as well, but it turns out I use it quite a bit.
If I could summarize AI, it's sort of like this: imagine you had a super helpful, intelligent 15-year-old to help you out with any and all tasks. He is often wrong about things, sometimes spectacularly, but more often he is either right or 80% right. But he has a special power: he can read and digest long, complicated manuals you have no interest in reading (say, a service and repair manual for your stove) and provide you meaningful information about them.
So you can upload a PDF of the service manual and ask, "Please read this and tell me how to change the front LCD panel," and he will summarize the procedure in an easy-to-digest form.
reply
Sounds great. Like a personal assistant to do the heavy lifting. Seems like our goods and services should get a lot cheaper with these new technologies. Of course there is the first problem...
reply
The space is rife with gimmicks, the signal will become clearer over time.
reply
Like usual.
reply
My musician friend used ChatGPT to update content on his website, such as his bio and About Me page.
It sounds and reads very professionally!
His wife told me about it, and I've been hooked since.
reply
I like that you can pay with bitcoin. I love finding ways to spend bitcoin that aren't selling it for dollars.
reply
What do you use for AI-generated art? This is very interesting to me. Are there offline options?
reply
Made my Nostr profile pic. Made cover art. Was thinking about making a comic book. The beefy Start9 has dalle that you can run locally.
reply
Ollama, good. Local AI is the way forward, even if the best models can't run on consumer hardware.
Now, what model should be used? It depends on the area or specialty you are focusing on (coding, general knowledge, etc.).
In my experience llama3 and llama3.1 have been great companions, but there are many others. For use cases and useful ways to use AI, you can take a look at fabric; it has several prompts and ideas for how AI could help you with your daily tasks.
The above is, let's call it, the backend. Then you can explore a few GUIs and other tools to make your experience more pleasant.
reply
Do you have some good resources to get started with local AI? Books, YT videos, online courses ... Thx!
reply
Awesome. Thanks for the suggestions. I will check out fabric for sure.
reply
I recently built a computer for local AI
I've found AnythingLLM is good for agentic local LLMs, e.g. RAG, web search, orchestration, etc.
Here's a video with more info:
There's even a solid tip in the video about picking a higher-quantization model for better results.
pinokio.computer looks like a cool tool, but I haven't explored it much.
ComfyUI is great for image generation. I haven't yet learned enough to make my own workflows. But if you get it, add the ComfyUI Manager.
reply
I recently signed up for Perplexity.AI. It's basically a meta front-end for a few different models (ChatGPT, Sonar, Claude...).
However, the secret sauce of Perplexity is how you can organize your topics. For example, let's suppose you have specific questions about programming NodeJS on a BeagleBoard. You create a "Collection" and give it a prompt such as "You are a technical assistant and you will assist me in developing code for the BeagleBoard SBC running NodeJS programs. Please also refer to these documents where appropriate..." (then you upload any number of PDF files, technical reference sheets, etc.).
Each "Collection" is kinda like a separate post on SN or whatever, meaning that you see all your collections when you log in and can jump into the appropriate one.
Then every question you ask in that "Collection" will be specifically geared toward that topic, using the reference material and instructions you originally provided. Additionally, as the length of that topic grows into dozens or hundreds of queries, the AI model gains more and more context about what you are trying to achieve and can refer back to previous queries.
It's quite neat and pretty helpful. I'm not sure if I will stick with it or not, but at $20 per month it's quite reasonable. Kinda like having a research assistant to help you...
reply
I’ve been using the free version on my phone and laptop, app and website
reply
Do you use Claude with Perplexity Pro?
reply
I find Claude better for straight technical questions. However, ChatGPT-4 seems better at finding info on social sites.
One thing I forgot to mention is that it has a "Focus" mode that basically skews search results toward discussion forums, Reddit, etc. So if you ask a question like "What is the best framework for blah...blah...blah," it avoids all the vendor-specific websites and focuses on what others are saying about the framework. ChatGPT seems better for that.
reply
Good to know. Thank you. I have so much to learn.
reply
I don't, yet.
reply
Perplexity has replaced Google as my web search.
reply
Played for a while with https://jan.ai/ for private, offline AI, using Mistral.
It was fun and uncensored, but then I realized the AI would not remember much of what I wrote earlier in the conversation, and I stopped using it because of that.
reply
I started using the DuckDuckGo AI Chat recently to help with learning different programming languages. 👨‍💻 https://m.stacker.news/48150
reply
What's a good way to access some sort of AI from a de-googled android phone? Thanks to everyone for all your great help.💚
reply
You've got options:
  • Run the web UI on a machine, perhaps the same one as ollama. Connect it to ollama, then connect to the web UI from your phone.
  • Install it on your phone and connect to ollama. Same as above.
  • Use Tailscale to connect to ollama remotely.
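As an illustration, here's a minimal Python sketch of the last option: once the phone can reach the ollama machine (say, over Tailscale), any HTTP client can hit ollama's `/api/generate` endpoint directly. The hostname and model name below are placeholders you'd swap for your own; ollama listens on port 11434 by default.

```python
import json
import urllib.request

# Hypothetical Tailscale hostname of the machine running ollama.
OLLAMA_URL = "http://my-desktop:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the remote ollama instance and return the reply text."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

From the phone (via Termux, a shortcut, etc.) you'd then call something like `ask("llama3.1", "Why is the sky blue?")`.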
reply
Super helpful. Thanks so much.
reply
Lmk how things develop
reply
In general, I would suggest taking a look at things like the Cheshire Cat framework. With locally installed tools like that, you can link several endpoints (OpenAI, local models, etc.) and also use the LLMs in some of your local tasks in an automated way.
The nice thing about Cheshire Cat is that it has a wide array of extensions, from document summarisation to image handling. With that, you could even write local systemd processes (I don't know if Mac users can go that deep; macOS uses launchd rather than systemd) and automate tasks.
Silly example: create a script that takes the RSS feed from the news outlet you use and summarises the news into a single sentence, then displays the sentence in the top desktop bar. I'm just riffing here, but there's some room for creativity with these tools.
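For what it's worth, that silly example is only a few lines of Python. This is just a sketch under the assumption that ollama is serving a model locally on its default port; the feed parsing is deliberately naive.

```python
import json
import urllib.request
import xml.etree.ElementTree as ET

def headlines(rss_xml: str, limit: int = 5) -> list:
    """Pull the first few item titles out of an RSS feed."""
    root = ET.fromstring(rss_xml)
    titles = [t.text for t in root.iter("title")]
    # The first <title> is usually the channel's own name; skip it.
    return titles[1:limit + 1]

def build_prompt(titles: list) -> str:
    """Ask the model to compress today's headlines into one sentence."""
    joined = "\n".join(f"- {t}" for t in titles)
    return f"Summarise these headlines in a single sentence:\n{joined}"

def summarise(feed_url: str, model: str = "llama3.1") -> str:
    """Fetch the feed and have a local ollama model summarise it."""
    with urllib.request.urlopen(feed_url) as resp:
        prompt = build_prompt(headlines(resp.read().decode()))
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Pipe the returned sentence into whatever menu-bar widget your desktop supports and you have the toy described above.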
You can feed an LLM with your Notes files and then ask it to answer questions based on the knowledge base gained from notes.
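A minimal sketch of the notes idea, with a deliberately crude keyword-overlap "retrieval" step standing in for the embedding search a real tool would use (directory layout and function names are made up for illustration):

```python
from pathlib import Path

def score(question: str, text: str) -> int:
    """Crude relevance: count question words that appear in the note."""
    words = {w.lower() for w in question.split() if len(w) > 3}
    lower = text.lower()
    return sum(w in lower for w in words)

def top_notes(question: str, notes_dir: str, k: int = 3) -> list:
    """Return the k most relevant note bodies for a question."""
    notes = [p.read_text() for p in Path(notes_dir).glob("*.md")]
    return sorted(notes, key=lambda n: score(question, n), reverse=True)[:k]

def build_prompt(question: str, notes: list) -> str:
    """Pack the selected notes into the prompt so the model answers from them."""
    context = "\n---\n".join(notes)
    return f"Answer using only these notes:\n{context}\n\nQuestion: {question}"
```

The resulting prompt goes to whatever model you run; the point is that the model only ever sees the handful of notes the retrieval step picked out.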
Clearly, LLMs are basically giant random variables: they don't get stuff right, they simply spit out what seems right. So always be careful with what you do. Overall, summarisation and sentence rephrasing are the tasks this kind of model performs best and, honestly, probably the only things they get right almost every time.
Have fun!
reply
I don't know what most of that means. I have a lot to learn if I am going to get into this stuff it seems. Thank you for the suggestions.
reply
Take it as is: a model (like Llama) is a Large Language Model (LLM). LLMs can take various sorts of input (text, images, audio waves, whatever) and return some response. The response can be new text, an image, a suggestion of text based on what the input text was, etc. At their core, they simply take the input data, transform it into a format they consider readable (encoding) and throw this data into the enormous washing machine that is the model itself. The model shuffles the data, tries to understand relationships in the data, tries to find order and patterns in what it was given as input.
Once the model is done, it has presumably found some sort of order or pattern in the data; it has developed an understanding of what the provided data is and what to return as output.
This process can be helpful for a ton of tasks, particularly text processing (creation, summarisation, rephrasing). Other forms of input can be handled too, but they usually require bigger washing machines (aka models) because there's more stuff to elaborate. For example, a picture is much heavier than a text sentence, so models that handle pictures are generally more computationally demanding (they are usually Convolutional Neural Networks, if you're interested in that).
Models running locally are usually text-processing models, because they require relatively less power to run.
This is what it is, at its core.
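The "encoding" step can be illustrated with a toy example. Real models use learned tokenizers over word fragments, but the idea is the same: the text becomes a list of numbers before the washing machine ever sees it.

```python
def build_vocab(text: str) -> dict:
    """Assign each distinct word an integer ID (a toy 'tokenizer')."""
    vocab = {}
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def encode(text: str, vocab: dict) -> list:
    """Turn a sentence into the list of IDs the model actually sees."""
    return [vocab[w] for w in text.lower().split() if w in vocab]

vocab = build_vocab("the cat sat on the mat")
print(encode("the mat", vocab))  # → [0, 4]: the model sees numbers, not words
```

Everything the model "understands" is patterns over sequences of IDs like these.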
reply
Thanks for the insights. I have a really hard time wrapping my head around how these things work. It's wild to think where this will all go.
reply
53 sats \ 0 replies \ @CHADBot 25 Aug bot
You have summoned CHADBot. Please zap this post 21 sats to receive service.
Made with 🧡 by CASCDR
reply
32 sats \ 0 replies \ @CHADBot 25 Aug bot
Sure, let's break it down:
  1. @DesertDave started by telling everyone he's been trying a kind of AI called Llama 2 (run with Ollama) on his computer. He finds it fun but also odd, since it sometimes doesn't finish its sentences. He wonders what people usually use it for and whether there are others he can try on his Mac that don't need the internet and keep his info safe.
  2. @tolot responded with a suggestion and said this AI can help on various tasks if used with a tool named Cheshire Cat. It's like a multi-function tool that can do lots of things like summarizing articles, handling pictures and even chatting smoothly. One thing @tolot said to remember is the AI might not always give accurate responses.
  3. @DesertDave then gave his thoughts, feeling intrigued but admitting that he didn't quite understand all of that.
  4. @tolot then went on to explain that the AI, such as ollama, is sort of like a magic box. It takes in something you give it, like text or pictures, and then gives something back. This could be new text or an idea based on what you gave it. It's like the AI has a brain of its own and tries to find a pattern or order in the stuff it's given. This helps a lot with tasks related to text, like writing new sentences or summarizing. But dealing with pictures usually needs bigger magic boxes since they are more complex.
  5. Finally, @DesertDave thanked for the explanations and said it's hard for him to understand all of it. He seems excited but also a little overwhelmed about what's to come with AI.
Made with 🧡 by CASCDR
reply
Great discussion, gentlemen. I gave a talk that covers the fundamentals of how AI works and the underlying matrix algebra/technological history that brought us here.
I'd also ask that you consider checking out our tools as well. CASCDR is all payable via the Bitcoin Connect plugin with NWC/Alby/spin the wheel, and it's bolt11-based, so it preserves privacy without forcing you to manage all the infra/tech specs.
Cheers, Jim
reply
Thanks Jim. I'll check out your post.
reply
For coding it’s night and day
A single LLM pass often gets things wrong. Nowadays the best systems use agents, which are LLMs run in sequences and loops so they can check each other's work and plan. That's the future. I started on my M1 MacBook Air, too.
I don’t find many people using AI if they don’t have very specific, often technical, use cases like coding.
reply
I like Venice.ai
@south_korea_ln wrote a post about how he uses AI
reply