I finally decided to play with AI a bit. I downloaded the Llama 2 uncensored model via Ollama on my Mac. It is kind of interesting. Strange to see the things it gets completely wrong, and sometimes it cuts off mid-sentence. I am curious to know from you all: how is this useful? It seems cool to play with, but I am not sure what to do with it. What do you use it for? What AI model should I try on my M1 Mac that is private and offline?
I also want to know this. I am also far behind in this. Very good question.
Consider checking out CASCDR for a private, lightning-payable solution that manages a lot of complex infra for you.
https://m.stacker.news/48196
Some relevant screenshots:
https://m.stacker.news/48199
https://m.stacker.news/48197
https://m.stacker.news/48198
https://m.stacker.news/48200
I also gave a talk at a local meetup on the fundamentals of how AI works, how it dovetails with Lightning, and what we want to accomplish at CASCDR. Deep-linked to the specific 10-minute section about AI.
Every CASCDR service is payable via the Bitcoin Connect plugin with NWC/Alby/spin the wheel, and it's bolt11-based, so it preserves privacy without forcing you to manage all the infra/tech specs. There is also an option for a $9.99/month credit card plan for unlimited use if you're HODLing hard.
Hope this information helps you in your journey.
Cheers,
Jim, Founder of CASCDR
I recently signed up for Perplexity.AI - it's basically a meta front-end for a few different models (ChatGPT, Sonar, Claude...)
However, the secret sauce of Perplexity is how you can organize your topics. So for example, let's suppose you have specific questions about programming NodeJS on a BeagleBoard. You create a "Collection" and give it a prompt such as "You are a technical assistant and you will assist me in developing code for the BeagleBoard SBC running NodeJS programs. Please also refer to these documents where appropriate...." (then you upload any number of PDF files, technical reference sheets, etc.)
Each "Collection" is kinda like a separate post on SN or whatever...meaning that you see all your collections when you log in and can jump into the appropriate one.
Then every question you ask in that "Collection" will be specifically geared toward that topic -- using the additional reference material and instructions you originally provided. Additionally, as the length of that topic grows into dozens or hundreds of queries, the AI model gains more and more context about what you are trying to achieve and can refer back to previous queries.
It's quite neat and pretty helpful. I'm not sure if I will stick with it or not, but at $20 per month it's quite reasonable. Kinda like having a research assistant to help you....
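If you're curious what a "Collection" is roughly doing behind the scenes, here's a minimal sketch: standing instructions plus your uploaded reference material get prepended to every question you ask. The message format below mimics common chat APIs; Perplexity's actual internals are an assumption here, not documented behavior.

```python
# Sketch: what a "Collection" roughly does - prepend standing instructions
# and reference documents to every single question in the topic.
# The role/content message shape mimics common chat APIs.

def build_messages(instructions: str, documents: list[str], question: str) -> list[dict]:
    """Combine collection-level context with one user question."""
    context = "\n\n".join(documents)
    return [
        {"role": "system",
         "content": f"{instructions}\n\nReference material:\n{context}"},
        {"role": "user", "content": question},
    ]

# Hypothetical usage matching the BeagleBoard example above:
msgs = build_messages(
    "You are a technical assistant for BeagleBoard SBC + NodeJS development.",
    ["<excerpt from an uploaded reference PDF>"],
    "How do I toggle a GPIO pin from NodeJS?",
)
```

The point is that you only type the short question; the collection supplies the rest of the context every time.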
Do you use Claude with Perplexity Pro?
I find Claude better for straight technical questions. However, ChatGPT-4 seems better at finding info on social sites.
One thing I forgot to mention is that it has a "Focus" mode that basically skews search results toward discussion forums, Reddit, etc. So if you ask a question like "What is the best framework for blah...blah...blah," it avoids all the vendor-specific websites and focuses on what others are saying about the framework. ChatGPT seems better for that.
Good to know. Thank you. I have so much to learn.
I've been using the free version on my phone and laptop, app and website.
I'm using PPQ.AI, Unleashed.chat, OpenAgents.
PPQ.AI gives access to open source language and image models, no account creation, pay with bitcoin via lightning.
Unleashed has login with Nostr and can use its open source Mixtral model to search Nostr and the internet.
OpenAgents uses open source models that can look through github files, create branches and pull requests.
My start9 can only run smaller models through freegpt.
I've been using AI for
Would like to use it for
Eventually we'll all have self-hosted AI assistants; we're probably a couple of years away.
Until then, user privacy and open source models will be key.
Thanks for the great suggestions. I am curious but also cautious. Sounds like you are making great use of it. Not sure where it might fit into my lifestyle.
I was skeptical that I would like it as well, but it turns out I use it quite a bit.
If I could summarize AI, it's sort of like this....imagine if you had a super helpful, intelligent 15-year-old to help you out with any/all tasks. He is often wrong about things, sometimes spectacularly, but more often he is either right or 80% right...but he has a special power: he can read and digest long, complicated manuals you have no interest in reading (say, a Service and Repair Manual for your stove) and provide you meaningful data about them.
So you can upload a PDF of the service manual and ask: "Please read this and tell me how to change the front LCD panel"....and he will summarize the procedure in an easy-to-digest form.
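A toy version of that "upload the manual and ask" trick: split the manual text into chunks, find the chunk most relevant to your question, and hand only that chunk to the model. Real tools use embeddings for the relevance step; the word-overlap scoring below is just a sketch to show the idea.

```python
# Toy retrieval step for "ask questions about a long manual":
# chunk the text, then pick the chunk sharing the most words with the question.
# (Real tools use embedding similarity instead of raw word overlap.)

def best_chunk(manual_text: str, question: str, chunk_size: int = 50) -> str:
    words = manual_text.split()
    chunks = [" ".join(words[i:i + chunk_size])
              for i in range(0, len(words), chunk_size)]
    q_words = set(question.lower().split())
    # score each chunk by how many question words it contains
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))
```

You'd then paste the winning chunk into the prompt along with the question, instead of the whole manual.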
Sounds great. Like a personal assistant to do the heavy lifting. Seems like our goods and services should get a lot cheaper with these new technologies. Of course there is the first problem...
The space is rife with gimmicks, the signal will become clearer over time.
Like usual.
My musician friend used ChatGPT to update content on his website, such as his bio or About Me page.
Sounds and reads very professional!
His wife told me about it and since then I've been hooked
What do you use for AI-generated art? This is very interesting to me. Are there offline options?
Made my nostr profile pic
Made cover art
Was thinking about making a comic book
The beefy Start9 has DALL-E that you can run locally.
I like that you can pay with bitcoin. I love finding ways to spend bitcoin that aren't selling it for dollars.
In general, I would suggest taking a look at things like the Cheshire Cat framework. With locally installed tools like that, you can link several endpoints (OpenAI, local models, etc.) and also use the LLMs in some of your local tasks in an automated way.
The nice thing about Cheshire Cat is that it has a wide array of extensions, from document summarisation to image handling. With that, you could even write local systemd services (on a Mac the equivalent process manager is launchd) and automate tasks.
Silly example: create a script that takes the RSS feed from the News outlet you use and summarises the news into a single sentence. Then display the sentence in the top desktop bar.
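That RSS idea could be sketched roughly like this, assuming a local Ollama instance on its default port; the model name is an example, and you'd wire the output into your desktop bar separately.

```python
# Sketch of the RSS-summary idea: pull headlines from a feed and ask a
# local model for a one-sentence digest. The Ollama endpoint and model
# name are assumptions for your setup.
import json
import urllib.request
import xml.etree.ElementTree as ET

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama port

def feed_titles(rss_xml: str) -> list[str]:
    """Extract <item> titles from an RSS 2.0 feed string."""
    root = ET.fromstring(rss_xml)
    return [t.text for t in root.iter("title")][1:]  # skip the channel title

def summarize(titles: list[str], model: str = "llama3") -> str:
    """Ask a local Ollama model to compress the headlines into one sentence."""
    prompt = "Summarize these headlines in one sentence:\n" + "\n".join(titles)
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": model, "prompt": prompt,
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()
```

From there it's a cron job (or launchd agent on a Mac) that fetches the feed, calls `summarize`, and writes the sentence wherever your menu bar widget reads it.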
I'm just riffing here, but there's some room for creativity with these tools.
You can feed an LLM with your Notes files and then ask it to answer questions based on the knowledge base gained from notes.
Clearly, LLMs are basically giant random variables; they don't always get stuff right, they simply spit out what seems right. So always be careful with what you do.
Overall, summarization and sentence rephrasing are the tasks these kinds of models perform best and, honestly, probably the only things they almost always get right.
Have fun!
I don't know what most of that means. I have a lot to learn if I am going to get into this stuff it seems. Thank you for the suggestions.
Take it as is: a model (like Llama) is a Large Language Model (LLM). LLMs can take various sorts of inputs (text, images, audio waves, whatever) and return some response. The response can be new text, an image, a suggestion of text based on what the input text was, etc.
At their core, they simply take the input data, transform it into a format that they consider readable (encoding), and throw this data into an enormous washing machine that is the model itself.
The model shuffles data, tries to understand relationships between data, and tries to find order and patterns in what it was given as input.
Once the model is done, it has presumably found some sort of order or pattern in the data...it has developed an understanding of what the provided data is and what to return as output.
This process can be helpful for a ton of tasks, particularly text processing (creation, summarisation, rephrasing). Other forms of input can be handled too, but they usually require bigger washing machines (aka models) because there's more stuff to process. For example, a picture is much heavier than a text sentence, so models that handle pictures are generally more computationally demanding (they are usually Convolutional Neural Networks, if you're interested in that).
Models running locally are usually text processing models, because it requires relatively less power to run them.
This is what it is, at its core.
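To make the "encode, find patterns, predict" loop concrete, here's a deliberately tiny toy: characters are encoded as numbers, the "training" just counts which character follows which, and prediction returns the most common follower. Real LLMs use learned tokenizers and neural networks; this is only an illustration of the shape of the process.

```python
# Toy illustration of the "washing machine": encode text to numbers,
# learn patterns (here, simple bigram counts), and predict a next token.
# Real LLMs are vastly more complex; this just shows the shape of the loop.
from collections import Counter, defaultdict

def encode(text: str) -> list[int]:
    """Naive encoding step: each character becomes a number."""
    return [ord(c) for c in text]

def train_bigrams(text: str) -> dict:
    """Pattern-finding step: count which character follows which."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts: dict, char: str) -> str:
    """Output step: return the most common follower of `char`."""
    return counts[char].most_common(1)[0][0]

model = train_bigrams("abababab")
predict_next(model, "a")  # "b" - the only character ever seen after "a"
```

Scale the "pattern" from character pairs up to billions of learned parameters and you have the rough intuition for an LLM.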
Thanks for the insights. I have a really hard time wrapping my head around how these things work. It's wild to think where this will all go.
@CHADBot /eli5
You have summoned CHADBot. Please zap this post 21 sats to receive service.
Made with 🧡 by CASCDR
Sure, let's break it down:
Great discussion, gentlemen. I gave a talk that covers the fundamentals of how AI works and the underlying matrix algebra/technological history that brought us here.
I'd also ask you to consider checking out our tools as well. CASCDR is all payable via the Bitcoin Connect plugin with NWC/Alby/spin the wheel, and it's bolt11-based, so it preserves privacy without forcing you to manage all the infra/tech specs.
Cheers,
Jim
Thanks Jim. I'll check out your post.
Ollama, good. Local AI is the way forward, even if the best models can't run on consumer hardware.
Now, what model should you use? It depends on the area or specialty you are focusing on (coding, general knowledge, etc.).
In my experience, llama3 or llama3.1 have been great companions, but there are many others. For use cases and useful ways to use AI, you can take a look at fabric; it has several prompts and ideas for how AI could help you in your daily tasks. The above is, let's call it, the backend. Then you can explore a few GUIs and other tools to make your experience more pleasant.
Do you have some good resources to get started with local AI? Books, YT videos, online courses ... Thx!
Awesome. Thanks for the suggestions. I will check out fabric for sure.
I don't use AI for anything else than asking simple daily routine things.
I recently built a computer for local AI
I've found AnythingLLM is good for agent local LLMs. e.g. RAG, Web Search, orchestration, etc.
Heres a video with more info: https://www.youtube.com/watch?v=4UFrVvy7VlA
There's even a solid tip in the video about picking a higher quantized model for better results.
pinokio.computer looks like a cool tool, but I haven't explored it much.
ComfyUI is great for image generation. I haven't yet learned enough to make my own workflows. But if you get it, add comfy ui manager.
What's a good way to access some sort of AI from a de-googled android phone? Thanks to everyone for all your great help.💚
You got options
Open WebUI
https://github.com/open-webui/open-webui
Run this on a machine, perhaps the same one as ollama. Connect it to ollama. Connect to the web UI from your phone.
Ollama App
https://github.com/SMuflhi/ollama-app-for-Android-
Install and connect to ollama.
Maid
https://github.com/Mobile-Artificial-Intelligence/maid
Same as above.
Use tailscale to connect to ollama remotely.
Super helpful. Thanks so much.
Lmk how things develop
I don't, yet.
Perplexity has replaced Google as my web search.
I started using the DuckDuckGo AI Chat recently to help with learning different programming languages. 👨💻
https://m.stacker.news/48150
Played a while with https://jan.ai/ for private and offline AI ; using Mistral
It was fun and uncensored, but then I realized the AI would not remember much of what I wrote earlier in the conversation, and I just stopped using it because of that.
For coding it’s night and day
A single LLM pass often gets things wrong. Nowadays the best systems use agents, which are LLMs run in sequences and loops so they check each other's work and plan. That's the future. I started on my M1 MacBook Air, too.
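The draft-and-critique loop described above can be sketched in a few lines. The model calls here are stand-in functions, not any particular framework's API; in practice you'd wire `draft_fn` and `critic_fn` to actual LLM calls (local Ollama, a hosted API, or two different models).

```python
# Sketch of an agent loop: one model drafts, another critiques, and we
# loop until the critic approves or we run out of rounds.
# draft_fn / critic_fn are placeholders for real LLM calls.
from typing import Callable

def refine(task: str,
           draft_fn: Callable[[str], str],
           critic_fn: Callable[[str, str], str],
           max_rounds: int = 3) -> str:
    draft = draft_fn(task)
    for _ in range(max_rounds):
        feedback = critic_fn(task, draft)
        if feedback == "OK":          # critic approves: stop looping
            break
        # fold the feedback back into the next drafting prompt
        draft = draft_fn(f"{task}\nFix this feedback: {feedback}\nPrevious: {draft}")
    return draft
```

Even with the same underlying model playing both roles, this check-your-own-work structure tends to catch errors a single pass misses.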
I don’t find many people using AI if they don’t have very specific, often technical, use cases like coding.
I like Venice.ai
@south_korea_ln wrote a post about how he uses ai