
I'm fearful that in the interim it will destroy many jobs, but that the underlying assertions will turn out to have painted a far prettier picture than reality will prove. The industry feels hypey... like the dotcom bubble?
160 sats \ 6 replies \ @freetx 19h
I think the best analogy of what it will do to employment is to compare to Walmart checkout lines circa 2000 vs 2025.
You used to go and there were 25 checkout lines each staffed by 2 people.
Now you go and there are 25 self-checkout pods staffed by 1 person.
However there are now 40 employees wandering around the store fulfilling curbside and delivery orders.
I think LLMs will have a similar effect. There are lots of jobs, like the person who reviews your loan application or analyzes marketing data, that will now be replaced by 1 human overseeing the work of 20 bots.
The deeper I delve into setting up my own LLMs for RAG use, the more I see their very real limitations. I don't mean that as a negative towards LLMs, as the tech is amazing; I just don't think it can ever be left completely unsupervised.
People keep trying to ascribe agency to it and act like it's conscious, but it's just a Rube Goldberg pattern generator.
reply
100 sats \ 5 replies \ @Signal312 18h
What is RAG use?
reply
145 sats \ 4 replies \ @freetx 17h
Technically it's "Retrieval-Augmented Generation", but what it means practically is loading documents or data into your LLM so you can reference them in chats.
I've been building my own self-hosted "personal RAG". The goal is to load it with my personal data to help with tax / financial / personal planning, etc.
I dare not do that with any publicly hosted AI service, so I'm building my own.
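If it helps to see the moving parts, the retrieval half boils down to something like this. A minimal sketch, assuming sentence-transformers for local embeddings; the model name, chunk size, and file names are placeholders, not the exact stack I run:
```python
# Minimal sketch of the retrieval half of a personal RAG setup.
# Assumes: pip install sentence-transformers numpy
# Model name, chunk size, and file names below are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

def chunk(text: str, size: int = 500) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# 1. Index: embed every chunk of every personal document once.
documents = {"taxes_2024.txt": open("taxes_2024.txt").read()}  # hypothetical file
chunks = [c for doc in documents.values() for c in chunk(doc)]
index = embedder.encode(chunks, normalize_embeddings=True)

# 2. Retrieve: embed the question and take the most similar chunks.
def retrieve(question: str, k: int = 3) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)
    scores = index @ q[0]               # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]  # indices of the k best-matching chunks
    return [chunks[i] for i in top]

# 3. Augment: paste the retrieved chunks into the prompt for a local LLM.
question = "What did I pay in estimated taxes last year?"
context = "\n\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` then goes to whatever self-hosted model you run (Ollama, llama.cpp, etc.).
```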
reply
100 sats \ 2 replies \ @Signal312 17h
Very interesting, thanks.
Is a potential use case for RAG to set up something like a debater bot, a resource that would be primed to answer arguments on a particular topic?
Maybe I could load up a set of my preferred books on a topic (like bitcoin, or climate change), and then have my own private LLM that would also primarily be about giving good replies for debate? If so, how might one do this?
So, for instance, when someone comes up with a reason why bitcoin won't work, I could come back with some well-structured arguments on that topic.
This is as opposed to thinking, "Darn, I know I've heard a great response to that argument, but I can't remember it now."
reply
That’s a cool idea! You could definitely create a private LLM or debater bot using RAG with your own book/library content—it’d be like having a personal debate assistant ready with structured arguments anytime. Thanks for sharing.
reply
35 sats \ 0 replies \ @optimism 15h
Maybe I could load up a set of my preferred books on a topic (like bitcoin, or climate change), and then have my own private LLM that would also primarily be about giving good replies for debate? If so, how might one do this?
You can do this with fine-tuning; see this short primer from #1076304 and, for example, HF's docs for a more technical guide.
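The HF skeleton is roughly this. A minimal sketch assuming the transformers and datasets libraries; the base model ("gpt2") and the debate-style training texts are placeholders for your own material, not a recommendation:
```python
# Minimal causal-LM fine-tuning sketch with Hugging Face.
# Assumes: pip install transformers datasets torch
# Base model and training texts are placeholders for illustration only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in; swap for whatever small open model you prefer
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Debate-style pairs drawn from your own notes/books (hypothetical examples).
texts = [
    "Claim: Bitcoin wastes energy.\nResponse: Miners monetize stranded energy...",
    "Claim: Bitcoin can't scale.\nResponse: Base-layer settlement plus Lightning...",
]
ds = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="debate-bot", num_train_epochs=3,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    # mlm=False makes the collator copy input_ids into labels (causal LM objective)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```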
reply
21 sats \ 0 replies \ @optimism 17h
I too self-host all my production applications that use LLMs, and all of it is RAG (in the most basic form, where I just feed documents to a single-shot summarizer / categorizer / NLP chain).
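That single-shot step is basically one call per document. A sketch assuming an Ollama-style local endpoint; the model name and the category labels are placeholders:
```python
# Sketch of a single-shot summarizer/categorizer against a self-hosted model.
# Assumes a local Ollama server on its default port; model name is a placeholder.
import requests

def categorize(document: str) -> str:
    prompt = (
        "Summarize the document in two sentences, then label it as one of: "
        "finance, legal, personal.\n\n" + document
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]  # single shot: one document in, one answer out

print(categorize(open("invoice.txt").read()))  # hypothetical input file
```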
reply