Run LLMs on my own Mac, fast and efficient. Only 2 MBs
www.secondstate.io/articles/fast-llm-inference/
13 sats · 1 comment · @hn · 13 Nov 2023 · tech

LLoms - A simple MCP-enabled LLM CLI chat
github.com/gzuuus/lloms
155 sats · 0 comments · @gzuuus_ · 16 Mar · nostr

Context text files for LLMs to build one-shot applications for nostr
github.com/nostr-net/llms/
288 sats · 3 comments · @aljaz · 17 Apr · nostr

Are you using a local LLM?
267 sats · 5 comments · @itsrealfake · 5 Mar · AI

Experimenting with local LLMs on macOS
blog.6nok.org/experimenting-with-local-llms-on-macos/
150 sats · 0 comments · @carter · 8 Sep · AI

LM Studio - Discover, download, and run local LLMs
lmstudio.ai/
148 sats · 1 comment · @k00b · 16 Mar · AI

LM Studio - Experiment with local LLMs
lmstudio.ai/
274 sats · 0 comments · @Rsync25 · 12 Nov 2024 · tech

Everything I've learned so far about running local LLMs
nullprogram.com/blog/2024/11/10/
141 sats · 0 comments · @co574 · 10 Nov 2024 · tech

Running LLMs Locally on AMD GPUs with Ollama
community.amd.com/t5/ai/running-llms-locally-on-amd-gpus-with-ollama/ba-p/713266
10 sats · 0 comments · @Rsync25 · 27 Sep 2024 · tech

cocktailpeanut/dalai: The simplest way to run LLaMA on your local machine
github.com/cocktailpeanut/dalai
247 sats · 0 comments · @random_ · 24 Mar 2023 · bitcoin

Hardware Acceleration of LLMs: A comprehensive survey and comparison
arxiv.org/abs/2409.03384
21 sats · 0 comments · @hn · 7 Sep 2024 · tech

LLMs are cheap
www.snellman.net/blog/archive/2025-06-02-llms-are-cheap/
10 sats · 0 comments · @hn · 9 Jun · tech

Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac
simonwillison.net/2024/Nov/12/qwen25-coder/
19 sats · 0 comments · @Rsync25 · 13 Nov 2024 · tech

The Best Way of Running GPT-OSS Locally - KDnuggets
www.kdnuggets.com/the-best-way-of-running-gpt-oss-locally
118 sats · 0 comments · @optimism · 25 Aug · AI

Apple collaborates with NVIDIA to research faster LLM performance - 9to5Mac
9to5mac.com/2024/12/18/apple-collaborates-with-nvidia-to-research-faster-llm-performance/
14 sats · 1 comment · @Rsync25 · 19 Dec 2024 · tech

Small LLMs Can Beat Large Ones at 5-30x Lower Cost with Automated Data Curation
www.tensorzero.com/blog/fine-tuned-small-llms-can-beat-large-ones-at-5-30x-lower-cost-with-programmatic-data-curation/
274 sats · 1 comment · @carter · 5 Aug · AI

Locally run a ChatGPT 3.5 Turbo-style LLM from your laptop's hard drive
github.com/nomic-ai/gpt4all
10 sats · 0 comments · @Gian · 6 Apr 2023 · bitcoin

LLM Memory
grantslatton.com/llm-memory
269 sats · 2 comments · @carter · 2 Jul · AI

1-Bit LLM: The Most Efficient LLM Possible?
www.youtube.com/watch?v=7hMoz9q4zv0
533 sats · 1 comment · @carter · 24 Jun · AI

Bounty - 1000 sats for the best "Top 03 Best Self Hosted LLMs" list
3167 sats · 10 comments · @Levlion · 13 Dec 2023 · Mining_Self_Hosting_FOSS

Meet PowerInfer: A Fast LLM on a Single Consumer-Grade GPU
www.marktechpost.com/2023/12/23/meet-powerinfer-a-fast-large-language-model-llm-on-a-single-consumer-grade-gpu-that-speeds-up-machine-learning-model-inference-by-11-times/
10 sats · 2 comments · @ch0k1 · 24 Dec 2023 · AI

Compiling LLMs into a MegaKernel: A path to low-latency inference
zhihaojia.medium.com/compiling-llms-into-a-megakernel-a-path-to-low-latency-inference-cf7840913c17
10 sats · 0 comments · @hn · 19 Jun · tech