Running LLMs Locally on AMD GPUs with Ollama
community.amd.com/t5/ai/running-llms-locally-on-amd-gpus-with-ollama/ba-p/713266
10 sats \ 0 comments \ @Rsync25 27 Sep 2024 tech
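For context on what the linked post sets up: once Ollama is installed and a model pulled, any local process can query it over HTTP. A minimal sketch in Python, assuming the server is on its default port (11434) and with llama3 standing in for whichever model you pulled:

    # Query a locally running Ollama server (default port 11434).
    # Assumes a model has already been pulled, e.g. `ollama pull llama3`.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",       # stand-in for whichever model you pulled
            "prompt": "Why run an LLM locally instead of via an API?",
            "stream": False,         # one JSON object instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])   # the generated text

With "stream" left at its default of true, the same endpoint instead returns newline-delimited JSON chunks as tokens are generated.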
related
LLoms - A simple MCP-enabled LLM CLI chat
github.com/gzuuus/lloms
155 sats \ 0 comments \ @gzuuus_ 16 Mar nostr
Ollama now supports AMD graphics cards
ollama.com/blog/amd-preview
10 sats \ 1 comment \ @hn 15 Mar 2024 tech
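A practical follow-up to the AMD support announcement: Ollama's running-models endpoint reports how much of a loaded model sits in GPU memory, which is a quick way to confirm the card is actually being used. A sketch against a local server:

    # List models currently loaded by Ollama; a nonzero size_vram means the
    # model (or part of it) was offloaded to GPU memory rather than the CPU.
    import requests

    ps = requests.get("http://localhost:11434/api/ps", timeout=10)
    ps.raise_for_status()
    for m in ps.json().get("models", []):
        print(m["name"], "bytes in VRAM:", m.get("size_vram", 0))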
Use Open-Source LLMs in PostgreSQL With Ollama and Pgai
www.timescale.com/blog/use-open-source-llms-in-postgresql-with-ollama-and-pgai/
62 sats \ 0 comments \ @Rsync25 25 Jun 2024 opensource
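The pgai article's workflow reduces to SQL functions that proxy prompts from Postgres to a model server. A sketch of that shape from Python; the ai.ollama_generate call and its model/prompt/host signature are assumptions drawn from pgai's documentation, so check them against your installed version:

    # Call an Ollama-hosted model from inside PostgreSQL via the pgai
    # extension. The function name and argument shape are assumptions
    # based on pgai's docs; verify against your pgai version.
    import psycopg  # pip install "psycopg[binary]"

    with psycopg.connect("dbname=postgres user=postgres") as conn:
        row = conn.execute(
            "SELECT ai.ollama_generate(%s, %s, host => %s)",
            ("llama3", "Summarize this table in one sentence.",
             "http://localhost:11434"),
        ).fetchone()
        print(row[0]["response"])  # pgai hands back Ollama's jsonb payload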
Experimenting with local LLMs on macOS
blog.6nok.org/experimenting-with-local-llms-on-macos/
150 sats \ 0 comments \ @carter 8 Sep AI
LM Studio - Experiment with local LLMs
lmstudio.ai/
274 sats \ 0 comments \ @Rsync25 12 Nov 2024 tech
LM Studio - Discover, download, and run local LLMs
lmstudio.ai/
148 sats \ 1 comment \ @k00b 16 Mar AI
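Both LM Studio entries above point at the same tool. A practical detail behind them: LM Studio can expose the loaded model through an OpenAI-compatible local server. A minimal sketch, assuming that server is enabled on its default port (1234) and a model is loaded:

    # Talk to LM Studio's local OpenAI-compatible server.
    import requests

    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",
        json={
            "model": "local-model",  # LM Studio serves whichever model is loaded
            "messages": [
                {"role": "user", "content": "Name three uses for a local LLM."},
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

Because the endpoint speaks the OpenAI wire format, existing OpenAI client libraries also work against it by overriding the base URL.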
cocktailpeanut/dalai: The simplest way to run LLaMA on your local machine
github.com/cocktailpeanut/dalai
247 sats \ 0 comments \ @random_ 24 Mar 2023 bitcoin
The Best Way of Running GPT-OSS Locally - KDnuggets
www.kdnuggets.com/the-best-way-of-running-gpt-oss-locally
118 sats \ 0 comments \ @optimism 25 Aug AI
Run LLMs on my own Mac, fast and efficient. Only 2 MBs
www.secondstate.io/articles/fast-llm-inference/
13 sats \ 1 comment \ @hn 13 Nov 2023 tech
Hardware Acceleration of LLMs: A comprehensive survey and comparison
arxiv.org/abs/2409.03384
21 sats \ 0 comments \ @hn 7 Sep 2024 tech
Everything I've learned so far about running local LLMs
nullprogram.com/blog/2024/11/10/
141 sats \ 0 comments \ @co574 10 Nov 2024 tech
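The nullprogram write-up centers on llama.cpp and GGUF model files. The same workflow is scriptable through the llama-cpp-python bindings; a minimal sketch, assuming the package is installed and with a placeholder model path:

    # Run a local GGUF model through the llama-cpp-python bindings
    # (pip install llama-cpp-python). The model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/model.gguf", n_ctx=2048)
    out = llm("Q: What is a GGUF file? A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])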
Signallama - chat with your Ollama instance (or any other LLM) over Signal
github.com/aljazceru/signallama
293 sats \ 3 comments \ @aljaz 30 May tech
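Signallama's core loop reduces to relaying Signal messages to a chat-completion endpoint. A sketch of the LLM half only, assuming a local Ollama server; the Signal transport (e.g. signal-cli) is omitted:

    # The LLM half of a Signal <-> Ollama bridge: forward an incoming
    # message to Ollama's chat endpoint and hand back the reply.
    import requests

    def reply_to(message: str, model: str = "llama3") -> str:
        resp = requests.post(
            "http://localhost:11434/api/chat",
            json={
                "model": model,
                "messages": [{"role": "user", "content": message}],
                "stream": False,
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["message"]["content"]

    print(reply_to("hello from Signal"))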
Open-source project ZLUDA lets CUDA apps run on AMD GPUs
www.cgchannel.com/2024/02/open-source-project-zluda-lets-cuda-apps-run-on-amd-gpus/
31 sats \ 1 comment \ @hn 5 Mar 2024 tech
Run CUDA, unmodified, on AMD GPUs
docs.scale-lang.com/
53 sats \ 0 comments \ @hn 15 Jul 2024 tech
How to Run Llama 3.1 405B on Home Devices? Build AI Cluster!
b4rtaz.medium.com/how-to-run-llama-3-405b-on-home-devices-build-ai-cluster-ad0d5ad3473b
116 sats \ 3 comments \ @Rsync25 29 Jul 2024 alter_native
Episode 145: Going Local
20 sats \ 1 comment \ @AtlantisPleb 14 Dec 2024 openagents
LLMs are cheap
www.snellman.net/blog/archive/2025-06-02-llms-are-cheap/
10 sats \ 0 comments \ @hn 9 Jun tech
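The argument in "LLMs are cheap" is ultimately per-query arithmetic: tokens per query times price per token. A worked version with deliberately illustrative numbers, not figures from the article:

    # Back-of-envelope per-query cost, in the spirit of the linked post.
    # Both numbers below are illustrative assumptions.
    price_per_million_tokens = 0.50  # USD, hypothetical API price
    tokens_per_query = 1_000         # prompt + completion, hypothetical

    cost_per_query = tokens_per_query * price_per_million_tokens / 1_000_000
    print(f"${cost_per_query:.6f} per query")  # $0.000500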
Apple collaborates with NVIDIA to research faster LLM performance - 9to5Mac
9to5mac.com/2024/12/18/apple-collaborates-with-nvidia-to-research-faster-llm-performance/
14 sats \ 1 comment \ @Rsync25 19 Dec 2024 tech
Ollama is now available on Windows in preview
ollama.com/blog/windows-preview
52 sats \ 1 comment \ @doofus 18 Feb 2024 tech
AMD's MI300X Outperforms Nvidia's H100 for LLM Inference
www.blog.tensorwave.com/amds-mi300x-outperforms-nvidias-h100-for-llm-inference/
202 sats \ 0 comments \ @hn 13 Jun 2024 tech
llm.pdf (Run an LLM in a PDF)
evanzhoudev.github.io/llm.pdf/
155 sats \ 0 comments \ @itsrealfake 26 Apr tech