LiveBench - A Challenging, Contamination-Free LLM Benchmark
livebench.ai
161 sats \ 0 comments \ @supratic 17 Jul AI
related
Deep Dive into LLMs like ChatGPT
www.youtube.com/watch?v=7xTGNNLPyMI
98 sats \ 1 comment \ @kepford 6 May AI
Here’s how I use LLMs to help me write code -- Simon Willison
simonwillison.net/2025/Mar/11/using-llms-for-code/
520 sats \ 0 comments \ @StillStackinAfterAllTheseYears 12 Mar tech
LLM evaluation at scale with the NeurIPS Efficiency Challenge
blog.mozilla.ai/exploring-llm-evaluation-at-scale-with-the-neurips-large-language-model-efficiency-challenge/
110 sats \ 0 comments \ @localhost 22 Feb 2024 tech
Lessons learned from programming with LLMs
crawshaw.io/blog/programming-with-llms
120 sats \ 1 comment \ @m0wer 5 Jul AI
DBRX: A new open LLM
www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm
10 sats \ 1 comment \ @hn 31 Mar 2024 tech
LLM in a Flash: Efficient LLM Inference with Limited Memory
huggingface.co/papers/2312.11514
13 sats \ 1 comment \ @hn 20 Dec 2023 tech
Things we learned about LLMs in 2024
simonwillison.net/2024/Dec/31/llms-in-2024/
370 sats \ 0 comments \ @Rsync25 31 Dec 2024 tech
Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM
www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm
306 sats \ 1 comment \ @nullama 13 Apr 2023 bitcoin
OpenCoder: Open-Source LLM for Coding
arxiv.org/abs/2411.04905
52 sats \ 0 comments \ @hn 9 Nov 2024 tech
LLaMA-Factory: Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
github.com/hiyouga/LLaMA-Factory
157 sats \ 0 comments \ @carter 19 Sep AI
Compiling LLMs into a MegaKernel: A path to low-latency inference
zhihaojia.medium.com/compiling-llms-into-a-megakernel-a-path-to-low-latency-inference-cf7840913c17
10 sats \ 0 comments \ @hn 19 Jun tech
Coding with LLMs in the summer of 2025 (an update) - <antirez>
antirez.com/news/154
444 sats \ 6 comments \ @carter 20 Jul AI
From Artificial Needles to Real Haystacks: Improving Capabilities in LLMs
arxiv.org/abs/2406.19292
21 sats \ 0 comments \ @Rsync25 29 Jun 2024 alter_native
I built LLMArena: a tool to create and share beautiful LLM Comparisons
llmarena.ai
412 sats \ 0 comments \ @IroncladDev 22 May 2024 AI
1-Bit LLM: The Most Efficient LLM Possible?
www.youtube.com/watch?v=7hMoz9q4zv0
533 sats \ 1 comment \ @carter 24 Jun AI
Building LLMs from the Ground Up: A 3-hour Coding Workshop
magazine.sebastianraschka.com/p/building-llms-from-the-ground-up
55 sats \ 0 comments \ @Rsync25 31 Aug 2024 tech
Fine-Tuning Increases LLM Vulnerabilities and Risk
arxiv.org/abs/2404.04392
21 sats \ 0 comments \ @hn 12 Apr 2024 tech
Hardware Acceleration of LLMs: A comprehensive survey and comparison
arxiv.org/abs/2409.03384
21 sats \ 0 comments \ @hn 7 Sep 2024 tech
Small LLMs Can Beat Large Ones at 5-30x Lower Cost with Automated Data Curation
www.tensorzero.com/blog/fine-tuned-small-llms-can-beat-large-ones-at-5-30x-lower-cost-with-programmatic-data-curation/
274 sats \ 1 comment \ @carter 5 Aug AI
What We Know About LLMs (A Primer)
willthompson.name/what-we-know-about-llms-primer
163 sats \ 1 comment \ @hn 25 Jul 2023 tech
Compute Where It Counts: High Quality Sparsely Activated LLMs
crystalai.org/blog/2025-08-18-compute-where-it-counts
100 sats \ 0 comments \ @carter 21 Aug AI