
It's great that it consumes only 10W. With a small board like an ODROID M-series, that'd be under 15W total. Solar-powered portable LLMs become feasible that way.

I asked my buddy to measure his M3 Mac mini with 32GB RAM, and he said it drew around 30W when running gpt-oss with llama.cpp.
