Tools and resources for running AI models locally
Inference Engine (4 Tools)
Models (7 Tools)
DeepSeek V3 & R1
Chinese open-source MoE models with excellent reasoning performance
Gemma 3
Google's lightweight open-source models based on Gemini 2.0 technology
GPT-OSS
OpenAI's first open-weight models with reasoning capabilities
Kimi K2.5
1-trillion-parameter MoE model with visual capabilities and Agent Swarm
Llama 4
Meta's latest open-source LLM generation with native multimodal capabilities
Mistral 3
French open-source MoE model with multimodal capabilities
Qwen3-Coder
Alibaba's specialized coding model with agentic capabilities
Hardware (5 Tools)
Mac Mini M4 / M4 Pro
Compact AI workstation with Unified Memory for local LLM inference
Minisforum AI Mini PCs
Compact mini PCs with AMD Ryzen AI and Intel Core Ultra for local AI
NVIDIA DGX Spark
Desktop AI supercomputer with Grace Blackwell, delivering up to 1 petaFLOP of AI performance
Nvidia Project DIGITS
Desktop AI computer with 128GB Unified Memory for local AI development
Nvidia RTX 4090
Premium GPU with 24GB VRAM for local LLM inference and fine-tuning
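The hardware picks above differ mainly in how much memory they offer for model weights (24 GB of VRAM on the RTX 4090 versus 128 GB of Unified Memory on Project DIGITS). A rough way to check whether a model fits is to multiply parameter count by bits per weight and add headroom for activations and the KV cache. The sketch below is a back-of-the-envelope estimate, not a vendor formula; the flat 20% overhead factor is an assumption.

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Rough memory needed to run a model locally, in GB.

    Weight memory: 1B parameters at 8 bits per weight is about 1 GB.
    The overhead factor (assumed 20%) covers activations and KV cache.
    """
    weight_gb = params_billion * bits_per_weight / 8
    return weight_gb * overhead

# A 70B model at 4-bit quantization needs roughly 42 GB: too large for a
# 24 GB RTX 4090, but comfortable within 128 GB of Unified Memory.
print(round(estimate_vram_gb(70, 4), 1))  # → 42.0
```

Real usage varies with context length, quantization format, and runtime, so treat the result as a lower bound when choosing hardware.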
Interface (4 Tools)
Workflow Automation (3 Tools)
Benchmarks (4 Tools)
Artificial Analysis
Independent performance analysis of 100+ LLMs with detailed metrics
Hugging Face Open LLM Leaderboard
Technical benchmark leaderboard for open-source LLMs with 6 standardized tests
LLM Explorer
Interactive table with 51,600+ models and extensive filtering options
LMSYS Chatbot Arena
Community-driven LLM ranking through pairwise comparisons with an Elo rating system