Tired of vendor lock-in with AI models? LiteLLM solves the "write once, run anywhere" problem for Large Language Models. Instead of learning different APIs for OpenAI, Anthropic, AWS Bedrock, Azure, and dozens of other providers, you write standard OpenAI-format code and LiteLLM handles the translation. One `completion()` call works across 100+ models: just change the model parameter from `openai/gpt-4` to `anthropic/claude-3` and you're done.

What sets this apart from simple API wrappers is the production-ready infrastructure: automatic cost tracking across providers, intelligent load balancing, request logging, and guardrails. The proxy server acts as an AI gateway, letting you centrally manage API keys, set spending limits, and monitor usage across your entire team. With 40K+ stars and Y Combinator backing, it’s become the de facto standard for multi-LLM applications.
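A hedged sketch of what that gateway setup might look like: the proxy is driven by a `config.yaml` whose `model_list` maps public model names to provider deployments (the Azure deployment name below is hypothetical).

```yaml
# Sketch of a LiteLLM proxy config.yaml; run with: litellm --config config.yaml
model_list:
  - model_name: gpt-4                      # name your team calls
    litellm_params:
      model: openai/gpt-4
      api_key: os.environ/OPENAI_API_KEY   # read from the environment
  - model_name: gpt-4                      # same name -> proxy load-balances
    litellm_params:
      model: azure/my-gpt4-deployment      # hypothetical Azure deployment
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY
```

Clients then point their OpenAI SDK at the proxy's URL, and keys, budgets, and logging are managed centrally.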

Perfect for anyone building with AI who wants flexibility without complexity. Whether you’re prototyping with different models, building resilient production systems with fallbacks, or just want to avoid rewriting code when switching providers, LiteLLM handles the plumbing so you can focus on your application logic.


⭐ Stars: 40347
💻 Language: Python
🔗 Repository: BerriAI/litellm