# Installation

## Install LionAGI
Recommended: Use uv for faster dependency resolution:
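For example (a minimal sketch, assuming the package is published on PyPI as `lionagi`):

```shell
# Add LionAGI to a uv-managed project
# (package name assumed: lionagi)
uv add lionagi
```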
Alternative: Standard pip installation:
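For example:

```shell
# Install LionAGI into the active environment
pip install lionagi
```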
## Configure API Keys
Create a .env file in your project root:
```
# At minimum, add one provider
OPENAI_API_KEY=your_key_here

# Optional: additional providers
ANTHROPIC_API_KEY=your_key_here
OPENROUTER_API_KEY=your_key_here
NVIDIA_NIM_API_KEY=your_key_here
GROQ_API_KEY=your_key_here
PERPLEXITY_API_KEY=your_key_here
EXA_API_KEY=your_key_here
```
LionAGI automatically loads variables from `.env`; no manual configuration is needed.
## Verify Installation
Run this test to confirm everything works:
```python
from lionagi import Branch, iModel

async def test():
    gpt4 = iModel(provider="openai", model="gpt-4o-mini")
    branch = Branch(chat_model=gpt4)
    reply = await branch.chat("Hello from LionAGI!")
    print(f"LionAGI says: {reply}")

if __name__ == "__main__":
    import anyio
    anyio.run(test)
```
Expected output: A conversational response from the model.
## Supported Providers
LionAGI comes pre-configured for these providers:
- `openai` - GPT-5, GPT-4.1, o4-mini
- `anthropic` - Claude 4.5 Sonnet, Claude 4.1 Opus
- `claude_code` - Claude Code SDK integration
- `ollama` - Local model hosting
- `openrouter` - Access 200+ models via a single API
- `nvidia_nim` - NVIDIA inference microservices
- `groq` - Fast inference on LPU hardware
- `perplexity` - Search-augmented responses
## Custom Providers
OpenAI-compatible endpoints:
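A minimal sketch of pointing LionAGI at a server that speaks the OpenAI chat-completions protocol. The `base_url` override, the local URL, and the model name here are illustrative assumptions, not values from this page:

```python
from lionagi import Branch, iModel

# Hypothetical: reuse the OpenAI request shape, but send requests
# to a local OpenAI-compatible server instead of api.openai.com.
local_model = iModel(
    provider="openai",                      # OpenAI-compatible wire format
    base_url="http://localhost:11434/v1",   # assumed: your endpoint's URL
    model="llama3.2",                       # assumed: model served locally
    api_key="not-needed",                   # many local servers ignore the key
)

branch = Branch(chat_model=local_model)
```

Requests made through this `branch` then go to the custom endpoint rather than OpenAI's hosted API.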