Official Zaguán SDK for Python: the easiest way to integrate with Zaguán CoreX, an enterprise-grade AI gateway that provides unified access to 15+ AI providers and 500+ models through a single, OpenAI-compatible API.

One API. 15+ providers. 500+ models. Zaguán CoreX eliminates vendor lock-in with intelligent routing across providers and supports 5,000+ concurrent connections.
Zaguán CoreX eliminates vendor lock-in and optimizes costs while unlocking advanced capabilities:

- Drop-in replacement for the OpenAI SDK with familiar interfaces and minimal code changes.
- Unified access to 15+ AI providers and 500+ models through a single API.
- Synchronous and streaming chat with function calling, tool use, and vision support.
- Text and batch embeddings for semantic search and RAG applications.
- Speech-to-text (Whisper), audio translation, and text-to-speech with 6 voices.
- DALL-E 2 & 3 support with image editing and variations.
- Content safety checks with policy compliance and safety scores.
- Built-in balance tracking, usage history, and cost analytics.
- Full async/await support with `AsyncZaguanClient` for high-performance applications.
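To show what the embeddings feature is typically used for, here is a small SDK-independent sketch of semantic search by cosine similarity. The vectors below are toy stand-ins for what an embeddings endpoint would return; nothing here is part of the SDK itself.

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for embedding vectors returned by an embeddings endpoint
docs = {
    "python tutorial": [0.9, 0.1, 0.0],
    "cooking recipes": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]

# Rank documents by similarity to the query vector
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # "python tutorial" — the most semantically similar document
```

In a real RAG pipeline, the toy vectors would come from the gateway's embeddings endpoint and the top-ranked documents would be fed into a chat prompt as context.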
Install from PyPI:

```bash
pip install zaguan-sdk
```

```python
from zaguan_sdk import ZaguanClient, ChatRequest, Message

# Initialize the client
client = ZaguanClient(
    base_url="https://api.zaguanai.com",
    api_key="your-api-key"
)

# Simple chat completion
response = client.chat(ChatRequest(
    model="openai/gpt-4o-mini",
    messages=[Message(role="user", content="What is Python?")]
))

print(response.choices[0].message.content)
```

Stream responses in real time:

```python
for chunk in client.chat_stream(ChatRequest(
    model="openai/gpt-4o-mini",
    messages=[Message(role="user", content="Tell me a story")]
)):
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
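Streamed responses arrive as incremental deltas, and a common pattern is to accumulate them into the full message. The sketch below is SDK-independent: plain dicts stand in for the chunk objects the client yields, but the accumulation logic is the same.

```python
# Stand-in chunks mimicking the delta structure of a streamed response
chunks = [
    {"delta": "Once"},
    {"delta": " upon"},
    {"delta": " a time"},
    {"delta": None},  # a final chunk may carry no content, only a finish signal
]

full_text = ""
for chunk in chunks:
    # Skip empty deltas so None doesn't end up concatenated into the text
    if chunk["delta"]:
        full_text += chunk["delta"]

print(full_text)  # "Once upon a time"
```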
Full async/await support is available via `AsyncZaguanClient`:

```python
import asyncio
from zaguan_sdk import AsyncZaguanClient, ChatRequest, Message

async def main():
    async with AsyncZaguanClient(
        base_url="https://api.zaguanai.com",
        api_key="your-api-key"
    ) as client:
        response = await client.chat(ChatRequest(
            model="anthropic/claude-3-5-sonnet",
            messages=[Message(role="user", content="Hello!")]
        ))
        print(response.choices[0].message.content)

asyncio.run(main())
```

Access any of the 15+ supported AI providers with a simple model name change:
```python
# Switch providers without changing code
models = [
    "openai/gpt-4o",
    "anthropic/claude-3-5-sonnet",
    "google/gemini-2.0-flash",
    "deepseek/deepseek-chat",
]

for model in models:
    response = client.chat(ChatRequest(
        model=model,
        messages=[Message(role="user", content="Hi!")]
    ))
    print(f"{model}: {response.choices[0].message.content}")
```

Zaguán CoreX supports 15+ AI providers and 500+ models.
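The loop above queries providers one at a time; with the async client, the same fan-out can run concurrently via `asyncio.gather`. The sketch below substitutes a stub coroutine for the real network call, so it runs without the SDK — `ask` is a hypothetical stand-in, not an SDK function.

```python
import asyncio

async def ask(model: str, prompt: str) -> str:
    # Hypothetical stub standing in for `await client.chat(...)`;
    # a real call would go through AsyncZaguanClient instead.
    await asyncio.sleep(0)  # yield control, as real network I/O would
    return f"{model}: reply to {prompt!r}"

async def main():
    models = ["openai/gpt-4o", "anthropic/claude-3-5-sonnet"]
    # Issue all requests concurrently and wait for every result
    return await asyncio.gather(*(ask(m, "Hi!") for m in models))

results = asyncio.run(main())
for line in results:
    print(line)
```

Because `asyncio.gather` preserves argument order, `results` lines up with `models`, which makes it easy to compare answers across providers.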
Contributions are welcome! Whether it's bug reports, feature requests, or pull requests, we appreciate all forms of contribution.
Apache 2.0 Licensed - The Zaguán SDK for Python is free and open source software. You're welcome to use, modify, and distribute it for any purpose.