# Opper AI Provider
A comprehensive guide to using Opper AI as a provider for reliable LLM task completions through LangChain integration.
## What is Opper AI?
Opper AI is a unified API platform that makes it easy to build AI applications that are model-independent, structured, and performant. It provides a powerful toolkit for common LLM operations with built-in reliability, tracing, and evaluation capabilities.
### Key Features
- 🔄 Model Independence: Switch between different LLMs without changing your code
- 📊 Built-in Tracing: Automatic span creation and workflow tracking
- 📈 Metrics & Evaluation: Easy metric collection for monitoring and optimization
- 🏗️ Structured I/O: Native support for Pydantic models and schema validation
- 🎯 Task Management: Organize your AI operations as reusable, named tasks
- 🔧 Reliability: Built-in error handling and retry mechanisms
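To make the "Reliability" bullet concrete, here is a minimal sketch of the retry-with-backoff idea. Opper's actual retry mechanism lives inside the platform, so treat this as an illustration of the pattern, not Opper's implementation:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff

# A fake flaky operation that fails twice, then succeeds
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky))  # succeeds on the third attempt
```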
## Core Concepts
- Call: A structured definition of an AI task with clear input/output schemas
- Model: An LLM (see supported models)
- Span: A log entry that can hold metrics and be linked into traces
- Trace: A chain of spans representing a complete workflow
- Metric: Data points for feedback and evaluation
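A Call pairs a typed input with a typed output. As a rough illustration, here is what those two sides might look like as plain Pydantic models (the schema names below are hypothetical, invented for this sketch; only the validation behavior is shown):

```python
from pydantic import BaseModel, Field

# Hypothetical input/output schemas for a "summarize" call
class SummarizeInput(BaseModel):
    text: str = Field(description="Document to summarize")

class SummarizeOutput(BaseModel):
    summary: str = Field(description="One-paragraph summary")
    keywords: list[str] = Field(description="Key terms found in the text")

# Schemas like these give a call a clear contract on both sides;
# here we only exercise the input-side validation.
payload = SummarizeInput(text="Opper is a unified API for LLM tasks.")
print(payload.text)
```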
For more details, visit docs.opper.ai.
## Setup
First, you'll need an Opper AI API key. Sign up at platform.opper.ai to get your key.
```python
import os
import sys
import getpass

# Add project root to path for local development
project_root = os.path.abspath(os.path.join(os.path.dirname('provider.ipynb'), '..'))
sys.path.insert(0, project_root)

# Set up your Opper API key
if not os.getenv("OPPER_API_KEY"):
    os.environ["OPPER_API_KEY"] = getpass.getpass("Enter your Opper API key: ")

print("✅ Setup complete!")
```
## Installation
For local development, install the core dependencies:

```python
# Install core dependencies
%pip install -qU langchain-core opperai pydantic

print("✅ Dependencies installed!")
```
## The OpperProvider

The `OpperProvider` class is your main entry point for creating Opper-powered LangChain models:

```python
from langchain_opperai import OpperProvider
from pydantic import BaseModel, Field

# Initialize the provider
provider = OpperProvider()

print("✅ Provider initialized successfully!")
print(f"Provider client: {type(provider.client).__name__}")
```
## Creating Chat Models
Create chat models for conversational AI tasks:
```python
from langchain_core.messages import HumanMessage

# Create a general-purpose chat model
chat_model = provider.create_chat_model(
    task_name="general_chat",
    model_name="anthropic/claude-3.5-sonnet",
    instructions="You are a helpful AI assistant. Provide clear, accurate, and concise responses.",
)

# Test the chat model
response = chat_model.invoke([HumanMessage(content="What are the benefits of using a unified AI API?")])

print(f"Response: {response.content}")
print(f"Span ID: {response.additional_kwargs.get('span_id', 'N/A')}")
```
## Structured Output Models
Create models that return structured data using Pydantic schemas:
```python
class ProductAnalysis(BaseModel):
    """Structured analysis of a product or service."""

    thoughts: str = Field(description="Analysis process and reasoning")
    product_name: str = Field(description="Name of the product")
    category: str = Field(description="Product category")
    strengths: list[str] = Field(description="Key strengths and advantages")
    weaknesses: list[str] = Field(description="Areas for improvement")
    target_audience: str = Field(description="Primary target audience")
    market_position: str = Field(description="Position in the market")
    confidence_score: float = Field(description="Confidence in analysis (0-1)")

# Create a structured model
analyzer = provider.create_structured_model(
    task_name="product_analysis",
    instructions="Analyze the given product and provide a comprehensive structured assessment.",
    output_schema=ProductAnalysis,
    model_name="anthropic/claude-3.5-sonnet",
)

# Test with a product description
product_input = """Notion is an all-in-one workspace that combines notes, docs,
wikis, and project management. It allows teams to collaborate on documents,
create databases, and organize information in a flexible, block-based interface."""

analysis = analyzer.invoke(product_input)

print(f"Product: {analysis.product_name}")
print(f"Category: {analysis.category}")
print(f"Confidence: {analysis.confidence_score}")
print(f"\nStrengths: {', '.join(analysis.strengths)}")
print(f"Target Audience: {analysis.target_audience}")
```
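Because the output is a Pydantic model, a malformed result fails loudly instead of silently propagating bad data. A standalone sketch with a pared-down schema (`MiniAnalysis` is invented for this example; the range constraint mirrors the 0-1 bound on `confidence_score`):

```python
from pydantic import BaseModel, Field, ValidationError

# Pared-down stand-in for ProductAnalysis, showing what schema enforcement buys you
class MiniAnalysis(BaseModel):
    product_name: str
    confidence_score: float = Field(ge=0, le=1)  # must lie in [0, 1]

ok = MiniAnalysis(product_name="Notion", confidence_score=0.9)
print(ok.confidence_score)

try:
    MiniAnalysis(product_name="Notion", confidence_score=1.7)  # out of range
except ValidationError as e:
    print("rejected:", len(e.errors()), "error(s)")
```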
## Tracing and Workflow Management
Opper's tracing capabilities help you monitor and debug complex AI workflows:
```python
# Start a trace for a multi-step workflow
trace_id = provider.start_trace(
    "product_research_workflow",
    "Comprehensive analysis of Notion workspace tool",
)
print(f"Started trace: {trace_id}")

# Step 1: Market analysis
market_model = provider.create_chat_model(
    task_name="market_analysis",
    instructions="Analyze the market position and competitive landscape for this product.",
)
market_analysis = market_model.invoke([
    HumanMessage(content=f"Analyze the market for: {product_input}")
])

# Step 2: Technical assessment
tech_model = provider.create_chat_model(
    task_name="technical_assessment",
    instructions="Evaluate the technical aspects and implementation quality.",
)
tech_analysis = tech_model.invoke([
    HumanMessage(content=f"Evaluate technical aspects of: {product_input}")
])

# End the trace
provider.end_trace("Completed comprehensive product research workflow")

print(f"Market Analysis Span: {market_analysis.additional_kwargs.get('span_id')}")
print(f"Technical Analysis Span: {tech_analysis.additional_kwargs.get('span_id')}")
print("✅ Workflow completed and traced")
```
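Conceptually, the trace above is a named container and each model call contributes a span to it. Opper manages this server-side; the dataclasses below are purely an illustration of the relationship, not Opper's internal representation (the metric values are made up):

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """One logged step; can carry metrics for evaluation."""
    name: str
    metrics: dict = field(default_factory=dict)

@dataclass
class Trace:
    """A chain of spans representing a complete workflow."""
    name: str
    spans: list = field(default_factory=list)

trace = Trace("product_research_workflow")
trace.spans.append(Span("market_analysis", {"latency_ms": 812}))
trace.spans.append(Span("technical_assessment", {"latency_ms": 640}))

print([s.name for s in trace.spans])
```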
## Next Steps
### Learn More
- Opper Documentation: Complete API reference and guides
- Supported Models: Full list of available LLMs
- LangChain Integration: Learn more about LangChain patterns
Happy building with Opper AI! 🚀
This notebook demonstrates the core capabilities of the Opper AI provider integration with LangChain. For more advanced examples and production patterns, check out the additional documentation and examples in this repository.