Opper Chat Models

This guide will help you get started with Opper chat models through the langchain-opperai integration. Opper provides a unified API for building AI applications with structured input/output, tracing, and model-independent code.

For detailed information about Opper's capabilities, visit docs.opper.ai.

Overview

Integration details

| Class | Package | Local | Serializable | PY support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatOpperAI | langchain-opperai | | | | GitHub | GitHub release |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |

Setup

To access Opper models you'll need to create an Opper account, get an API key, and install the langchain-opperai integration package.

Credentials

Head to platform.opper.ai to sign up to Opper and generate an API key. Once you've done this, set the OPPER_API_KEY environment variable:

import getpass
import os

if not os.getenv("OPPER_API_KEY"):
    os.environ["OPPER_API_KEY"] = getpass.getpass("Enter your Opper API key: ")

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

The LangChain Opper integration lives in the langchain-opperai package:

%pip install langchain-opperai
Note: you may need to restart the kernel to use updated packages.

Instantiation

Now we can instantiate our model object and generate chat completions. Opper provides an OpperProvider for easy model management and tracing:

from langchain_opperai import OpperProvider, ChatOpperAI

# Using the provider (recommended for tracing and model management)
provider = OpperProvider()
llm = provider.create_chat_model(
    task_name="translation",
    instructions="You are a helpful assistant that translates text.",
)

# Or directly instantiate the chat model
llm = ChatOpperAI(
    task_name="translation",
    instructions="You are a helpful assistant that translates text.",
)

Invoking the model returns an AIMessage whose additional_kwargs carry Opper metadata, such as the span_id:

content='Here\'s the translation of "Hello, world!" to several common languages:\n\nSpanish: ¡Hola, mundo!\nFrench: Bonjour, monde!\nGerman: Hallo, Welt!\nItalian: Ciao, mondo!\nPortuguese: Olá, mundo!\nJapanese: こんにちは、世界!\nChinese (Simplified): 你好,世界!\nRussian: Привет, мир!\n\nWould you like the translation in any specific language?' additional_kwargs={'span_id': '52073b05-55e3-44b2-8015-63eb24615fce', 'structured': False} response_metadata={} id='run--08e2aba0-f20c-4de2-a18a-7eb5b63ba228-0'

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
print(f"Response: {ai_msg.content}")
print(f"Span ID: {ai_msg.additional_kwargs.get('span_id', 'N/A')}")
ai_msg
Response: Translation to French:
"J'aime la programmation."
Span ID: ccb8b9c2-c18f-4842-9b05-ef1d260caf4e
AIMessage(content='Translation to French:\n"J\'aime la programmation."', additional_kwargs={'span_id': 'ccb8b9c2-c18f-4842-9b05-ef1d260caf4e', 'structured': False}, response_metadata={}, id='run--f238cae4-81df-4e2d-a4f7-b2cc4b7b57dd-0')
print(ai_msg.content)
Translation to French:
"J'aime la programmation."

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
result = chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
print(f"Translation: {result.content}")
result
API Reference: ChatPromptTemplate
Translation: "Ich liebe Programmierung."
AIMessage(content='"Ich liebe Programmierung."', additional_kwargs={'span_id': 'a0445f35-f428-4774-9e98-e3a16c9aac2d', 'structured': False}, response_metadata={}, id='run--edc047f6-30a1-4da3-beb2-bcec2b67cc95-0')
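
You can also extend the chain with other langchain-core components; for example, appending a StrOutputParser yields a plain string instead of an AIMessage. A small sketch reusing the prompt and model from above:

from langchain_core.output_parsers import StrOutputParser

# prompt | llm | parser returns the response content as a plain string.
string_chain = prompt | llm | StrOutputParser()
print(
    string_chain.invoke(
        {
            "input_language": "English",
            "output_language": "Italian",
            "input": "I love programming.",
        }
    )
)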

Structured Input and Output

One of Opper's key features is structured output using Pydantic models:

from pydantic import BaseModel, Field
from langchain_core.messages import HumanMessage

class TranslationInput(BaseModel):
    """Structured input for translations."""

    text: str = Field(description="The text to translate")
    target_language: str = Field(description="The target language for translation")
    source_language: str = Field(description="The source language (optional)", default="auto-detect")

class TranslationOutput(BaseModel):
    """Structured output for translations."""

    thoughts: str = Field(description="Translation analysis process")
    original_text: str = Field(description="The original text")
    translated_text: str = Field(description="The translated text")
    source_language: str = Field(description="Detected source language")
    target_language: str = Field(description="Target language")
    confidence: float = Field(description="Translation confidence (0-1)", ge=0, le=1)

# Create a structured model with both input and output schemas
structured_llm = provider.create_structured_model(
    task_name="structured_translation",
    instructions="Translate text and provide structured output with metadata.",
    output_schema=TranslationOutput,
)

# Create structured input for translation
translation_request = TranslationInput(
    text="Hello, world!",
    target_language="Spanish",
    source_language="English",
)

# Create message with structured input
message = HumanMessage(
    content="Translate the provided text to the specified language",
    additional_kwargs=translation_request.model_dump(),
)

# Invoke with structured message
result = structured_llm.invoke([message])

# Access the parsed structured output
parsed_output = result.additional_kwargs.get("parsed")
print(f"Original: {parsed_output.original_text}")
print(f"Translation: {parsed_output.translated_text}")
print(f"Confidence: {parsed_output.confidence}")
parsed_output
API Reference: HumanMessage
Original: Hello, world!
Translation: ¡Hola, mundo!
Confidence: 0.98
TranslationOutput(thoughts='Processing simple greeting translation from English to Spanish. This is a common phrase with standard translation.', original_text='Hello, world!', translated_text='¡Hola, mundo!', source_language='English', target_language='Spanish', confidence=0.98)
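
Because parsed is a TranslationOutput instance, its fields can be validated and acted on directly. Here is a minimal defensive-handling sketch; the JSON fallback is an assumption about what the raw content holds when no parsed object is attached:

from pydantic import ValidationError

parsed = result.additional_kwargs.get("parsed")
if parsed is None:
    # Assumption: when no parsed object is attached, the raw content is JSON
    # matching the output schema, so we validate it ourselves.
    try:
        parsed = TranslationOutput.model_validate_json(result.content)
    except ValidationError as exc:
        raise RuntimeError("Model returned unstructured output") from exc

if parsed.confidence < 0.8:
    print(f"Low-confidence translation ({parsed.confidence:.2f}); flag for review.")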

Tracing and Observability

Opper provides built-in tracing for observability across your AI workflows:

# Start a trace for a workflow
trace_id = provider.start_trace("translation_workflow", "Translate multiple texts")
print(f"Started trace: {trace_id}")

# All subsequent calls will be part of this trace
result1 = llm.invoke([("human", "Translate 'Good morning' to French")])
result2 = llm.invoke([("human", "Translate 'Good evening' to Spanish")])

print(f"Call 1 span: {result1.additional_kwargs.get('span_id')}")
print(f"Call 2 span: {result2.additional_kwargs.get('span_id')}")

# End the trace
provider.end_trace("Translation workflow completed")
print("Trace completed - view in Opper dashboard")

Started trace: 0135cd76-52c6-4629-8cc1-7b4ab9918122
Call 1 span: 01d70752-5195-489d-b4e2-2c284ebb9a78
Call 2 span: fb6937ee-45f8-4f52-b472-875af4c91e2f
Trace completed - view in Opper dashboard
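
Since every call between start_trace and end_trace lands on the same trace, it is worth guaranteeing that the trace is closed even when an invocation fails. A small sketch using only the methods shown above:

# Close the trace even if an invocation raises.
trace_id = provider.start_trace("translation_workflow", "Translate multiple texts")
try:
    result = llm.invoke([("human", "Translate 'Good afternoon' to Italian")])
    print(result.content)
finally:
    provider.end_trace("Translation workflow finished")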

LangGraph Integration

Opper works seamlessly with LangGraph for building complex AI workflows. Here's a simple multi-researcher demo:

from langgraph.graph import StateGraph, END
from typing import TypedDict, List, Annotated
from langgraph.graph.message import add_messages

class ResearchState(TypedDict):
    """State for research workflow."""

    messages: Annotated[List, add_messages]
    query: str
    provider: OpperProvider
    market_research: str
    tech_research: str

def market_researcher(state: ResearchState) -> ResearchState:
    """Market research specialist."""
    provider = state["provider"]

    market_model = provider.create_chat_model(
        task_name="market_research",
        instructions="You are a market research specialist. Focus on market opportunities and business viability.",
    )

    result = market_model.invoke(
        [("human", f"Conduct market research on: {state['query']}")]
    )

    # The add_messages reducer appends new messages, so return only the new one.
    return {
        "market_research": result.content,
        "messages": [result],
    }

def tech_researcher(state: ResearchState) -> ResearchState:
    """Technical research specialist."""
    provider = state["provider"]

    tech_model = provider.create_chat_model(
        task_name="tech_research",
        instructions="You are a technical research specialist. Focus on implementation feasibility and architecture.",
    )

    result = tech_model.invoke(
        [("human", f"Conduct technical feasibility research on: {state['query']}")]
    )

    return {
        "tech_research": result.content,
        "messages": [result],
    }

# Build workflow
workflow = StateGraph(ResearchState)
workflow.add_node("market_researcher", market_researcher)
workflow.add_node("tech_researcher", tech_researcher)

workflow.set_entry_point("market_researcher")
workflow.add_edge("market_researcher", "tech_researcher")
workflow.add_edge("tech_researcher", END)

app = workflow.compile()

# Run workflow
provider_instance = OpperProvider()
trace_id = provider_instance.start_trace("research_workflow", "AI platform research")

initial_state = {
    "messages": [],
    "query": "AI-powered personalized learning platform",
    "provider": provider_instance,
    "market_research": "",
    "tech_research": "",
}

final_state = app.invoke(initial_state)
print("Market Research:", final_state["market_research"][:200] + "...")
print("Tech Research:", final_state["tech_research"][:200] + "...")

provider_instance.end_trace("Research workflow completed")

API Reference: StateGraph | add_messages
Market Research: I'll analyze the market opportunity for an AI-powered personalized learning platform across key dimensions:

Market Overview:
- Global edtech market size: $254.8B (2021), projected CAGR of 13.5% throu...
Tech Research: Technical Feasibility Analysis: AI-Powered Personalized Learning Platform

1. Core Technical Components

a) Learning Management System (LMS) Foundation
- Cloud-based architecture for scalability
- Mic...