# LangChain Integration: Tools Reference

LangChain tool implementations for ProsodyAI.

The `@prosody/langchain` package provides several LangChain-compatible tools.
## ProsodyEmotionTool

Analyzes audio for emotion recognition. This is the primary tool for adding emotional awareness to your agents.
### Configuration

```typescript
import { ProsodyEmotionTool } from '@prosody/langchain';

const tool = new ProsodyEmotionTool({
  // Required
  apiKey: process.env.PROSODY_API_KEY,

  // Optional
  vertical: 'contact_center', // Default vertical
  features: ['emotion', 'vad'], // Features to include
  includeRawProsody: false, // Include prosodic features
  timeout: 30000, // Request timeout (ms)
});
```

### Tool Schema
The tool accepts audio input and returns emotion analysis:
```typescript
// Input schema
interface EmotionToolInput {
  audio: string; // File path, URL, or base64
  transcript?: string; // Optional transcript
  vertical?: string; // Override default vertical
}

// Output schema
interface EmotionToolOutput {
  emotion: string;
  confidence: number;
  valence: number;
  arousal: number;
  dominance: number;
  state?: string;
  metrics?: Record<string, unknown>;
}
```

### Usage Examples
```typescript
const tool = new ProsodyEmotionTool({ apiKey });

// Analyze from a file path
const fromFile = await tool.invoke({
  audio: '/path/to/audio.wav',
});

// Analyze from a URL
const fromUrl = await tool.invoke({
  audio: 'https://storage.example.com/call.wav',
});

// Analyze from a base64-encoded buffer
const fromBuffer = await tool.invoke({
  audio: audioBuffer.toString('base64'),
});

console.log(fromFile);
// {
//   emotion: "frustrated",
//   confidence: 0.87,
//   valence: -0.6,
//   arousal: 0.7,
//   dominance: 0.4,
//   state: "frustrated",
//   metrics: { escalationRisk: "high", csatPredicted: 2.3 }
// }
```

To use the tool inside a tool-calling agent:

```typescript
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { AgentExecutor, createToolCallingAgent } from 'langchain/agents';

const emotionTool = new ProsodyEmotionTool({
  apiKey,
  vertical: 'contact_center',
});

const llm = new ChatOpenAI({ model: 'gpt-4' });

// createToolCallingAgent requires a ChatPromptTemplate that includes
// an agent_scratchpad placeholder, not a plain string.
const prompt = ChatPromptTemplate.fromMessages([
  [
    'system',
    `You are an empathetic customer service assistant.
When given audio, analyze the customer's emotional state
and provide an appropriate response.
Always acknowledge the customer's feelings before
addressing their issue.`,
  ],
  ['human', '{input}'],
  ['placeholder', '{agent_scratchpad}'],
]);

const agent = createToolCallingAgent({ llm, tools: [emotionTool], prompt });
const executor = new AgentExecutor({ agent, tools: [emotionTool] });

const response = await executor.invoke({
  input:
    'Analyze this customer call and suggest a response: https://storage.example.com/angry-customer.wav',
});
```

To stream incremental results:

```typescript
const tool = new ProsodyEmotionTool({ apiKey });

// Stream analysis results as audio arrives
for await (const chunk of tool.stream({
  audio: audioStream,
})) {
  console.log('Emotion update:', chunk.emotion);
}
```

## ProsodyPredictionTool
Get forward predictions for ongoing conversations.
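The outputs (see the schema and usage below) are plain numbers, so downstream routing can stay independent of the SDK. A minimal sketch; the thresholds, action names, and `routeCall` itself are invented for illustration and are not part of `@prosody/langchain`:

```typescript
// Shape mirrors PredictionToolOutput from the schema below.
interface Predictions {
  escalationRisk: number;
  churnRisk: number;
  predictedCsat: number;
  recommendedTone: string;
}

// Illustrative routing rules: escalate first, then retention, then tone.
function routeCall(p: Predictions): string {
  if (p.escalationRisk > 0.7) return 'transfer_to_supervisor';
  if (p.churnRisk > 0.6 || p.predictedCsat < 3) return 'offer_retention_gesture';
  return `continue_with_${p.recommendedTone}_tone`;
}

const action = routeCall({
  escalationRisk: 0.72,
  churnRisk: 0.45,
  predictedCsat: 2.8,
  recommendedTone: 'empathetic',
});
console.log(action); // "transfer_to_supervisor"
```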
### Configuration

```typescript
import { ProsodyPredictionTool } from '@prosody/langchain';

const tool = new ProsodyPredictionTool({
  apiKey: process.env.PROSODY_API_KEY,
  vertical: 'contact_center',
});
```

### Tool Schema
```typescript
// Input schema
interface PredictionToolInput {
  sessionId: string; // Conversation session ID
}

// Output schema
interface PredictionToolOutput {
  escalationRisk: number;
  churnRisk: number;
  predictedCsat: number;
  sentimentForecast: number;
  recommendedTone: string;
  confidence: number;
}
```

### Usage
```typescript
const tool = new ProsodyPredictionTool({ apiKey });

const predictions = await tool.invoke({
  sessionId: 'call-12345',
});

console.log(predictions);
// {
//   escalationRisk: 0.72,
//   churnRisk: 0.45,
//   predictedCsat: 2.8,
//   sentimentForecast: -0.3,
//   recommendedTone: "empathetic",
//   confidence: 0.85
// }
```

## ProsodySessionTool
Manage conversation sessions for multi-turn analysis.
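Multi-turn analysis matters because single-utterance emotion estimates are noisy; a session lets you reason about trajectory across turns. As a self-contained illustration of the idea (not SDK code; `valenceTrend` and its thresholds are invented):

```typescript
// Illustrative only: given per-utterance valence scores accumulated over a
// session, compare the newest score in a recent window against the oldest
// to decide whether sentiment is trending up or down.
function valenceTrend(
  valences: number[],
  window = 3,
): 'improving' | 'declining' | 'flat' {
  if (valences.length < 2) return 'flat';
  const recent = valences.slice(-window);
  const delta = recent[recent.length - 1] - recent[0];
  if (delta > 0.1) return 'improving';
  if (delta < -0.1) return 'declining';
  return 'flat';
}

console.log(valenceTrend([-0.2, -0.4, -0.6])); // "declining"
console.log(valenceTrend([-0.6, -0.2, 0.1])); // "improving"
```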
### Configuration

```typescript
import { ProsodySessionTool } from '@prosody/langchain';

const tool = new ProsodySessionTool({
  apiKey: process.env.PROSODY_API_KEY,
  vertical: 'contact_center',
});
```

### Actions
The session tool supports multiple actions:
```typescript
// Create a new session
const session = await tool.invoke({
  action: 'create',
  metadata: { callId: 'abc123' },
});

// Add an utterance (audio as base64, matching the string input used elsewhere)
const result = await tool.invoke({
  action: 'add_utterance',
  sessionId: session.sessionId,
  audio: audioBuffer.toString('base64'),
  speakerId: 'customer',
});

// Get current predictions
const predictions = await tool.invoke({
  action: 'get_predictions',
  sessionId: session.sessionId,
});

// End the session
const summary = await tool.invoke({
  action: 'end',
  sessionId: session.sessionId,
});
```

## ProsodyVerticalTool
Get vertical-specific metrics and recommendations.
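Vertical metrics, like the predictions above, are plain values your application gates on. A hedged sketch for the healthcare vertical (field names mirror the usage output below; the thresholds and `needsClinicalFollowUp` are invented, not SDK features):

```typescript
// Illustrative thresholds only; shape mirrors the healthcare output below.
interface HealthcareMetrics {
  anxietyMarkers: number;
  depressionMarkers: number;
  distressLevel: 'none' | 'mild' | 'moderate' | 'severe';
}

// Flag a session for clinical follow-up when any marker crosses a
// (made-up) threshold or distress is moderate or worse.
function needsClinicalFollowUp(m: HealthcareMetrics): boolean {
  return (
    m.anxietyMarkers > 0.5 ||
    m.depressionMarkers > 0.5 ||
    m.distressLevel === 'moderate' ||
    m.distressLevel === 'severe'
  );
}

const flagged = needsClinicalFollowUp({
  anxietyMarkers: 0.62,
  depressionMarkers: 0.35,
  distressLevel: 'mild',
});
console.log(flagged); // true
```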
### Configuration

```typescript
import { ProsodyVerticalTool } from '@prosody/langchain';

const tool = new ProsodyVerticalTool({
  apiKey: process.env.PROSODY_API_KEY,
  vertical: 'healthcare',
});
```

### Usage
```typescript
const tool = new ProsodyVerticalTool({ apiKey, vertical: 'healthcare' });

const metrics = await tool.invoke({
  audio: patientAudio,
});

console.log(metrics);
// {
//   state: "anxious",
//   depressionMarkers: 0.35,
//   anxietyMarkers: 0.62,
//   distressLevel: "mild",
//   clinicalAttention: "monitor",
//   mentalHealthScreeningRecommended: true
// }
```

## Creating Custom Tools
Extend the base tools for custom functionality:
```typescript
import { ProsodyBaseTool, type AnalyzeResponse } from '@prosody/langchain';
import { z } from 'zod';

// Defined outside the class so the _call signature can reference it
// (`typeof this.schema` is not valid in a type position).
const customSchema = z.object({
  audio: z.string().describe('Audio file path or URL'),
  context: z.string().optional().describe('Additional context'),
});

class CustomEmotionTool extends ProsodyBaseTool {
  name = 'custom_emotion_analyzer';
  description = 'Analyzes emotion with custom post-processing';
  schema = customSchema;

  async _call(input: z.infer<typeof customSchema>) {
    const result = await this.client.analyze({
      audio: input.audio,
      vertical: 'contact_center',
    });

    // Custom post-processing
    return {
      ...result,
      urgency: this.calculateUrgency(result),
      suggestedAction: this.getSuggestedAction(result),
    };
  }

  private calculateUrgency(result: AnalyzeResponse): string {
    if (result.metrics?.escalationRisk === 'critical') return 'immediate';
    if (result.metrics?.escalationRisk === 'high') return 'high';
    return 'normal';
  }

  private getSuggestedAction(result: AnalyzeResponse): string {
    const actions: Record<string, string> = {
      frustrated: 'Acknowledge frustration and offer immediate assistance',
      angry: 'Escalate to supervisor',
      confused: 'Provide clear explanation',
    };
    return actions[result.emotion] || 'Continue conversation';
  }
}
```

## Tool Callbacks
Add callbacks for logging and monitoring:
```typescript
import { ProsodyEmotionTool } from '@prosody/langchain';

const tool = new ProsodyEmotionTool({
  apiKey,
  callbacks: [
    {
      handleToolStart: async (tool, input) => {
        console.log(`Starting ${tool.name} with input:`, input);
      },
      handleToolEnd: async (output) => {
        console.log('Tool completed:', output);
        // Log to your analytics
        analytics.track('emotion_analyzed', output);
      },
      handleToolError: async (error) => {
        console.error('Tool error:', error);
        // Alert on errors
        alerting.send('ProsodyAI tool error', error);
      },
    },
  ],
});
```
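Callbacks surface failures, but they do not retry them. If you need resilience against transient errors around any tool invocation, a generic wrapper is enough. Everything in this sketch is illustrative and not part of `@prosody/langchain`: `withRetry`, the backoff schedule, and the fake flaky tool standing in for a real `tool.invoke` call.

```typescript
// Hypothetical helper: retry any async invocation with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Backoff schedule: 250ms, 500ms, 1000ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Stand-in for a flaky tool call: fails twice, then succeeds.
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) throw new Error('transient');
  return { emotion: 'neutral' };
};

const result = await withRetry(flaky);
console.log(calls, result.emotion); // 3 neutral
```

In production you would pass `() => tool.invoke({ ... })` as `fn`, and reserve the final `throw` for your `handleToolError` callback to report.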