
Examples

Real-world examples of building emotion-aware applications with ProsodyAI and LangChain.

Customer Service Bot

A complete customer service bot that adapts its responses based on customer emotion:

import { ProsodyEmotionTool, ProsodyPredictionTool } from '@prosody/langchain';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createToolCallingAgent } from 'langchain/agents';
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';
import { BufferMemory } from 'langchain/memory';

// Initialize tools
const emotionTool = new ProsodyEmotionTool({
  apiKey: process.env.PROSODY_API_KEY!,
  vertical: 'contact_center',
});

const predictionTool = new ProsodyPredictionTool({
  apiKey: process.env.PROSODY_API_KEY!,
});

// Create prompt with emotional intelligence
const prompt = ChatPromptTemplate.fromMessages([
  ['system', `You are Sarah, an empathetic customer service representative for TechCorp.

EMOTIONAL RESPONSE GUIDELINES:
- FRUSTRATED/ANGRY: "I completely understand your frustration, and I'm truly sorry for this experience. Let me personally ensure we resolve this right now."
- CONFUSED: "That's a great question, and I want to make sure I explain this clearly. Let me walk you through it step by step."
- ANXIOUS: "I want to reassure you that we're going to take care of this. Here's exactly what's going to happen..."
- SATISFIED: "I'm so glad to hear that! Is there anything else I can help you with today?"

ESCALATION PROTOCOL:
- If escalation risk > 70%: Offer supervisor callback within 1 hour
- If churn risk > 50%: Proactively offer retention discount
- If customer mentions legal/BBB/social media: Immediately escalate

Always:
1. Acknowledge the customer's emotion first
2. Take ownership of the issue
3. Provide a clear path to resolution
4. Confirm understanding before proceeding`],
  new MessagesPlaceholder('chat_history'),
  ['human', '{input}'],
  new MessagesPlaceholder('agent_scratchpad'),
]);

// Create agent
const llm = new ChatOpenAI({ model: 'gpt-4', temperature: 0.7 });
const memory = new BufferMemory({
  memoryKey: 'chat_history',
  returnMessages: true,
});

const agent = createToolCallingAgent({
  llm,
  tools: [emotionTool, predictionTool],
  prompt,
});

const executor = new AgentExecutor({
  agent,
  tools: [emotionTool, predictionTool],
  memory,
  verbose: true,
});

// Handle customer interaction
async function handleCustomer(audioUrl: string, textMessage?: string) {
  const input = textMessage
    ? `Customer audio: ${audioUrl}\nCustomer message: "${textMessage}"\n\nAnalyze their emotional state and respond appropriately.`
    : `Customer audio: ${audioUrl}\n\nAnalyze their emotional state and respond appropriately.`;
  
  const response = await executor.invoke({ input });
  return response.output;
}

// Usage
const response = await handleCustomer(
  'https://calls.example.com/angry-customer.wav',
  "I've been on hold for 45 minutes and no one can help me!"
);
console.log(response);
// "I completely understand your frustration, and I'm truly sorry you've had to wait 
// so long. That's absolutely not the experience we want for you. I'm Sarah, and I'm 
// going to personally make sure we resolve this for you right now. Can you tell me 
// a bit more about the issue you're experiencing?"
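The escalation protocol in the system prompt can also be enforced deterministically in application code, rather than relying on the model alone. A minimal sketch; the field names `escalationRisk` and `churnRisk` are assumptions about the prediction tool's output, and the trigger-word list is illustrative:

```typescript
type EscalationAction = 'supervisor_callback' | 'retention_offer' | 'immediate_escalation';

interface RiskSignals {
  escalationRisk: number; // 0..1, assumed field on the prediction result
  churnRisk: number;      // 0..1, assumed field on the prediction result
  transcript: string;
}

// Mirrors the ESCALATION PROTOCOL section of the system prompt
function requiredActions(signals: RiskSignals): EscalationAction[] {
  const actions: EscalationAction[] = [];

  // Mentions of legal action, the BBB, or social media escalate immediately
  if (/\b(legal|lawyer|lawsuit|BBB|twitter|facebook|social media)\b/i.test(signals.transcript)) {
    actions.push('immediate_escalation');
  }
  if (signals.escalationRisk > 0.7) actions.push('supervisor_callback');
  if (signals.churnRisk > 0.5) actions.push('retention_offer');

  return actions;
}
```

Running this check on every turn alongside the agent gives you a hard guarantee that high-risk calls are flagged even if the model misses the cue.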

Healthcare Screening Assistant

A voice assistant for mental health pre-screening:

import { ProsodyEmotionTool, ProsodySessionTool } from '@prosody/langchain';
import type { EmotionResult, SessionSummary } from '@prosody/langchain';
import { ChatAnthropic } from '@langchain/anthropic';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const emotionTool = new ProsodyEmotionTool({
  apiKey: process.env.PROSODY_API_KEY!,
  vertical: 'healthcare',
});

const sessionTool = new ProsodySessionTool({
  apiKey: process.env.PROSODY_API_KEY!,
  vertical: 'healthcare',
});

const prompt = ChatPromptTemplate.fromMessages([
  ['system', `You are a compassionate mental health screening assistant.

Your role is to conduct initial screenings while being warm and supportive.
You are NOT a replacement for professional mental health care.

IMPORTANT GUIDELINES:
- If distress level is "severe" or clinicalAttention is "immediate": 
  Immediately provide crisis resources and recommend professional help
- If depression markers > 0.6 or anxiety markers > 0.6:
  Gently suggest speaking with a mental health professional
- Always validate the person's feelings
- Use open-ended questions
- Never diagnose or provide medical advice

CRISIS RESOURCES TO SHARE IF NEEDED:
- National Suicide Prevention Lifeline: 988
- Crisis Text Line: Text HOME to 741741
- International Association for Suicide Prevention: https://www.iasp.info/resources/Crisis_Centres/`],
  ['human', '{input}'],
  ['placeholder', '{agent_scratchpad}'],
]);

const llm = new ChatAnthropic({ model: 'claude-3-5-sonnet-latest' });

// `executor` below is an AgentExecutor built from `llm`, `prompt`, and the two
// tools via createToolCallingAgent, exactly as in the customer service example.

async function conductScreening(patientId: string) {
  // Start session
  const session = await sessionTool.invoke({
    action: 'create',
    metadata: { patientId, type: 'mental_health_screening' },
  });
  
  const questions = [
    "How have you been feeling over the past two weeks?",
    "Have you been able to enjoy activities you usually like?",
    "How has your sleep been lately?",
    "Have you been feeling worried or anxious about things?",
    "Is there anything specific that's been on your mind?",
  ];
  
  const responses: Array<{
    question: string;
    emotion: EmotionResult;
    aiResponse: string;
  }> = [];
  
  for (const question of questions) {
    // In real app, this would be actual patient audio
    const patientAudio = await capturePatientResponse(question);
    
    const emotionResult = await sessionTool.invoke({
      action: 'add_utterance',
      sessionId: session.sessionId,
      audio: patientAudio,
      speakerId: 'patient',
    });
    
    // Get AI response based on emotion
    const aiResponse = await executor.invoke({
      input: `Patient responded to: "${question}"
Their emotion: ${emotionResult.emotion}
Depression markers: ${emotionResult.metrics?.depressionMarkers}
Anxiety markers: ${emotionResult.metrics?.anxietyMarkers}
Clinical attention: ${emotionResult.metrics?.clinicalAttention}

Provide a compassionate follow-up or transition to the next question.`,
    });
    
    responses.push({
      question,
      emotion: emotionResult,
      aiResponse: aiResponse.output,
    });
    
    // Check for crisis indicators
    if (emotionResult.metrics?.clinicalAttention === 'immediate') {
      return {
        status: 'crisis_detected',
        recommendation: 'immediate_professional_contact',
        responses,
      };
    }
  }
  
  // End session and get summary
  const summary = await sessionTool.invoke({
    action: 'end',
    sessionId: session.sessionId,
  });
  
  return {
    status: 'completed',
    summary,
    responses,
    recommendation: determineRecommendation(summary),
  };
}

function determineRecommendation(summary: SessionSummary): string {
  const avgDepression = summary.metrics?.averageDepressionMarkers || 0;
  const avgAnxiety = summary.metrics?.averageAnxietyMarkers || 0;
  
  if (avgDepression > 0.6 || avgAnxiety > 0.6) {
    return 'recommend_professional_consultation';
  }
  if (avgDepression > 0.3 || avgAnxiety > 0.3) {
    return 'recommend_follow_up_screening';
  }
  return 'routine_check_in';
}

This example is for illustration only. Mental health screening tools require proper clinical validation, regulatory compliance, and should always include human professional oversight.
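The thresholds in determineRecommendation are easy to exercise in isolation. A self-contained restatement using a stub of the summary metrics (the field names match the snippet above; the summary type itself is stubbed so no live session is needed):

```typescript
interface ScreeningMetrics {
  averageDepressionMarkers?: number;
  averageAnxietyMarkers?: number;
}

// Same threshold logic as determineRecommendation, restated over a stub type
function recommend(metrics: ScreeningMetrics): string {
  const avgDepression = metrics.averageDepressionMarkers ?? 0;
  const avgAnxiety = metrics.averageAnxietyMarkers ?? 0;

  if (avgDepression > 0.6 || avgAnxiety > 0.6) return 'recommend_professional_consultation';
  if (avgDepression > 0.3 || avgAnxiety > 0.3) return 'recommend_follow_up_screening';
  return 'routine_check_in';
}
```

Note that a single elevated marker is enough to trigger the stronger recommendation; the thresholds are OR'd, not averaged together.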

Sales Intelligence Copilot

Real-time coaching for sales representatives:

import { ProsodyEmotionTool, ProsodyPredictionTool } from '@prosody/langchain';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const emotionTool = new ProsodyEmotionTool({
  apiKey: process.env.PROSODY_API_KEY!,
  vertical: 'sales',
});

const predictionTool = new ProsodyPredictionTool({
  apiKey: process.env.PROSODY_API_KEY!,
});

const salesCoachPrompt = ChatPromptTemplate.fromMessages([
  ['system', `You are an AI sales coach providing real-time guidance.

PROSPECT EMOTIONAL STATES AND RECOMMENDED ACTIONS:
- HIGHLY_ENGAGED: Strike while hot - ask for the close or next commitment
- READY_TO_BUY: Close now - don't oversell, handle paperwork
- SKEPTICAL: Provide social proof - share relevant case studies and ROI data
- OBJECTING: Acknowledge concern, ask clarifying questions, address directly
- PRICE_SENSITIVE: Focus on value and ROI, offer payment plans if available

BUYING SIGNALS TO WATCH:
- Questions about implementation
- Asking about other customers
- Discussing timeline
- Involving additional stakeholders

DANGER SIGNALS:
- Decreasing engagement (arousal dropping)
- Increasing skepticism
- Repeated price objections
- Vague responses to commitment questions

Provide brief, actionable coaching tips that can be read in <5 seconds.`],
  ['human', '{input}'],
  ['placeholder', '{agent_scratchpad}'],
]);

// `executor` below is an AgentExecutor built from `salesCoachPrompt` and the
// two tools via createToolCallingAgent, as in the customer service example.

interface CoachingTip {
  urgency: 'low' | 'medium' | 'high';
  tip: string;
  suggestedPhrase?: string;
}

async function getRealtimeCoaching(
  prospectAudio: Buffer,
  sessionId: string,
  context: { dealValue: number; stage: string }
): Promise<CoachingTip> {
  // Analyze prospect emotion
  const emotion = await emotionTool.invoke({
    audio: prospectAudio.toString('base64'),
  });
  
  // Get predictions
  const predictions = await predictionTool.invoke({
    sessionId,
  });
  
  // Generate coaching
  const response = await executor.invoke({
    input: `Prospect Analysis:
- Emotion: ${emotion.state} (${emotion.confidence} confidence)
- Buying Intent: ${emotion.metrics?.buyingIntent}
- Deal Close Probability: ${predictions.dealCloseProbability}
- Engagement Score: ${emotion.metrics?.engagementScore}

Context:
- Deal Value: $${context.dealValue.toLocaleString()}
- Sales Stage: ${context.stage}

Provide a brief coaching tip for the sales rep.`,
  });
  
  return parseCoachingResponse(response.output, predictions.dealCloseProbability);
}

function parseCoachingResponse(
  response: string,
  dealProbability: number
): CoachingTip {
  let urgency: 'low' | 'medium' | 'high' = 'low';
  
  if (dealProbability > 0.7) urgency = 'high';
  else if (dealProbability > 0.4) urgency = 'medium';
  
  // Extract suggested phrase if present
  const phraseMatch = response.match(/["']([^"']+)["']/);
  
  return {
    urgency,
    tip: response.split('\n')[0], // First line as tip
    suggestedPhrase: phraseMatch?.[1],
  };
}

// WebSocket handler for real-time coaching (assumes an Express app extended
// with the express-ws plugin, which provides `app.ws`)
app.ws('/coaching/:sessionId', async (ws, req) => {
  const { sessionId } = req.params;
  const { dealValue, stage } = req.query;
  
  ws.on('message', async (audioChunk: Buffer) => {
    try {
      const coaching = await getRealtimeCoaching(
        audioChunk,
        sessionId,
        { dealValue: Number(dealValue), stage: String(stage) }
      );
      
      ws.send(JSON.stringify(coaching));
    } catch (error) {
      console.error('Coaching error:', error);
    }
  });
});
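Audio chunks typically arrive much faster than a rep can read tips, so in practice you may want to gate how often coaching is pushed over the socket. A minimal sketch of such a gate; the 5-second interval mirrors the "<5 seconds" readability target in the prompt, and letting high-urgency tips bypass the gate is an assumption about desired behavior:

```typescript
interface Tip {
  urgency: 'low' | 'medium' | 'high';
  tip: string;
}

// Returns a predicate deciding whether a tip should be sent now.
// The clock is injected so the gate is easy to test deterministically.
function createTipGate(minIntervalMs: number, now: () => number = Date.now) {
  let lastSentAt = -Infinity;
  return (tip: Tip): boolean => {
    const t = now();
    // High-urgency tips always go through; others respect the minimum interval
    if (tip.urgency !== 'high' && t - lastSentAt < minIntervalMs) return false;
    lastSentAt = t;
    return true;
  };
}
```

In the WebSocket handler above, you would wrap the send as `if (gate(coaching)) ws.send(JSON.stringify(coaching));` with one gate per session.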

Voice-First RAG with Emotional Context

A RAG system that considers emotional state when retrieving documents and generating responses:

import { ProsodyEmotionTool } from '@prosody/langchain';
import type { EmotionResult } from '@prosody/langchain';
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { PineconeStore } from '@langchain/pinecone';

const emotionTool = new ProsodyEmotionTool({
  apiKey: process.env.PROSODY_API_KEY!,
  vertical: 'contact_center',
});

// Initialize vector store (`pineconeIndex` is an index handle from the
// Pinecone client, e.g. `new Pinecone({ apiKey }).index('support-docs')`)
const embeddings = new OpenAIEmbeddings();
const vectorStore = await PineconeStore.fromExistingIndex(embeddings, {
  pineconeIndex,
  namespace: 'support-docs',
});

async function emotionAwareRAG(
  userAudio: Buffer,
  userText: string
): Promise<string> {
  // Analyze emotion from voice
  const emotion = await emotionTool.invoke({
    audio: userAudio.toString('base64'),
    transcript: userText,
  });
  
  // Adjust retrieval based on emotion
  const retrievalConfig = getRetrievalConfig(emotion);
  
  // Retrieve relevant documents
  const docs = await vectorStore.similaritySearch(
    userText,
    retrievalConfig.k,
    retrievalConfig.filter
  );
  
  // Generate response with emotional context
  const llm = new ChatOpenAI({ model: 'gpt-4' });
  
  const response = await llm.invoke([
    {
      role: 'system',
      content: `You are a helpful support assistant. 
      
The user's current emotional state: ${emotion.emotion} (valence: ${emotion.valence})
Escalation risk: ${emotion.metrics?.escalationRisk}

Tone guidance: ${getToneGuidance(emotion)}

Use the following context to answer the user's question:
${docs.map(d => d.pageContent).join('\n\n')}`,
    },
    {
      role: 'user',
      content: userText,
    },
  ]);
  
  return response.content as string;
}

function getRetrievalConfig(emotion: EmotionResult) {
  // Frustrated users get more comprehensive responses
  if (emotion.emotion === 'frustrated' || emotion.emotion === 'angry') {
    return {
      k: 5, // More context
      filter: { includeEscalation: true },
    };
  }
  
  // Confused users get simpler, more focused results
  if (emotion.emotion === 'confused') {
    return {
      k: 3,
      filter: { difficulty: 'beginner' },
    };
  }
  
  return { k: 4, filter: {} };
}

function getToneGuidance(emotion: EmotionResult): string {
  const guidance: Record<string, string> = {
    frustrated: 'Be empathetic, apologetic, and solution-focused. Acknowledge their frustration.',
    angry: 'Stay calm and professional. Focus on resolution. Offer escalation if needed.',
    confused: 'Be patient and clear. Use simple language. Offer to explain step by step.',
    anxious: 'Be reassuring. Provide clear timelines and expectations.',
    satisfied: 'Be friendly and efficient. Confirm their understanding.',
  };
  
  return guidance[emotion.emotion] || 'Be helpful and professional.';
}

Deploying to Production

For production deployments, consider:

  • Using connection pooling for the ProsodyAI client
  • Implementing circuit breakers for resilience
  • Adding observability with LangSmith or similar
  • Caching emotion results for repeated analysis

For example, tracing tool calls with LangSmith:

import { ProsodyEmotionTool } from '@prosody/langchain';
import { LangChainTracer } from '@langchain/core/tracers/tracer_langchain';
import { Client } from 'langsmith';

// Production configuration
const emotionTool = new ProsodyEmotionTool({
  apiKey: process.env.PROSODY_API_KEY!,
  vertical: 'contact_center',
  timeout: 10000,
  callbacks: [
    // LangChainTracer sends run traces to the LangSmith project below
    new LangChainTracer({
      client: new Client(),
      projectName: 'production-support-bot',
    }),
  ],
});

// Add circuit breaker
import CircuitBreaker from 'opossum';

const emotionBreaker = new CircuitBreaker(
  (input: any) => emotionTool.invoke(input),
  {
    timeout: 15000,
    errorThresholdPercentage: 50,
    resetTimeout: 30000,
  }
);

emotionBreaker.fallback(() => ({
  emotion: 'neutral',
  confidence: 0,
  valence: 0,
  arousal: 0,
  dominance: 0.5,
  fallback: true,
}));

// Use in production
const result = await emotionBreaker.fire({ audio: audioUrl });
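The caching bullet above can be sketched as a small in-memory TTL cache keyed by audio URL. The 60-second TTL and URL keying are assumptions; for multi-instance deployments you would swap this for a shared store such as Redis:

```typescript
// Minimal in-memory TTL cache for emotion results
class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  // The clock is injected so expiry is easy to test deterministically
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= this.now()) {
      this.entries.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.entries.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}

// Wrap any analyzer (e.g. the circuit-breaker `fire` above) with the cache
async function analyzeWithCache<V>(
  audioUrl: string,
  analyze: (audio: string) => Promise<V>,
  cache: TtlCache<V>
): Promise<V> {
  const cached = cache.get(audioUrl);
  if (cached !== undefined) return cached;
  const result = await analyze(audioUrl);
  cache.set(audioUrl, result);
  return result;
}
```

Usage would look like `analyzeWithCache(audioUrl, (audio) => emotionBreaker.fire({ audio }), new TtlCache(60_000))`, so repeated analysis of the same recording skips the API call entirely.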