LLM Node

The LLM (Large Language Model) node enables your bot to leverage AI language models to process conversations, generate responses, and extract information.

Overview

LLM nodes allow your bot to:

  • Process and understand natural language from users
  • Generate contextually relevant responses
  • Extract specific information from conversations
  • Make assessments or classifications based on text
  • Save outputs as variables for later use in your flow

Configuration Options

Prompt Engineering

The most important part of using the LLM node is crafting a clear prompt:

  • Type a specific instruction for what you want the LLM to do
  • Be clear and concise in your instructions
  • Include examples if helpful for complex tasks
  • You can insert variables from previous nodes using the Variables button (see the example below)
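
A sample prompt for a classification task is shown below. The {{last_user_message}} placeholder is purely illustrative; in practice you would insert the actual variable with the Variables button, and your platform's placeholder syntax may differ:

```
Classify the following customer message as "billing", "technical", or "general".
Respond with only the category name, in lowercase, and nothing else.

Message: {{last_user_message}}
```

Note how the prompt constrains the output format; this makes the result much easier to use in a Condition node later.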

Conversation History

You can choose whether to include previous conversation history:

  • Toggle "Include conversation history" to provide context from earlier messages
  • This helps the LLM understand the full conversation thread
  • Useful for tasks that require understanding the entire interaction (see the sketch below)
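
The platform assembles this context for you, but as a rough mental model, here is a minimal Python sketch assuming an OpenAI-style chat message list (the internal format is an assumption, not documented behavior):

```python
# Earlier turns of the conversation, oldest first.
history = [
    {"role": "user", "content": "My last invoice looks wrong."},
    {"role": "assistant", "content": "Sorry to hear that. What seems off?"},
    {"role": "user", "content": "I was charged twice in March."},
]

instruction = 'Classify the customer query as "billing", "technical", or "general".'

# With history enabled, earlier turns are included alongside your instruction,
# so the model can resolve references like "it" or "that charge".
messages = [{"role": "system", "content": instruction}] + history

# With history disabled, the model sees only the instruction and the latest input.
messages_no_history = [{"role": "system", "content": instruction}, history[-1]]
```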

Response Handling

Configure how the LLM's response should be handled:

  • Send response to user: Toggle whether to automatically send the LLM's response to the user
  • Save LLM response in variable: Store the output in a variable for use in later nodes
  • You can reference this variable in subsequent nodes, such as Condition or Message nodes (see the example below)
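
For example, if the LLM's output is saved in a variable named queryType, a later Message node might reference it like this (the {{...}} placeholder syntax is an assumption; use the Variables button to insert the real reference):

```
Thanks for reaching out! I've logged this as a {{queryType}} request
and routed it to the right team.
```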

Common Use Cases

  • Lead Qualification: Analyze conversation text to score or classify leads
  • Sentiment Analysis: Determine customer sentiment from their messages
  • Information Extraction: Pull specific details from unstructured text
  • Content Generation: Create custom responses based on specific criteria
  • Decision Making: Evaluate complex inputs to recommend next steps
  • Data Transformation: Convert information from one format to another

Best Practices

  1. Be specific in prompts: The more specific your instructions, the better the results
  2. Test thoroughly: Try different prompts to see which produces the best results
  3. Set boundaries: Include what the LLM should NOT do in your instructions
  4. Consider token limits: Very long prompts or conversation histories may get truncated
  5. Handle uncertainty: Plan for cases where the LLM might not produce the expected output
  6. Validate outputs: For critical applications, verify the LLM's output before using it for important decisions (see the sketch after this list)
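
As an illustration of points 5 and 6, here is a minimal Python sketch of normalizing and validating a classification result before branching on it. The label set matches the triage example below; the fallback value is an illustrative choice, not a platform default:

```python
ALLOWED = {"billing", "technical", "general"}

def normalize_query_type(raw: str) -> str:
    """Strip whitespace and quotes, lowercase, and fall back if unrecognized."""
    label = raw.strip().strip("\"'").lower()
    # Fall back to a safe default rather than branching on an unexpected value.
    return label if label in ALLOWED else "general"

print(normalize_query_type(' "Billing" '))         # -> billing
print(normalize_query_type("Hard to say, sorry"))  # -> general (fallback)
```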

Connecting to Other Nodes

After an LLM node, you typically connect to:

  • A Message node to send information derived from the LLM
  • A Condition node to branch based on the LLM's assessment
  • A Webhook node to send the processed information to external systems
  • Another LLM node for sequential processing
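
Putting these together, a typical arrangement might look like the following (node names and branch values are illustrative):

```
[LLM node: classify query, save as queryType, response not sent to user]
        │
[Condition node: check queryType]
   ├─ "billing"   → [Message node: billing reply] → [Webhook node: open billing ticket]
   ├─ "technical" → [Message node: tech reply]    → [Webhook node: open tech ticket]
   └─ otherwise   → [Message node: general reply]
```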

Example Implementations

Lead Scoring

Prompt: Based on the conversation, rank the lead and output a lead score
Variable Stored: leadScore
Next Step: Connect to a Condition node that branches on the score value
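
For instance, assuming the prompt asks for a score on a 1-10 scale, the Condition node might branch like this (both the scale and the thresholds are illustrative):

```
leadScore >= 8   → Message node: offer a sales call
leadScore 5-7    → Message node: send nurture content
leadScore <= 4   → Message node: share general resources
```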

Customer Support Triage

Prompt: Analyze the customer query and classify it as "billing", "technical", or "general"
Variable Stored: queryType
Next Step: Connect to a Condition node that routes to different support teams

Notes and Limitations

  • Response quality is highly dependent on the clarity of your prompt
  • The LLM works best with clear, specific instructions
  • Results may vary for very complex or ambiguous requests
  • Consider data privacy implications when processing sensitive information
  • For optimal performance, keep conversation history concise and relevant