Creating Your First Agent

This guide walks you through the editor tabs and the essential settings to get your first agent working quickly.

Step 1: Basic Information

Start by configuring your agent’s fundamental settings in the Basic Info tab: name, language, and system prompt.

Agent Name

Choose a descriptive name that clearly identifies your agent’s purpose:
  • Good: “Customer Support Bot”, “Sales Qualification Agent”
  • Avoid: “Agent 1”, “Test Bot”, “My Agent”

Primary Language

Select the language your agent will primarily use for conversations. This affects:
  • Voice synthesis language
  • Speech recognition accuracy
  • Cultural context for responses

System Prompt

This is the most important setting: it defines your agent’s personality and behavior. For example:

    You are a helpful customer support agent for [Your Company].

    Your role:
    - Help customers with account issues, billing questions, and product support
    - Be friendly, patient, and professional
    - Escalate complex issues to human agents when needed
    - Always ask clarifying questions to understand the problem

    Guidelines:
    - Keep responses concise but helpful
    - Use a warm, professional tone
    - Focus on resolving issues quickly

Pro Tip: Start with a simple, clear system prompt. You can always refine it based on how your agent performs in testing.

Step 2: LLM Configuration

Configure the language model that powers your agent’s intelligence. In the LLM Configuration tab, select your vendor, model, and model settings.

Vendor Selection

Choose from the following GA providers:
  • openai — OpenAI models (GPT-4, GPT-4.1, etc.)
  • groq — Groq’s high-speed inference
  • grok — xAI’s Grok models
  • deepseek — DeepSeek models
  • customCompatible — Custom OpenAI-compatible endpoints

Model Parameters

Fine-tune your model’s behavior with these parameters:

Temperature (0.0 - 2.0)
  • Controls response randomness
  • Lower (0.0-0.5): More focused, deterministic responses
  • Medium (0.5-1.0): Balanced creativity and consistency
  • Higher (1.0-2.0): More creative, varied responses
  • Recommended: 0.7 for most conversational agents
Top P (0.0 - 1.0)
  • Controls diversity via nucleus sampling
  • Lower values: More focused word choices
  • Higher values: More diverse vocabulary
  • Recommended: 0.9 or leave default
Max Tokens (positive integer)
  • Maximum response length
  • Higher values allow longer responses
  • Consider latency and cost tradeoffs
  • Recommended: 150-300 for conversational agents
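As a sketch of how the three parameters fit together, the helper below packages the recommended defaults from this guide into the field names used by OpenAI-compatible chat APIs (the function name and range checks are our own, not a platform API):

```python
# Sketch: validate and package model parameters with this guide's
# recommended defaults. Field names follow OpenAI-compatible APIs.

def build_llm_params(temperature=0.7, top_p=0.9, max_tokens=300):
    """Validate ranges and return a parameter dict for a chat request."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be within 0.0-2.0")
    if not 0.0 <= top_p <= 1.0:
        raise ValueError("top_p must be within 0.0-1.0")
    if max_tokens < 1:
        raise ValueError("max_tokens must be a positive integer")
    return {
        "temperature": temperature,  # 0.7: balanced for conversation
        "top_p": top_p,              # 0.9: slightly focused sampling
        "max_tokens": max_tokens,    # 150-300 keeps replies short and cheap
    }

params = build_llm_params()
```

Out-of-range values fail fast here rather than producing confusing model behavior at call time.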

OpenAI-Specific Options

When using OpenAI, you can enable:

Use priority tier (lower latency)
  • Sets vendorSpecificOptions.service_tier = 'priority'
  • Reduces response latency for time-sensitive applications
  • May incur additional costs
  • Default: Off

Custom Compatible Providers

For customCompatible, you must provide:
  • API Endpoint: Your custom OpenAI-compatible endpoint URL
  • API Key: Authentication key for your endpoint
Tip: Start with openai and gpt-4.1-mini for the best balance of quality and speed. You can always switch providers later based on your needs.
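For customCompatible, the two required settings are just a base URL and a bearer credential. A minimal sketch of assembling them (the environment-variable names, default URL, and dict keys are illustrative, not a platform convention):

```python
import os

# Sketch: gather the two required customCompatible settings.
# Env-var names and the example.com default are placeholders.

def custom_provider_config():
    endpoint = os.environ.get("CUSTOM_LLM_ENDPOINT", "https://llm.example.com/v1")
    api_key = os.environ.get("CUSTOM_LLM_API_KEY", "")
    if not endpoint.startswith("https://"):
        raise ValueError("endpoint must be an https URL")
    return {
        "apiEndpoint": endpoint,
        "headers": {"Authorization": f"Bearer {api_key}"},
    }

cfg = custom_provider_config()
```

Keeping the key in an environment variable (rather than in the agent config itself) avoids leaking it through exports or screenshots.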

Step 3: Voice & Speech

Configure how your agent sounds to create the right experience for your users. In the Voice & Speech tab, select a TTS provider and voice.

TTS Provider Selection

Choose from these GA text-to-speech providers:
  • ElevenLabs — High-quality, natural-sounding voices with extensive customization
  • Cartesia — Fast, low-latency voices with emotion support
  • Dasha — Optimized for conversational AI with wide speed range
  • Inworld — Character voices with pitch and temperature controls
  • LMNT — Consistent, reliable voice synthesis

Voice Selection

Each provider offers a variety of voices. You can:
  • Browse available voices by name
  • Preview voices before selecting
  • Filter by language and accent
  • Add custom voice IDs not in the default list

Speed Adjustment

Speed support varies by provider:
  • ElevenLabs: 0.7× to 1.2×
  • Cartesia: 0× to 2.0× (0 = fastest)
  • Dasha: 0.25× to 4.0× (widest range)
  • Inworld: 0.8× to 1.5×
  • LMNT: Fixed speed (1.0×)
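Because the supported range differs per provider, it can help to clamp a requested speed client-side before saving. A minimal sketch (the range table mirrors the list above; the function name is our own):

```python
# Per-provider TTS speed ranges from the list above: (min, max).
SPEED_RANGES = {
    "elevenlabs": (0.7, 1.2),
    "cartesia":   (0.0, 2.0),   # 0 = fastest
    "dasha":      (0.25, 4.0),  # widest range
    "inworld":    (0.8, 1.5),
    "lmnt":       (1.0, 1.0),   # fixed speed
}

def clamp_speed(provider: str, requested: float) -> float:
    """Clamp a requested playback speed into the provider's supported range."""
    lo, hi = SPEED_RANGES[provider.lower()]
    return max(lo, min(hi, requested))
```

For example, requesting 2.0x from ElevenLabs silently becomes 1.2x, and LMNT always resolves to 1.0x.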

ASR (Speech Recognition)

Automatic Selection: Speech recognition (ASR/STT) is automatically selected and managed by the Dasha platform. There is no user-visible toggle or configuration needed. The platform uses Auto, Deepgram, or Microsoft STT vendors as appropriate.
For detailed voice configuration options, see Voice & Speech.

Step 4: Tools & Functions

In the Tools tab, add custom functions your agent can call during conversations. This enables your agent to:
  • Query external APIs
  • Look up customer data
  • Perform calculations
  • Schedule appointments
  • Transfer calls
Start with simple tools and add more as needed. Each tool requires:
  • Name: Unique identifier for the function
  • Description: What the function does (helps the LLM decide when to call it)
  • Parameters: Input schema (JSON Schema format)
  • Implementation: Your backend endpoint that executes the function
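Putting the first three pieces together, a tool definition typically looks like the sketch below. The layout follows common LLM function-calling conventions and the tool itself (`lookup_order`) is a hypothetical example; see Tools & Functions for the platform's exact envelope:

```python
import json

# Illustrative tool definition: name, description, and a JSON Schema
# describing the parameters. "lookup_order" is a made-up example.
lookup_order = {
    "name": "lookup_order",
    "description": "Fetch the status of a customer's order by order number.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_number": {
                "type": "string",
                "description": "The order number the caller provides",
            }
        },
        "required": ["order_number"],
    },
}

print(json.dumps(lookup_order, indent=2))
```

A specific, behavioral description matters most: it is what the LLM reads when deciding whether to call the function.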
For detailed guidance, see Tools & Functions.

Step 5: Schedule & Availability

In the Schedule tab, configure when your agent is available to take calls.

Business Hours
  • Set hours per weekday (Monday through Sunday)
  • Multiple time blocks per day supported
  • Example: Mon-Fri 9:00-12:00, 13:00-18:00
Timezone
  • Defaults to your browser’s timezone
  • Important for scheduling outbound calls
  • Affects call routing and availability
Calls outside business hours can be:
  • Rejected automatically
  • Sent to voicemail (if configured)
  • Queued for next available time
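The business-hours logic above amounts to looking up the weekday's time blocks and checking the current time against them. A minimal sketch using the example schedule (Mon-Fri 9:00-12:00 and 13:00-18:00; the function and dict names are our own):

```python
from datetime import datetime, time

# Example schedule from this step: weekday -> list of (open, close) blocks.
# Weekday numbering follows Python: Monday = 0 ... Sunday = 6.
BUSINESS_HOURS = {
    d: [(time(9, 0), time(12, 0)), (time(13, 0), time(18, 0))]
    for d in range(5)  # Mon-Fri; Sat/Sun absent = closed
}

def is_open(when: datetime) -> bool:
    """Return True if the agent accepts calls at the given local time."""
    blocks = BUSINESS_HOURS.get(when.weekday(), [])
    return any(start <= when.time() < end for start, end in blocks)
```

Note that `when` must already be in the agent's configured timezone; converting before the lookup is what makes the timezone setting matter for outbound calls.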

Step 6: Features

In the Features tab, enable GA runtime features for your agent:

Language Switching
  • Allow users to request different languages mid-call
  • Agent adapts voice and responses to match
  • Requires multilingual LLM and voice support
Additional Features
  • Background noise handling (ambient noise)
  • Transfer capabilities (cold, warm, HTTP)
  • Talk-first greetings
  • Maximum call duration limits
  • IVR detection
  • Post-call analysis
These features are optional and can be enabled as your needs grow.

Step 7: Webhooks

In the Webhooks tab, set up webhooks to receive notifications about call events:

Start Webhook
  • Called when a conversation begins
  • Receive caller information and context
  • Return dynamic data for the call
Result Webhook
  • Called when a conversation ends
  • Receive full transcript and metadata
  • Process outcomes and update your systems
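As a sketch of what a Result-webhook receiver does with the payload, the handler below summarizes a finished call for downstream systems. The field names (`transcript`, `durationSec`, `speaker`) are illustrative placeholders; see Webhooks & Events for the real schema:

```python
# Sketch: process a Result-webhook payload. Field names here are
# placeholders, not the documented webhook schema.

def handle_result_webhook(payload: dict) -> dict:
    """Summarize a finished call for downstream systems."""
    transcript = payload.get("transcript", [])
    return {
        "turns": len(transcript),
        "caller_said_anything": any(t.get("speaker") == "user" for t in transcript),
        "duration_sec": payload.get("durationSec", 0),
    }

summary = handle_result_webhook({
    "durationSec": 42,
    "transcript": [
        {"speaker": "agent", "text": "Hi, how can I help?"},
        {"speaker": "user", "text": "I have a billing question."},
    ],
})
```

In production this function would sit behind an HTTPS endpoint that verifies the webhook's signature before trusting the payload.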
For webhook security, events, and testing, see Webhooks & Events.

Step 8: MCP Connections

In the MCP Connections tab, connect Model Context Protocol (MCP) servers to expand your agent’s capabilities:

What is MCP?
  • Standard protocol for connecting AI agents to data sources and tools
  • Pre-built integrations for popular services
  • Community-maintained server ecosystem
Benefits
  • Access external data sources
  • Use pre-built tool sets
  • Maintain context across sessions
  • Simplify complex integrations
For setup instructions, see MCP Connections.

Step 9: Phone & SIP

In the Phone & SIP panel, configure phone connectivity for real calls:

When to Configure
  • Only needed for actual phone calls
  • Not required for dashboard testing
  • Skip this for web widget deployments
What to Configure
  • Phone numbers for inbound calls
  • SIP settings for your telephony provider
  • Call routing rules
  • Twilio integration (if using Twilio)
For detailed phone setup, see Phone Numbers and Twilio Integration.

Step 10: Review & Finalize

Before saving, review your configuration in the Review tab.

Pre-Save Checklist
  • ✓ Agent name is descriptive and meaningful
  • ✓ System prompt is clear and specific
  • ✓ LLM provider and model selected
  • ✓ Voice provider and voice chosen
  • ✓ Required tools configured (if any)
  • ✓ Business hours set correctly
  • ✓ Webhooks configured (if needed)
Configuration Summary

The Review tab shows a complete summary of all your settings. Verify:
  • All required fields are filled
  • Settings match your requirements
  • No validation errors present
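If you build agents programmatically, the checklist can be mirrored as a quick validation pass over a draft configuration. This is a sketch under assumed field names (`name`, `systemPrompt`, `llm`, `voice`), which mirror this guide's steps rather than a documented API shape:

```python
def validate_agent(config: dict) -> list:
    """Return a list of human-readable problems; an empty list means
    the draft is ready to save. Field names are illustrative."""
    problems = []
    name = config.get("name", "")
    if not name or name.lower().startswith(("agent ", "test")):
        problems.append("Agent name is missing or not descriptive")
    if len(config.get("systemPrompt", "")) < 20:
        problems.append("System prompt is empty or too vague")
    if not config.get("llm", {}).get("model"):
        problems.append("No LLM model selected")
    if not config.get("voice", {}).get("voiceId"):
        problems.append("No voice chosen")
    return problems
```

Running this before save catches the "Agent 1 / Be helpful" class of mistakes listed under Common Configuration Mistakes below.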
Once everything looks good, proceed to save your agent.

Save and Test

  1. Save Your Agent
    • Click “Save Agent” to create your agent
    • Your agent starts in “Draft” status
  2. Enable Your Agent
    • Toggle the “Enabled” switch
    • Your agent is now active
  3. Test Your Agent
    • Use the dashboard test widget
    • Try different conversation scenarios
    • Verify voice quality and responses

Common Configuration Mistakes

Avoid these common mistakes:
  • Too vague system prompts: “Be helpful” is not specific enough
  • Wrong temperature settings: Too high makes responses inconsistent
  • Mismatched voice/language: Ensure voice matches your target audience
  • Skipping testing: Always test before enabling in production

Next Steps

Now that you’ve created your first agent:
  1. Test Thoroughly: Use the Testing Overview guide
  2. Configure Advanced Features: Explore Advanced Features
  3. Deploy Your Agent: Learn about Phone Numbers and Web Widgets

Troubleshooting

Agent Not Responding

  • Check if agent is enabled
  • Verify system prompt is not empty
  • Test with simple questions first

Voice Issues

  • Ensure voice provider is configured
  • Test voice preview before saving
  • Check language settings match

Poor Conversation Quality

  • Refine your system prompt
  • Adjust temperature settings
  • Test with different conversation types
Need Help? Check our Troubleshooting Guide for detailed solutions to common problems.

API Cross‑Refs