Regal Copilot — Capabilities, Use Cases, and Limits
1. Overview
What Copilot is
Regal Copilot is an AI assistant backed by agent-service. It works inside your Regal brand: AI agents, test cases, simulations, real calls and transcripts, contacts, journeys, and task routing. Most actions are read-only; a smaller set creates or updates drafts, tests, or simulation runs (see section 3).
The primary Copilot reasoning model today is OpenAI GPT-5.1; we are evaluating additional models, with more to announce soon.
How requests are routed
- Copilot decides which guided workflow applies (for example building an agent vs analyzing calls vs troubleshooting routing) from the user’s intent. Refining an existing agent is handled differently from building a new one from scratch.
Safety
- Copilot asks before deleting or bulk-creating content. Agent changes are saved as drafts until someone publishes in the Regal app. Journeys and Routing Rules support read-only capabilities only.
- When Copilot loads your AI agent or journey configuration, secret values in HTTP headers for webhooks and custom actions are masked before anything is sent to the assistant to read.
In scope vs out of scope
- In scope: your Regal agents, calls, tests, simulations, journeys, tasks, and contacts—even when the question is vague; Copilot should ask a short follow-up instead of refusing.
- Out of scope: non-Regal questions, unrelated web research, or “analyze our entire warehouse” style asks.
- In the works: Copilot does not yet run Looker, Snowflake, or broad “query the whole dataset” reporting. For now, use Regal for what Copilot can reach (see below); we are actively adding data access for Copilot!
Framing — what works well vs. what’s harder currently
Where Copilot shines
For a deep dive on something specific—one journey and how it’s set up, one AI agent, a small sample of calls or recordings, one task’s routing history, or a similarly bounded question—Copilot is built to give quick, accurate guidance tied to what’s in Regal.
What’s harder (and still evolving)
Anything that depends on scanning across very large sets of records from a vague prompt (for example “show me every problem call last year” with no other detail) is difficult: answers may be partial, take multiple steps, or need you to narrow the ask. Broader “query everything” style work is not Copilot’s sweet spot today—see section 5 for scale and roadmap notes.
How to get better answers
Share specific anchors whenever you can so Copilot can narrow quickly instead of guessing across the whole brand. Examples that help a lot:
- Calls / recordings: a call or recording id, a disposition, or the Recording link pasted directly if you have it.
- Contacts: profile context—phone, email, name, or id—especially when the name is common.
- Time: a date range or clear timeframe (“since the deploy Tuesday”, “last 48 hours”).
- Automation: journey name or id, event name, or what changed and which version you care about.
- Routing / tasks: the Task id as shown in Regal, the queue, or the agent you expected—plus what triggered the question (campaign, rule change, customer report).
Even one or two concrete details usually improve the answer more than a long story with no identifiers.
2. Tools Copilot has access to
Copilot’s MCP server registers the tools below (names are exact MCP identifiers).
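For orientation, MCP clients invoke these tools with the protocol's standard JSON-RPC `tools/call` request. A minimal sketch of that request shape in Python follows; the tool name and arguments are illustrative examples, not a prescribed payload:

```python
import json

# Shape of a standard MCP "tools/call" JSON-RPC request.
# The tool name and arguments below are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list-recordings",  # exact MCP identifier, as registered
        "arguments": {"disposition": "voicemail", "limit": 25},
    },
}

print(json.dumps(request, indent=2))
```

Each tool's accepted arguments are defined by its registered input schema; the filters shown here are assumptions for illustration.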
2.1 Regal knowledge base (coming next week!)
| Tool / capability | Typical use |
|---|---|
query-regal-kb | Search or fetch relevant chunks from the official developer and support docs for Copilot replies and workflows. |
2.2 AI agents (fetch, build helpers, draft writes)
| Tool | Typical use |
|---|---|
fetch-agent | Load one agent by UUID (full config, editor URL). |
fetch-latest-agent | Load the highest version (draft or live); run before edit-agent to supply the correct updates.version. |
list-agents / list-ai-agents | List agents with filters (same handler, two names). |
create-agent | Write — create a new voice AI agent. |
edit-agent | Write — update agent; always saves as draft. |
explore-agent-context | Builder: compare existing agents as references before a new build. |
summarize-agent-plan | Builder: compact plan for user confirmation before generation. |
generate-agent-config | Builder / reference: section-by-section config guidance for edit-agent. |
get-prompting-best-practices | Reference: voice prompt style, formatting, action invocation. |
plan-multi-state-agent | Builder / reference: multi-state graph plan before build-multi-state-agent. |
build-multi-state-agent | Builder: instructions to produce multi-state config (then saved via edit-agent). |
get-ai-agent-tool-context | Reference: action schemas, failures, deprecated patterns. |
2.3 Test cases
| Tool | Typical use |
|---|---|
fetch-test-cases | List test cases for an agent. |
create-test-cases | Write — create tests (max 10 per call). |
edit-test-case | Write — update one test case. |
delete-test-cases | Write — delete tests by id. |
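Because create-test-cases accepts at most 10 tests per call, a client importing a larger set has to chunk. A minimal sketch of that batching loop (`call_tool` is a hypothetical stand-in for an MCP client's invoke method, and the payload field names are assumptions):

```python
# Hypothetical stand-in for an MCP client's tool-invocation method.
# Here it just echoes back how many test cases it was asked to create.
def call_tool(name, arguments):
    return {"created": len(arguments["test_cases"])}

def create_in_batches(agent_id, test_cases, batch_size=10):
    """Create test cases in chunks of at most `batch_size` (the per-call limit)."""
    created = 0
    for i in range(0, len(test_cases), batch_size):
        batch = test_cases[i : i + batch_size]
        result = call_tool(
            "create-test-cases",
            {"agent_id": agent_id, "test_cases": batch},
        )
        created += result["created"]
    return created

# 23 tests split into calls of 10 + 10 + 3.
total = create_in_batches("agent-123", [{"name": f"case {n}"} for n in range(23)])
print(total)
```

This is the same chunking Copilot performs for you when it imports large test suites in waves.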
2.4 Simulations
| Tool | Typical use |
|---|---|
start-simulations | Write — start a simulation test run (test_run_id). |
get-simulation-progress | Poll run status and per-case results until completed=true. |
fetch-simulation-transcript | Read simulation transcript for analysis. |
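The start-then-poll pattern in the table above can be sketched as follows. `call_tool` is a hypothetical MCP client stand-in, and everything in the progress payload beyond the `completed=true` flag named above is an assumption:

```python
import itertools

# Hypothetical MCP client stand-in: simulates a run finishing on the third poll.
_polls = itertools.count(1)

def call_tool(name, arguments=None):
    if name == "start-simulations":
        return {"test_run_id": "run-42"}
    if name == "get-simulation-progress":
        n = next(_polls)
        return {"completed": n >= 3, "cases_done": min(n * 4, 10)}
    raise ValueError(f"unknown tool: {name}")

def run_and_wait(agent_id, max_polls=20):
    """Start a simulation run, then poll get-simulation-progress until completed=true."""
    run = call_tool("start-simulations", {"agent_id": agent_id})
    for _ in range(max_polls):
        progress = call_tool(
            "get-simulation-progress", {"test_run_id": run["test_run_id"]}
        )
        if progress["completed"]:
            return progress
    raise TimeoutError("simulation run did not complete in time")

final = run_and_wait("agent-123")
print(final)
```

In real use you would sleep between polls; long runs are why Copilot may pause and ask whether to keep checking.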
2.5 Production recordings and transcripts
| Tool | Typical use |
|---|---|
list-recordings | Filter and page recordings (e.g. agent, disposition, search). |
fetch-recording-transcripts | Load transcript + metadata for up to five recording_sids per call. |
get-trackers-lookup | Tracker lookup helper for transcript-related workflows. |
2.6 Profiles and contacts
| Tool | Typical use |
|---|---|
lookup-profile | Resolve contact by phone, email, name, or external id. |
fetch-profile | Full profile by id. |
fetch-profile-events | Recent profile events (paginated). |
2.7 Journeys
| Tool | Typical use |
|---|---|
list-journeys | Search / list journeys. |
fetch-journey | Full journey by UUID. |
list-journey-versions | Version history for a journey. |
fetch-journey-version | Full payload for one version (diffs). |
2.8 Task routing and queues
| Tool | Typical use |
|---|---|
list-tasks | List tasks with filters (state, queues, time window, etc.). |
fetch-task | One task by customer Task id. |
get-task-events | Routing / reservation timeline for a task. |
list-users | Browse brand users/agents. |
fetch-user | User/agent by email (capacity, queues, attributes). |
list-queues | Queues and eligibility expressions. |
get-routing-rules | Active routing rulesets. |
get-routing-rule-versions | Version history for a ruleset. |
2.9 Custom analysis fields
| Tool | Typical use |
|---|---|
list-analysis-data-points | List brand Intelligence / custom analysis field definitions (paginate when has_more). |
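The `has_more` hint drives a standard cursor loop. A sketch, where `call_tool` and the `cursor` field name are hypothetical stand-ins for the real client and pagination parameter:

```python
# Hypothetical MCP client stand-in: serves three pages of field definitions.
_pages = [
    {"fields": ["sentiment", "intent"], "has_more": True, "cursor": "p2"},
    {"fields": ["objection_type"], "has_more": True, "cursor": "p3"},
    {"fields": ["escalation_flag"], "has_more": False, "cursor": None},
]

def call_tool(name, arguments):
    index = {"": 0, "p2": 1, "p3": 2}[arguments["cursor"]]
    return _pages[index]

def list_all_fields():
    """Page through list-analysis-data-points until has_more is false."""
    fields, cursor = [], ""
    while True:
        page = call_tool("list-analysis-data-points", {"cursor": cursor})
        fields.extend(page["fields"])
        if not page["has_more"]:
            return fields
        cursor = page["cursor"]

all_fields = list_all_fields()
print(all_fields)
```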
2.10 Copilot session UI and audit
| Tool | Typical use |
|---|---|
send-thinking | UI: collapsible “thinking” markdown (not persisted as Regal config). |
step-summary | UI: short step progress markdown. |
log-session-summary | Write — merge workflow/session audit into the Copilot session record. |
2.11 Workflow prompts for use cases
These return instruction messages for the client LLM; names are stable MCP prompt ids:
| Prompt |
|---|
build-agent-workflow |
build-multi-state-agent-workflow |
generate-test-case-prompt |
run-simulations-workflow |
post-simulation-analysis-workflow |
analyze-transcript |
troubleshoot-journey |
troubleshoot-task-routing |
3. Copilot write capabilities
Copilot creates new AI agents and saves draft updates to existing agents. It does not replace your live, customer-facing agent version by itself—someone still publishes (or promotes) in the Regal app when you are ready.
Copilot creates, edits, and deletes test cases for simulations. Large creates happen in batches; deletes need explicit confirmation.
Copilot can start simulation runs. That exercises your tests against the agent; it does not change live call traffic.
All other actions are read-only. For the exact MCP operations behind each bullet, see section 2.
4. Use cases and good-to-knows
| Use case | Example asks | Strong when | Good to know |
|---|---|---|---|
| New agent (single flow) | “Build an inbound…”, “Walk me through billing.” | End-to-end draft; uses queues that already exist in your account. | You still publish the live agent in Regal. Copilot does not switch production traffic to a new version for you. |
| New agent (branching / multi-step) | “Triage then schedule”, “Graph with qualification.” | Plans branches before filling in details. | Transfers and actions work differently than in a simple single-flow agent—expect Copilot to follow Regal’s product rules, not copy settings blindly across styles. |
| Improve existing agent | “Review my script”, “Fix the transfer.” | Recommendations tied to your current configuration. | Saves as a draft only. Knowledge libraries stay managed in the Regal app. |
| Test cases | “Generate 15 tests”, “Import this CSV.” | Mix of happy paths, objections, and edge cases. | Large imports are created in batches of 10. Deletes need a clear yes from you. |
| Simulations | “Run all tests”, “What happened on the last run?” | Clear pass/fail and links back to transcripts. | Long runs may pause—you can ask Copilot to keep checking. If one scenario never finishes, the test wording or end condition may need tightening. |
| Real calls & transcripts | “What happened on this call?”, “Find calls where…”, “Where did customers drop off in these AI agent calls?” | One-call diagnosis, or trends across a bounded set of calls (under 100). | For many calls at once, start with a shorter time range or tighter filters—Copilot works in chunks and will ask you to continue or narrow as it goes. Stay within 100 calls for good analysis results; we are actively improving the bulk capabilities here. Stay tuned! You can also paste recording links directly to Copilot and ask questions. |
| Task routing | “Why didn’t [email protected] get task W123?”, “Was this scheduled callback task snoozed?” | Explains routing, eligibility, and recent rule history; reconstructs what happened for a particular task. | Use the same Task id wording Regal shows in the UI when debugging. |
| Journeys | “Where are all the conditional nodes with {{contact.age}} filter in this Journey?”, “What is the difference between Journey 123 and Journey 345?” | Scans large Journey definitions to find gaps or hard-to-spot issues. | Single-Journey questions perform better; scanning across all Journeys to find something is less reliable. Journey execution details for individual profiles are not yet exposed to Copilot—this is in the works! For now, ask about the profile and the Journey separately and compare what Copilot finds. |
| Contact Profiles | “What is the value for {{contact.product_type}} for this contact last Thursday?”, “What were the values for this event on the profile? Should it have triggered Journey 123?” | Finds profile attributes at a point in time; finds events for a contact. | |
| Capabilities | “What can you do?” | Quick orientation. | Copilot answers with categories (agents, tests, calls, routing, journeys), not a long internal feature list. |
5. Gaps and what’s coming
| Topic | Today | What’s coming / how to set expectations |
|---|---|---|
| Large datasets & analytics | Copilot is not a SQL, spreadsheet, or warehouse front end. It cannot answer “query every call we’ve ever made” in one step. | Treat broad reporting and cross-system analytics as work in progress—use Regal Intelligence, recordings, and simulations inside Copilot, and Looker / Snowflake / BI outside it for now. |
| Call & transcript analysis at scale | Searches and deep reads are staged, and Copilot asks you to confirm between waves. Rough caps: ~25 transcripts per deep pass, ~100 per focused investigation unless you narrow or continue, and ~200 calls listed per search wave when only metadata is needed. | Narrow time ranges or filters for “all calls” style questions, or work in multiple Copilot sessions. |
| Knowledge & go-live | Draft agents and tests only inside Copilot’s write paths. | Knowledge base attachment and go-live remain in the Regal app. |
| File uploads for input | Most workflows expect typed prompts, pasted text, or links (for example Intelligence URLs)—not arbitrary file attachments as the primary input. | First-class file uploads (CSV, PDFs, screenshots, etc.) for bulk or structured input are planned; until then, paste content or use in-app surfaces that already accept files. |
| More agentic workflows | Many flows pause for confirmation between steps (writes, large searches, discovery waves) to stay safe and predictable. | Richer agentic runs that need fewer explicit “click to continue” checkpoints—while still respecting guardrails—are planned; expectations will be communicated as those modes ship. |
| Regal MCP for external use | The Regal Copilot MCP server is available for authenticated, brand-scoped clients (see section 2 and the MCP README). | Broader “Regal MCP” positioning for external tools and partners—clearer docs, patterns, and supported client scenarios—is on the roadmap. |
Try out Copilot in Regal and let us know what you think!
