VICIPanel
AI-native

A dialer Claude can actually run.

Every VICIPanel tenant ships with a Model Context Protocol server built in. Connect Claude, ChatGPT, Cursor, or Continue and let them operate your dialer over one secure URL. OAuth 2.1 for auth, full audit trail on every action, role gating per token. We think we're the first dialer to do this. Happy to be proven wrong.

MCP Server

11 promoted tools, 34 catalog actions, 6 widgets

Tool access is gated by the token owner's live VICIdial user level. Destructive actions require confirmation. Rate limits apply per process.

Real-time streaming responses

AI hosts get progress updates as tools execute, not just final results. Cleaner chat experience with no silent waits.

OAuth 2.1 + Dynamic Client Registration

AI hosts register themselves without manual key exchange. Standard protocol that works with every major MCP client.
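Under the hood, Dynamic Client Registration is plain RFC 7591: on first connect the host POSTs its own metadata and gets back a client_id, so there's nothing to copy-paste. A minimal sketch of that request (the /oauth/register path and the localhost redirect port are assumptions here; real hosts discover the endpoint from the server's published OAuth metadata):

```shell
# Sketch of the RFC 7591 registration payload an MCP host sends on first
# connect. Field values are illustrative, not VICIPanel-specific.
cat > register.json <<'EOF'
{
  "client_name": "claude-code",
  "redirect_uris": ["http://localhost:33418/callback"],
  "grant_types": ["authorization_code"],
  "token_endpoint_auth_method": "none"
}
EOF

# The host POSTs this and receives a client_id in the response,
# no manual key exchange (endpoint path assumed for illustration):
# curl -X POST https://your-tenant.vicipanel.com/oauth/register \
#      -H 'Content-Type: application/json' -d @register.json
cat register.json
```

From there the normal OAuth 2.1 authorization-code flow runs in the browser, and the resulting token is the tenant-scoped credential described below.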

Role-gated tool exposure

Agents see agent tools. Admins see admin tools. The built-in 4-tier permission model is enforced on every request.

Tenant-scoped tokens + snapshot safety

Every token is bound to your tenant. Tenant cloning workflows rotate secrets and revoke inherited tokens automatically.

Full audit log

Every AI action is captured. Who ran it, what tool, what arguments, what came back, how long it took. Admins can read the whole story of any prompt.

# Connect Claude Code to your VICIPanel instance
claude mcp add vicipanel \
  --transport streamable-http \
  https://your-tenant.vicipanel.com/api/mcp
# Then in chat:
> Show me the agent wallboard for campaign ACME
# Claude calls show_agent_wallboard via MCP,
# receives a widget, renders it inline.
Supported MCP hosts
· Claude Code
· Claude Desktop
· ChatGPT
· Cursor
· Continue
· Any MCP client
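Hosts that are configured by file rather than CLI (Cursor, for example) usually need just a one-line entry pointing at the same URL. Exact field names vary by client, so treat this as a sketch rather than a copy-paste config:

```json
{
  "mcpServers": {
    "vicipanel": {
      "url": "https://your-tenant.vicipanel.com/api/mcp"
    }
  }
}
```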

Copilot

Ask AI, and it actually does the thing

The sparkle icon in your header is an agent, not a help bot. It knows what page you're on and what your role is, and it runs on the same MCP surface with the same permission checks.

Page-aware

Knows the page you're on and the record you're looking at. Ask “why is this campaign dropping calls” and it pulls the right CDR window without you naming it.

Tool-calling, not chat

Copilot lists campaigns, activates agents, and renders live widgets inline. You get the result, not a page of instructions.

Permission + confirmation gates

Your live user level is enforced at runtime. Destructive actions like pausing a campaign or logging out an agent wait for you to confirm in chat.

Provider-agnostic

OpenAI by default. Swap to Anthropic, Gemini, Groq, or Bedrock in the AI settings. Your keys, your bill, your choice of model per feature.

Audited like everything else

Every Copilot action lands in the same audit trail as direct MCP calls. Admins see the full prompt, tool, and outcome.

Included in every paid tier

Copilot isn't locked behind an enterprise upsell. If you pay for VICIPanel, you get the Copilot.

Page-aware AI chat with MCP tool execution

Your keys, your models

Pick the model. Or don't think about it.

Every AI feature honors the provider you set for that feature. Local Ollama for transcription, Anthropic for Copilot, OpenAI for call summaries, Groq for real-time AMD. Mix and match inside one tenant.

Or default everything to OpenAI and stop thinking about it. We don't judge.
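There's no single published schema for these settings, but conceptually the per-feature mapping looks like this (feature keys and model names below are illustrative, not the actual settings format):

```json
{
  "transcription":  { "provider": "ollama",    "model": "whisper" },
  "copilot":        { "provider": "anthropic", "model": "claude-sonnet" },
  "call_summaries": { "provider": "openai",    "model": "gpt-4o-mini" },
  "amd":            { "provider": "groq",      "model": "llama-3.1-8b" }
}
```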

Watch Claude run a dialer

In the demo we wire Claude Code to a live tenant and activate a campaign with a single prompt. You'll see the tool call, the confirmation, the result, all in the chat.