Slack Integration

Natural Language Queries

Ask questions about your infrastructure in plain English. RedQueen translates your questions into Prometheus queries, OpenSearch searches, and Kubernetes API calls.

How It Works

@mention RedQueen in any Slack channel to start a conversation.

Ask a Question

@RedQueen what's the average response time for the API?

AI Processing

Claude analyzes your question, selects appropriate tools, and generates queries.

Streaming Response

Results stream back to Slack with visualizations and context.
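The three steps above can be sketched as a small pipeline. This is an illustrative stand-in only: keyword rules here take the place of Claude's tool selection, and the metric name in the PromQL string is an assumption about your exporters.

```python
# Hypothetical sketch of the ask -> tool selection -> query flow.
# In RedQueen the model picks the tool; keyword rules stand in for it here.

def select_tool(question: str) -> str:
    """Pick a tool name for a question (stand-in for the LLM's choice)."""
    q = question.lower()
    if "response time" in q or "latency" in q:
        return "metrics_analysis"
    if "log" in q:
        return "application_logs"
    return "kubernetes"

def build_query(tool: str) -> str:
    """Return an example backend query for the chosen tool."""
    queries = {
        # PromQL average request duration, assuming a histogram metric
        "metrics_analysis": (
            "rate(http_request_duration_seconds_sum[5m]) / "
            "rate(http_request_duration_seconds_count[5m])"
        ),
        "application_logs": '{"query": {"match": {"level": "error"}}}',
        "kubernetes": "list pods --all-namespaces",
    }
    return queries[tool]

tool = select_tool("what's the average response time for the API?")
print(tool, "->", build_query(tool))
```

The example question from step 1 routes to the metrics tool and comes back as a PromQL expression.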

AI Tools

RedQueen has access to specialized tools for querying your infrastructure.

Metrics Analysis

Query Prometheus metrics using natural language. Get CPU, memory, latency, error rates and more.
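As a sketch of what the metrics tool might emit, the templates below map common asks to PromQL. The metric names (`container_cpu_usage_seconds_total`, `http_requests_total`, and so on) are assumptions about cAdvisor-style and HTTP-server exporters, not guaranteed to match your setup.

```python
# Illustrative PromQL a "metrics analysis" tool could generate.
# Metric and label names are assumptions about your exporters.

PROMQL_TEMPLATES = {
    "cpu": 'rate(container_cpu_usage_seconds_total{pod=~"%s.*"}[5m])',
    "memory": 'container_memory_working_set_bytes{pod=~"%s.*"}',
    "error_rate": 'rate(http_requests_total{status=~"5..", service="%s"}[5m])',
}

def promql_for(kind: str, target: str) -> str:
    """Fill a template with the service or pod the user asked about."""
    return PROMQL_TEMPLATES[kind] % target

print(promql_for("error_rate", "api"))
```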

Application Logs

Search application logs with container awareness. Filter by service, pod, or error patterns.
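A minimal sketch of the OpenSearch query such a search might build. The field names (`kubernetes.labels.app`, `log`, `@timestamp`) are assumptions about your log shipper's document schema.

```python
# Sketch of an OpenSearch query DSL body for a filtered log search.
# Field names are assumptions about the log shipper's schema.

def log_search(service: str, pattern: str, minutes: int = 30) -> dict:
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"kubernetes.labels.app": service}},
                    {"range": {"@timestamp": {"gte": f"now-{minutes}m"}}},
                ],
                "must": [{"match_phrase": {"log": pattern}}],
            }
        },
        "sort": [{"@timestamp": "desc"}],
        "size": 50,
    }

query = log_search("checkout", "timeout")
```

The `bool` filter narrows by service and time window before the phrase match, which keeps the search cheap on large indices.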

WAF Logs

Analyze AWS WAF security logs. Investigate blocked requests, attack patterns, and rule matches.

ALB Logs

Search Application Load Balancer access logs. Track requests, response codes, and latencies.

IP Intelligence

Look up IP addresses with geolocation, WHOIS data, and threat intelligence context.
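The external lookups need network access and API keys, but a local pre-check can be sketched with the standard library: only public addresses are worth sending to geolocation, WHOIS, or threat feeds.

```python
# Sketch of the local pre-check an IP lookup tool might run before
# calling external geolocation/WHOIS/threat services (omitted here).
import ipaddress

def needs_threat_lookup(ip: str) -> bool:
    """Private, multicast, and reserved addresses can skip external lookups."""
    addr = ipaddress.ip_address(ip)
    return not (addr.is_private or addr.is_multicast or addr.is_reserved)

print(needs_threat_lookup("10.0.0.1"))     # internal traffic -> False
print(needs_threat_lookup("185.234.0.1"))  # public -> True
```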

Kubernetes

Query Kubernetes cluster state. Check pods, deployments, events, and resource usage.
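A sketch of the kind of summarization such a tool might apply to pod data. The records below are hard-coded in roughly the shape of `kubectl get pods` output rather than fetched from a live cluster.

```python
# Sketch: flag unhealthy pods from API-server data (hard-coded here).

pods = [
    {"name": "checkout-7d9f", "phase": "Running", "restarts": 0},
    {"name": "checkout-8c2a", "phase": "CrashLoopBackOff", "restarts": 14},
    {"name": "api-5b1e", "phase": "Running", "restarts": 1},
]

def unhealthy(pods: list, max_restarts: int = 3) -> list:
    """Flag pods that are not Running or restart too often."""
    return [
        p["name"] for p in pods
        if p["phase"] != "Running" or p["restarts"] > max_restarts
    ]

print(unhealthy(pods))  # ['checkout-8c2a']
```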

Example Queries

Just ask in natural language. RedQueen figures out the rest.

@RedQueen Analyze incoming requests in the last 30 minutes
@RedQueen Why is the checkout service slow?
@RedQueen Investigate the top blocked IP from today
@RedQueen Is 185.234.xx.xx a threat? Check WAF and ALB logs
@RedQueen Correlate the spike in 5xx errors with pod restarts
@RedQueen What changed before the latency increase?

Provider Agnostic

Not locked into any single LLM provider. Switch models or providers without code changes.

AWS Bedrock (current)

Production-ready with Claude, Llama, Mistral and more. Runs in your AWS account.

Other Providers (supported)

Easily switch to OpenAI, Azure, Google, or any OpenAI-compatible API.

Self-Hosted (supported)

Run locally hosted models via Ollama or vLLM for full data privacy.

Switch Models from Slack

Users select their preferred model directly in Slack using native modals and selectors. No config files, no deployments — just pick and go.
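An in-Slack picker like this can be built with Block Kit's `static_select` element inside a modal view. The sketch below shows the shape of such a payload; the `callback_id`, `action_id`, and model names are illustrative, not RedQueen's actual identifiers.

```python
# Sketch of a Slack Block Kit modal with a static_select model picker.
# callback_id/action_id and model names are hypothetical.

def model_picker_modal(models: list) -> dict:
    return {
        "type": "modal",
        "callback_id": "model_select",  # hypothetical handler id
        "title": {"type": "plain_text", "text": "Choose a model"},
        "submit": {"type": "plain_text", "text": "Save"},
        "blocks": [{
            "type": "section",
            "text": {"type": "mrkdwn", "text": "Model for this workspace:"},
            "accessory": {
                "type": "static_select",
                "action_id": "model_choice",
                "options": [
                    {"text": {"type": "plain_text", "text": m}, "value": m}
                    for m in models
                ],
            },
        }],
    }

modal = model_picker_modal(["claude-sonnet", "llama3", "mistral"])
```

The app would open this view via Slack's `views.open` API and read the chosen `value` from the submission payload.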

Features

Thread Context

Conversations maintain context. Follow-up questions understand previous messages in the thread.
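Per-thread memory can be sketched by keying history on Slack's `thread_ts`, which uniquely identifies a thread. Real state would live in a store that survives restarts; a dict shows the shape.

```python
# Sketch of thread context: history keyed by Slack thread_ts, replayed
# to the model so follow-up questions make sense. In-memory only.
from collections import defaultdict

threads = defaultdict(list)

def record(thread_ts: str, role: str, text: str) -> None:
    threads[thread_ts].append({"role": role, "text": text})

def context_for(thread_ts: str) -> list:
    """History passed back to the model on each follow-up."""
    return threads[thread_ts]

record("171.001", "user", "Why is checkout slow?")
record("171.001", "assistant", "p95 latency doubled at 14:02.")
record("171.001", "user", "What changed before that?")  # follow-up
```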

Streaming Responses

Responses stream in real-time. See tool invocations and results as they happen.

Tool Chaining

Complex queries automatically chain multiple tools. One question can trigger metrics, logs, and API queries.
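The chaining idea can be sketched with stub tools, where each step's result feeds the next. In RedQueen the model decides the chain and the tools hit real backends; the findings and time window below are made up.

```python
# Sketch of tool chaining: metrics -> logs -> Kubernetes, with the time
# window found by the first tool passed to the later ones. Stubs only.

def metrics_tool(_: str) -> dict:
    return {"finding": "5xx spike", "window": "14:00-14:10"}

def logs_tool(window: str) -> dict:
    return {"finding": "OOMKilled events", "window": window}

def k8s_tool(window: str) -> dict:
    return {"finding": "3 pod restarts", "window": window}

def answer(question: str) -> list:
    steps = [metrics_tool(question)]
    window = steps[0]["window"]  # pass context downstream
    steps.append(logs_tool(window))
    steps.append(k8s_tool(window))
    return steps

chain = answer("Correlate the spike in 5xx errors with pod restarts")
```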

Cost Tracking

Token usage and cost metrics published to CloudWatch. Track spending per query.
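A sketch of what per-query accounting could look like: compute a cost from token counts and shape the metrics for CloudWatch. The per-token prices are made-up placeholders, and the actual `put_metric_data` call is left commented out since it needs AWS credentials.

```python
# Sketch of per-query cost metrics. Prices are placeholder assumptions;
# the payload matches the shape boto3's put_metric_data expects.

PRICE_PER_1K = {"input": 0.003, "output": 0.015}  # assumed USD rates

def cost_metrics(model: str, in_tokens: int, out_tokens: int) -> list:
    usd = (in_tokens * PRICE_PER_1K["input"]
           + out_tokens * PRICE_PER_1K["output"]) / 1000
    dims = [{"Name": "Model", "Value": model}]
    return [
        {"MetricName": "InputTokens", "Value": in_tokens,
         "Unit": "Count", "Dimensions": dims},
        {"MetricName": "OutputTokens", "Value": out_tokens,
         "Unit": "Count", "Dimensions": dims},
        {"MetricName": "QueryCostUSD", "Value": round(usd, 6),
         "Unit": "None", "Dimensions": dims},
    ]

data = cost_metrics("claude-sonnet", 1200, 450)
# boto3.client("cloudwatch").put_metric_data(Namespace="RedQueen", MetricData=data)
```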