Bring ThinkStack to LlamaIndex
Learn how to connect ThinkStack to LlamaIndex and start using 10 AI agent tools in minutes. Fully managed, enterprise-grade security, and ready to use without writing a single line of code.
What is the ThinkStack MCP Server?
Connect your ThinkStack account to any AI agent and manage your chatbots, knowledge bases, and conversations through natural language.
What you can do
- Chatbot Management – List and configure all AI chatbots in your account
- Knowledge Base – Add, list, and remove knowledge sources (URLs, documents) for any chatbot
- Live Queries – Send messages to your chatbots and receive AI-generated responses in real time
- Conversation History – Review all chat sessions with full message history and user metadata
- Actions & Webhooks – View all configured REST API actions for your chatbots
How it works
1. Subscribe to this server
2. Retrieve your API Key from the ThinkStack dashboard
3. Start managing chatbots from Claude, Cursor, or any MCP client
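For step 3 in LlamaIndex, here is a minimal sketch using the llama-index-tools-mcp adapter. It assumes an OpenAI LLM and a placeholder Vinkius endpoint URL; substitute the URL and credentials from your subscription and the ThinkStack dashboard.

```python
# Minimal sketch: connect LlamaIndex to the ThinkStack MCP Server through Vinkius.
# The endpoint URL and model name are placeholders, not values from this page.
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec


async def main():
    # MCP endpoint exposed by Vinkius for the ThinkStack server (placeholder URL)
    mcp_client = BasicMCPClient("https://mcp.vinkius.example/thinkstack")
    tool_spec = McpToolSpec(client=mcp_client)

    # Discover the ThinkStack tools and wrap them as LlamaIndex tools
    tools = await tool_spec.to_tool_list_async()

    agent = FunctionAgent(
        tools=tools,
        llm=OpenAI(model="gpt-4o-mini"),
        system_prompt="You manage ThinkStack chatbots and knowledge bases.",
    )
    response = await agent.run("List all chatbots in my ThinkStack account.")
    print(response)


asyncio.run(main())
```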
Who is this for?
- Support Teams – monitor chatbot conversations and optimize knowledge base accuracy
- Product Managers – review chatbot usage patterns and refine AI responses
- Developers – manage knowledge sources and test chatbot queries programmatically
Built-in capabilities (10)
Add a knowledge source (the content is crawled and indexed automatically)
Verify ThinkStack API connectivity
Remove a knowledge source
Get chatbot details
Get conversation details
List bot actions
List all chatbots
List conversations
List knowledge sources
Query a chatbot
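Individual capabilities can also be invoked directly through the MCP client, without an agent in the loop. A rough sketch follows, assuming BasicMCPClient's list_tools and call_tool methods, a placeholder endpoint URL, and assumed argument names (bot_id, message); check the tool schemas the server exposes for the exact fields.

```python
# Rough sketch: inspect the built-in tools and send a live query directly.
# The URL and the argument names ("bot_id", "message") are placeholders.
import asyncio

from llama_index.tools.mcp import BasicMCPClient


async def main():
    client = BasicMCPClient("https://mcp.vinkius.example/thinkstack")  # placeholder URL

    # List the built-in tools and their descriptions
    tools = await client.list_tools()
    for tool in tools.tools:
        print(tool.name, "-", tool.description)

    # Send a live query to a chatbot (argument names assumed)
    result = await client.call_tool(
        "send_query", {"bot_id": "bot_123", "message": "What are your opening hours?"}
    )
    print(result.content)


asyncio.run(main())
```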
Why LlamaIndex?
LlamaIndex agents combine ThinkStack tool responses with indexed documents for comprehensive, grounded answers. Connect 10 tools through Vinkius and query live data alongside vector stores and SQL databases in a single turn: ideal for hybrid search, data enrichment, and analytical workflows.
- Data-first architecture: LlamaIndex agents combine ThinkStack tool responses with indexed documents for comprehensive, grounded answers
- Query pipeline framework lets you chain ThinkStack tool calls with transformations, filters, and re-rankers in a typed pipeline
- Multi-source reasoning: agents can query ThinkStack, a vector store, and a SQL database in a single turn and synthesize results (see the sketch below)
- Observability integrations show exactly what ThinkStack tools were called, what data was returned, and how it influenced the final answer
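A hedged sketch of the multi-source pattern, combining the ThinkStack tools with a local vector index in one agent. The directory path, model name, and endpoint URL are placeholders.

```python
# Illustrative sketch: ThinkStack MCP tools plus a local vector index in one agent turn.
# Paths, model names, and the endpoint URL are placeholders.
import asyncio

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.core.tools import QueryEngineTool
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec


async def main():
    # ThinkStack tools exposed through Vinkius (placeholder URL)
    mcp_tools = await McpToolSpec(
        client=BasicMCPClient("https://mcp.vinkius.example/thinkstack")
    ).to_tool_list_async()

    # Local documents indexed into a vector store
    docs = SimpleDirectoryReader("./support_docs").load_data()
    index_tool = QueryEngineTool.from_defaults(
        query_engine=VectorStoreIndex.from_documents(docs).as_query_engine(),
        name="support_docs",
        description="Internal support documentation",
    )

    agent = FunctionAgent(tools=[*mcp_tools, index_tool], llm=OpenAI(model="gpt-4o-mini"))
    answer = await agent.run(
        "Compare the chatbot's answer about refunds with what the support docs say."
    )
    print(answer)


asyncio.run(main())
```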
ThinkStack in LlamaIndex
ThinkStack and 3,400+ other MCP servers. One platform. One governance layer.
Teams that connect ThinkStack to LlamaIndex through Vinkius don't need to source, host, or maintain individual MCP servers. Every tool call runs inside a hardened runtime with credential isolation, DLP, and a signed audit chain.
| | Raw MCP | Vinkius |
|---|---|---|
| Server catalog | Find and host yourself | 3,400+ managed |
| Infrastructure | Self-hosted | Sandboxed V8 isolates |
| Credential handling | Plaintext in config | Vault + runtime injection |
| Data loss prevention | None | Configurable DLP policies |
| Kill switch | None | Global instant shutdown |
| Financial circuit breakers | None | Per-server limits + alerts |
| Audit trail | None | Ed25519 signed logs |
| SIEM log streaming | None | Splunk, Datadog, Webhook |
| Honeytokens | None | Canary alerts on leak |
| Custom domains | Not applicable | DNS challenge verified |
| GDPR compliance | Manual effort | Automated purge + export |
Why teams choose Vinkius for ThinkStack in LlamaIndex
The ThinkStack MCP Server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts. All 10 tools execute in hardened sandboxes optimized for native MCP execution.
Your AI agents in LlamaIndex only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure, zero maintenance.

How Vinkius secures ThinkStack for LlamaIndex
Every tool call from LlamaIndex to the ThinkStack MCP Server is protected by DLP redaction, cryptographic audit chains, V8 sandbox isolation, a kill switch, and financial circuit breakers.
Frequently asked questions
How do I query my chatbot via AI agent?
Use the send_query tool with the bot ID and your message. The chatbot responds based on its trained knowledge base.
Can I manage knowledge sources programmatically?
Yes. Use add_source to add new URLs, list_sources to browse, and delete_source to remove outdated sources from any chatbot.
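A short sketch of that workflow, assuming the same call_tool interface and placeholder identifiers (bot_id, url, source_id) that may differ from the real tool schemas:

```python
# Sketch of programmatic knowledge-source management via the MCP client.
# Tool names follow the FAQ above; argument names are assumptions.
import asyncio

from llama_index.tools.mcp import BasicMCPClient


async def main():
    client = BasicMCPClient("https://mcp.vinkius.example/thinkstack")  # placeholder URL

    # Add a URL source; the content is crawled and indexed automatically
    await client.call_tool(
        "add_source", {"bot_id": "bot_123", "url": "https://docs.example.com/faq"}
    )

    # Browse the current sources for the bot
    sources = await client.call_tool("list_sources", {"bot_id": "bot_123"})
    print(sources.content)

    # Remove an outdated source (source_id is a placeholder)
    await client.call_tool("delete_source", {"bot_id": "bot_123", "source_id": "src_456"})


asyncio.run(main())
```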
How do I review chat conversations?
Use list_conversations to see all chats for a bot, then get_conversation to read the full message history of any specific session.
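A sketch of a review pass, again with assumed argument names (bot_id, conversation_id):

```python
# Sketch: list chat sessions for a bot, then pull one session's full history.
# Argument and field names are assumptions; check the tool schemas.
import asyncio

from llama_index.tools.mcp import BasicMCPClient


async def main():
    client = BasicMCPClient("https://mcp.vinkius.example/thinkstack")  # placeholder URL

    # All chat sessions for one bot
    conversations = await client.call_tool("list_conversations", {"bot_id": "bot_123"})
    print(conversations.content)

    # Full message history for a specific session (conversation_id is a placeholder)
    detail = await client.call_tool("get_conversation", {"conversation_id": "conv_789"})
    print(detail.content)


asyncio.run(main())
```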
How does LlamaIndex connect to MCP servers?
Use the MCP client adapter to create a connection. LlamaIndex discovers all tools and wraps them as query engine tools compatible with any LlamaIndex agent.
Can I combine MCP tools with vector stores?
Yes. LlamaIndex agents can query ThinkStack tools and vector store indexes in the same turn, combining real-time and embedded data for grounded responses.
Does LlamaIndex support async MCP calls?
Yes. LlamaIndex's async agent framework supports concurrent MCP tool calls for high-throughput data processing pipelines.
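For example, a fan-out sketch using asyncio.gather to query several chatbots concurrently; tool and argument names are assumed as above, and the bot IDs are placeholders.

```python
# Sketch of concurrent MCP tool calls: fan one question out to several chatbots.
import asyncio

from llama_index.tools.mcp import BasicMCPClient


async def ask(client: BasicMCPClient, bot_id: str, message: str):
    # Argument names ("bot_id", "message") are assumptions
    return await client.call_tool("send_query", {"bot_id": bot_id, "message": message})


async def main():
    client = BasicMCPClient("https://mcp.vinkius.example/thinkstack")  # placeholder URL
    results = await asyncio.gather(
        ask(client, "bot_sales", "What plans do you offer?"),
        ask(client, "bot_support", "What plans do you offer?"),
    )
    for result in results:
        print(result.content)


asyncio.run(main())
```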
BasicMCPClient not found?
Install the MCP tools adapter: pip install llama-index-tools-mcp
