Bring Codebase Intelligence to LlamaIndex
Learn how to connect Greptile to LlamaIndex and start using 11 AI agent tools in minutes. Fully managed, enterprise secure, and ready to use without writing a single line of code.
What is the Greptile MCP Server?
Connect your Greptile account to any AI agent and unlock AI-powered codebase understanding through natural conversation.
What you can do
- AI Codebase Q&A — Ask natural language questions about one or more repositories and receive AI-generated answers with code references
- Contextual Follow-ups — Continue conversations with session context for multi-turn codebase exploration
- Semantic Code Search — Search across indexed repositories to find relevant files, functions, and code patterns
- File-Specific Search — Target searches within a specific file path for precise results
- Repository Indexing — Submit GitHub or GitLab repositories for indexing, check progress, and trigger re-indexing
- Repository Management — List all indexed repos, inspect file metadata, and remove outdated indexes
- Usage Monitoring — Track API consumption and rate limits
How it works
1. Subscribe to this server
2. Enter your Greptile API Key from the developer dashboard
3. Start querying your codebase from Claude, Cursor, or any MCP-compatible client
Who is this for?
- Developers — understand unfamiliar codebases, find implementations, and navigate large repositories through conversation
- Code Reviewers — search for related patterns, understand code context, and trace dependencies
- Engineering Managers — get quick answers about architecture decisions, coding patterns, and technical debt
Built-in capabilities (11)
- Delete indexed repository
- Get file info
- Check API usage
- Get repository status
- Index a repository
- List indexed repositories
- Query codebase with AI
- Query with session context
- Reindex a repository
- Search in specific file
- Search codebase
Why LlamaIndex?
Connect all 11 Greptile tools through Vinkius and query live codebase data alongside vector stores and SQL databases in a single turn, making it ideal for hybrid search, data enrichment, and analytical workflows.
- Data-first architecture: LlamaIndex agents combine Greptile tool responses with indexed documents for comprehensive, grounded answers
- Query pipeline framework lets you chain Greptile tool calls with transformations, filters, and re-rankers in a typed pipeline
- Multi-source reasoning: agents can query Greptile, a vector store, and a SQL database in a single turn and synthesize results
- Observability integrations show exactly what Greptile tools were called, what data was returned, and how it influenced the final answer
Greptile in LlamaIndex
Greptile and 3,400+ other MCP servers. One platform. One governance layer.
Teams that connect Greptile to LlamaIndex through Vinkius don't need to source, host, or maintain individual MCP servers. Every tool call runs inside a hardened runtime with credential isolation, DLP, and a signed audit chain.
| | Raw MCP | Vinkius |
|---|---|---|
| Server catalog | Find and host yourself | 3,400+ managed |
| Infrastructure | Self-hosted | Sandboxed V8 isolates |
| Credential handling | Plaintext in config | Vault + runtime injection |
| Data loss prevention | None | Configurable DLP policies |
| Kill switch | None | Global instant shutdown |
| Financial circuit breakers | None | Per-server limits + alerts |
| Audit trail | None | Ed25519 signed logs |
| SIEM log streaming | None | Splunk, Datadog, Webhook |
| Honeytokens | None | Canary alerts on leak |
| Custom domains | Not applicable | DNS challenge verified |
| GDPR compliance | Manual effort | Automated purge + export |
Why teams choose Vinkius for Greptile in LlamaIndex
The Greptile MCP Server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts. All 11 tools execute in hardened sandboxes optimized for native MCP execution.
Your AI agents in LlamaIndex only access the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure, zero maintenance.

How Vinkius secures Greptile for LlamaIndex
Every tool call from LlamaIndex to the Greptile MCP Server is protected by DLP redaction, cryptographic audit chains, V8 sandbox isolation, kill switch, and financial circuit breakers.
Frequently asked questions
Can I ask natural language questions about my codebase?
Yes! The query_codebase tool sends a natural language question along with repository references and returns AI-generated answers with specific code references (file paths and line numbers). For follow-up questions, use query_with_context with the session ID from the previous response to maintain conversation continuity.
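A minimal sketch of what this looks like from a LlamaIndex agent, assuming a recent LlamaIndex with the workflow-based FunctionAgent, the llama-index-tools-mcp adapter, and an OpenAI LLM (swap in whichever LLM your stack uses). The endpoint URL and repository name are placeholders; query_codebase and query_with_context are the tool names listed above.

```python
# Sketch: ask a natural language question through an agent that holds the Greptile tools.
import asyncio

from llama_index.llms.openai import OpenAI
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec


async def main():
    tools = await McpToolSpec(
        client=BasicMCPClient("https://mcp.example.com/greptile/sse")  # placeholder URL
    ).to_tool_list_async()

    agent = FunctionAgent(
        tools=tools,
        llm=OpenAI(model="gpt-4o-mini"),
        system_prompt="Answer questions about the indexed repositories and cite file paths.",
    )

    # The agent selects query_codebase for a fresh question; follow-ups can go through
    # query_with_context using the session ID returned in the first answer.
    answer = await agent.run("Where is request authentication implemented in owner/repo?")
    print(answer)


asyncio.run(main())
```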
Do I need to index my repository before querying it?
Yes. Use index_repository with the remote host (github or gitlab), repository path (owner/repo), and branch name. Check indexing progress with get_repository_status. Once indexed, you can query and search the repository. Use reindex_repository to refresh the index after significant code changes.
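As a rough sketch under stated assumptions, you can also call the indexing tools directly rather than through an agent. The argument names (remote, repository, branch) and the "completed" status wording below are assumptions, not the server's documented schema; inspect each tool's metadata for the exact parameters.

```python
# Sketch: submit a repository for indexing, then poll its status before querying.
import asyncio

from llama_index.tools.mcp import BasicMCPClient, McpToolSpec


async def main():
    mcp_client = BasicMCPClient("https://mcp.example.com/greptile/sse")  # placeholder URL
    tools = await McpToolSpec(client=mcp_client).to_tool_list_async()
    by_name = {t.metadata.name: t for t in tools}

    # Kick off indexing for a GitHub repository (argument names are assumptions).
    await by_name["index_repository"].acall(
        remote="github", repository="owner/repo", branch="main"
    )

    # Poll get_repository_status until indexing finishes.
    while True:
        status = await by_name["get_repository_status"].acall(
            remote="github", repository="owner/repo", branch="main"
        )
        print(status.content)
        if "completed" in status.content.lower():  # assumed status wording
            break
        await asyncio.sleep(30)


asyncio.run(main())
```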
Can I search for specific code patterns across my repositories?
Yes. The search_codebase tool performs semantic search across your indexed repositories to find relevant files and functions. For targeted results, use search_by_filepath to narrow the search to a specific file path. Use get_file_info to retrieve indexed metadata for any file.
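One way to keep an agent focused on search is to hand it only the search-oriented tools. A sketch assuming the llama-index-tools-mcp adapter; the endpoint URL and LLM choice are placeholders, and the tool names come from the capability list above.

```python
# Sketch: scope the agent to the three search tools exposed by the Greptile server.
from llama_index.llms.openai import OpenAI
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

mcp_client = BasicMCPClient("https://mcp.example.com/greptile/sse")  # placeholder URL
tools = McpToolSpec(client=mcp_client).to_tool_list()

search_tools = [
    t for t in tools
    if t.metadata.name in {"search_codebase", "search_by_filepath", "get_file_info"}
]

agent = FunctionAgent(tools=search_tools, llm=OpenAI(model="gpt-4o-mini"))
# e.g. await agent.run("Find the retry logic for outbound webhooks in owner/repo")
```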
How does LlamaIndex connect to MCP servers?
Use the MCP client adapter (llama-index-tools-mcp) to create a connection. LlamaIndex discovers every tool the server exposes and wraps each one as a function tool that plugs into any LlamaIndex agent.
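A minimal connection sketch, assuming the llama-index-tools-mcp package is installed (see the note at the end of this FAQ if the import fails). The endpoint URL is a placeholder; use the one shown in your Vinkius dashboard.

```python
# Sketch: connect to the Greptile MCP Server and discover its tools.
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

mcp_client = BasicMCPClient("https://mcp.example.com/greptile/sse")  # placeholder URL
tool_spec = McpToolSpec(client=mcp_client)

tools = tool_spec.to_tool_list()  # each MCP tool becomes a LlamaIndex FunctionTool
for tool in tools:
    print(tool.metadata.name, "-", tool.metadata.description)  # expect the 11 Greptile tools
```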
Can I combine MCP tools with vector stores?
Yes. LlamaIndex agents can query Greptile tools and vector store indexes in the same turn, combining real-time and embedded data for grounded responses.
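A sketch of that hybrid setup: one agent holds both the Greptile MCP tools and a local vector store index, so a single question can draw on live codebase answers and embedded documents. The document path, model name, and endpoint URL are placeholders.

```python
# Sketch: combine Greptile MCP tools with a vector store query engine in one agent.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.tools import QueryEngineTool
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

greptile_tools = McpToolSpec(
    client=BasicMCPClient("https://mcp.example.com/greptile/sse")  # placeholder URL
).to_tool_list()

# Embed internal design docs and expose the index as a query engine tool.
docs = SimpleDirectoryReader("./design_docs").load_data()
docs_tool = QueryEngineTool.from_defaults(
    query_engine=VectorStoreIndex.from_documents(docs).as_query_engine(),
    name="design_docs",
    description="Searches internal design documents and ADRs.",
)

agent = FunctionAgent(tools=[*greptile_tools, docs_tool], llm=OpenAI(model="gpt-4o-mini"))
# e.g. await agent.run("Does the auth code in owner/repo match the ADR on session handling?")
```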
Does LlamaIndex support async MCP calls?
Yes. LlamaIndex's async agent framework supports concurrent MCP tool calls for high-throughput data processing pipelines.
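A small concurrency sketch: several Greptile tool calls fired in parallel with asyncio.gather. The search_codebase argument name used here (query) is an assumption; check the tool's schema for the exact parameter.

```python
# Sketch: run multiple semantic searches concurrently against the Greptile server.
import asyncio

from llama_index.tools.mcp import BasicMCPClient, McpToolSpec


async def main():
    client = BasicMCPClient("https://mcp.example.com/greptile/sse")  # placeholder URL
    tools = await McpToolSpec(client=client).to_tool_list_async()
    search = {t.metadata.name: t for t in tools}["search_codebase"]

    queries = ["rate limiting", "feature flags", "database migrations"]
    # acall() is the async entry point on LlamaIndex tools, so the calls can overlap.
    results = await asyncio.gather(*(search.acall(query=q) for q in queries))
    for q, r in zip(queries, results):
        print(q, "->", r.content[:200])


asyncio.run(main())
```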
What if I get a "BasicMCPClient not found" error?
Install the LlamaIndex MCP tools package: pip install llama-index-tools-mcp
