
Bring Attendance Tracking to Mastra AI

Learn how to connect Lamha to Mastra AI and start using 8 AI agent tools in minutes. Fully managed, enterprise-secure, and ready to use without writing a single line of code.

Cancel Order · Check City Coverage · Create Order · Get Order · List Carriers · List Inventory · List Orders · List Warehouses

What is the Lamha MCP Server?

Connect your Lamha account to any AI agent and manage HR operations through natural conversation.

What you can do

  • Employee Management — List employees, inspect profiles, and track status
  • Attendance Tracking — Monitor check-in/out times and attendance records
  • Department Browsing — Navigate organizational structure and departments
  • Leave Management — Track leave requests, balances, and approvals
  • Payroll Access — View payroll data and compensation details

How it works

1. Subscribe to this server
2. Enter your Lamha API Token
3. Start managing HR from Claude, Cursor, or any MCP-compatible client; a minimal Mastra connection sketch follows below
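For a Mastra project, step 3 is a few lines of TypeScript. A minimal sketch, assuming the @mastra/mcp package; the server URL is a placeholder for the endpoint shown in your Vinkius dashboard, and the Lamha API token you entered during subscription is vault-stored and injected at runtime, so it never appears in this config:

```ts
import { MCPClient } from "@mastra/mcp";

// Placeholder URL: copy the real endpoint from your Vinkius dashboard.
// The Lamha token is held in the Vinkius vault and injected server-side,
// so no credential needs to live in this file.
const mcp = new MCPClient({
  servers: {
    lamha: {
      url: new URL(
        process.env.VINKIUS_LAMHA_MCP_URL ?? "https://example.invalid/mcp/lamha"
      ),
    },
  },
});

// Discover the 8 Lamha tools the server exposes.
const tools = await mcp.getTools();
console.log(Object.keys(tools)); // e.g. cancel_order, create_order, ...
```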

Who is this for?

  • HR Teams — manage employee records and attendance
  • Managers — track leave requests and team attendance
  • Payroll — access compensation data and reports

Built-in capabilities (8)

cancel_order

Cancel an existing order

check_city_coverage

Check delivery coverage for a city

create_order

Create a new logistics order

get_order

Get details for a specific order

list_carriers

List delivery carriers

list_inventory

List product inventory

list_orders

List Lamha orders

list_warehouses

List warehouses

Why Mastra AI?

Mastra's agent abstraction provides a clean separation between LLM logic and Lamha tool infrastructure. Connect all 8 tools through Vinkius, chain them with Mastra's built-in workflow engine using conditional logic, retries, and parallel execution, and deploy to any Node.js host in one command; a wiring sketch follows the list below.

  • Mastra's agent abstraction cleanly separates LLM logic from tool infrastructure: add Lamha without touching business code

  • Built-in workflow engine chains MCP tool calls with conditional logic, retries, and parallel execution for complex automation

  • TypeScript-native: full type inference for every Lamha tool response with IDE autocomplete and compile-time checks

  • One-command deployment to any Node.js host: Vercel, Railway, Fly.io, or your own infrastructure
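A minimal sketch of that separation, assuming @mastra/mcp, @mastra/core, and the @ai-sdk/openai model provider; the server URL is a placeholder for your Vinkius endpoint:

```ts
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { MCPClient } from "@mastra/mcp";

// Tool infrastructure: one client pointed at the Vinkius-hosted Lamha server.
// Placeholder URL; use the endpoint from your Vinkius dashboard.
const mcp = new MCPClient({
  servers: { lamha: { url: new URL("https://example.invalid/mcp/lamha") } },
});

// LLM logic: the agent knows nothing about hosting, credentials, or transport.
const agent = new Agent({
  name: "lamha-agent",
  instructions: "You manage Lamha orders, inventory, and carriers on request.",
  model: openai("gpt-4o-mini"),
  tools: { ...(await mcp.getTools()) },
});

const result = await agent.generate("List all open Lamha orders.");
console.log(result.text);
```

Because tool schemas are discovered at startup, swapping or adding servers changes only the MCPClient block; the agent definition stays untouched.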

See it in action

Lamha in Mastra AI

High Security · Kill Switch · Plug and Play
Why Vinkius

Lamha and 3,400+ other MCP servers. One platform. One governance layer.

Teams that connect Lamha to Mastra AI through Vinkius don't need to source, host, or maintain individual MCP servers. Every tool call runs inside a hardened runtime with credential isolation, DLP, and a signed audit chain.

3,400+ MCP servers ready · <40ms cold start · 60% token savings
| Capability | Raw MCP | Vinkius |
|---|---|---|
| Server catalog | Find and host yourself | 3,400+ managed |
| Infrastructure | Self-hosted | Sandboxed V8 isolates |
| Credential handling | Plaintext in config | Vault + runtime injection |
| Data loss prevention | None | Configurable DLP policies |
| Kill switch | None | Global instant shutdown |
| Financial circuit breakers | None | Per-server limits + alerts |
| Audit trail | None | Ed25519 signed logs |
| SIEM log streaming | None | Splunk, Datadog, Webhook |
| Honeytokens | None | Canary alerts on leak |
| Custom domains | Not applicable | DNS challenge verified |
| GDPR compliance | Manual effort | Automated purge + export |
Enterprise Security

Why teams choose Vinkius for Lamha in Mastra AI

The Lamha MCP Server runs on Vinkius-managed infrastructure inside AWS — a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts. All 8 tools execute in hardened sandboxes optimized for native MCP execution.

Your AI agents in Mastra AI access only the data you authorize, with DLP that blocks sensitive information from ever reaching the model, a kill switch for instant shutdown, and up to 60% token savings. Enterprise-grade infrastructure, zero maintenance.

Lamha
  • Fully managed: Vinkius servers
  • 60%: token savings
  • High security: enterprise-grade
  • IAM: access control
  • EU AI Act: compliant
  • DLP: data protection
  • V8 isolate: sandboxed
  • Ed25519: audit chain
  • <40ms: cold start
  • Kill switch: instant shutdown
Stream every event to Splunk, Datadog, or your own webhook in real time

* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519 signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure.

The Vinkius Advantage

How Vinkius secures Lamha for Mastra AI

Every tool call from Mastra AI to the Lamha MCP Server is protected by DLP redaction, cryptographic audit chains, V8 sandbox isolation, kill switch, and financial circuit breakers.

<40ms cold start · Ed25519 signed audit chain · 60% token savings
FAQ

Frequently asked questions

01

Can I track employee attendance and leave?

Yes. Monitor check-in/out records, view attendance summaries, and track leave balances, requests, and approvals for any employee.

02

How does Lamha authentication work?

Lamha authenticates requests to app.lamha.sa/api/v2 with a custom Token header rather than the standard Authorization: Bearer scheme.
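For reference, a sketch of a direct call; the endpoint path is hypothetical and only illustrates the header shape described above:

```ts
// Hypothetical endpoint path for illustration; check the Lamha API
// reference for real routes. Note the custom `Token` header: Lamha does
// not use the standard `Authorization: Bearer ...` scheme.
const res = await fetch("https://app.lamha.sa/api/v2/employees", {
  headers: { Token: process.env.LAMHA_API_TOKEN ?? "" },
});

if (!res.ok) throw new Error(`Lamha API error: ${res.status}`);
const employees = await res.json();
```

When the server runs on Vinkius, this exchange happens inside the managed runtime: the token is vault-stored and injected per request, so you never send it yourself.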

03

Can I browse the organizational structure?

Yes. Navigate departments, teams, and reporting hierarchies within the organization.

04

How does Mastra AI connect to MCP servers?

Create an MCPClient with the server URL and pass it to your agent. Mastra discovers all tools and makes them available with full TypeScript types.

05

Can Mastra agents use tools from multiple servers?

Yes. Pass multiple MCP clients to the agent constructor. Mastra merges all tool schemas and the agent can call any tool from any server.
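Alternatively, a single MCPClient can register several servers in one servers map, and the discovered tools are namespaced per entry. A sketch assuming that config shape; both URLs are placeholders:

```ts
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { MCPClient } from "@mastra/mcp";

// Two servers behind one client; tool names are prefixed by the entry key
// (e.g. lamha_list_orders), so the agent can tell the sources apart.
const mcp = new MCPClient({
  servers: {
    lamha: { url: new URL("https://example.invalid/mcp/lamha") }, // placeholder
    weather: { url: new URL("https://example.invalid/mcp/weather") }, // placeholder
  },
});

const agent = new Agent({
  name: "multi-tool-agent",
  instructions: "Use any available tool to answer operational questions.",
  model: openai("gpt-4o-mini"),
  tools: { ...(await mcp.getTools()) },
});
```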

06

Does Mastra support workflow orchestration?

Yes. Mastra has a built-in workflow engine that lets you chain MCP tool calls with branching logic, error handling, and parallel execution.
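A minimal sketch of a two-step workflow, assuming the createWorkflow/createStep API from @mastra/core/workflows; the getOrderStatus and cancelOrder helpers are hypothetical stand-ins for calls to the Lamha get_order and cancel_order tools:

```ts
import { createStep, createWorkflow } from "@mastra/core/workflows";
import { z } from "zod";

// Hypothetical stand-ins: wire these to the Lamha MCP tools in practice.
async function getOrderStatus(orderId: string): Promise<string> {
  return "pending"; // stub result for the sketch
}
async function cancelOrder(orderId: string): Promise<void> {
  // stub: would invoke the cancel_order tool
}

const fetchOrder = createStep({
  id: "fetch-order",
  inputSchema: z.object({ orderId: z.string() }),
  outputSchema: z.object({ orderId: z.string(), status: z.string() }),
  execute: async ({ inputData }) => ({
    orderId: inputData.orderId,
    status: await getOrderStatus(inputData.orderId),
  }),
});

const cancelIfPending = createStep({
  id: "cancel-if-pending",
  inputSchema: z.object({ orderId: z.string(), status: z.string() }),
  outputSchema: z.object({ cancelled: z.boolean() }),
  execute: async ({ inputData }) => {
    if (inputData.status !== "pending") return { cancelled: false }; // branch
    await cancelOrder(inputData.orderId);
    return { cancelled: true };
  },
});

export const orderCleanup = createWorkflow({
  id: "order-cleanup",
  inputSchema: z.object({ orderId: z.string() }),
  outputSchema: z.object({ cancelled: z.boolean() }),
})
  .then(fetchOrder)
  .then(cancelIfPending)
  .commit();
```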

07

createMCPClient not exported?

The MCP client lives in a separate package. Install it with: npm install @mastra/mcp