
Watershed Climate MCP Server for Pydantic AI: 16 tools, connect in under 2 minutes

Built by Vinkius · GDPR · 16 Tools · SDK

Pydantic AI brings type-safe agent development to Python with first-class MCP support. Connect Watershed Climate through Vinkius and every tool is automatically validated against Pydantic schemas, so malformed data surfaces as a clear error during development instead of slipping silently into production.

Vinkius supports both the Streamable HTTP and SSE transports.

python
import asyncio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerHTTP

async def main():
    # Your Vinkius token — get it at cloud.vinkius.com
    server = MCPServerHTTP(url="https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp")

    agent = Agent(
        model="openai:gpt-4o",
        mcp_servers=[server],
        system_prompt=(
            "You are an assistant with access to Watershed Climate "
            "(16 tools)."
        ),
    )

    # MCP servers must be started before the agent can call their tools
    async with agent.run_mcp_servers():
        result = await agent.run(
            "What tools are available in Watershed Climate?"
        )
    print(result.data)

asyncio.run(main())
Watershed Climate

  • Fully managed Vinkius servers
  • 60% token savings
  • Enterprise-grade security
  • IAM access control
  • EU AI Act compliant
  • DLP data protection
  • Sandboxed V8 isolates
  • Ed25519 audit chain
  • Kill switch in under 40ms
Stream every event to Splunk, Datadog, or your own webhook in real time

* Every MCP server runs on Vinkius-managed infrastructure inside AWS: a purpose-built runtime with per-request V8 isolates, Ed25519-signed audit chains, and sub-40ms cold starts optimized for native MCP execution. See our infrastructure.

About Watershed Climate MCP Server

Connect your Watershed Climate organization to any AI agent and take full control of your carbon measurement, reporting, and reduction workflows through natural conversation.

Pydantic AI validates every Watershed Climate tool response against typed schemas, catching data inconsistencies before they reach your application. Connect 16 tools through Vinkius and switch between OpenAI, Anthropic, or Gemini without changing your integration code: full type safety, structured output guarantees, and dependency injection for testable agents.
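
As a rough sketch of what that looks like in code (assuming a pydantic-ai version where Agent accepts result_type and results expose .data; the EmissionsSummary model and its fields are illustrative, not Watershed's actual schema):

python
import asyncio

from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerHTTP

# Illustrative output model: not Watershed's actual schema
class EmissionsSummary(BaseModel):
    year: int
    total_co2e_tonnes: float
    scope_1: float
    scope_2: float
    scope_3: float

server = MCPServerHTTP(url="https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp")

# result_type coerces the agent's final answer into the typed model;
# invalid data raises a validation error instead of passing through silently
agent = Agent(
    model="openai:gpt-4o",
    mcp_servers=[server],
    result_type=EmissionsSummary,
)

async def summarize():
    async with agent.run_mcp_servers():
        result = await agent.run("Summarize our 2024 GHG inventory.")
    summary: EmissionsSummary = result.data  # already validated
    print(summary.total_co2e_tonnes)

asyncio.run(summarize())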

What you can do

  • Data Uploads — Create upload containers, add activity data records (electricity, travel, shipping), and validate data quality
  • Batch Data Ingestion — Upload multiple activity records in batch with proper formatting and emission factor mapping
  • GHG Inventories — List and inspect greenhouse gas inventories with Scope 1, 2, and 3 emissions breakdowns
  • Emissions Measurements — Query calculated carbon footprint measurements filtered by inventory or year
  • Processing Tasks — Monitor async processing tasks from upload submissions with real-time status checks
  • Reports & Disclosures — List and access generated sustainability reports (CDP, TCFD, custom formats)
  • Reduction Targets — View configured emissions reduction targets aligned with SBTi and net-zero commitments

The Watershed Climate MCP Server exposes 16 tools through Vinkius. Connect it to Pydantic AI in under two minutes: no API keys to rotate, no infrastructure to provision, no vendor lock-in. Your configuration, your data, your control.

How to Connect Watershed Climate to Pydantic AI via MCP

Follow these steps to integrate the Watershed Climate MCP Server with Pydantic AI.

01

Install Pydantic AI

Run pip install pydantic-ai

02

Replace the token

Replace [YOUR_TOKEN_HERE] with your Vinkius token

03

Run the agent

Save to agent.py and run: python agent.py

04

Explore tools

The agent discovers 16 tools from Watershed Climate with type-safe schemas

Why Use Pydantic AI with the Watershed Climate MCP Server

Pydantic AI provides unique advantages when paired with Watershed Climate through the Model Context Protocol.

01

Full type safety: every MCP tool response is validated against Pydantic models, catching data inconsistencies before they reach your application

02

Model-agnostic architecture — switch between OpenAI, Anthropic, or Gemini without changing your Watershed Climate integration code

03

Structured output guarantee: Pydantic AI ensures tool results conform to defined schemas, eliminating runtime type errors

04

Dependency injection system cleanly separates your Watershed Climate connection logic from agent behavior for testable, maintainable code
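
A minimal sketch of that separation, using pydantic-ai's deps_type and RunContext; the WatershedDeps bundle and its fields are hypothetical:

python
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext

# Hypothetical dependency bundle: connection details live here,
# not inside the agent definition
@dataclass
class WatershedDeps:
    vinkius_url: str
    default_year: int

agent = Agent("openai:gpt-4o", deps_type=WatershedDeps)

@agent.system_prompt
def add_reporting_context(ctx: RunContext[WatershedDeps]) -> str:
    # Dependencies are injected per run, so tests can swap them freely
    return f"Default to reporting year {ctx.deps.default_year}."

# Each run supplies its own dependencies:
# agent.run_sync("List our inventories.", deps=WatershedDeps("https://...", 2024))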

Watershed Climate + Pydantic AI Use Cases

Practical scenarios where Pydantic AI combined with the Watershed Climate MCP Server delivers measurable value.

01

Type-safe data pipelines: query Watershed Climate with guaranteed response schemas, feeding validated data into downstream processing

02

API orchestration: chain multiple Watershed Climate tool calls with Pydantic validation at each step to ensure data integrity end-to-end

03

Production monitoring: build validated alert agents that query Watershed Climate and output structured, schema-compliant notifications

04

Testing and QA: use Pydantic AI's dependency injection to mock Watershed Climate responses and write comprehensive agent tests
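
For the testing scenario, a minimal sketch using pydantic-ai's built-in TestModel, which stands in for the real LLM so no Watershed Climate or provider calls are made (the assertion is illustrative):

python
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent("openai:gpt-4o")

def test_inventory_question():
    # TestModel replaces the real LLM so the test runs offline and
    # never touches Watershed Climate
    with agent.override(model=TestModel()):
        result = agent.run_sync("List our GHG inventories.")
    assert result.data is not None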

Watershed Climate MCP Tools for Pydantic AI (16)

These 16 tools become available when you connect Watershed Climate to Pydantic AI via MCP:

01

create_upload

Create a new data upload container in Watershed. An upload is required before you can add data records to Watershed. After creating an upload, you add data records to it, validate the data, and then submit it for processing. The upload acts as a batch grouping mechanism for related activity data. You can optionally provide a name and description to identify the upload purpose.

02

delete_upload_data_record

Delete a specific data record from an upload. Use this to remove incorrect or unwanted data before validating and submitting the upload. This action cannot be undone. The record_id is obtained from list_upload_data_records.

03

get_inventory

Get detailed information about a specific GHG inventory. Use the inventory_id from list_inventories to inspect detailed carbon footprint results and understand your organization's emissions composition.

04

get_report

Get detailed information about a specific report. Use the report_id from list_reports to access the full report details, including generated files, disclosure frameworks covered, and summarized emissions data. Reports are typically generated after inventories are complete and validated.

05

get_task_status

Check the status of a processing task (e.g., an upload submission). When you submit an upload for processing, a task is created and returns a task_id. Use this tool to check if the processing is complete, still in progress, or failed. Task status is useful for monitoring large data submissions that may take time to process.

06

get_upload

Get details of a specific data upload. Use the upload_id from list_uploads to inspect details before adding data or submitting for validation.

07

list_inventories

List all GHG inventories in your Watershed organization. An inventory represents your organization's carbon footprint measurement for a specific year, containing Scope 1 (direct), Scope 2 (energy), and Scope 3 (value chain) emissions data. Each inventory has a year, status, and total emissions calculated from submitted activity data.

08

list_measurements

List emissions measurements with optional filters. Measurements represent the actual carbon footprint values derived from your uploaded activity data. You can filter by inventory_id to see measurements for a specific year's inventory, or by year to see measurements across all inventories for that year. Each measurement includes the activity type, emission factor used, and calculated CO2e value.

09

list_reduction_targets

List all emissions reduction targets configured in your organization. Reduction targets define your organization's goals for decreasing emissions over time, often aligned with the Science Based Targets initiative (SBTi) or net-zero commitments. Each target includes baseline year, target year, reduction percentage, and progress tracking.

10

list_reports

List all available reports in your Watershed organization. Reports are formatted outputs of your climate data for disclosure, analysis, or internal review. Reports can include CDP disclosures, TCFD reports, or custom carbon footprint summaries. Each report has metadata about its type, generation date, and scope.

11

list_upload_data_records

List all data records in a specific upload. Each record contains the activity data that will be processed into emissions measurements. Use this to review the data before validating and submitting the upload.

12

list_uploads

List all data uploads in your Watershed organization. Uploads are containers for activity data that will be validated and processed into emissions measurements. Each upload can contain multiple data records representing activities like electricity usage, flights, or shipping. Use this to see all existing uploads and their IDs before adding data or submitting for processing.

13

submit_upload

Submit a validated upload for emissions processing. This triggers Watershed's calculation engine to convert activity data into emissions measurements using appropriate emission factors. The upload must be validated successfully before submission. The response includes a task_id that can be used to track processing status via get_task_status. Processing may take some time depending on data volume.

14

update_upload_data_record

Update a specific data record in an upload. Use this to correct errors or modify activity data before validation and submission. The record_id is obtained from list_upload_data_records. The body should contain the complete updated record object with all required fields.

15

upload_data_records

Upload activity data records to an existing upload container. Each record represents an activity that generates emissions (e.g., electricity consumption, business travel, shipping). Records should follow Watershed's data format with fields like activity_type, quantity, unit, start_date, end_date, location, etc. You can upload a single record or multiple records in a batch by providing an array of objects. Example record: { "activity_type": "electricity", "quantity": 1500, "unit": "kWh", "start_date": "2024-01-01", "end_date": "2024-01-31" }

16

validate_upload

Validate data in an upload before submission. Validation ensures data quality and prevents rejection during the submission phase. The response includes validation results with any errors or warnings that need to be addressed. Always validate before submitting to ensure successful processing.
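
Taken together, these tools form a natural pipeline: create an upload, add records, validate, submit, then poll the processing task. A sketch of driving that pipeline through a Pydantic AI agent with a single prompt (the prompt wording and record values are illustrative; the agent decides the actual tool calls):

python
import asyncio

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerHTTP

server = MCPServerHTTP(url="https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp")
agent = Agent(model="openai:gpt-4o", mcp_servers=[server])

async def run_upload_pipeline():
    async with agent.run_mcp_servers():
        # The agent is expected to chain create_upload, upload_data_records,
        # validate_upload, submit_upload, and get_task_status on our behalf
        result = await agent.run(
            "Create an upload named 'January electricity', add a record for "
            "1500 kWh of electricity from 2024-01-01 to 2024-01-31, then "
            "validate it, submit it, and report the processing task status."
        )
    print(result.data)

asyncio.run(run_upload_pipeline())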

Example Prompts for Watershed Climate in Pydantic AI

Ready-to-use prompts you can give your Pydantic AI agent to start working with Watershed Climate immediately.

01

"List all our GHG inventories and show me the total emissions for 2024."

02

"Create a new upload called 'Q1 2024 Electricity Data', add these 3 records: electricity usage for NYC office (50,000 kWh), London office (35,000 kWh), and São Paulo office (28,000 kWh) for January 2024, then validate and submit it."

03

"Show me our reduction targets and current progress toward our net-zero goal."

Troubleshooting Watershed Climate MCP Server with Pydantic AI

Common issues when connecting Watershed Climate to Pydantic AI through Vinkius, and how to resolve them.

01

MCPServerHTTP not found

Update: pip install --upgrade pydantic-ai

Watershed Climate + Pydantic AI FAQ

Common questions about integrating Watershed Climate MCP Server with Pydantic AI.

01

How does Pydantic AI discover MCP tools?

Create an MCPServerHTTP instance with the server URL. Pydantic AI connects, discovers all tools, and generates typed Python interfaces automatically.

02

Does Pydantic AI validate MCP tool responses?

Yes. When you define result types as Pydantic models, every tool response is validated against the schema. Invalid data raises a clear error instead of silently corrupting your pipeline.

03

Can I switch LLM providers without changing MCP code?

Absolutely. Pydantic AI abstracts the model layer — your Watershed Climate MCP integration works identically with OpenAI, Anthropic, Google, or any supported provider.
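
For example (model identifiers follow pydantic-ai's provider:model naming; exact model names depend on your installed version):

python
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerHTTP

server = MCPServerHTTP(url="https://edge.vinkius.com/[YOUR_TOKEN_HERE]/mcp")

# Only the model string changes; the MCP wiring stays identical
openai_agent = Agent(model="openai:gpt-4o", mcp_servers=[server])
anthropic_agent = Agent(model="anthropic:claude-3-5-sonnet-latest", mcp_servers=[server])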

Connect Watershed Climate to Pydantic AI

Get your token, paste the configuration, and start using 16 tools in under 2 minutes. No API key management needed.