Mastra AI

Integrating Superagent with the Mastra framework

Superagent provides enterprise-grade security validation for Mastra AI agents, ensuring that every prompt and tool invocation is checked for potential security risks before execution. This guide shows you how to integrate Superagent Guard with your Mastra agents to build secure, production-ready AI systems.

Overview

When building AI agents that can execute commands, access files, or interact with external systems, security is paramount. Superagent acts as a security layer that:

  • Validates user prompts before they reach your AI model
  • Guards tool executions to prevent harmful operations
  • Filters tool outputs to ensure safe content handling
  • Provides detailed security analysis with CWE codes and violation types

Prerequisites

Before starting, ensure you have:

  • Node.js v20.0 or higher
  • A Superagent account and API key
  • An OpenAI API key or other LLM provider credentials
  • Basic familiarity with Mastra agents

Installation

Install the required dependencies:

npm install @mastra/core superagent-ai zod @ai-sdk/openai
# or
pnpm add @mastra/core superagent-ai zod @ai-sdk/openai
# or
yarn add @mastra/core superagent-ai zod @ai-sdk/openai

Configuration

Setting up environment variables

Create a .env file in your project root:

SUPERAGENT_API_KEY=your_superagent_api_key
OPENAI_API_KEY=your_openai_api_key

Initialize the Guard client

import { createGuard } from 'superagent-ai';

const guard = createGuard({
  apiBaseUrl: 'https://app.superagent.sh/api/guard', // optional: override for self-hosted deployments
  apiKey: process.env.SUPERAGENT_API_KEY!,
});
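
The guard call returns a verdict you can branch on directly. Here is a minimal sketch using the fields relied on throughout this guide (decision, reasoning, and the rejected flag); the sample input is purely illustrative:

const { decision, reasoning, rejected } = await guard('rm -rf / --no-preserve-root', {
  onBlock: (reason) => console.warn('🚫 Blocked:', reason),
  onPass: () => console.log('✅ Approved'),
});

if (rejected) {
  // decision.status is 'block'; decision.violation_types and decision.cwe_codes
  // describe the detected risk, and reasoning explains the verdict.
  console.warn(reasoning);
}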

Creating Secure Tools

Pattern 1: Guarding Tool Inputs

Validate tool inputs before executing potentially dangerous operations:

import { createTool } from '@mastra/core/tools';
import { z } from 'zod';
import { exec } from 'child_process';
import { promisify } from 'util';

const execAsync = promisify(exec);

const secureCommandTool = createTool({
  id: 'secure-command',
  description: 'Execute shell commands with security validation',
  inputSchema: z.object({
    command: z.string().describe('The shell command to execute'),
    workingDirectory: z.string().optional().describe('Working directory'),
  }),
  outputSchema: z.object({
    result: z.string().nullable(),
    stderr: z.string().optional(),
    safetyAnalysis: z.object({
      decision: z.object({
        status: z.enum(['pass', 'block']),
        violation_types: z.array(z.string()).optional(),
        cwe_codes: z.array(z.string()).optional(),
      }),
      reasoning: z.string(),
    }),
  }),
  execute: async ({ context }) => {
    const { command, workingDirectory } = context;
    
    // Guard the command before execution
    const { decision, reasoning, rejected } = await guard(command, {
      onBlock: (reason) => {
        console.warn('🚫 Guard blocked command:', reason);
      },
      onPass: () => {
        console.log('✅ Guard approved command');
      },
    });

    if (rejected) {
      return {
        result: null,
        safetyAnalysis: {
          decision,
          reasoning,
        },
      };
    }

    // Execute the approved command
    try {
      const { stdout, stderr } = await execAsync(command, {
        cwd: workingDirectory || process.cwd(),
        timeout: 30000,
      });
      
      return {
        result: stdout,
        stderr: stderr || undefined,
        safetyAnalysis: {
          decision,
          reasoning,
        },
      };
    } catch (error: any) {
      return {
        result: null,
        stderr: error.message,
        safetyAnalysis: {
          decision,
          reasoning,
        },
      };
    }
  },
});
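
To make the tool available to an agent, register it when constructing the agent. The sketch below assumes the standard Mastra Agent options (name, instructions, an AI SDK model, and a tools record); the agent name, instructions, and model are placeholders for your own setup:

import { Agent } from '@mastra/core';
import { openai } from '@ai-sdk/openai';

const secureAgent = new Agent({
  name: 'secure-ops-agent',
  instructions: 'You are a helpful assistant. Use the secure-command tool for any shell task.',
  model: openai('gpt-4o-mini'),
  tools: { secureCommandTool },
});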

Pattern 2: Guarding Tool Outputs

Validate data returned by tools before passing it back to the model:

const secureWebScrapeTool = createTool({
  id: 'secure-web-scrape',
  description: 'Fetch and validate web content',
  inputSchema: z.object({
    url: z.string().url().describe('URL to fetch'),
  }),
  outputSchema: z.object({
    content: z.string().nullable(),
    safetyAnalysis: z.object({
      decision: z.object({
        status: z.enum(['pass', 'block']),
        violation_types: z.array(z.string()).optional(),
        cwe_codes: z.array(z.string()).optional(),
      }),
      reasoning: z.string(),
    }),
  }),
  execute: async ({ context }) => {
    const { url } = context;
    
    try {
      // Fetch the web content
      const response = await fetch(url);
      const textContent = await response.text();
      
      // Guard the scraped content before returning it
      const { decision, reasoning, rejected } = await guard(textContent, {
        onBlock: (reason) => {
          console.warn('🚫 Guard blocked web content:', reason);
        },
      });

      if (rejected) {
        return {
          content: null,
          safetyAnalysis: {
            decision,
            reasoning: `Web content blocked: ${reasoning}`,
          },
        };
      }

      return {
        content: textContent.substring(0, 5000), // Limit content length
        safetyAnalysis: {
          decision,
          reasoning,
        },
      };
    } catch (error: any) {
      return {
        content: null,
        safetyAnalysis: {
        decision: { status: 'block' as const },
          reasoning: `Failed to fetch URL: ${error.message}`,
        },
      };
    }
  },
});

Pattern 3: File Operations with Dual Validation

Validate both the operation and the content:

import fs from 'fs/promises';

const secureFileTool = createTool({
  id: 'secure-file-ops',
  description: 'Perform file operations with content validation',
  inputSchema: z.object({
    operation: z.enum(['read', 'write', 'delete']),
    path: z.string(),
    content: z.string().optional(),
  }),
  outputSchema: z.object({
    result: z.string().nullable(),
    content: z.string().optional(),
    safetyAnalysis: z.object({
      decision: z.object({
        status: z.enum(['pass', 'block']),
        violation_types: z.array(z.string()).optional(),
        cwe_codes: z.array(z.string()).optional(),
      }),
      reasoning: z.string(),
    }),
  }),
  execute: async ({ context }) => {
    const { operation, path: filePath, content } = context;
    
    // Guard the operation
    const operationDescription = `${operation} file at ${filePath}`;
    const { decision, reasoning, rejected } = await guard(operationDescription);

    if (rejected) {
      return {
        result: null,
        safetyAnalysis: {
          decision,
          reasoning,
        },
      };
    }

    try {
      switch (operation) {
        case 'read': {
          const fileContent = await fs.readFile(filePath, 'utf-8');

          // Guard the file content before returning
          const contentGuard = await guard(fileContent);

          if (contentGuard.rejected) {
            return {
              result: null,
              safetyAnalysis: {
                decision: contentGuard.decision,
                reasoning: `File content blocked: ${contentGuard.reasoning}`,
              },
            };
          }

          return {
            result: 'File read successfully',
            content: fileContent,
            safetyAnalysis: {
              decision,
              reasoning,
            },
          };
        }
        
        case 'write':
          // Guard the content to be written
          if (content) {
            const writeGuard = await guard(content);
            if (writeGuard.rejected) {
              return {
                result: null,
                safetyAnalysis: {
                  decision: writeGuard.decision,
                  reasoning: `Content blocked: ${writeGuard.reasoning}`,
                },
              };
            }
          }
          
          await fs.writeFile(filePath, content || '');
          return { 
            result: 'File written successfully',
            safetyAnalysis: {
              decision,
              reasoning,
            },
          };
        
        case 'delete':
          await fs.unlink(filePath);
          return { 
            result: 'File deleted successfully',
            safetyAnalysis: {
              decision,
              reasoning,
            },
          };
      }
    } catch (error: any) {
      return {
        result: null,
        safetyAnalysis: {
          decision,
          reasoning: `Operation failed: ${error.message}`,
        },
      };
    }
  },
});

Creating Secure Agents

Guarding User Prompts

Create a wrapper function to validate prompts before they reach your agent:

import { Agent } from '@mastra/core';
import { openai } from '@ai-sdk/openai';

async function generateGuardedText(agent: Agent, userPrompt: string) {
  // Guard the user prompt
  const { decision, reasoning, rejected } = await guard(userPrompt, {
    onBlock: (reason) => console.warn('Blocked prompt:', reason),
    onPass: () => console.log('Prompt cleared guard'),
  });

  if (rejected) {
    return {
      text: null,
      safetyAnalysis: {
        decision,
        reasoning,
      },
    };
  }

  // Generate response with the approved prompt
  const result = await agent.generate(userPrompt);
  
  return {
    text: result.text,
    toolCalls: result.toolCalls,
    safetyAnalysis: {
      decision,
      reasoning,
    },
  };
}
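
Putting it together, call the wrapper with an agent that has your secure tools registered (for example, the secureAgent sketched earlier) and inspect the safety analysis on the result:

const response = await generateGuardedText(
  secureAgent,
  'List the files in the current working directory'
);

if (response.text === null) {
  console.warn('Prompt was blocked:', response.safetyAnalysis.reasoning);
} else {
  console.log(response.text);
}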