Agent Template v1.0.0

Multi-AI Router

Deployments: 45+
Setup time: 5 minutes
Pricing: free


Enterprise Grade · Best Practices · Production Optimized

Integrated Modules

  • Anthropic
  • Groq
  • OpenAI
Step-by-Step Setup Tutorial


What This Agent Does

This Multi-Provider Data Enrichment Agent is a sophisticated automation workflow that intelligently routes data enrichment requests to the most appropriate AI provider—OpenAI, Anthropic, or Groq—based on your specific needs. The agent analyzes incoming data, determines the optimal provider for your use case, generates enriched content through that provider, calculates performance metrics, and returns comprehensive results via webhook.

Key benefits include:

  • Intelligent routing that automatically selects the best AI provider for your task
  • Cost optimization by distributing requests across providers based on performance and pricing
  • Flexibility to switch providers without changing your integration points
  • Real-time metrics that track performance and quality across all providers
  • Significant time savings by automating complex data enrichment workflows that would otherwise require manual processing

Target use cases: Customer data enrichment, content generation at scale, automated research compilation, product description enhancement, lead qualification, market analysis, and any scenario requiring intelligent AI-powered data processing.


Who Is It For

This agent is ideal for:

  • E-commerce teams enriching product catalogs and customer profiles
  • Sales and marketing professionals automating lead research and qualification
  • Content creators scaling content generation across multiple formats
  • Data analysts augmenting datasets with AI-generated insights
  • Enterprise teams requiring flexible, multi-provider AI infrastructure
  • Developers building applications that need intelligent routing and fallback capabilities

Whether you're processing dozens or thousands of records, this workflow handles the complexity of provider selection, execution, and result aggregation seamlessly.


Required Integrations

OpenAI

Why it's needed: OpenAI provides access to GPT-4o, one of the most capable and versatile language models available. This integration is essential for high-complexity tasks requiring advanced reasoning and nuanced understanding.

Setup steps:

  1. Visit platform.openai.com and sign in to your account
  2. Navigate to API keys in the left sidebar
  3. Click Create new secret key
  4. Copy the generated key immediately (it won't be displayed again)
  5. In TaskAGI, go to Integrations → Add Integration → OpenAI
  6. Paste your API key in the authentication field
  7. Click Test Connection to verify the integration works
  8. Save the integration with a memorable name like OpenAI-Production

How to obtain API keys: Create an OpenAI account at openai.com, add a payment method, and generate API keys from your account dashboard. Ensure you have sufficient credits or a valid billing method configured.

Configuration in TaskAGI: Once integrated, the OpenAI node will be available for selection. The workflow uses the gpt-4o model, which is configured automatically. Monitor your usage in the OpenAI dashboard to track costs.


Anthropic

Why it's needed: Anthropic's Claude models excel at nuanced analysis, detailed reasoning, and safety-conscious outputs. This integration provides an alternative provider for tasks where Claude's strengths offer advantages.

Setup steps:

  1. Navigate to console.anthropic.com and create an account
  2. Go to API Keys in your account settings
  3. Click Create Key
  4. Copy your API key to a secure location
  5. In TaskAGI, select Integrations → Add Integration → Anthropic
  6. Paste your API key in the credentials field
  7. Click Verify to confirm authentication
  8. Name the integration Anthropic-Production and save

How to obtain API keys: Sign up at Anthropic's console, complete identity verification if required, and generate API keys from the API Keys section. Ensure billing is configured for your account.

Configuration in TaskAGI: The workflow uses claude-sonnet-4-5-20250929, Anthropic's latest high-performance model. This model balances speed and capability, making it excellent for production workloads.


Groq

Why it's needed: Groq specializes in ultra-fast inference with their LPU (Language Processing Unit) technology. This integration is crucial for latency-sensitive applications where speed is paramount.

Setup steps:

  1. Go to console.groq.com and register for an account
  2. Complete email verification
  3. Navigate to API Keys in your dashboard
  4. Click Create API Key
  5. Copy the generated key immediately
  6. In TaskAGI, select Integrations → Add Integration → Groq
  7. Paste your API key in the authentication field
  8. Click Test to validate the connection
  9. Save with the name Groq-Production

How to obtain API keys: Create a Groq account at their console, verify your email, and generate API keys from the API Keys section. Groq offers generous free tier limits, making it excellent for testing and development.

Configuration in TaskAGI: The workflow uses llama-3.3-70b-versatile, a powerful open-source model optimized for Groq's infrastructure. This model provides excellent performance at minimal latency.


Configuration Steps

Step 1: Webhook Trigger Setup

The workflow begins with a Webhook Trigger node that listens for incoming requests.

Configuration:

  • The webhook automatically generates a unique URL when you save the workflow
  • Copy this URL to use as your integration endpoint
  • The trigger accepts POST requests with JSON payloads
  • Example payload structure:
{
  "data_to_enrich": "Customer profile for John Smith",
  "preferred_provider": "auto",
  "enrichment_type": "comprehensive"
}

What to do: Test the webhook URL using a tool like Postman or curl to ensure it's accessible from your systems.


Step 2: Extract Parameters

The Extract Parameters node (core.edit_data) processes incoming webhook data and standardizes it for downstream processing.

Configuration:

  • Maps incoming JSON fields to standardized variable names
  • Extracts key fields like data_to_enrich, enrichment_type, and preferred_provider
  • Validates data format and handles missing fields gracefully
  • Example: If your webhook sends customer_info, map it to the internal data_to_enrich variable

Parameter mapping example:

  • Input: {"customer_data": "..."}
  • Output: {"data_to_enrich": "...", "preferred_provider": "auto"}
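
The normalization above can be sketched in Python. The alias table and the helper name are illustrative assumptions; the canonical field names (data_to_enrich, enrichment_type, preferred_provider) come from the workflow itself:

```python
# Hypothetical sketch of the Extract Parameters step: map whatever field
# names the caller sends onto the workflow's standardized variables.
DEFAULTS = {"enrichment_type": "comprehensive", "preferred_provider": "auto"}

# Aliases an integration might send instead of the canonical name (assumed).
ALIASES = {"customer_info": "data_to_enrich", "customer_data": "data_to_enrich"}

def extract_parameters(payload: dict) -> dict:
    """Normalize a webhook payload into the internal variable names."""
    params = dict(DEFAULTS)
    for key, value in payload.items():
        params[ALIASES.get(key, key)] = value
    if "data_to_enrich" not in params:
        raise ValueError("payload is missing data_to_enrich")
    return params
```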

Step 3: LLM Router Logic

The LLM Router Logic node (core.function) intelligently determines which provider to use based on the data characteristics and your preferences.

Configuration:

  • Analyzes the enrichment request complexity
  • Evaluates data size and processing requirements
  • Considers cost-benefit tradeoffs
  • Routes to OpenAI for complex reasoning tasks
  • Routes to Anthropic for nuanced analysis
  • Routes to Groq for speed-critical operations

Logic example: If enrichment_type == "fast", route to Groq; if enrichment_type == "detailed", route to OpenAI.
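
One way to express that routing as code is the sketch below. The thresholds and the fallback ordering are assumptions for illustration, not the node's actual implementation; the returned index matches the switch cases described in Step 4 (0 = OpenAI, 1 = Anthropic, 2 = Groq):

```python
def route(params: dict) -> int:
    """Return the switch-case index: 0 = OpenAI, 1 = Anthropic, 2 = Groq."""
    # Honor an explicit provider preference first.
    explicit = {"openai": 0, "anthropic": 1, "groq": 2}
    preference = params.get("preferred_provider", "auto")
    if preference in explicit:
        return explicit[preference]

    etype = params.get("enrichment_type", "comprehensive")
    if etype == "fast":
        return 2          # Groq: speed-critical operations
    if etype == "detailed":
        return 0          # OpenAI: complex, multi-step reasoning
    # Illustrative heuristic: long inputs lean toward nuanced analysis.
    if len(params.get("data_to_enrich", "")) > 2000:
        return 1          # Anthropic
    return 0
```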


Step 4: Route to Provider

The Route to Provider node (core.switch) directs the request to the appropriate AI service based on the router's decision.

Configuration:

  • case_0 → OpenAI (complex analysis)
  • case_1 → Anthropic (nuanced reasoning)
  • case_2 → Groq (speed-optimized)

Each case connects to its respective AI provider node.


Steps 5-7: AI Provider Nodes

Three parallel nodes execute the enrichment request:

OpenAI Generate (gpt-4o):

  • Prompt: "You are a data enrichment AI assistant. Analyze and enrich the provided data with relevant insights..."
  • Model: gpt-4o (latest, most capable)
  • Best for: Complex analysis, multi-step reasoning

Anthropic Generate (claude-sonnet-4-5-20250929):

  • Prompt: "You are a data enrichment AI assistant. Analyze and enrich the provided data with relevant insights..."
  • Model: claude-sonnet-4-5-20250929 (balanced performance)
  • Best for: Detailed analysis, safety-conscious outputs

Groq Generate (llama-3.3-70b-versatile):

  • Prompt: "You are a data enrichment AI assistant. Analyze and enrich the provided data with relevant insights..."
  • Model: llama-3.3-70b-versatile (ultra-fast)
  • Best for: Time-sensitive operations, high-volume processing

Configuration tip: Customize the prompts for your specific use case. For example, if enriching customer data, modify the prompt to: "Analyze the customer profile and provide enrichment including industry insights, company information, and relevant business metrics."
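
As a sketch of that customization, the snippet below composes the base prompt from the provider nodes with the customer-data variant from the tip above. The function name and use-case keys are assumptions; the actual provider call is handled by the TaskAGI node:

```python
# Base prompt taken from the provider nodes above.
BASE_PROMPT = ("You are a data enrichment AI assistant. Analyze and enrich "
               "the provided data with relevant insights.")

def build_prompt(data: str, use_case: str = "generic") -> str:
    """Compose the enrichment prompt for a given use case."""
    if use_case == "customer":
        instruction = ("Analyze the customer profile and provide enrichment "
                       "including industry insights, company information, and "
                       "relevant business metrics.")
    else:
        instruction = BASE_PROMPT
    return f"{instruction}\n\nData:\n{data}"
```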


Step 8: Merge Results

The Merge Results node (core.merge) combines outputs from whichever provider was selected.

Configuration:

  • Standardizes the output format across all three providers
  • Adds metadata about which provider was used
  • Includes timestamps and processing duration
  • Creates a unified response structure

Output structure:

{
  "enriched_data": "...",
  "provider_used": "openai",
  "processing_time_ms": 1250,
  "confidence_score": 0.95
}
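
A minimal sketch of that merge step, producing the unified structure shown above. How confidence_score is computed is left open here; the value passed in is a placeholder:

```python
import time

def merge_result(provider: str, text: str, started_at: float,
                 confidence: float = 0.0) -> dict:
    """Wrap whichever provider responded in the unified response format."""
    return {
        "enriched_data": text,
        "provider_used": provider,
        "processing_time_ms": int((time.time() - started_at) * 1000),
        "confidence_score": confidence,
    }
```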

Step 9: Calculate Metrics

The Calculate Metrics node (core.function) generates performance analytics.

Configuration:

  • Tracks response time for each provider
  • Calculates data quality scores
  • Monitors cost per request
  • Generates comparative analytics

Metrics captured:

  • Latency (milliseconds)
  • Token usage
  • Cost estimate
  • Quality assessment
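
The captured metrics can be assembled as in this sketch. The per-1K-token prices are illustrative placeholders only, not published rates; substitute each provider's current pricing:

```python
# Placeholder prices per 1K tokens -- replace with real provider pricing.
COST_PER_1K_TOKENS = {"openai": 0.005, "anthropic": 0.003, "groq": 0.0006}

def calculate_metrics(provider: str, latency_ms: int, tokens: int) -> dict:
    """Build the per-request analytics record for the metrics node."""
    return {
        "provider": provider,
        "latency_ms": latency_ms,
        "token_usage": tokens,
        "cost_estimate_usd": round(tokens / 1000 * COST_PER_1K_TOKENS[provider], 6),
    }
```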

Step 10: Webhook Response

The Webhook Response node (core.respondToWebhook) returns the enriched data to your calling system.

Configuration:

  • Body: [[nodes.9]] (references the metrics node output)
  • Status code: 200 (success)
  • Headers: Automatically includes Content-Type: application/json

Testing Your Agent

Test Execution

Step 1: Prepare Test Data

Create a simple test payload:

{
  "data_to_enrich": "Acme Corporation - technology startup in San Francisco",
  "enrichment_type": "comprehensive",
  "preferred_provider": "auto"
}

Step 2: Send Test Request

Use curl or Postman to POST to your webhook URL:

curl -X POST https://your-webhook-url \
  -H "Content-Type: application/json" \
  -d '{"data_to_enrich":"Acme Corporation","enrichment_type":"comprehensive"}'
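
If you prefer Python to curl, the same request can be built with the standard library. The URL is the placeholder from the curl example; replace it with your workflow's actual webhook URL before sending:

```python
import json
import urllib.request

payload = {"data_to_enrich": "Acme Corporation",
           "enrichment_type": "comprehensive"}

req = urllib.request.Request(
    "https://your-webhook-url",   # placeholder -- use your webhook URL
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment to actually send the request and print the enriched response:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```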

Step 3: Verify at Each Stage

  • Webhook Trigger: Confirm the request is received (check logs)
  • Extract Parameters: Verify data is properly parsed
  • LLM Router: Confirm the correct provider was selected
  • AI Provider: Check that enriched content was generated
  • Merge Results: Validate output format and completeness
  • Calculate Metrics: Review performance statistics
  • Webhook Response: Confirm response reaches your system

What to verify:

  • ✅ Response time is under 10 seconds
  • ✅ Enriched data contains meaningful additions
  • ✅ Provider selection matches your expectations
  • ✅ Metrics accurately reflect performance
  • ✅ No errors in logs or response body

Expected results:

  • HTTP 200 status code
  • Complete JSON response with enriched data
  • Performance metrics showing provider used
  • Processing time logged for optimization

Success indicators:

  • Consistent response times across multiple tests
  • High-quality enriched content from all providers
  • Proper provider routing based on request type
  • Zero failed requests in test batch

Congratulations! Your Multi-Provider Data Enrichment Agent is now ready for production use. Monitor performance metrics regularly and adjust routing logic as needed based on your evolving requirements.