Mongrel.io User Documentation
Overview
Mongrel.io allows you to create API routes that fetch data from external sources, transform it, and deliver it to one or more destinations. Each route consists of four main components: Request, Sources, Transformation, and Response.
Request Configuration
The Request section defines how external clients will call your API endpoint.
Path
- Required: Yes
- Format: Must start with /
- Description: The URL path where your endpoint will be accessible
- Example: /api/users or /data/products/{id}
- Supports Parameters: Use curly braces for path parameters (e.g., {id}, {userId})
Method
- Required: Yes
- Options: GET, PUT, POST, DELETE
- Description: The HTTP method your endpoint will accept
Content Type
- Required: No
- Options: application/json, application/xml, text/csv, or empty (no body expected)
- Description: The format of the incoming request body
Parse Options
Parse options vary based on the selected content type:
JSON Parse Options
Currently no specific parse options are required for JSON.
XML Parse Options
Attribute Handling
- ignoreAttributes: When enabled, all element attributes are excluded from parsing. Set to true to treat everything as tags only, or false to include attributes in the output (default behavior varies).
- allowBooleanAttributes: Enables parsing of attributes that have no value assignment (e.g., <input checked>). When set to true, these attributes appear in the output with a value of true. Must be used with ignoreAttributes: false.
- attributeNamePrefix: String prepended to all attribute names to distinguish them from child elements in the output object. A common value is @_. Only applies when ignoreAttributes is false.
- attributesGroupName: When specified, collects all attributes of an element under a single property with this name. Helps separate attributes from child elements. Requires ignoreAttributes: false.
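For example, assuming a parser configured with ignoreAttributes: false and attributeNamePrefix: "@_" (the same conventions used in the XML examples later in this document), an element with both attributes and text would parse roughly as follows:
// <book id="42" lang="en">Dune</book> parses to approximately:
{
  book: {
    "@_id": "42",
    "@_lang": "en",
    "#text": "Dune"
  }
}
// With ignoreAttributes: true, the attributes are dropped:
{
  book: "Dune"
}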
Text Content
- textNodeName: Specifies the property name used for the text content of elements. Useful when an element has both text and attributes or child elements. Default varies based on other settings.
- alwaysCreateTextNode: Forces creation of a dedicated text node property even for simple text-only elements. When false, simple text is assigned directly to the tag property.
- trimValues: Removes leading and trailing whitespace from all text values and attribute values during parsing.
Special Content Types
- cdataPropName: Property name used to identify CDATA sections in the output. If not specified, CDATA content is merged with regular text content.
- commentPropName: Property name for preserving XML comments in the parsed output. Comments are typically ignored unless this is set. Works best with preserveOrder: true.
Structure and Order
- preserveOrder: Maintains the exact order of elements as they appear in the XML document. When enabled, the output structure changes to preserve sequence, which is especially important for mixed content.
- arrayPaths: Comma-separated list of element paths that should always be parsed as arrays, even when only one element exists. Helpful for consistent data structures.
Namespaces and Entities
- removeNsPrefix: Strips namespace prefixes from element and attribute names (e.g., converts ns:tagName to tagName).
- processEntities: Controls whether XML entities (like &lt;, &gt;, &amp;) and DOCTYPE entities are decoded during parsing. Enabled by default. Disable for better performance if your XML doesn't contain entities.
- htmlEntities: Enables recognition and parsing of HTML-specific entities beyond standard XML entities.
Processing Control
- stopNodes: Comma-separated list of element paths where parsing should halt, leaving the content as raw text. Useful for elements like <script> or <pre> where you want to preserve content exactly. Can use wildcards (e.g., *.script).
- ignorePiTags: When enabled, skips processing instruction tags (e.g., <?xml-stylesheet ?>).
- ignoreDeclaration: Excludes the XML declaration (e.g., <?xml version="1.0"?>) from the parsed output.
- unpairedTags: Comma-separated list of tag names that are self-closing and don't require a closing tag (e.g., HTML tags like br, img, hr).
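As a small illustration of stopNodes, the hypothetical path below tells the parser to leave a pre element's content untouched rather than trimming or parsing it:
// Parse options: stopNodes: "*.pre"
// <page><pre>  keep   this   spacing  </pre></page> parses to approximately:
{
  page: {
    pre: "  keep   this   spacing  "  // raw text, preserved exactly
  }
}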
CSV Parse Options
Data Format
- objectMode: Controls the output format of parsed rows. When true (default), each row is returned as an object with column names as keys. When false, rows are returned as arrays of values.
- delimiter: The character used to separate columns in your CSV file. Defaults to comma (,). Change this if your file uses alternative separators like semicolon (;) or tab (\t). Must be a single character.
- headers: Determines how column headers are handled. Set to true to use the first row as headers. Provide a string array to manually define header names. Set to false (default) if your CSV has no headers. Headers must be unique or parsing will fail.
- renameHeaders: When enabled, replaces the first row of the CSV with custom headers specified in the headers option. Only applies when headers is provided as an array. Use this when you want to discard the original header row.
Quote and Escape Handling
- quote: The character used to wrap fields containing special characters like delimiters or line breaks. Defaults to double quote ("). For example, "first,name" allows a comma within the field. Set to empty string to disable quote handling entirely.
- escape: Character used to include a quote character within a quoted field. Defaults to double quote ("), so "He said ""hello""" becomes He said "hello".
Whitespace Management
- trim: Removes whitespace from both the beginning and end of all column values. Useful for cleaning data with inconsistent spacing.
- ltrim: Strips whitespace only from the left (beginning) of column values while preserving trailing spaces.
- rtrim: Removes whitespace only from the right (end) of column values while keeping leading spaces.
Row Filtering
- ignoreEmpty: Skips rows that are completely empty or contain only whitespace and delimiters. Helps filter out blank lines in your CSV.
- comment: Single character that marks a line as a comment (e.g., #). Lines starting with this character are ignored during parsing. Leave unset if your CSV doesn't contain comments.
- maxRows: Limits parsing to a specific number of rows. For example, setting this to 100 will only parse the first 100 data rows. Set to 0 or leave unset for no limit.
- skipRows: Number of data rows to skip after headers are processed. Different from skipLines as it counts parsed rows rather than raw file lines.
- skipLines: Number of raw lines to skip from the beginning of the file before parsing starts. Useful for files with metadata or instructions at the top.
Column Handling
- discardUnmappedColumns: When enabled, any columns beyond the number of defined headers are silently dropped. Only applies when the row has more columns than headers.
- strictColumnHandling: Treats rows with column count mismatches as invalid rather than throwing errors. When enabled with headers, rows that don't match the header count trigger a validation event but parsing continues.
Encoding
- encoding: Character encoding of the CSV file. Defaults to utf8. Change to utf16le, ascii, or iso-8859-1 if your file uses a different encoding.
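Putting several of these options together: a semicolon-separated file with two banner lines above its header row could be parsed with settings along these lines (all values are illustrative):
// Hypothetical CSV parse options:
{
  delimiter: ";",
  headers: true,      // first parsed line supplies column names
  skipLines: 2,       // ignore the two banner lines before parsing
  trim: true,         // strip stray spacing around values
  ignoreEmpty: true   // drop blank lines
}
// Input file:
//   Exported 2024-03-01
//   Internal use only
//   id;name
//   7; Ada
//
// Parsed result (with objectMode true):
[ { id: "7", name: "Ada" } ]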
CORS Configuration
Configure Cross-Origin Resource Sharing for browser-based clients.
Enabled
- Type: Boolean
- Default: false
- Description: Enable or disable CORS
Allowed Origins
- Format: Array of URL objects with a value property
- Description: List of origins permitted to access this endpoint
- Example: [{ value: "https://example.com" }]
- Validation: Must be valid URLs
Allowed Headers
- Format: Array of objects with a value property
- Description: HTTP headers that can be used in requests
- Example: [{ value: "Content-Type" }, { value: "Authorization" }]
- Validation: Must match pattern [\w-]+
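Taken together, a complete CORS block might look like the following sketch (the camelCase field names are illustrative; the value-object format matches the examples above):
{
  enabled: true,
  allowedOrigins: [
    { value: "https://example.com" },
    { value: "https://app.example.com" }
  ],
  allowedHeaders: [
    { value: "Content-Type" },
    { value: "Authorization" }
  ]
}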
Sources Configuration
Sources are external APIs or data endpoints that your route will fetch data from before transformation.
Name
- Required: Yes
- Description: Unique identifier for this source within your transformation function
- Usage: Access source data in transforms via sources.{name}
- Example: userData, productInfo
Type
- Required: Yes
- Options: HTTP (currently the only supported type)
- Description: The type of data source
URL
- Required: Yes (for HTTP sources)
- Description: The full URL to fetch data from
- Example: https://api.example.com/users
- Supports Variables: Yes - see Using Variables in URLs
Content Type
- Required: Yes
- Options: application/json, application/xml, text/csv
- Description: The format of data returned by the source
Authentication
Configure how to authenticate with the external source.
Type: NONE
No authentication required.
Type: BASIC
HTTP Basic Authentication
- username: Username for authentication
- password: Password for authentication
Type: HEADER_KEY
API key in request header
- keyName: Name of the header (e.g., X-API-Key)
- apiKey: The API key value
Type: QUERY_KEY
API key in query parameter
- keyName: Name of the query parameter (e.g., api_key)
- apiKey: The API key value
Type: BEARER_TOKEN
Bearer token authentication
- token: The bearer token value
Type: OIDC
OpenID Connect authentication
- tokenUrl: OAuth token endpoint URL
- clientId: OAuth client ID
- clientSecret: OAuth client secret
- scope: Requested OAuth scopes
- token (optional): Cached token (managed automatically)
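As a sketch, a full HTTP source using header-based API key authentication might be configured like this (field names follow the options described in this section; the exact layout is illustrative):
{
  name: "userApi",
  type: "HTTP",
  url: "https://api.example.com/users/${data.request.path.userId}",
  contentType: "application/json",
  authentication: {
    type: "HEADER_KEY",
    keyName: "X-API-Key",    // header the key is sent in
    apiKey: "your-api-key"   // placeholder value
  }
}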
Parse Options
Sources support the same parse options as the Request section, based on the source's content type.
Transformation Function
The transformation function is where you process incoming request data and source responses to create your output.
Function Requirements
- Parameters: Must accept exactly one parameter (typically named data)
- Return: Must return a value
- Language: JavaScript (ECMAScript 2020)
- Validation: Code is validated for correct syntax and structure
Input Data Structure
Your function receives a data object with the following structure:
{
request: {
body: {}, // Parsed request body (if any)
path: {}, // Path parameters as key-value pairs
query: {}, // Query parameters (values are arrays)
headers: {} // Request headers (values are arrays)
},
sources: {
sourceName: {}, // Data from each source by name
anotherSource: {}
},
item: {} // Present when split is enabled
}
Accessing Data from Different Formats
The way you access data in your transformation function depends on the content type of your sources. Here are examples for each supported format:
JSON Source Data
JSON sources are parsed into JavaScript objects, allowing direct property access:
Source Configuration:
- Name: devApi (this is how you'll reference it in your transform)
- URL: https://api.example.com/developer/status
- Content Type: application/json
Source Response:
{
"developer": {
"id": 42,
"name": "Ada Lovelace",
"email": "ada@recursion.dev"
},
"bugs": [
{"id": 1, "severity": "critical", "description": "Works on my machine"},
{"id": 2, "severity": "minor", "description": "Feature, not a bug"}
],
"coffeeConsumed": 9001
}
Accessing in Transform:
function(data) {
// Access the source using the name you configured: data.sources.devApi
const dev = data.sources.devApi.developer;
const bugs = data.sources.devApi.bugs;
const coffee = data.sources.devApi.coffeeConsumed;
return {
developerId: dev.id,
name: dev.name,
productivity: coffee > 9000 ? "legendary" : "mortal",
criticalBugs: bugs.filter(b => b.severity === "critical").length,
excuses: bugs.map(b => b.description),
statusMessage: `${dev.name} has ${bugs.length} features to document`
};
}
XML Source Data
XML sources are converted to JavaScript objects. Element attributes use the configured prefix (default @_), and text content uses the text node name:
Source Configuration:
- Name: standupApi (this is how you'll reference it in your transform)
- URL: https://api.example.com/standup/notes
- Content Type: application/xml
Source Response:
<standup_notes>
<developer id="123" team="backend" timezone="GMT-8">
<name>Grace Hopper</name>
<yesterday>Debugged the debugger</yesterday>
<blockers status="resolved">Found actual bug (it was a moth)</blockers>
</developer>
<developer id="456" team="frontend" timezone="GMT+1">
<name>Linus Torvalds</name>
<yesterday>Pushed directly to main</yesterday>
<blockers status="ongoing">Meetings</blockers>
</developer>
</standup_notes>
Parsed Structure (with default options):
{
standup_notes: {
developer: [
{
"@_id": "123",
"@_team": "backend",
"@_timezone": "GMT-8",
name: "Grace Hopper",
yesterday: "Debugged the debugger",
blockers: {
"@_status": "resolved",
"#text": "Found actual bug (it was a moth)"
}
},
{
"@_id": "456",
"@_team": "frontend",
"@_timezone": "GMT+1",
name: "Linus Torvalds",
yesterday: "Pushed directly to main",
blockers: {
"@_status": "ongoing",
"#text": "Meetings"
}
}
]
}
}
Accessing in Transform:
function(data) {
const devs = data.sources.standupApi.standup_notes.developer;
return devs.map(dev => ({
id: dev["@_id"],
name: dev.name,
team: dev["@_team"],
yesterday: dev.yesterday,
blocked: dev.blockers["@_status"] === "ongoing",
blocker: dev.blockers["#text"],
riskLevel: dev.yesterday.includes("main") ? "YOLO" : "safe"
}));
}
CSV Source Data
CSV sources are parsed as arrays of objects (when objectMode: true) or arrays of arrays. Column names become object keys:
Source Configuration:
- Name: devCsv (this is how you'll reference it in your transform)
- URL: https://api.example.com/developers.csv
- Content Type: text/csv
Source Response:
dev_id,name,language,tabs_or_spaces,last_commit,commit_message
101,Margaret Hamilton,Assembly,tabs,2024-01-15,Fixed moon landing bug
102,Dennis Ritchie,C,spaces,2024-02-20,Rewrite it in C
103,Guido van Rossum,Python,spaces,2024-03-10,Added more whitespace
Parsed Structure:
[
{
dev_id: "101",
name: "Margaret Hamilton",
language: "Assembly",
tabs_or_spaces: "tabs",
last_commit: "2024-01-15",
commit_message: "Fixed moon landing bug"
},
{
dev_id: "102",
name: "Dennis Ritchie",
language: "C",
tabs_or_spaces: "spaces",
last_commit: "2024-02-20",
commit_message: "Rewrite it in C"
},
{
dev_id: "103",
name: "Guido van Rossum",
language: "Python",
tabs_or_spaces: "spaces",
last_commit: "2024-03-10",
commit_message: "Added more whitespace"
}
]
Accessing in Transform:
function(data) {
// Access the source using the name you configured: data.sources.devCsv
const devs = data.sources.devCsv;
// Helper function to calculate days since commit
function calculateDays(dateStr) {
const committed = new Date(dateStr);
const now = new Date();
return Math.floor((now - committed) / (1000 * 60 * 60 * 24));
}
// The eternal debate
const spacesCount = devs.filter(d => d.tabs_or_spaces === 'spaces').length;
return devs.map(dev => ({
developerId: parseInt(dev.dev_id),
name: dev.name,
primaryLanguage: dev.language,
preference: dev.tabs_or_spaces,
isCorrect: dev.tabs_or_spaces === 'spaces', // We all know the truth
daysSinceCommit: calculateDays(dev.last_commit),
lastMessage: dev.commit_message,
needsCoffee: calculateDays(dev.last_commit) > 7
}));
}
Function Examples
Simple Pass-Through
function(data) {
return data.sources.myApi;
}
Combining Multiple Sources
function(data) {
return {
user: data.sources.userApi,
orders: data.sources.orderApi,
timestamp: Date.now()
};
}
Arrow Function
(data) => ({
id: data.request.path.id,
name: data.sources.users.name,
email: data.sources.users.email
})
Complex Transformation
function(data) {
const users = data.sources.userList;
const filter = data.request.query.status?.[0] || 'active';
return users
.filter(user => user.status === filter)
.map(user => ({
id: user.id,
fullName: `${user.firstName} ${user.lastName}`,
email: user.email
}));
}
Response Configuration
The Response section defines how your transformed data will be returned and optionally forwarded to other destinations.
Code
- Required: Yes
- Options: 200, 201, 204
- Description: HTTP status code for successful responses
Content Type
- Required: Yes
- Options: application/json, application/xml, text/csv
- Description: Format for the response body
Split
- Type: Boolean
- Default: false
- Description: When true, if your transformation returns an array, each item will be sent separately to destinations (and optionally to the caller)
- Use Case: Send individual records to destinations while returning the full array to the caller
Write Options
Write options vary based on the selected content type:
JSON Write Options
- includeFields: Comma-separated list of fields to include in output
- indent: Number of spaces for JSON formatting (omit for compact JSON)
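For instance, assuming includeFields is set to id,name and indent to 2, a transformed object would be serialized roughly as follows (values are illustrative):
// Transformation output:
{ id: 7, name: "Ada", internalNotes: "do not expose" }
// Response body with includeFields: "id,name" and indent: 2:
{
  "id": 7,
  "name": "Ada"
}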
XML Write Options
Structure and Formatting
- format: Enables pretty-printing of the XML output with proper indentation and line breaks. When false, produces compact single-line XML. Set to true for human-readable output.
- indentBy: Defines the string used for each indentation level when formatting is enabled. Common values include two spaces ("  "), four spaces, or a tab character ("\t"). Only applies when format is true.
- arrayNodeName: Specifies the tag name to use when building XML from an array at the root level. For example, setting this to "item" wraps each array element in <item> tags.
- preserveOrder: Maintains the exact sequence of elements as they exist in the JavaScript object. Essential when you need to recreate XML from a parsed structure without reordering elements. Should match the parser setting if round-tripping data.
Attribute Control
- ignoreAttributes: When true, skips all attributes during XML generation. When false, includes attributes in the output. Can also accept an array of attribute names, regular expressions, or a callback function to selectively exclude specific attributes.
- attributeNamePrefix: String that identifies attribute properties in the JavaScript object. For example, with prefix "@_", the property @_id becomes the attribute id in XML. Must match the parser setting for consistent round-tripping.
- attributesGroupName: Property name that contains all attributes for an element grouped together. Helps organize attributes separately from child elements in the data structure. Not applicable when preserveOrder is enabled.
- suppressBooleanAttributes: When enabled, attributes with boolean true values are written without values (e.g., <input checked> instead of <input checked="true">). Useful for HTML-style boolean attributes.
Text Content
- textNodeName: Identifies which property in the JavaScript object contains the text value of an element. Typically "#text". Necessary when elements have both text content and attributes or child elements.
Special Content Handling
- cdataPropName: Property name that marks content to be wrapped in CDATA sections. For example, setting this to "rawContent" will output <![CDATA[...]]> for properties with that name. Useful for preserving content with special characters or markup.
- commentPropName: Property name identifying XML comments in the data structure. Content under this property becomes <!-- comment --> in the output. Best used with preserveOrder: true to maintain comment positioning.
Entity Processing
- processEntities: Controls conversion of special characters to XML entities during output. When true (default), characters like <, >, and & are encoded as &lt;, &gt;, and &amp;. Disable for better performance if your data doesn't require entity encoding.
Empty and Self-Closing Tags
- suppressEmptyNode: When enabled, elements with no content are rendered as self-closing tags (e.g., <tag/> instead of <tag></tag>). Useful for cleaner XML output when dealing with optional or nullable fields.
- unpairedTags: Comma-separated list of tag names that should be rendered as unpaired/self-closing tags without requiring empty content. Common for HTML-style tags like br, hr, img, and input.
- suppressUnpairedNode: Controls whether unpaired tags include a closing slash. When true, renders as <br>. When false, renders as <br/>. Works in conjunction with unpairedTags.
Advanced Options
- oneListGroup: When enabled, wraps array items under a parent container tag. Useful when you want repeated elements grouped within a single parent rather than appearing as siblings at the same level.
- stopNodes: Comma-separated list of element paths where processing should stop, preserving the content as raw text or avoiding entity conversion. Useful for elements containing code, scripts, or pre-formatted content.
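To make the interaction of these options concrete, here is a sketch of how a transformed object might be serialized (the output shape is approximate):
// Transformation output:
{
  developer: {
    "@_id": "123",
    name: "Grace Hopper",
    nickname: ""
  }
}
// Write options: format: true, indentBy: "  ", ignoreAttributes: false,
// attributeNamePrefix: "@_", suppressEmptyNode: true
// Approximate XML output:
// <developer id="123">
//   <name>Grace Hopper</name>
//   <nickname/>
// </developer>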
CSV Write Options
Field and Row Separators
- delimiter: Character used to separate columns in the output. Defaults to comma (,). Use semicolon (;) or tab (\t) for different formats. Must be a single character.
- rowDelimiter: Character sequence used to separate rows in the output. Defaults to newline (\n). Change to \r\n for Windows-style line endings or other custom row separators.
- includeEndRowDelimiter: When enabled, adds a row delimiter after the final row of data. Defaults to false. Set to true if you need a trailing newline at the end of the CSV file.
Quote and Escape Control
- quote: Character used to wrap field values that contain special characters like delimiters or line breaks. Defaults to double quote ("). Set to empty string to disable quoting entirely (use with caution as this can break CSV parsing if fields contain delimiters).
- escape: Character used to escape quote characters within quoted fields. Defaults to double quote ("), so "He said ""hello""" outputs as a properly escaped quoted field. Must coordinate with the quote character.
- quoteColumns: Controls which data columns get quoted. Set to true to quote all columns. Provide a boolean array to quote specific column positions, or use an object mapping column names to boolean values for selective quoting. When unspecified, only columns requiring quotes (those with delimiters, quotes, or line breaks) are quoted.
- quoteHeaders: Determines which header values are quoted. Behaves like quoteColumns but applies only to the header row. When not specified, inherits the quoteColumns setting. Useful when you want different quoting behavior for headers versus data.
Header Configuration
- headers: Controls header row generation. Set to true to auto-detect headers from the first data row (object keys become headers). Provide a string array to specify custom header names. Set to false or leave unset if no headers are needed. Headers must match object property names in your data for proper column mapping.
- writeHeaders: When false, suppresses the header row entirely. Defaults to true. Useful for appending data to existing CSV files that already have headers.
- alwaysWriteHeaders: Forces header row output even when no data rows are written. Requires headers to be explicitly defined as an array. Useful for creating empty CSV templates with predefined columns.
- forceHeaders: Comma-separated string of header names to force in the output, overriding auto-detected headers. Use when you need specific headers regardless of the data structure.
Column-Level Quoting
- quotedColumns: Comma-separated string listing specific column names that should always be quoted. Provides fine-grained control over which columns need quoting beyond the automatic logic.
- quotedHeaders: Comma-separated string of header names that should be quoted. Allows explicit control over header quoting independent from data quoting rules.
Special Options
- writeBOM: When enabled, writes a UTF-8 Byte Order Mark (BOM) as the first character of the output. Set to true if your CSV will be opened in applications like Excel that rely on BOM for proper character encoding detection. Defaults to false.
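For example, a transform returning an array of objects with headers enabled would serialize roughly as shown below; note that only the field containing the delimiter is quoted automatically:
// Transformation output:
[
  { id: 7, name: "Ada, Countess of Lovelace" },
  { id: 8, name: "Alan Turing" }
]
// Write options: headers: true (delimiter defaults to ",")
// Approximate response body:
// id,name
// 7,"Ada, Countess of Lovelace"
// 8,Alan Turing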
Destinations
Define zero or more destinations to forward your data to.
URL
- Required: Yes
- Description: Target endpoint URL
- Example: https://webhook.example.com/data
- Supports Variables: Yes - see Using Variables in URLs
Content Type
- Required: Yes
- Options: application/json, application/xml, text/csv
- Description: Format for data sent to this destination
Method
- Required: Yes
- Options: PUT, PATCH, POST, DELETE
- Description: HTTP method to use when sending to destination
Split
- Type: Boolean
- Default: false
- Description: When true, send array items individually to this destination
- Note: Independent from response-level split setting
Authentication
Same authentication options as Sources (see Sources > Authentication section)
Write Options
Same write options as Response, specific to this destination's content type
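Putting these fields together, a destination that forwards each record individually might be configured along these lines (the object layout is illustrative; field names follow the options above):
{
  url: "https://webhook.example.com/notify/${data.request.path.tenantId}/events",
  contentType: "application/json",
  method: "POST",
  split: true,                  // send each array item as its own request
  authentication: {
    type: "BEARER_TOKEN",
    token: "your-token"         // placeholder value
  }
}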
Common Patterns
Using Variables in URLs
Both Source URLs and Destination URLs support template variables, allowing you to dynamically construct URLs based on incoming request data and even data from other sources.
Template Syntax
Variables are inserted using JavaScript template literal syntax: ${expression}
Available Data in URLs
You can access the same data object available in transformation functions:
{
request: {
body: {}, // Parsed request body
path: {}, // Path parameters as key-value pairs
query: {}, // Query parameters (values are arrays)
headers: {} // Request headers (values are arrays)
},
sources: {
sourceName: {} // Data from previously executed sources
}
}
Note: Sources are automatically executed in the correct order based on their dependencies. If one source references another in its URL, the system ensures the referenced source is fetched first.
Examples
Using Path Parameters
Request path: /api/users/{userId}/orders/{orderId}
Source URL:
https://api.example.com/users/${data.request.path.userId}
When a request comes to /api/users/123/orders/456, this resolves to:
https://api.example.com/users/123
Using Query Parameters
Request: /api/search?status=active&limit=10
Source URL:
https://api.example.com/items?status=${data.request.query.status[0]}&max=${data.request.query.limit[0]}
Remember that query parameter values are arrays, so use [0] to get the first value.
Using Request Headers
Source URL:
https://api.example.com/data?tenant=${data.request.headers['x-tenant-id'][0]}
Headers are also arrays, so access the first value with [0].
Using Request Body Fields
Request with JSON body:
{
"customerId": "C-12345",
"orderType": "express"
}
Source URL:
https://api.example.com/customers/${data.request.body.customerId}/orders?type=${data.request.body.orderType}
Using Data from Other Sources
You can reference data from other sources to create powerful API call chains. The system automatically determines the correct execution order based on dependencies.
Imagine you have two sources configured:
- Source 1 - Name: userApi, URL: https://api.example.com/users/${data.request.path.userId}
- Source 2 - Name: ordersApi, URL: https://api.example.com/accounts/${data.sources.userApi.accountId}/orders
The second source references the first source in its URL. The system detects this dependency and ensures userApi executes before ordersApi. If userApi returns:
{
"userId": 123,
"accountId": "ACC-789",
"name": "Ada Lovelace"
}
Then the ordersApi URL becomes:
https://api.example.com/accounts/ACC-789/orders
Complex Expressions
You can use JavaScript expressions within the template variables:
https://api.example.com/items?limit=${data.request.query.limit ? data.request.query.limit[0] : '10'}
https://api.example.com/users/${data.request.path.userId.toUpperCase()}
Destination URLs
Destination URLs work the same way but have access to all sources since they execute after the transformation:
https://webhook.example.com/notify/${data.request.path.tenantId}/events
Best Practices
- URL Encode Values: If your variables might contain special characters, consider handling encoding in your transformation function and passing safe values (see the sketch after this list)
- Validate Required Fields: Ensure path parameters and required query parameters are present before they're used in URLs
- Use Fallbacks: Provide default values for optional parameters: ${data.request.query.page?.[0] || '1'}
- Source Dependencies: The system automatically resolves source dependencies based on URL references, so you can define sources in any order
- Keep URLs Readable: For complex URL construction, consider building the URL in your transformation function instead
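Since template variables accept JavaScript expressions, one way to follow the encoding advice above is to call encodeURIComponent inline, assuming standard JavaScript globals are available in the template context (if they are not, pre-encode values in your transformation function instead):
https://api.example.com/search?q=${encodeURIComponent(data.request.query.q[0])}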
Using Path Parameters
// Request path: /users/{userId}/orders/{orderId}
function(data) {
const userId = data.request.path.userId;
const orderId = data.request.path.orderId;
return {
user: userId,
order: orderId,
details: data.sources.orderApi
};
}
Accessing Query Parameters
// Query parameters come as arrays
function(data) {
const limit = parseInt(data.request.query.limit?.[0] || '10');
const page = parseInt(data.request.query.page?.[0] || '1');
return data.sources.items.slice((page - 1) * limit, page * limit);
}
Working with Split Mode
// Transformation returns array
function(data) {
return data.sources.users.map(user => ({
id: user.id,
email: user.email
}));
}
// With split=true on destination, each user object
// is sent as a separate HTTP request
Error Handling
Errors in transformation functions or source fetching will result in error responses. Ensure your transformation handles missing or unexpected data gracefully:
function(data) {
const users = data.sources.userApi?.users || [];
return users.map(user => ({
id: user.id,
name: user.name || 'Unknown',
email: user.email || 'no-email@example.com'
}));
}