
Build with LLMs

Accelerate your Airwallex API integration with AI-assisted development tools

Building with large language models (LLMs) is an approach to software development that uses AI coding assistants to accelerate your Airwallex API integrations. Assistants equipped with the right tools can help you find information faster, write more accurate code, and test integrations with less manual effort. Airwallex provides tools that enable these capabilities, including the Developer Model Context Protocol (MCP) server.

Developer MCP server

The Airwallex Developer Model Context Protocol (MCP) server connects your coding assistant to Airwallex documentation, API references, and sandbox testing tools. The server is distributed as an npm package (@airwallex/developer-mcp) and works with popular coding assistants including Cursor, Claude Code, Gemini CLI, and OpenAI Codex.

For installation instructions, configuration examples, and usage details, see Developer MCP server.
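
If you want to exercise the server outside a coding assistant, you can connect to it from a script with the MCP TypeScript SDK. The sketch below is illustrative rather than official Airwallex tooling: it assumes the server speaks MCP over stdio when launched with npx, and it uses the @modelcontextprotocol/sdk client API.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Launch the Developer MCP server as a child process speaking MCP over stdio.
    const transport = new StdioClientTransport({
      command: "npx",
      args: ["-y", "@airwallex/developer-mcp"],
    });

    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    // Discover the tools the server advertises (e.g. search_public_docs).
    const { tools } = await client.listTools();
    console.log(tools.map((tool) => tool.name));

    await client.close();

This is essentially what a coding assistant's MCP client does for you at startup: spawn the server, negotiate capabilities, and register the advertised tools with the model.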

How LLM-assisted development with the Developer MCP server works

LLM-assisted development combines three key parts: a coding assistant, the MCP layer, and Airwallex developer tools.

Coding assistants

Coding assistants like Cursor, Claude Code, and other MCP-compatible tools can generate code and answer technical questions. When these assistants need Airwallex-specific context—such as current API documentation, integration patterns, or sandbox testing capabilities—they invoke the tools provided by the Airwallex Developer MCP server.

Model Context Protocol

MCP is an open standard that enables AI assistants to interact with external systems and data sources. It defines how coding assistants discover and invoke tools, ensuring consistent behavior across different AI models and platforms.

The protocol works through a client-server architecture:

Component    Description
MCP client   Bundled with your coding assistant; discovers available tools and facilitates their invocation
MCP server   Runs on your device or remotely, providing annotated tools and resources to the AI model
MCP tools    Functions that AI models can invoke to perform read or write actions in external systems

When an AI model needs additional information or capabilities, the MCP client handles communication between the model and server, invoking the appropriate tools and returning results.
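
As a concrete sketch of that round trip (continuing the hypothetical client above), invoking a tool is a single request-response exchange. The tool name search_public_docs comes from the server's own listing (see the table below); the argument shape is an assumption for illustration, since the real schema is reported by the server at discovery time.

    // The MCP client forwards the model's request to the server and
    // hands the tool's output back to the model as context.
    const result = await client.callTool({
      name: "search_public_docs",
      arguments: { query: "how do I create a transfer?" },
    });
    console.log(result.content);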

Airwallex tools

The Airwallex Developer MCP server provides the following categories of tools:

Documentation search
  Capabilities: Provides coding agents with relevant documentation (product docs, API references, SDK documentation) to help answer questions and write integration code for the Airwallex APIs.
  Available tools: search_public_docs

Sandbox operations
  Capabilities: Enables creation of test resources and simulation of transactions across billing, transfers, treasury, and payment products.
  Available tools: create_billing_checkout, create_billing_price, create_billing_product, list_billing_prices, list_billing_products, list_transfers, create_transfer, list_beneficiaries, simulate_transfer_update, get_fx_quote, list_global_accounts, get_account_balances, simulate_global_account_deposit, list_payment_links, create_payment_link
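
For example, a prompt like "create a test payment link for 25 USD" could lead your assistant to invoke create_payment_link. The sketch below makes the same call directly through the hypothetical SDK client from earlier; the argument fields are illustrative assumptions, not the tool's documented schema.

    // Create a test payment link in the Sandbox environment.
    // Assumption: fields loosely mirror the Payment Links API; the actual
    // input schema is advertised by the server via listTools().
    const paymentLink = await client.callTool({
      name: "create_payment_link",
      arguments: {
        amount: 25,
        currency: "USD",
        title: "Test payment link",
      },
    });
    console.log(paymentLink.content);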

Benefits

LLM-assisted development addresses common challenges in API integration:

  • Faster information retrieval: Instead of searching documentation websites, you receive relevant information directly in your coding environment, with access to your codebase for context.
  • Reduced hallucination: AI models often lack up-to-date knowledge of specific APIs and may generate incorrect code. By providing current documentation and examples through MCP tools, the Airwallex Developer MCP server can reduce inaccurate responses.
  • Workflow automation: You can perform common tasks like creating test resources, simulating actions, and testing integrations using natural language prompts, eliminating context switching between tools.
  • Improved developer experience: Contextual assistance delivered directly in your coding assistant shortens the path from question to working code, reducing overall integration time.

Limitations

LLM-assisted development has inherent limitations:

  • Model dependency: The quality of responses depends on the AI model your coding assistant uses. Different models produce varying results, and no model is perfect.
  • Sandbox only: Direct API testing tools work exclusively in Sandbox environments. Production operations require proper integration through direct API calls.
  • Hallucination reduction, not elimination: While providing current documentation reduces incorrect responses, AI models can still generate inaccurate code or responses. Generated code must always be reviewed, validated, and tested before use in production environments.
