Build with LLMs
Accelerate your Airwallex API integration with AI-assisted development tools
Building with large language models (LLMs) is an approach to software development that uses AI coding assistants to accelerate your Airwallex API integrations. AI coding assistants equipped with the right tools can help you find information faster, write more accurate code, and test integrations easily. Airwallex provides tools to enable these capabilities for your AI coding assistants, including the Developer Model Context Protocol (MCP) server.
Developer MCP server
The Airwallex Developer Model Context Protocol (MCP) server connects your coding assistant to Airwallex documentation, API references, and sandbox testing tools. The server is distributed as an npm package (@airwallex/developer-mcp) and works with popular coding assistants including Cursor, Claude Code, Gemini CLI, and OpenAI Codex.
For installation instructions, configuration examples, and usage details, see Developer MCP server.
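As a quick illustration, MCP-compatible assistants are typically configured with a JSON file that tells them how to launch the server. The snippet below sketches that common pattern, assuming the server is launched via npx; the file location, the server key name ("airwallex"), and the exact arguments vary by assistant, so follow the Developer MCP server page for authoritative instructions.

```json
{
  "mcpServers": {
    "airwallex": {
      "command": "npx",
      "args": ["-y", "@airwallex/developer-mcp"]
    }
  }
}
```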
How LLM-assisted development with the Developer MCP server works
LLM-assisted development combines three components: a coding assistant, the MCP layer, and Airwallex developer tools.
Coding assistants
Coding assistants like Cursor, Claude Code, and other MCP-compatible tools can generate code and answer technical questions. When these assistants need Airwallex-specific context—such as current API documentation, integration patterns, or sandbox testing capabilities—they invoke the tools provided by the Airwallex Developer MCP server.
Model Context Protocol
MCP is an open standard that enables AI assistants to interact with external systems and data sources. It defines how coding assistants discover and invoke tools, ensuring consistent behavior across different AI models and platforms.
The protocol works through a client-server architecture:
| Component | Description |
|---|---|
| MCP client | Bundled with your coding assistant, discovers available tools and facilitates their invocation |
| MCP server | Runs on your device or remotely, providing annotated tools and resources to the AI model |
| MCP tools | Functions that AI models can invoke to perform read or write actions in external systems |
When an AI model needs additional information or capabilities, the MCP client handles communication between the model and server, invoking the appropriate tools and returning results.
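To make this architecture concrete, here is a minimal sketch of an MCP server written with the official TypeScript SDK (@modelcontextprotocol/sdk). The server name, tool name, and schema are illustrative placeholders, not the Airwallex Developer MCP server's actual implementation:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "example-docs-server", version: "1.0.0" });

// Register a tool that MCP clients can discover and invoke on behalf of the model.
server.tool(
  "search_docs",                       // tool name advertised to clients
  "Search documentation for a query",  // description the model uses to decide when to call it
  { query: z.string() },               // input schema, declared with zod
  async ({ query }) => ({
    content: [{ type: "text", text: `Results for: ${query}` }],
  })
);

// Serve over stdio: the coding assistant spawns this process and
// exchanges JSON-RPC messages with it on stdin/stdout.
const transport = new StdioServerTransport();
await server.connect(transport);
```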
Airwallex tools
The Airwallex Developer MCP server provides the following categories of tools:
| Tool category | Capabilities | Available tools |
|---|---|---|
| Documentation search | Provides coding assistants with relevant documentation (product docs, API references, SDK documentation) to help answer questions and write integration code related to the Airwallex APIs | search_public_docs |
| Sandbox operations | Enables creation of test resources and simulation of transactions across billing, transfers, treasury, and payment products | create_billing_checkout, create_billing_price, create_billing_product, list_billing_prices, list_billing_products, list_transfers, create_transfer, list_beneficiaries, simulate_transfer_update, get_fx_quote, list_global_accounts, get_account_balances, simulate_global_account_deposit, list_payment_links, create_payment_link |
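Under the hood, your coding assistant's MCP client discovers and invokes these tools for you. The sketch below, written against the MCP TypeScript SDK, shows roughly what that looks like; the parameter name (query) passed to search_public_docs is an assumption for illustration, so check the tool's advertised schema.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the Airwallex Developer MCP server as a child process over stdio.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@airwallex/developer-mcp"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Discover the tools the server advertises...
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// ...and invoke one, as the assistant does when it needs documentation.
const result = await client.callTool({
  name: "search_public_docs",
  arguments: { query: "create a payment link" },
});
console.log(result.content);
```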
Benefits
LLM-assisted development addresses common challenges in API integration:
- Faster information retrieval: Instead of searching documentation websites, you receive relevant information directly in your coding environment, with access to your codebase for context.
- Reduced hallucination: AI models often lack up-to-date knowledge of specific APIs and may generate incorrect code. By providing current documentation and examples through MCP tools, the Airwallex Developer MCP server can reduce inaccurate responses.
- Workflow automation: You can perform common tasks like creating test resources, simulating actions, and testing integrations using natural language prompts, eliminating context switching between tools (see the example after this list).
- Improved experience: Contextual assistance delivered directly to coding assistants helps you find information faster and write better code, reducing integration time.
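For example, a prompt such as "Create a sandbox payment link for 25 USD, then list my existing payment links" could lead the assistant to invoke the create_payment_link and list_payment_links tools listed above and report the results inline, without you leaving your editor. The exact prompt wording and tool selection depend on your assistant and the model it uses.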
Limitations
LLM-assisted development has inherent limitations:
- Model dependency: The quality of responses depends on the AI model your coding assistant uses. Different models produce varying results, and no model is perfect.
- Sandbox only: Direct API testing tools work exclusively in the Sandbox environment. Production operations require proper integration through direct API calls.
- Hallucination reduction, not elimination: While providing current documentation reduces incorrect responses, AI models can still generate inaccurate code or responses. Generated code must always be reviewed, validated, and tested before use in production environments.
See also
- API reference: Explore the full Airwallex API documentation
- Sandbox environment overview: Learn about testing in the Sandbox environment