Overview
Fabi.ai provides an MCP (Model Context Protocol) server that allows you to integrate Fabi's AI data analysis capabilities directly into your development workflow or your client/interface of choice. The MCP server enables AI assistants and development tools to interact with Fabi.ai programmatically: creating threads, submitting chat requests, and saving Smartbooks. The Fabi MCP server is the single fastest way to give an AI assistant the ability to chat directly with your data.

Authentication
The Fabi MCP server supports two authentication methods.

Token authentication
Token authentication is the recommended method for programmatic access. You can generate MCP tokens from your Fabi.ai settings:
- Navigate to https://app.fabi.ai/settings/mcp
- Generate a new MCP token
- Copy the token and store it securely; it will only be shown once
OAuth authentication
OAuth authentication is also supported for user-based integrations. Follow the OAuth flow to authenticate your application, using the following server URL:

https://app.fabi.ai/mcp
Configuration
To connect to the Fabi MCP server, add it to your MCP client settings, replacing `<your-fabi-mcp-token>` with the token you generated from the settings page.
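As a sketch, a remote-server entry in a typical MCP client configuration might look like the following. The exact key names vary by client, `fabi` is just an arbitrary label, and the `Authorization: Bearer` header scheme is an assumption; check your client's documentation for its precise format:

```json
{
  "mcpServers": {
    "fabi": {
      "url": "https://app.fabi.ai/mcp",
      "headers": {
        "Authorization": "Bearer <your-fabi-mcp-token>"
      }
    }
  }
}
```

After editing the configuration, restart your client so it picks up the new server.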
Available tools
The Fabi MCP server provides the following tools for interacting with Fabi.ai.

Create a thread
Creates a new data analysis session (backed by a Smartbook) for SQL/Python queries. Used to start analyzing database tables, running queries, or exploring data. The thread will persist your analysis history and generated code.

Parameters:
- `title` (optional): Title for the analysis session
Submit chat
Delegate a data analysis task to the Fabi autonomous agent. Takes a description of what you want in natural language; Fabi will independently handle all complexity: discovering data sources, using RAG to find relevant table/column schemas and semantics, generating SQL/Python code, validating queries with dry runs, executing them, and formatting results.

Parameters:
- `thread_uuid`: UUID of the thread from `create_thread`
- `message`: Natural language data analysis request (e.g., "show top 10 customers by revenue")
- `context_cell_uuids` (optional): Previous cell UUIDs to reference in this analysis
- `context_dataframes` (optional): Variable names of dataframes to use as context
Get chat result
Poll for the result of a long-running chat analysis request. Used by the agent when `submit_chat` returns early due to timeout (after 45 seconds). The chat continues processing in the background; call this periodically to check if results are ready.
Parameters:
- `request_uuid`: UUID of the chat request (returned by `submit_chat`)
Save to Smartbook
Save AI-generated cells from chat history to the Smartbook for dashboard publishing or collaboration. Used by the agent to persist, publish, or share the analysis as a dashboard. This accepts pending chat results and converts them into executable Smartbook cells. Regular analyses are already viewable in chat history and don't need saving.

Parameters:
- `thread_uuid`: UUID of the thread to save
Usage examples
Basic workflow
- Create a thread to start a new analysis session
- Submit chat requests with natural language queries
- Get chat results to retrieve analysis outputs
- Save to Smartbook (optional) to persist results for sharing
Example: Analyzing sales data
Troubleshooting
- First try using the AI Analyst Agent in the Fabi UI to ensure it works as expected. If it does, the problem likely lies in your MCP client configuration rather than in Fabi itself
- If using token-based authentication, make sure you’re using a valid token
- If using a local agent, restart the agent after configuring the tools
Best practices
Token security
- Store MCP tokens securely and never commit them to version control
- Use environment variables or secure secret management
- Rotate tokens regularly for enhanced security
- Each token should only be used by one application or user
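For example, a client script can read the token from the environment rather than hard-coding it. `FABI_MCP_TOKEN` is a hypothetical variable name, and the bearer-token header scheme is an assumption; confirm the exact format your client expects:

```python
import os

def auth_headers() -> dict:
    # Read the MCP token from the environment; never hard-code it in source.
    token = os.environ.get("FABI_MCP_TOKEN")
    if not token:
        raise RuntimeError("FABI_MCP_TOKEN is not set")
    # Assumed bearer-token scheme; check your MCP client's documentation.
    return {"Authorization": f"Bearer {token}"}
```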
Error handling
- Implement retry logic for `get_chat_result` when status is "processing"
- Handle timeout scenarios gracefully
- Validate thread UUIDs before making subsequent calls
Performance
- Use `context_cell_uuids` and `context_dataframes` to build on previous analyses
- Only call `save_to_smartbook` when you need to persist results
- Batch related queries in the same thread for better context