AI/ML
Tool Use
How AI agents interact with external systems
Definition
Tool use is the ability of an AI agent to call external APIs, query databases, execute code, and interact with web services by generating structured function calls. 1Raft builds tool-use architectures with role-based access controls and validated parameter schemas for production safety.
How it works
An LLM on its own can only generate text. Tool use is what turns it into an agent that can actually do things. When an agent has access to tools, it can look up a customer record, run a calculation, send an email, or update a database - then use the result to decide its next step. This is the bridge between language understanding and real-world action.
Tool use works through a structured protocol. The LLM receives a list of available tools with their parameter schemas. During reasoning, the model decides a tool is needed, generates a structured call (function name + parameters as JSON), and pauses. The system executes the tool call, returns the result, and the model continues reasoning with the new information. Modern frameworks support parallel tool calls, allowing agents to fetch data from multiple sources simultaneously.
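The loop above can be sketched in a few lines. This is a minimal illustration, not any particular framework's API: the tool name `get_exchange_rate`, its schema, and the stubbed handler are all hypothetical, and the model's structured call is hard-coded where a real LLM response would appear.

```python
import json

# Hypothetical tool registry: each tool pairs a parameter schema with a handler.
TOOLS = {
    "get_exchange_rate": {
        "description": "Look up the exchange rate between two currencies.",
        "parameters": {"base": "string", "quote": "string"},
        # Stubbed handler standing in for a real market-data API.
        "handler": lambda base, quote: {"rate": 1.08 if (base, quote) == ("EUR", "USD") else None},
    },
}

def tools_manifest():
    """The tool list sent to the model alongside the prompt."""
    return [
        {"name": name, "description": t["description"], "parameters": t["parameters"]}
        for name, t in TOOLS.items()
    ]

def execute_tool_call(call_json: str) -> str:
    """Dispatch a structured call (function name + JSON parameters) and
    return the result as text the model continues reasoning with."""
    call = json.loads(call_json)
    tool = TOOLS[call["name"]]
    result = tool["handler"](**call["arguments"])
    return json.dumps(result)

# Mid-reasoning, the model would emit a structured call like this, then pause:
model_output = '{"name": "get_exchange_rate", "arguments": {"base": "EUR", "quote": "USD"}}'
print(execute_tool_call(model_output))  # → {"rate": 1.08}
```

The key design point is the pause: the model never executes anything itself. It emits JSON, the system runs the handler, and the serialized result is appended to the conversation before generation resumes.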
The risk surface grows with every tool you expose. Each tool introduces potential side effects - a database write, an API call with rate limits, a payment trigger. Production tool-use systems need parameter validation against schemas before execution, role-based access controls that match the authenticated user's permissions, per-tool rate limiting, and a clear audit log of every tool call an agent makes.
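Those guardrails sit between the model's output and the tool handler. Here is one way to sketch them, with an invented `refund_payment` tool, made-up role names, and an in-memory audit log standing in for real infrastructure:

```python
import time

# Hypothetical guardrails: typed schemas, per-role allowlists, and an audit log.
SCHEMAS = {
    "refund_payment": {"payment_id": str, "amount_cents": int},
}
ROLE_PERMISSIONS = {
    "support_agent": {"refund_payment"},
    "viewer": set(),
}
AUDIT_LOG = []

def authorize_and_validate(role: str, name: str, args: dict) -> dict:
    """Reject a tool call before execution if the authenticated user's role
    lacks permission or the arguments don't match the declared schema."""
    if name not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not call {name!r}")
    schema = SCHEMAS[name]
    if set(args) != set(schema):
        raise ValueError(f"expected parameters {sorted(schema)}")
    for key, expected in schema.items():
        if not isinstance(args[key], expected):
            raise TypeError(f"{key} must be {expected.__name__}")
    # Only calls that pass every check reach the log and the handler.
    AUDIT_LOG.append({"ts": time.time(), "role": role, "tool": name, "args": args})
    return args

# A viewer attempting a refund is blocked; a support agent passes validation.
try:
    authorize_and_validate("viewer", "refund_payment", {"payment_id": "p_1", "amount_cents": 500})
except PermissionError as err:
    print(err)

authorize_and_validate("support_agent", "refund_payment", {"payment_id": "p_1", "amount_cents": 500})
print(len(AUDIT_LOG))  # one entry: only the authorized call was executed and logged
```

Note that the check runs on every call, not once per session: the model's permissions are whatever the current user's permissions are, so a compromised or confused agent can never do more than the human it is acting for.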
How 1Raft uses Tool Use
1Raft builds tool-use architectures that connect AI agents to the systems clients already use. For a fintech platform, we connected agents to payment processing APIs, KYC verification services, and internal compliance databases - with access controls that mirror the authenticated user's role. Every tool call is validated against typed parameter schemas before execution and logged for audit compliance.
Related terms
AI/ML
AI Agent
An AI agent is a software system that uses a large language model to plan, reason, and take actions autonomously. Unlike chatbots that respond to single prompts, agents execute multi-step workflows - calling APIs, querying databases, and making decisions to achieve a defined goal.
AI/ML
Agentic AI
Agentic AI refers to AI systems that can plan, make decisions, and take actions autonomously to achieve a goal. Unlike simple chatbots that respond to one prompt at a time, agentic systems break complex tasks into steps, use tools, and self-correct along the way.
AI/ML
Large Language Model (LLM)
A large language model is a neural network trained on massive text datasets to understand and generate human language. LLMs power chatbots, content generation, code assistants, and most modern AI products.
AI/ML
Prompt Engineering
Prompt engineering is the practice of crafting and optimizing the instructions given to a language model to get consistent, high-quality outputs. It is the most accessible and cost-effective way to improve AI application behavior without modifying the underlying model.
AI/ML
Model Inference
Inference is the process of using a trained AI model to generate predictions or outputs from new inputs. When you send a prompt to an LLM and get a response, that is inference. It is where compute costs, latency, and user experience are determined.
Next Step
Need help with Tool Use?
We apply this in production across industries. Tell us what you are building and we will show you how it fits.