Build Custom Chatbots with Function Calling and Tools
Explore building custom chatbots with OpenAI function calling and TypeScript. Learn how to integrate external APIs and create intelligent conversational experiences.
I've been diving deep into building custom AI chatbots lately, and one thing that's really changed the game is function calling. It lets you extend your chatbot's capabilities by connecting it to external tools and APIs. Forget just answering simple questions; now your bot can book flights, send emails, or even control smart home devices. It's a whole new level of interaction.
What is Function Calling?
Function calling, in the context of AI chatbots, allows the language model to request the execution of external functions based on user input. Think of it as the chatbot saying, "Hey, I need to use this tool to fulfill the user's request." The model decides which function to call and with what arguments. It’s not just about regurgitating information; it’s about acting on it.
The OpenAI Approach
OpenAI's API offers a robust function calling feature. You define your functions (name, description, parameters) and provide them to the model. The model analyzes the user's input and, if appropriate, returns a JSON object describing which function should be called and with what arguments. It's up to your code to then execute the function and send the results back to the model. This is a crucial step – the model doesn't magically execute the function itself; you're in control.
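Concretely, the assistant message you get back looks something like this (the field names follow OpenAI's legacy function-calling response format; the argument values here are illustrative):

```typescript
// Sketch of the message the model returns when it decides to call a function.
const exampleMessage = {
  role: "assistant",
  content: null, // No text reply; the model is requesting a function call instead
  function_call: {
    name: "get_current_weather",
    // Note: arguments arrive as a JSON *string*, not a parsed object
    arguments: '{"location": "San Francisco, CA", "unit": "fahrenheit"}',
  },
};

// Your code parses the arguments and runs the matching function itself.
const args = JSON.parse(exampleMessage.function_call.arguments);
```

The key detail is that `arguments` is a string you must parse; the model never executes anything on its own.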
Setting Up Your Environment
Before we dive into code, let's set up our development environment. I'll be using TypeScript and Node.js for this example, but the concepts apply to other languages as well.
Installing Dependencies
First, make sure you have Node.js and npm (or yarn) installed. Then, install the OpenAI library:
```shell
npm install openai
```
You'll also need an OpenAI API key. You can get one from the OpenAI website. Remember to keep your API key secure and don't commit it to your repository!
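A common pattern is to read the key from an environment variable and fail fast if it's missing; here's a minimal sketch (the helper name is my own, not part of any SDK):

```typescript
// Read a required environment variable, throwing early if it is unset,
// so a missing key fails at startup rather than mid-conversation.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set`);
  }
  return value;
}

// Usage: const apiKey = requireEnv("OPENAI_API_KEY");
```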
TypeScript Configuration
If you're using TypeScript (and I highly recommend you do!), make sure you have a `tsconfig.json` file configured. Here's a basic example:
```json
{
  "compilerOptions": {
    "target": "es2020",
    "module": "commonjs",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true
  }
}
```
Defining Functions
Let's start by defining a simple function that gets the current weather for a given location. This is the function the AI will potentially call.
Weather Function Schema
We need to define a schema that tells OpenAI about our function. This schema includes the function's name, a description, and the parameters it accepts. This is how the AI knows what the function does and how to use it.
```typescript
const weatherFunctionSchema = {
  name: "get_current_weather",
  description: "Get the current weather in a given location",
  parameters: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "The city and state, e.g. San Francisco, CA",
      },
      unit: {
        type: "string",
        enum: ["celsius", "fahrenheit"],
        description: "The temperature unit to use. Infer this from the user.",
      },
    },
    required: ["location"],
  },
};
```
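The schema above is for the model's benefit; on the TypeScript side it's worth mirroring it with a type and a small runtime guard, since the arguments the model produces arrive as untrusted JSON. This guard is my own sketch, not part of the OpenAI SDK:

```typescript
// TypeScript mirror of the weather function's parameters.
interface WeatherArgs {
  location: string;
  unit?: "celsius" | "fahrenheit";
}

// Runtime type guard: verifies parsed JSON actually matches WeatherArgs
// before we hand it to the real function.
function isWeatherArgs(value: unknown): value is WeatherArgs {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  if (typeof v.location !== "string") return false;
  if (v.unit !== undefined && v.unit !== "celsius" && v.unit !== "fahrenheit") {
    return false;
  }
  return true;
}
```

Models occasionally produce malformed or unexpected arguments, so checking before executing saves you from confusing runtime errors.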
Implementing the Function
Now, let's implement the actual function. For simplicity, I'll use a mock API response. In a real application, you would call a weather API.
```typescript
async function getCurrentWeather(location: string, unit: string = "fahrenheit") {
  // In a real application, you would call a weather API here
  console.log("Calling weather API for", location, unit);
  await new Promise((resolve) => setTimeout(resolve, 500)); // Simulate API latency
  return {
    location,
    temperature: 72,
    unit,
    description: "Sunny with a chance of memes",
  };
}
```
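For reference, here's a sketch of what the real version might look like. The endpoint `api.example-weather.com` is a placeholder, not a real service; the point is the URL construction and HTTP error handling you'd want around any external call:

```typescript
// Build the request URL, letting URLSearchParams handle encoding of
// spaces, commas, etc. in the location string.
function buildWeatherUrl(location: string, unit: string = "fahrenheit"): string {
  const params = new URLSearchParams({ location, unit });
  return `https://api.example-weather.com/v1/current?${params.toString()}`;
}

// Hypothetical live version of getCurrentWeather.
async function getCurrentWeatherLive(location: string, unit: string = "fahrenheit") {
  const response = await fetch(buildWeatherUrl(location, unit));
  if (!response.ok) {
    // Surface HTTP failures instead of feeding garbage back to the model
    throw new Error(`Weather API returned ${response.status}`);
  }
  return response.json();
}
```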
Integrating with OpenAI
Now comes the fun part: connecting everything to OpenAI. We'll send the user's message to OpenAI, along with our function schema. OpenAI will then decide whether to call the function and, if so, with what arguments.
Calling the OpenAI API
Here's the code for calling the OpenAI API:
```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // Loaded from the environment; never hard-code it
});

async function runConversation(userMessage: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // Or any other model that supports function calling
    messages: [{ role: "user", content: userMessage }],
    functions: [weatherFunctionSchema],
    function_call: "auto", // Let OpenAI decide when to use the function
  });

  const message = completion.choices[0].message;

  if (message?.function_call) {
    // The model wants to call a function
    const functionName = message.function_call.name;
    const functionArgs = JSON.parse(message.function_call.arguments);

    let functionResponse;
    if (functionName === "get_current_weather") {
      functionResponse = await getCurrentWeather(
        functionArgs.location,
        functionArgs.unit
      );
    } else {
      functionResponse = { error: "Unknown function" };
    }

    // Send the function response back to the model
    const secondCompletion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "user", content: userMessage },
        message,
        {
          role: "function",
          name: functionName,
          content: JSON.stringify(functionResponse),
        },
      ],
    });
    return secondCompletion.choices[0].message?.content;
  } else {
    // The model doesn't want to call a function, so just return its text
    return message?.content;
  }
}
```
Important Gotcha: Notice the double API call. The first call tells us what function to call. The second call sends the result of that function back to the model, allowing it to generate a final response to the user. This two-step process is crucial for function calling to work correctly.
Example Usage
Here's how you might use the `runConversation` function:
```typescript
async function main() {
  const userMessage = "What's the weather like in San Francisco?";
  const response = await runConversation(userMessage);
  console.log(response);
}

main().catch(console.error);
```
Tool Selection and Orchestration
In more complex scenarios, you might have multiple tools available. The challenge then becomes: how does the model choose the right tool for the job? This is where good descriptions and clear parameter definitions become essential. The better you describe your tools, the more accurately the model can choose the appropriate one. I've found that providing example usages within the tool description can significantly improve the model's accuracy.
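As a sketch, here's what a second, hypothetical tool schema might look like alongside the weather one. Note how the description spells out exactly when the tool applies, including a short example phrasing:

```typescript
// Hypothetical second tool. The description does the heavy lifting:
// the model picks a tool almost entirely based on these strings.
const emailFunctionSchema = {
  name: "send_email",
  description:
    "Send a plain-text email. Use only when the user explicitly asks to " +
    "email someone, e.g. 'email Bob today's weather report'.",
  parameters: {
    type: "object",
    properties: {
      to: { type: "string", description: "Recipient email address" },
      subject: { type: "string", description: "Email subject line" },
      body: { type: "string", description: "Plain-text email body" },
    },
    required: ["to", "subject", "body"],
  },
};

// Both schemas would then be passed together:
//   functions: [weatherFunctionSchema, emailFunctionSchema]
```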
Chaining Tools
Sometimes, a single tool isn't enough. You might need to chain multiple tools together to fulfill a user's request. For example, a user might ask: "Book a flight to a city with a temperature between 70 and 80 degrees." This would require the model to first use the weather tool to find cities that meet the temperature criteria and then use a flight booking tool to book a flight to one of those cities. Implementing this requires careful planning and a well-defined architecture.
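One way to structure this, sketched below with hypothetical stub implementations, is a registry mapping function names to handlers. An orchestration loop can then keep calling the model, dispatching each requested function through the registry and appending its result to the message history, until the model returns plain text:

```typescript
// A handler takes parsed arguments and returns a JSON-serializable result.
type ToolFn = (args: Record<string, any>) => Promise<unknown>;

// Registry of available tools. These bodies are stand-in stubs; real
// implementations would call actual APIs.
const toolRegistry: Record<string, ToolFn> = {
  get_current_weather: async (args) => ({
    location: args.location,
    temperature: 75,
  }),
  book_flight: async (args) => ({
    confirmation: "ABC123", // placeholder value
    destination: args.destination,
  }),
};

// Look up and run the requested tool, returning an error object for
// unknown names instead of throwing, so the model can recover.
async function dispatchTool(name: string, args: Record<string, any>) {
  const fn = toolRegistry[name];
  if (!fn) {
    return { error: `Unknown function: ${name}` };
  }
  return fn(args);
}
```

With this in place, the two-call pattern from earlier generalizes to a loop: dispatch, append the result as a `function` message, and call the model again until no `function_call` comes back.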
Security Considerations
Integrating external tools introduces security risks. You need to be very careful about what tools you expose and how you authenticate and authorize access to them. Never expose sensitive data or functions directly to the model. Always validate user input and sanitize any data returned from external tools. Consider using a dedicated service account for each tool to limit its access to only the resources it needs.
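One simple guardrail is an explicit allowlist checked before anything executes; function names outside it are rejected outright. A minimal sketch (the names here are illustrative):

```typescript
// Only functions in this set may ever be executed, regardless of what
// the model asks for.
const ALLOWED_FUNCTIONS = new Set(["get_current_weather"]);

function assertAllowed(name: string): void {
  if (!ALLOWED_FUNCTIONS.has(name)) {
    throw new Error(`Function not allowed: ${name}`);
  }
}
```

Calling `assertAllowed(functionName)` before dispatch means a hallucinated or malicious function name fails closed instead of reaching your execution path.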
Rate Limiting and Error Handling
Don't forget about rate limiting and error handling. APIs often have rate limits to prevent abuse. Your code needs to handle these limits gracefully, perhaps by implementing exponential backoff. Similarly, APIs can fail for various reasons. Your code needs to be resilient and handle these failures appropriately. I've been bitten by this before – forgetting to handle API errors can lead to a very poor user experience.
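A minimal exponential-backoff wrapper looks something like this; the retry count and base delay are arbitrary choices to tune for whichever API you're calling:

```typescript
// Retry an async operation with exponential backoff: 500ms, 1s, 2s, ...
// Rethrows the last error once maxRetries is exhausted.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries: number = 3,
  baseDelayMs: number = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage: const completion = await withRetry(() => openai.chat.completions.create(...));
```

In production you'd likely also inspect the error (only retrying 429s and transient 5xxs) and honor any `Retry-After` header, but this captures the core pattern.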
Conclusion
Building custom chatbots with function calling opens up a world of possibilities. It allows you to create truly intelligent and helpful conversational experiences. However, it also introduces new challenges, such as tool selection, security, and error handling. My key takeaways are: define your functions clearly, handle API errors gracefully, and always prioritize security. With careful planning and execution, you can build chatbots that are not only smart but also reliable and secure. This is where AI gets really interesting, moving beyond just text generation to actual task automation.