
OpenAI APIs: A Full-Stack Dev's Deep Dive

Explore practical applications of OpenAI's APIs for full-stack developers. Learn how to integrate them into your JavaScript/TypeScript projects with real-world examples and best practices.

Jay Salot

Sr. Full Stack Developer

April 15, 2026 · 7 min read


I've been playing around with the OpenAI APIs a lot lately, and honestly, it's been a wild ride. As a full-stack developer, I'm always looking for ways to make my applications smarter and more efficient. OpenAI offers a powerful toolkit for doing just that, but the sheer number of options can be overwhelming. This post shares my experiences, the gotchas I ran into, and some practical examples to help you get started. I'll focus on using these APIs from a JavaScript/TypeScript perspective, building on technologies like React, Next.js, and Node.js.

Getting Started with OpenAI

API Keys and Authentication

First things first, you'll need an API key. Head over to the OpenAI website and create an account, then generate a key from your dashboard. Store this key securely! Don't commit it to your repository; use environment variables instead. This bit me once when I accidentally pushed a key to GitHub. Luckily I caught it quickly and rotated the key.

// .env file
OPENAI_API_KEY=YOUR_OPENAI_API_KEY

Then, install the OpenAI Node.js library:

npm install openai

Here's how you can authenticate:

import OpenAI from 'openai';

// Fail fast with a clear message if the key is missing, instead of an
// opaque authentication error on the first request.
if (!process.env.OPENAI_API_KEY) {
  throw new Error('The OPENAI_API_KEY environment variable is not set');
}

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export default openai;

Make sure to handle potential errors, like a missing API key, gracefully. Failing fast at startup with a clear message beats a confusing authentication error deep inside a request handler.

Text Generation with GPT

Basic Completion Endpoint

The most basic use case is generating text with the (now legacy) completions endpoint. You give it a prompt, and it gives you back a completion. It's pretty straightforward.

import openai from './openai';

async function generateText(prompt: string) {
  try {
    const completion = await openai.completions.create({
      model: 'gpt-3.5-turbo-instruct',
      prompt: prompt,
      max_tokens: 150,
    });

    return completion.choices[0].text;
  } catch (error: any) {
    console.error(error);
    return 'Error generating text.';
  }
}

export default generateText;

You can then use this function in a React component. (In a real app you'd call it through a server-side API route, for example a Next.js route handler, so your API key never ships to the browser.) For example:

import React, { useState } from 'react';
import generateText from './api/generateText';

function MyComponent() {
  const [prompt, setPrompt] = useState('');
  const [generatedText, setGeneratedText] = useState('');

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    const text = await generateText(prompt);
    setGeneratedText(text);
  };

  return (
    <div>
      <form onSubmit={handleSubmit}>
        <input
          type="text"
          value={prompt}
          onChange={(e) => setPrompt(e.target.value)}
        />
        <button type="submit">Generate</button>
      </form>
      <p>{generatedText}</p>
    </div>
  );
}

export default MyComponent;

The model parameter is important. gpt-3.5-turbo-instruct is usually a good starting point, offering a balance between cost and performance. Older completion models like text-davinci-003 have since been deprecated, so there's little reason to reach for them now.

Fine-Tuning for Specific Tasks

While the base models are powerful, fine-tuning can significantly improve performance for specific tasks. For example, if you're building a customer support bot, fine-tuning on a dataset of customer interactions can make the bot much more effective. Fine-tuning involves training a custom version of an OpenAI model on your specific data. This can be a bit more complex, but the results can be worth it. The gotcha here is that you need a decent amount of data (hundreds or thousands of examples) for fine-tuning to be effective. If you don't have enough data, you're better off sticking with prompt engineering.
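To make the data requirement concrete: chat-model fine-tuning expects a JSONL file where every line is a JSON object holding a messages array. Here's a minimal sketch of producing that format; the QAPair shape and the sample system prompt are my own illustration, not anything OpenAI prescribes beyond the messages structure:

```typescript
// Sketch: turning raw Q&A pairs into the chat-style JSONL format that
// fine-tuning expects, one JSON object per line.
interface QAPair {
  question: string;
  answer: string;
}

function toTrainingLine(pair: QAPair, systemPrompt: string): string {
  return JSON.stringify({
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: pair.question },
      { role: 'assistant', content: pair.answer },
    ],
  });
}

function buildJsonl(pairs: QAPair[], systemPrompt: string): string {
  return pairs.map((p) => toTrainingLine(p, systemPrompt)).join('\n');
}
```

You'd write the resulting string to a .jsonl file and upload it when creating the fine-tuning job.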

Chat Completion API

Building Conversational Interfaces

The Chat Completion API is designed for building conversational interfaces. It's more sophisticated than the basic completion endpoint, allowing you to maintain a conversation history.

import OpenAI from 'openai';
import openai from './openai';

async function generateChatResponse(
  messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[],
) {
  try {
    const chatCompletion = await openai.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: messages,
    });

    // content is typed string | null, so fall back to an empty string.
    return chatCompletion.choices[0].message.content ?? '';
  } catch (error: any) {
    console.error(error);
    return 'Error generating chat response.';
  }
}

export default generateChatResponse;

The messages array is key here. It represents the conversation history, with each message having a role (either system, user, or assistant) and content.

const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'What is the capital of France?' },
];

The system message is used to set the context for the conversation. It tells the model how to behave. Experiment with different system messages to see how they affect the model's responses.
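Keep in mind the API itself is stateless: you rebuild and resend the whole messages array on every turn. A small sketch of how I manage the growing history (the Message type here is a simplified stand-in for the SDK's own message types):

```typescript
// Sketch: growing the messages array across conversation turns.
type Role = 'system' | 'user' | 'assistant';
interface Message { role: Role; content: string; }

function startConversation(systemPrompt: string): Message[] {
  return [{ role: 'system', content: systemPrompt }];
}

// Append a user turn or the assistant's reply, returning a new array
// each time so state updates stay predictable (React-friendly).
function addTurn(history: Message[], role: Role, content: string): Message[] {
  return [...history, { role, content }];
}
```

Each call to the Chat Completion API sends the full history, and you append the assistant's reply before the next user turn.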

Managing Conversation State

In a real application, you'll need to manage the conversation state. This could involve storing the messages in a database or using a state management library like Redux or Zustand. I often use Redis for storing temporary conversation histories, especially when dealing with stateless serverless functions. It's fast and efficient.
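For illustration, here's the shape of that store with a plain in-memory Map standing in for Redis, plus the trimming I do so long conversations don't blow past the model's context window. The cap of 20 messages is an arbitrary default I picked, not a recommendation:

```typescript
// Sketch: a tiny in-memory conversation store, a stand-in for Redis in
// the stateless-serverless setup described above. It trims old turns
// so the history stays bounded.
interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string; }

class ConversationStore {
  private histories = new Map<string, ChatMessage[]>();

  constructor(private maxMessages = 20) {}

  append(sessionId: string, message: ChatMessage): void {
    const history = this.histories.get(sessionId) ?? [];
    history.push(message);
    // Keep the system message (if any) plus the most recent turns.
    while (history.length > this.maxMessages) {
      const dropIndex = history[0].role === 'system' ? 1 : 0;
      history.splice(dropIndex, 1);
    }
    this.histories.set(sessionId, history);
  }

  get(sessionId: string): ChatMessage[] {
    return this.histories.get(sessionId) ?? [];
  }
}
```

With Redis, append and get would become a JSON round-trip keyed by session ID, but the trimming logic stays the same.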

Image Generation with DALL-E

Generating Images from Text

OpenAI also offers image generation capabilities through DALL-E. You can use it to create images from text descriptions.

import openai from './openai';

async function generateImage(prompt: string) {
  try {
    const response = await openai.images.generate({
      prompt: prompt,
      n: 1,
      size: '1024x1024',
    });
    const imageUrl = response.data[0].url;
    return imageUrl;
  } catch (error: any) {
    console.error(error);
    return 'Error generating image.';
  }
}

export default generateImage;

The n parameter specifies the number of images to generate, and size specifies the image size. Keep in mind that generating images can be more expensive than generating text.

Ethical Considerations and Content Filtering

It's important to be aware of the ethical considerations around image generation. DALL-E has content filtering in place to prevent the generation of inappropriate images, but it's still your responsibility to ensure that your application is used responsibly. I strongly suggest reviewing OpenAI's content policies before integrating image generation into your applications.

Creating Vector Embeddings

Embeddings are numerical representations of text that capture its semantic meaning. You can use them for tasks like semantic search and clustering, and OpenAI provides an API for generating them.

import openai from './openai';

async function getEmbedding(text: string) {
  try {
    const response = await openai.embeddings.create({
      model: 'text-embedding-ada-002',
      input: text,
    });
    return response.data[0].embedding;
  } catch (error: any) {
    console.error(error);
    return null;
  }
}

export default getEmbedding;

The text-embedding-ada-002 model is a solid default. It's relatively inexpensive and performs well for most use cases; the newer text-embedding-3-small is cheaper still if you want to experiment.

Once you have embeddings, you can use them to implement semantic search. This involves calculating the similarity between the embedding of the search query and the embeddings of the documents in your database. There are different ways to calculate similarity, such as cosine similarity. I've used libraries like faiss for efficient similarity search with large datasets.
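To make that concrete, here's a sketch of cosine similarity and a naive top-k search over stored embeddings. This brute-force version is fine for small collections; it's exactly what faiss replaces once your dataset gets large:

```typescript
// Sketch: cosine similarity between two embedding vectors, plus a
// naive top-k search that scores every stored document.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface Doc { id: string; embedding: number[]; }

// Return the k documents most similar to the query embedding.
function topK(query: number[], docs: Doc[], k: number): Doc[] {
  return [...docs]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding),
    )
    .slice(0, k);
}
```

In a real search flow, query would be the embedding of the user's search string and docs would come from your database.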

Rate Limits and Cost Optimization

Understanding Rate Limits

OpenAI's APIs enforce rate limits, and you need to stay under them to avoid having requests rejected. The limits vary by model and endpoint, so check the OpenAI documentation for the current numbers.
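One thing that helps is throttling on your side before OpenAI throttles you. Here's a minimal token-bucket sketch; the capacity and refill rate are placeholder numbers to tune against your account's published limits, and the clock is injected so the logic is easy to test:

```typescript
// Sketch: a minimal client-side token bucket for staying under a
// requests-per-interval limit before the API has to reject you.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,        // max burst size
    private refillPerMs: number,     // tokens added per millisecond
    private now: () => number = Date.now, // injectable clock for testing
  ) {
    this.tokens = capacity;
    this.lastRefill = this.now();
  }

  // Returns true if a request may proceed, false if it should wait.
  tryAcquire(): boolean {
    const t = this.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (t - this.lastRefill) * this.refillPerMs,
    );
    this.lastRefill = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Before each API call, check tryAcquire() and queue or delay the request when it returns false.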

Implementing Retry Logic

If you do hit a rate limit, implement retry logic: wait a bit, then retry the request, using exponential backoff so you don't hammer the API while it's already telling you to slow down.

async function retryRequest<T>(fn: () => Promise<T>, retries = 3, delay = 1000): Promise<T> {
  try {
    return await fn();
  } catch (error: any) {
    // Only retry transient failures: 429 rate limits and 5xx server errors.
    const retryable = error?.status === 429 || error?.status >= 500;
    if (retryable && retries > 0) {
      console.log(`Retrying in ${delay}ms...`);
      await new Promise((resolve) => setTimeout(resolve, delay));
      return retryRequest(fn, retries - 1, delay * 2);
    }
    throw error;
  }
}

Optimizing Prompts and Token Usage

The cost of using the OpenAI APIs depends on the number of tokens you consume, in both your prompts and the responses. Be specific and concise in your prompts, cap max_tokens where you can, and prefer cheaper models when they're good enough. Monitoring your token usage is crucial for staying within budget; OpenAI's dashboard provides usage tracking.
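A cheap way to catch oversized prompts before they hit the API is to estimate the token count up front. The four-characters-per-token figure below is a rough rule of thumb for English text, not an exact count; reach for a real tokenizer like the tiktoken package when precision matters:

```typescript
// Rough sketch: estimate tokens from character count. ~4 chars/token
// is a common heuristic for English prose, not an exact measure.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Guard a prompt against a token budget before spending an API call.
function fitsBudget(prompt: string, maxTokens: number): boolean {
  return estimateTokens(prompt) <= maxTokens;
}
```

I use this kind of guard to reject or truncate user input early, then rely on the billing dashboard for the real numbers.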

Conclusion

Integrating the OpenAI APIs into your full-stack applications opens up a world of possibilities. From generating text and images to implementing semantic search and building conversational interfaces, the potential is immense. However, it's important to be aware of the trade-offs, such as cost and rate limits. By understanding the APIs and implementing best practices, you can build powerful and intelligent applications. The key takeaways are:

  • Securely manage your API keys.
  • Understand and respect rate limits.
  • Optimize prompts to reduce token usage.
  • Consider fine-tuning for specific tasks.
  • Be mindful of ethical considerations.

I hope this post has given you a solid foundation for working with the OpenAI APIs. Now go build something amazing!

#openai #JavaScript #TypeScript #AI #Machine Learning #Full Stack