How to Create a REST API in Go with GPT-4 Integration

by Didin J. on Oct 10, 2025

Learn how to build a REST API in Go with GPT-4 integration. Step-by-step guide to sending prompts, handling responses, and securing your API.

In this tutorial, we’ll walk you through building a REST API in Go that integrates directly with OpenAI’s GPT-4 model. By the end of this guide, you’ll have a fully functional backend service capable of generating intelligent responses to user prompts — similar to how ChatGPT works — using clean and efficient Go code.

Go (or Golang) is well-known for its simplicity, performance, and concurrency support, making it an ideal choice for building modern backend APIs. Combined with GPT-4, you can easily enhance your applications with natural language understanding and generation capabilities. Whether you want to create a chatbot, summarization tool, code assistant, or AI-powered content generator, this setup will give you the foundation you need.

In this project, we’ll cover how to:

  • Set up a Go REST API using standard libraries.

  • Connect and authenticate with the OpenAI API.

  • Send prompts to GPT-4 and handle its responses.

  • Return those responses as JSON through your REST endpoint.

  • Follow best practices for API structure, environment management, and security.

By the end, you’ll have a working REST API that takes a user’s prompt and returns an AI-generated message — all running locally on your machine.



Prerequisites

Before you begin, make sure you have the following tools and configurations ready. This will ensure a smooth setup and development experience throughout the tutorial.

🧰 1. Installed Tools

You’ll need to have these installed on your system:

  • Go 1.22+ — any recent stable release of Go.
    👉 You can download it from https://go.dev/dl/
    After installation, verify it with:

    go version

  • cURL or Postman — for testing API endpoints.
    Postman offers an intuitive interface for making HTTP requests, while cURL is great for quick command-line tests.

  • Text editor or IDE — such as Visual Studio Code, GoLand, or any code editor you prefer.

🔑 2. OpenAI API Key

To integrate GPT-4 into your application, you’ll need an OpenAI API key.

  1. Go to https://platform.openai.com/account/api-keys.

  2. Log in or sign up for an OpenAI account.

  3. Create a new secret API key and copy it — we’ll store it securely later in a .env file.

⚠️ Important: Never expose your API key publicly or commit it to version control systems like GitHub.

📦 3. Basic Go Knowledge

You should be familiar with:

  • Writing and running simple Go programs.

  • Using Go modules (go mod init, go mod tidy).

  • Handling JSON in Go using encoding/json.
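
If you need a quick refresher on that last point, here is a minimal, self-contained sketch of marshaling and unmarshaling a struct with encoding/json (the Message type is just for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

type Message struct {
	Prompt string `json:"prompt"`
}

func main() {
	// Struct -> JSON
	data, _ := json.Marshal(Message{Prompt: "Hello, GPT-4!"})
	fmt.Println(string(data)) // {"prompt":"Hello, GPT-4!"}

	// JSON -> struct
	var msg Message
	_ = json.Unmarshal([]byte(`{"prompt":"Explain goroutines."}`), &msg)
	fmt.Println(msg.Prompt) // Explain goroutines.
}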

🧪 4. Project Overview

Here’s what we’ll build:

  • A Go REST API with a single endpoint (/api/generate).

  • The endpoint accepts a JSON body with a “prompt” field.

  • The API sends the prompt to OpenAI’s GPT-4 model and returns the AI-generated response in JSON format.

Example request:

{
  "prompt": "Explain concurrency in Go."
}

Example response:

{
  "response": "Concurrency in Go is achieved using goroutines and channels, allowing functions to run independently..."
}



Project Setup

Now that we have the prerequisites ready, let’s set up the Go project structure and initialize everything we’ll need to start coding.

🏗️ Step 1: Create a New Project Folder

Open your terminal and create a new directory for the project. For example:

mkdir go-gpt4-restapi
cd go-gpt4-restapi

⚙️ Step 2: Initialize a Go Module

Initialize a new Go module for dependency management:

go mod init go-gpt4-restapi

This command creates a go.mod file that will track all dependencies used in the project.
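
At this point, the file contains only the module path and your Go version (the version line will match whatever release you have installed), roughly:

module go-gpt4-restapi

go 1.22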

📁 Step 3: Set Up Folder Structure

Organize the project into a clean structure that separates handlers, routes, and configuration:

go-gpt4-restapi/
│
├── main.go
├── .env
├── go.mod
├── go.sum
│
├── handlers/
│   └── gpt_handler.go
│
├── routes/
│   └── routes.go
│
└── utils/
    └── config.go

Here’s what each folder does:

  • handlers/ — contains HTTP handler functions for each endpoint.

  • routes/ — defines all API routes and how they connect to handlers.

  • utils/ — utility files such as configuration or helper functions.

  • main.go — the application entry point.

🧩 Step 4: Install Required Dependencies

We’ll use a few common Go packages:

go get github.com/joho/godotenv

  • github.com/joho/godotenv — loads environment variables from a .env file, useful for keeping secrets like your OpenAI API key out of the source code.

The rest (like net/http, encoding/json, and log) are built into the Go standard library.

🔐 Step 5: Create the .env File

Create a .env file in the root directory and add your OpenAI API key:

OPENAI_API_KEY=your_openai_api_key_here

⚠️ Important: Add .env to your .gitignore if you plan to push this project to a repository:

.env

🧾 Step 6: Verify Setup

At this stage, your project should look like this:

go-gpt4-restapi/
├── .env
├── go.mod
├── handlers/
├── routes/
├── utils/
└── main.go

Everything is ready for the next step — connecting to the OpenAI GPT-4 API.



Configuring the OpenAI API

Now that your Go project structure is ready, it’s time to connect it to OpenAI’s GPT-4 API. In this section, we’ll securely load the API key, configure environment variables, and prepare a helper function to send requests to OpenAI.

⚙️ Step 1: Load Environment Variables

We’ll use the godotenv package to load variables from the .env file.

Create a new file:
📄 utils/config.go

package utils

import (
	"log"
	"os"

	"github.com/joho/godotenv"
)

// LoadEnv loads environment variables from .env file
func LoadEnv() {
	err := godotenv.Load()
	if err != nil {
		log.Println("Warning: .env file not found, using system environment variables.")
	}
}

// GetEnv fetches an environment variable by key
func GetEnv(key string) string {
	value := os.Getenv(key)
	if value == "" {
		log.Fatalf("Environment variable %s not set", key)
	}
	return value
}

This helper file ensures that your .env file is read and provides a convenient way to access environment variables safely.

🔑 Step 2: Add the OpenAI API Key

Make sure your .env file includes the following line:

OPENAI_API_KEY=your_openai_api_key_here

We’ll retrieve this key later in our GPT handler to authenticate API requests.

🌐 Step 3: Understand the OpenAI API Endpoint

The endpoint we’ll call is:

https://api.openai.com/v1/chat/completions

It accepts a JSON payload with parameters like:

  • model — specify which GPT model to use (e.g., gpt-4o-mini for efficiency or gpt-4-turbo for higher accuracy).

  • messages — an array representing the conversation, usually with a “system” and “user” role.

  • max_tokens — limit the length of the generated response.

Example request body:

{
  "model": "gpt-4o-mini",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Explain concurrency in Go." }
  ],
  "max_tokens": 100
}

🧩 Step 4: Prepare the Base URL and Config Constants

You can optionally create a small constants block for reuse in multiple files.
Add this inside utils/config.go below the existing functions:

const (
	OpenAIBaseURL = "https://api.openai.com/v1/chat/completions"
)
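
Optionally, you can make the model name configurable in the same way as the API key. The sketch below assumes a hypothetical OPENAI_MODEL variable in your .env file and falls back to a default when it is not set; add it to utils/config.go if you find it useful:

// GetEnvOrDefault returns the value of key, or fallback when the variable is not set.
func GetEnvOrDefault(key, fallback string) string {
	if value := os.Getenv(key); value != "" {
		return value
	}
	return fallback
}

// Example usage in a handler (OPENAI_MODEL is an optional, hypothetical variable):
// model := utils.GetEnvOrDefault("OPENAI_MODEL", "gpt-4o-mini")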

🧠 Step 5: Verify Configuration

To confirm everything is wired correctly, update your main.go with a simple test loader:

package main

import (
	"fmt"
	"go-gpt4-restapi/utils"
)

func main() {
	utils.LoadEnv()
	apiKey := utils.GetEnv("OPENAI_API_KEY")
	fmt.Println("OpenAI API Key loaded successfully:", apiKey[:7]+"****")
}

Run it:

go run main.go

If successful, you’ll see a message like:

OpenAI API Key loaded successfully: sk-1234****

That means your environment setup and configuration are ready!



Building the GPT-4 Integration

With the configuration complete, we can now build the core functionality that interacts with OpenAI’s GPT-4 API. This section focuses on sending HTTP requests, handling JSON payloads, and returning the AI-generated response cleanly.

🧩 Step 1: Create the GPT Handler File

Create a new file:
📄 handlers/gpt_handler.go

This file will contain the main logic for communicating with OpenAI’s API.

⚙️ Step 2: Define Request and Response Structures

Let’s start by defining the data structures we’ll use for JSON serialization and deserialization:

package handlers

import (
	"bytes"
	"encoding/json"
	"fmt"
	"go-gpt4-restapi/utils"
	"io"
	"log"
	"net/http"
	"time"
)

// Define the input structure for our REST API
type GPTRequest struct {
	Prompt string `json:"prompt"`
}

// Define the request body structure for OpenAI API
type OpenAIRequest struct {
	Model     string          `json:"model"`
	Messages  []OpenAIMessage `json:"messages"`
	MaxTokens int             `json:"max_tokens,omitempty"`
}

// Define the message format for OpenAI’s chat models
type OpenAIMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// Define the response structure from OpenAI
type OpenAIResponse struct {
	Choices []struct {
		Message struct {
			Content string `json:"content"`
		} `json:"message"`
	} `json:"choices"`
}

These structs will help Go automatically marshal and unmarshal JSON payloads when interacting with OpenAI’s API.
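
As a quick illustration (not part of the handler), marshaling an OpenAIRequest produces the JSON shape the API expects, matching the example request body from the previous section:

req := OpenAIRequest{
	Model: "gpt-4o-mini",
	Messages: []OpenAIMessage{
		{Role: "user", Content: "Explain concurrency in Go."},
	},
	MaxTokens: 100,
}
body, _ := json.Marshal(req)
fmt.Println(string(body))
// {"model":"gpt-4o-mini","messages":[{"role":"user","content":"Explain concurrency in Go."}],"max_tokens":100}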

🧠 Step 3: Implement the GPT Request Handler

Now, let’s write a function that:

  1. Reads a JSON prompt from the client.

  2. Sends it to the OpenAI API.

  3. Returns the AI-generated message as JSON.

Add this function below the structs in gpt_handler.go:

// GenerateResponse handles POST requests to the /api/generate endpoint
func GenerateResponse(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "Invalid request method", http.StatusMethodNotAllowed)
		return
	}

	var req GPTRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "Invalid JSON body", http.StatusBadRequest)
		return
	}

	if req.Prompt == "" {
		http.Error(w, "Prompt cannot be empty", http.StatusBadRequest)
		return
	}

	// Build OpenAI API request body
	openAIReq := OpenAIRequest{
		Model: "gpt-4o-mini",
		Messages: []OpenAIMessage{
			{Role: "system", Content: "You are a helpful assistant."},
			{Role: "user", Content: req.Prompt},
		},
		MaxTokens: 200,
	}

	reqBody, err := json.Marshal(openAIReq)
	if err != nil {
		http.Error(w, "Error encoding request", http.StatusInternalServerError)
		return
	}

	// Prepare the HTTP request (environment variables are already loaded once in main)
	apiKey := utils.GetEnv("OPENAI_API_KEY")

	reqToAPI, err := http.NewRequest("POST", utils.OpenAIBaseURL, bytes.NewBuffer(reqBody))
	if err != nil {
		http.Error(w, "Failed to create request", http.StatusInternalServerError)
		return
	}
	reqToAPI.Header.Set("Content-Type", "application/json")
	reqToAPI.Header.Set("Authorization", fmt.Sprintf("Bearer %s", apiKey))

	// Send request to OpenAI (with a timeout so a slow upstream cannot hang the handler)
	client := &http.Client{Timeout: 60 * time.Second}
	resp, err := client.Do(reqToAPI)
	if err != nil {
		http.Error(w, "Error contacting OpenAI API", http.StatusInternalServerError)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		http.Error(w, "Error reading response", http.StatusInternalServerError)
		return
	}

	if resp.StatusCode != http.StatusOK {
		log.Printf("OpenAI API error: %s", string(body))
		http.Error(w, "Failed to fetch GPT response", http.StatusBadGateway)
		return
	}

	// Parse OpenAI response
	var aiResp OpenAIResponse
	if err := json.Unmarshal(body, &aiResp); err != nil {
		http.Error(w, "Error decoding GPT response", http.StatusInternalServerError)
		return
	}

	if len(aiResp.Choices) == 0 {
		http.Error(w, "OpenAI returned an empty response", http.StatusBadGateway)
		return
	}

	// Send back JSON response
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{
		"response": aiResp.Choices[0].Message.Content,
	})
}

This function forms the heart of your Go + GPT-4 REST API:

  • It validates user input.

  • Sends a properly formatted JSON request to OpenAI.

  • Parses the AI’s reply and returns it in a simple JSON format.

🧪 Step 4: Test Preparation

Before testing, we’ll first connect this handler to an HTTP route in the next section.
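
If you want to sanity-check the input validation before wiring up any routes, you can exercise the handler directly with net/http/httptest. This is only a sketch (place it in handlers/gpt_handler_test.go); it never reaches OpenAI because the empty prompt is rejected first:

package handlers

import (
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

func TestGenerateResponseRejectsEmptyPrompt(t *testing.T) {
	req := httptest.NewRequest(http.MethodPost, "/api/generate", strings.NewReader(`{}`))
	rec := httptest.NewRecorder()

	GenerateResponse(rec, req)

	if rec.Code != http.StatusBadRequest {
		t.Fatalf("expected status 400, got %d", rec.Code)
	}
}

Run it with go test ./handlers.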



Creating REST API Endpoints

Now that we’ve built the GPT-4 integration logic, let’s expose it through a REST API endpoint. In this section, you’ll define a clean route structure and connect everything inside your main Go application.

🧩 Step 1: Create the Routes File

Create a new file:
📄 routes/routes.go

This file will register all API routes and map them to handler functions.

package routes

import (
	"go-gpt4-restapi/handlers"
	"net/http"
)

// RegisterRoutes sets up the application's routes
func RegisterRoutes() {
	http.HandleFunc("/api/generate", handlers.GenerateResponse)
}

This simple function maps the /api/generate route to our GenerateResponse handler function.

⚙️ Step 2: Update the Main File

Now, open main.go and wire up the routes and server configuration.

📄 main.go

package main

import (
	"fmt"
	"go-gpt4-restapi/routes"
	"go-gpt4-restapi/utils"
	"log"
	"net/http"
)

func main() {
	// Load environment variables
	utils.LoadEnv()

	// Register API routes
	routes.RegisterRoutes()

	// Define the server port
	port := ":8080"
	fmt.Printf("🚀 Server running on http://localhost%s\n", port)

	// Start the HTTP server
	err := http.ListenAndServe(port, nil)
	if err != nil {
		log.Fatalf("Failed to start server: %v", err)
	}
}

Here’s what happens:

  • The server starts on port 8080.

  • Routes are registered through the routes.RegisterRoutes() function.

  • The /api/generate endpoint is now ready to handle POST requests.
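
http.ListenAndServe is perfectly fine for local development, but it starts a server without any timeouts. As an optional variation, you can use an explicit http.Server instead (this also needs "time" added to the imports in main.go); the values below are just reasonable starting points:

srv := &http.Server{
	Addr:         port,
	ReadTimeout:  10 * time.Second, // limit time spent reading the incoming request
	WriteTimeout: 90 * time.Second, // GPT responses can take a while to generate
}
if err := srv.ListenAndServe(); err != nil {
	log.Fatalf("Failed to start server: %v", err)
}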

🧪 Step 3: Test the API with Postman or cURL

Now, let’s test if everything works.

Start your server:

go run main.go

You should see:

🚀 Server running on http://localhost:8080

Then, test the /api/generate endpoint.

Using cURL

curl -X POST http://localhost:8080/api/generate \
-H "Content-Type: application/json" \
-d '{"prompt": "Explain goroutines in Go."}'

Using Postman

  • Method: POST

  • URL: http://localhost:8080/api/generate

  • Body (raw JSON):

{
  "prompt": "Explain goroutines in Go."
}

You should receive a response similar to:

{
  "response": "Goroutines are lightweight threads managed by the Go runtime. They allow functions to run concurrently..."
}

⚠️ Step 4: Error Handling Check

Try sending invalid data or an empty prompt:

curl -X POST http://localhost:8080/api/generate \
-H "Content-Type: application/json" \
-d '{}'

You should get:

Prompt cannot be empty

This confirms that validation works properly.



Testing the API

Now that your REST API is up and running, it’s time to test it thoroughly to ensure it behaves correctly in different scenarios. In this section, we’ll test the /api/generate endpoint using both Postman and cURL, check success and error cases, and explore best practices for response handling.

🧪 Step 1: Start the Server

Before testing, make sure your Go server is running:

go run main.go

You should see:

🚀 Server running on http://localhost:8080

This confirms that your routes and configuration are correctly set up.

📬 Step 2: Test with Postman

Postman provides a user-friendly interface to make HTTP requests.

  1. Open Postman and create a new POST request.

  2. Set the URL to:

    http://localhost:8080/api/generate

  3. Under the Body tab, select raw and choose JSON.

  4. Add the following JSON request body:

    {
      "prompt": "Write a short poem about Go programming."
    }

  5. Click Send.

  6. If everything is set up correctly, you should receive a response like this:

    {
      "response": "Go is fast, its syntax neat,\nConcurrency makes it hard to beat..."
    }

💻 Step 3: Test with cURL

If you prefer the terminal, you can use cURL for the same test:

curl -X POST http://localhost:8080/api/generate \
-H "Content-Type: application/json" \
-d '{"prompt": "Summarize the advantages of using Go for web development."}'

Expected output:

{
  "response": "Go offers strong concurrency, excellent performance, and a simple syntax, making it ideal for scalable web APIs."
}

⚠️ Step 4: Test Invalid Inputs

Now let’s test a few error scenarios to make sure validation and error handling work correctly.

🔸 Missing prompt

curl -X POST http://localhost:8080/api/generate \
-H "Content-Type: application/json" \
-d '{}'

Response:

Prompt cannot be empty

🔸 Invalid JSON

curl -X POST http://localhost:8080/api/generate \
-H "Content-Type: application/json" \
-d '{"prompt": "Unclosed string}'

Response:

Invalid JSON body

🔸 Invalid HTTP method

curl -X GET http://localhost:8080/api/generate

Response:

Invalid request method

🔍 Step 5: Check OpenAI Response Handling

To make sure the OpenAI integration works smoothly, you can log the API response for debugging.
In handlers/gpt_handler.go, before returning the JSON response, add:

log.Printf("OpenAI Response: %s", aiResp.Choices[0].Message.Content)

Now you’ll see GPT-4’s raw response printed in your terminal each time you call the endpoint — great for verifying data flow.

⚙️ Step 6: Common Troubleshooting Tips

  • 401 Unauthorized: missing or invalid API key. Make sure your .env file contains a valid OPENAI_API_KEY.

  • 502 Bad Gateway: network or API request issue. Check your internet connection and your OpenAI usage limits.

  • Empty response: the model returned no text. Increase MaxTokens or adjust the prompt.

  • Server not running: port conflict or code error. Check whether port 8080 is already in use and review the terminal logs.

Success!
You’ve successfully tested your Go REST API with GPT-4 Integration — handling both valid and invalid cases. Your API can now take any text prompt and generate dynamic AI responses.



Best Practices and Tips

Congratulations — you now have a working Go REST API that connects to OpenAI’s GPT-4! 🎉
Before wrapping up, let’s go through some best practices and optimization tips to make your API secure, efficient, and production-ready.

🧠 1. Secure Your API Key

Your OpenAI API key is sensitive information — treat it like a password.

  • Never hardcode it directly in your code.

  • Store it in an environment file (.env) or a secret manager.

  • Add .env to .gitignore so it’s not pushed to GitHub.

  • For production deployments, use environment variables instead of local .env files.

Example (Linux/MacOS):

export OPENAI_API_KEY=your_openai_api_key_here

Then remove .env loading in production code if you prefer a pure environment approach.

⚙️ 2. Use a Lightweight Router (Optional)

While the Go standard library’s net/http is great for simplicity, frameworks like Gin or Chi offer better routing, middleware, and JSON support.

Example with Gin:

r := gin.Default()
r.POST("/api/generate", gin.WrapF(handlers.GenerateResponse))
r.Run(":8080")

gin.WrapF adapts a standard http.HandlerFunc, so the existing handler can be reused as-is.

This makes it easier to add middleware such as logging, authentication, or CORS.

🔒 3. Implement CORS and Authentication

If your API will be accessed from a frontend (e.g., React, Vue, or Angular), enable CORS (Cross-Origin Resource Sharing) so browsers can communicate with it.

You can add simple CORS headers manually:

w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Access-Control-Allow-Headers", "Content-Type")

Or use middleware if you’re using a router like Gin or Chi.
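
With the plain net/http setup used in this tutorial, one option is to wrap handlers in a small middleware function instead of repeating the headers everywhere. A minimal sketch (the withCORS name is just illustrative):

func withCORS(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Access-Control-Allow-Origin", "*")
		w.Header().Set("Access-Control-Allow-Headers", "Content-Type")
		// Answer CORS preflight requests without invoking the real handler
		if r.Method == http.MethodOptions {
			w.WriteHeader(http.StatusNoContent)
			return
		}
		next(w, r)
	}
}

// Registration then becomes:
// http.HandleFunc("/api/generate", withCORS(handlers.GenerateResponse))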

For authentication, consider:

  • API key headers for internal use.

  • JWT (JSON Web Tokens) for user-based access control.

🚦 4. Add Rate Limiting

Prevent abuse or excessive API usage by limiting how often clients can make requests.

You can use Go’s golang.org/x/time/rate package for this:

import "golang.org/x/time/rate"

var limiter = rate.NewLimiter(1, 5) // 1 request per second, burst of 5

Before handling a request:

if !limiter.Allow() {
    http.Error(w, "Too many requests", http.StatusTooManyRequests)
    return
}

This helps protect both your API and your OpenAI usage quota.

🪣 5. Cache Frequent Responses

If your app often receives similar prompts, cache results to reduce API calls and improve performance.

You can use:

  • An in-memory cache (e.g., sync.Map, bigcache, or ristretto).

  • External cache like Redis for distributed systems.

Example concept:

if cachedResponse, found := cache[prompt]; found {
	return cachedResponse
}
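
Here is a slightly more concrete sketch of the same idea using sync.Map, which is safe to share across concurrent handlers (promptCache and cachedOrGenerate are illustrative names; a real cache would also need expiry and size limits):

var promptCache sync.Map // maps prompt -> cached response string

func cachedOrGenerate(prompt string, generate func(string) (string, error)) (string, error) {
	if cached, ok := promptCache.Load(prompt); ok {
		return cached.(string), nil
	}
	response, err := generate(prompt)
	if err != nil {
		return "", err
	}
	promptCache.Store(prompt, response)
	return response, nil
}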

🧾 6. Structured Logging

Use structured logs for better observability in production.

Example:

log.Printf("[INFO] %s - Prompt: %s", r.RemoteAddr, req.Prompt)

Or adopt a structured logging library like zerolog or logrus for JSON-style logs.

💬 7. Handle Long Responses and Token Limits

GPT-4 models can generate long outputs. To control this:

  • Adjust MaxTokens based on your needs (e.g., 200–1000).

  • Monitor token usage to avoid exceeding rate limits or costs.

If you want streaming responses (like ChatGPT’s typing effect), you can use the streaming API, but that’s more advanced: OpenAI streams the output as server-sent events over the HTTP connection, so your Go handler has to read and forward the chunks incrementally.

🧰 8. Organize and Scale Your Codebase

As your project grows:

  • Separate configuration, handlers, and models into dedicated packages.

  • Add unit tests for each layer.

  • Use dependency injection to manage services cleanly.

A typical scalable structure might look like:

go-gpt4-restapi/
├── cmd/
├── internal/
│   ├── handlers/
│   ├── services/
│   ├── models/
│   └── routes/
└── pkg/

🔐 9. Monitor Costs and Usage

OpenAI API calls are billed based on tokens used. Keep an eye on usage via your OpenAI dashboard.
You can also:

  • Implement internal usage limits.

  • Log and monitor token consumption per request.

💡 10. Future Enhancements

Once your API is stable, consider expanding it:

  • Add multiple GPT endpoints (summarization, translation, coding help, etc.).

  • Integrate frontend clients (React, Vue, or Flutter).

  • Support other AI models (e.g., Whisper for speech-to-text, DALL·E for image generation).

✅ With these best practices, your Go + GPT-4 REST API will be secure, efficient, and production-ready.



Conclusion and Next Steps

In this tutorial, you’ve learned how to build a REST API in Go and integrate it with OpenAI’s GPT-4 model. You created endpoints to receive user prompts, communicate with the GPT-4 API, and return AI-generated responses—all while keeping your code modular, clean, and secure.

Here’s a quick recap of what we accomplished:

  • Set up a Go project and managed dependencies.

  • Configured environment variables for secure OpenAI API access.

  • Built a dedicated GPT-4 handler for clean integration with the OpenAI API.

  • Created RESTful endpoints using Go’s net/http package.

  • Tested the API using curl and Postman.

  • Learned best practices for performance, error handling, and security.

🚀 Next Steps

If you want to take this project further, consider adding the following enhancements:

  1. Add a Frontend Interface – Build a simple React or Vue.js frontend to interact with your Go API.

  2. Implement User Authentication – Use JWT or OAuth2 to secure your endpoints and manage API access.

  3. Add Conversation Memory – Store user prompts and AI responses in a database (e.g., PostgreSQL or MongoDB) for context-aware conversations.

  4. Rate Limiting and Logging – Protect your API from abuse and track usage with middleware.

  5. Deploy to the Cloud – Deploy your Go API to services like Render, Railway, or AWS Lambda for global access.

With this foundation, you’re now equipped to create intelligent, production-ready AI-powered APIs in Go. Whether you’re building chatbots, content generation tools, or developer assistants, GPT-4 integration can make your applications significantly more powerful and dynamic.

💡 Tip: Continue experimenting with the OpenAI API — try different models, tweak parameters like temperature, or integrate embeddings for search and classification tasks.

You can get the full source code on our GitHub.


Thanks!