AI Integration

Grit ships with multi-provider AI support for Claude (Anthropic), OpenAI, and Gemini (Google). Generate completions, run multi-turn conversations, and stream responses via SSE -- all through a unified Go service that talks to each provider with raw net/http calls (no SDK dependencies).

Configuration

AI is configured via three environment variables. Switch between Claude, OpenAI, and Gemini by changing the provider and model -- no code changes required.

.env
# AI Configuration
AI_PROVIDER=claude                   # "claude", "openai", or "gemini"
AI_API_KEY=sk-ant-xxxxxxxxxxxxx      # API key for the selected provider
AI_MODEL=claude-sonnet-4-20250514  # Model identifier
Provider         | AI_PROVIDER | Example Models
-----------------|-------------|--------------------------------------------------
Anthropic Claude | claude      | claude-sonnet-4-20250514, claude-opus-4-20250514
OpenAI           | openai      | gpt-4o, gpt-4o-mini
Google Gemini    | gemini      | gemini-2.0-flash, gemini-1.5-pro
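
How these variables reach the app is up to Grit's config loader; the main.go excerpt at the bottom of this page reads them from a cfg struct. A minimal sketch of that idea, assuming plain os.Getenv (the AIConfig and LoadAI names here are illustrative, not Grit's actual ones):

config-sketch.go (illustrative)
package config

import "os"

// AIConfig mirrors the three AI settings used in cmd/server/main.go.
type AIConfig struct {
    Provider string // "claude", "openai", or "gemini"
    APIKey   string
    Model    string
}

// LoadAI reads the AI environment variables; empty values simply mean
// the AI service stays disabled (see "Initialization in main.go" below).
func LoadAI() AIConfig {
    return AIConfig{
        Provider: os.Getenv("AI_PROVIDER"),
        APIKey:   os.Getenv("AI_API_KEY"),
        Model:    os.Getenv("AI_MODEL"),
    }
}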

AI Service

The AI service at internal/ai/ai.go provides a unified interface that works with Claude, OpenAI, and Gemini. It handles the differences in API formats, authentication headers, and response structures internally.

internal/ai/ai.go (types)
// Message represents a chat message.
type Message struct {
    Role    string `json:"role"`    // "user" or "assistant"
    Content string `json:"content"`
}

// CompletionRequest holds the input for a completion.
type CompletionRequest struct {
    Prompt      string    `json:"prompt"`
    Messages    []Message `json:"messages,omitempty"`
    MaxTokens   int       `json:"max_tokens,omitempty"`
    Temperature float64   `json:"temperature,omitempty"`
}

// CompletionResponse holds the AI response.
type CompletionResponse struct {
    Content string `json:"content"`
    Model   string `json:"model"`
    Usage   *Usage `json:"usage,omitempty"`
}

// Usage contains token usage information.
type Usage struct {
    InputTokens  int `json:"input_tokens"`
    OutputTokens int `json:"output_tokens"`
}

// StreamHandler is called for each chunk of a streamed response.
type StreamHandler func(chunk string) error
internal/ai/ai.go (methods)
// New creates a new AI service instance.
func New(provider, apiKey, model string) *AI

// Complete generates a response from a single prompt or message history.
// Automatically routes to Claude, OpenAI, or Gemini based on provider config.
func (a *AI) Complete(ctx context.Context, req CompletionRequest) (*CompletionResponse, error)

// Stream generates a streaming response, calling handler for each text chunk.
// Uses SSE (Server-Sent Events) from the upstream API.
func (a *AI) Stream(ctx context.Context, req CompletionRequest, handler StreamHandler) error
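
Internally, Complete dispatches on the provider string set at construction time. The real implementation lives in internal/ai/ai.go; the sketch below only illustrates the routing, and the a.provider field and per-provider helpers (completeClaude and friends) are assumed names:

routing sketch (illustrative)
// Complete routes to the provider selected by AI_PROVIDER. The three
// helpers are hypothetical stand-ins for the real per-provider code.
func (a *AI) Complete(ctx context.Context, req CompletionRequest) (*CompletionResponse, error) {
    switch a.provider {
    case "claude":
        return a.completeClaude(ctx, req)
    case "openai":
        return a.completeOpenAI(ctx, req)
    case "gemini":
        return a.completeGemini(ctx, req)
    default:
        return nil, fmt.Errorf("unsupported AI provider: %q", a.provider)
    }
}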

Complete: Single Prompt

The simplest way to use the AI service. Send a prompt, get a response.

complete-example.go
aiService := ai.New("claude", apiKey, "claude-sonnet-4-20250514")

resp, err := aiService.Complete(ctx, ai.CompletionRequest{
    Prompt:    "Explain the Go concurrency model in 3 sentences.",
    MaxTokens: 256,
})
if err != nil {
    return fmt.Errorf("AI completion failed: %w", err)
}

fmt.Println(resp.Content) // "Go uses goroutines..."
fmt.Println(resp.Model)   // "claude-sonnet-4-20250514"
if resp.Usage != nil {    // Usage is a pointer and may be absent, so guard it
    fmt.Println(resp.Usage.InputTokens)  // 12
    fmt.Println(resp.Usage.OutputTokens) // 87
}

API Endpoint

terminal
$ curl -X POST http://localhost:8080/api/ai/complete \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"prompt": "What is Go?", "max_tokens": 256}'

Chat: Multi-Turn Conversations

For multi-turn conversations, send an array of messages with alternating user/assistant roles. The AI service passes the full conversation history to the provider.

chat-example.go
resp, err := aiService.Complete(ctx, ai.CompletionRequest{
    Messages: []ai.Message{
        {Role: "user", Content: "I'm building a SaaS with Go and React."},
        {Role: "assistant", Content: "That's a great stack! Go handles the backend..."},
        {Role: "user", Content: "How should I structure my API?"},
    },
    MaxTokens:   512,
    Temperature: 0.7,
})

API Endpoint

POST /api/ai/chat
// Request body:
{
  "messages": [
    { "role": "user", "content": "What is Grit?" },
    { "role": "assistant", "content": "Grit is a full-stack framework..." },
    { "role": "user", "content": "How do I generate a resource?" }
  ],
  "max_tokens": 512,
  "temperature": 0.7
}

// Response:
{
  "data": {
    "content": "To generate a resource in Grit, use the CLI...",
    "model": "claude-sonnet-4-20250514",
    "usage": {
      "input_tokens": 45,
      "output_tokens": 120
    }
  }
}

Stream: Server-Sent Events

The streaming endpoint sends response chunks to the client as SSE events in real time, which enables typewriter-style output in chat interfaces. The handler function receives each text chunk as it arrives from the AI provider.

stream-example.go
// In a Go service:
err := aiService.Stream(ctx, ai.CompletionRequest{
    Prompt:    "Write a haiku about Go programming",
    MaxTokens: 100,
}, func(chunk string) error {
    fmt.Print(chunk)  // Prints each word/token as it arrives
    return nil
})
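
Because the handler is just a callback, it can do more than print -- for example, buffering the chunks to keep the full response once the stream finishes. A small variation (returning a non-nil error from the handler presumably aborts the stream, per the StreamHandler contract above):

stream-collect-example.go
var sb strings.Builder

err := aiService.Stream(ctx, ai.CompletionRequest{
    Prompt:    "Write a haiku about Go programming",
    MaxTokens: 100,
}, func(chunk string) error {
    sb.WriteString(chunk) // buffer each chunk as it arrives
    return nil
})
if err != nil {
    return fmt.Errorf("stream failed: %w", err)
}

full := sb.String() // the complete response text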

How Streaming Works via Gin

The AI handler at POST /api/ai/stream sets SSE headers and uses Gin's c.SSEvent() to send each chunk to the client. The connection stays open until the AI response is complete.

internal/handlers/ai.go (stream handler)
func (h *AIHandler) Stream(c *gin.Context) {
    var req chatRequest
    if err := c.ShouldBindJSON(&req); err != nil {
        c.JSON(http.StatusUnprocessableEntity, gin.H{"error": err.Error()})
        return
    }

    // Set SSE headers
    c.Header("Content-Type", "text/event-stream")
    c.Header("Cache-Control", "no-cache")
    c.Header("Connection", "keep-alive")

    // Stream chunks to client
    err := h.AI.Stream(c.Request.Context(), ai.CompletionRequest{
        Messages:    req.Messages,
        MaxTokens:   req.MaxTokens,
        Temperature: req.Temperature,
    }, func(chunk string) error {
        c.SSEvent("message", chunk)
        c.Writer.Flush()
        return nil
    })

    if err != nil {
        c.SSEvent("error", fmt.Sprintf("Stream error: %v", err))
        c.Writer.Flush()
        return // don't send the done event after an error
    }

    c.SSEvent("done", "[DONE]")
    c.Writer.Flush()
}

Consuming the Stream (Frontend)

hooks/use-ai-stream.ts
async function streamCompletion(messages: Message[]) {
  const response = await fetch("/api/ai/stream", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ messages, max_tokens: 1024 }),
  });

  const reader = response.body?.getReader();
  if (!reader) throw new Error("Streaming not supported");
  const decoder = new TextDecoder();
  let event = "message";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // Gin's c.SSEvent emits "event:<name>" and "data:<payload>" lines with
    // no space after the colon. (A production version would also buffer
    // partial lines that straddle two reads.)
    const lines = decoder.decode(value, { stream: true }).split("\n");

    for (const line of lines) {
      if (line.startsWith("event:")) {
        event = line.slice(6).trim();
      } else if (line.startsWith("data:")) {
        const data = line.slice(5);
        if (event === "done" || data === "[DONE]") return;
        if (event === "error") throw new Error(data);
        // Append chunk to the UI
        setResponse((prev) => prev + data);
      }
    }
  }
}

API Endpoints

Endpoint         | Method | Description
-----------------|--------|----------------------------
/api/ai/complete | POST   | Single prompt completion
/api/ai/chat     | POST   | Multi-turn conversation
/api/ai/stream   | POST   | Streaming response via SSE
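
All three endpoints take a JSON body and the usual bearer token. The stream endpoint can be exercised by hand with curl's -N (no-buffer) flag to watch the SSE events arrive:

terminal
$ curl -N -X POST http://localhost:8080/api/ai/stream \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Say hi"}], "max_tokens": 64}'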

Switching Providers

Switching between Claude, OpenAI, and Gemini requires only environment variable changes. The AI service abstracts away the differences in request/response formats, authentication headers, and streaming protocols.

.env (Claude)
AI_PROVIDER=claude
AI_API_KEY=sk-ant-api03-xxxxxxxxxxxx
AI_MODEL=claude-sonnet-4-20250514
.env (OpenAI)
AI_PROVIDER=openai
AI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxx
AI_MODEL=gpt-4o
.env (Gemini)
AI_PROVIDER=gemini
AI_API_KEY=AIzaSyxxxxxxxxxxxxxxxxxxxxxxx
AI_MODEL=gemini-2.0-flash
Difference      | Claude                        | OpenAI                             | Gemini
----------------|-------------------------------|------------------------------------|-------------------------------------
API URL         | api.anthropic.com/v1/messages | api.openai.com/v1/chat/completions | generativelanguage.googleapis.com
Auth            | x-api-key header              | Authorization: Bearer header       | ?key= query param
Response format | content[0].text               | choices[0].message.content         | candidates[0].content.parts[0].text
Stream event    | content_block_delta           | choices[0].delta.content           | candidates[0].content.parts[0].text
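
The table translates directly into how requests get built. The sketch below is illustrative, not Grit's actual code -- the buildRequest name and the a.provider / a.apiKey / a.model fields are assumptions -- but it shows how the auth differences map onto raw net/http:

request-building sketch (illustrative)
// buildRequest is a hypothetical helper showing how the per-provider
// auth differences from the table map onto raw net/http calls.
func (a *AI) buildRequest(ctx context.Context, body []byte) (*http.Request, error) {
    var req *http.Request
    var err error
    switch a.provider {
    case "claude":
        req, err = http.NewRequestWithContext(ctx, http.MethodPost,
            "https://api.anthropic.com/v1/messages", bytes.NewReader(body))
        if err == nil {
            req.Header.Set("x-api-key", a.apiKey)             // key header
            req.Header.Set("anthropic-version", "2023-06-01") // required by the API
        }
    case "openai":
        req, err = http.NewRequestWithContext(ctx, http.MethodPost,
            "https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
        if err == nil {
            req.Header.Set("Authorization", "Bearer "+a.apiKey) // bearer token
        }
    case "gemini":
        // key travels in the query string; the model is part of the path
        url := fmt.Sprintf(
            "https://generativelanguage.googleapis.com/v1beta/models/%s:generateContent?key=%s",
            a.model, a.apiKey)
        req, err = http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
    default:
        return nil, fmt.Errorf("unknown provider: %q", a.provider)
    }
    if err != nil {
        return nil, err
    }
    req.Header.Set("Content-Type", "application/json")
    return req, nil
}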

Initialization in main.go

The AI service is created in main.go and passed to the AI handler. If no API key is configured, the handler gracefully returns a 503 "AI service not configured" response.

cmd/server/main.go (excerpt)
// Initialize AI service (optional -- graceful if not configured)
var aiService *ai.AI
if cfg.AIProvider != "" && cfg.AIAPIKey != "" {
    aiService = ai.New(cfg.AIProvider, cfg.AIAPIKey, cfg.AIModel)
    log.Printf("AI service initialized: %s (%s)", cfg.AIProvider, cfg.AIModel)
}

// Register AI routes
aiHandler := &handlers.AIHandler{AI: aiService}
aiGroup := api.Group("/ai", authMiddleware)
{
    aiGroup.POST("/complete", aiHandler.Complete)
    aiGroup.POST("/chat", aiHandler.Chat)
    aiGroup.POST("/stream", aiHandler.Stream)
}
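
The graceful 503 mentioned above implies a nil check inside each handler before the AI service is used; a minimal sketch of that guard:

handler guard (sketch)
// Each AI handler can bail out early when no provider was configured,
// since aiService (and therefore h.AI) stays nil without an API key.
func (h *AIHandler) Complete(c *gin.Context) {
    if h.AI == nil {
        c.JSON(http.StatusServiceUnavailable, gin.H{
            "error": "AI service not configured",
        })
        return
    }
    // ...normal completion handling continues here
}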