Getting Started with LLMKit

Welcome to LLMKit! This guide will walk you through setting up and running LLMKit on your local machine using two methods: Docker Compose and the llmkit binary. Whether you're a beginner or an experienced developer, these steps will get you up and running quickly. We'll cover starting and stopping services, along with key details to ensure a smooth setup.


Prerequisites

  • For Docker Compose:
    • Docker and Docker Compose installed and running on your machine.
  • For the binary method:
    • No prerequisites are strictly required! The provided install.sh script installs the necessary dependencies (Rust and Bun) if they aren't already present.

Note: LLMKit is tested on Linux and macOS. If you're on Windows, consider using WSL (Windows Subsystem for Linux) or adapting the scripts as needed.
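
If you plan to use Method 1, you can check for Docker up front. This is a convenience sketch; each check prints a hint rather than failing when a tool is missing, so it is safe to run anywhere:

```shell
# Verify Docker and Compose before trying Method 1.
# Missing tools produce a hint instead of an error.
command -v docker >/dev/null 2>&1 && docker --version \
  || echo "Docker not found - install it before using Method 1"
docker compose version >/dev/null 2>&1 \
  || command -v docker-compose >/dev/null 2>&1 \
  || echo "Compose not found - install the Docker Compose plugin"
```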


Method 1: Using Docker Compose

This method uses Docker to run LLMKit's services in containers, making setup straightforward and portable.

Steps

  1. Clone the Repository
    • Open a terminal and run:
      bash
      git clone https://github.com/llmkit-ai/llmkit.git
      cd llmkit
      
  2. Set Up the Environment File
    • Copy the example environment file:
      bash
      cp .env.example .env
      
    • Open .env in a text editor and add your API keys (e.g., for OpenRouter or other providers) and a secure JWT_SECRET. For example:
      text
      OPENROUTER_API_KEY=your_openrouter_key_here
      JWT_SECRET=your_secure_random_string
      # Add other API keys as needed (e.g., ANTHROPIC_API_KEY, GOOGLE_API_KEY)
      
    • You can obtain API keys from the respective providers (e.g., OpenRouter). The JWT_SECRET should be a random, secure string for authentication.
  3. Start the Services
    • Launch the containers in detached mode:
      bash
      docker-compose up -d
      
    • This starts both the backend (Rust API) and frontend (Nuxt.js UI) services. If you're using the Docker Compose V2 plugin, run docker compose up -d (with a space) instead.
  4. Access the Application
    • Once the containers are running, open http://localhost:3000 in your browser for the UI. The backend API listens at http://localhost:8000.
  5. Stop the Services
    • To shut down the containers and clean up:
      bash
      docker-compose down
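
Step 2 above (and the same step in Method 2) calls for a secure JWT_SECRET. One common way to generate one, assuming openssl is available (it ships with most Linux and macOS systems):

```shell
# Print a 64-character random hex string to use as JWT_SECRET.
openssl rand -hex 32
```

Copy the printed value into the JWT_SECRET line of your .env file.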
      

Method 2: Using the llmkit Binary

This method builds and runs LLMKit directly on your machine using the llmkit binary, giving you more control and flexibility.

Steps

  1. Clone the Repository
    • In a terminal, run:
      bash
      git clone https://github.com/llmkit-ai/llmkit.git
      cd llmkit
      
  2. Run the Installation Script
    • Execute the setup script:
      bash
      ./install.sh
      
    • This script:
      • Installs Rust and Bun if they’re not already present.
      • Builds the backend binary.
      • Sets up the frontend dependencies.
      • Creates a default .env file in the backend directory.
  3. Set Up the Environment File
    • Navigate to the backend directory and edit the .env file:
      bash
      cd backend
      nano .env  # or use your preferred editor
      
    • Add your API keys and set a secure JWT_SECRET. For example:
      text
      OPENROUTER_API_KEY=your_openrouter_key_here
      JWT_SECRET=your_secure_random_string
      # Add other API keys as needed
      
    • Save and exit the editor.
  4. Start the Application
    • From the root directory, run:
      bash
      llmkit start
      
    • This command:
      • Starts the backend server (Rust API).
      • Starts the frontend server (Nuxt.js UI).
      • Creates and migrates the SQLite database (llmkit.db) if it doesn’t exist.
  5. Access the Application
    • With both servers running, open http://localhost:3000 in your browser for the UI. The backend API listens at http://localhost:8000.
  6. Stop the Services
    • Since llmkit start runs the servers in the foreground, press Ctrl+C in the terminal to stop both the backend and frontend.
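
Since llmkit start owns the SQLite database, it can be worth snapshotting llmkit.db before upgrades. A minimal sketch, run from the repository root (the dated filename is just a suggestion):

```shell
# Copy the SQLite database to a dated backup file, if it exists yet.
if [ -f backend/llmkit.db ]; then
  cp backend/llmkit.db "backend/llmkit.db.bak-$(date +%Y%m%d)"
  echo "backup written"
else
  echo "no database yet - run llmkit start first"
fi
```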

Additional Tips

  • Environment Variables: Both methods require API keys for full functionality. Check .env.example for a list of supported providers. At minimum, set an OPENROUTER_API_KEY to get started.
  • Database:
    • In Docker Compose, the database is persisted via a Docker volume.
    • In the binary method, the SQLite database (llmkit.db) is stored in the backend directory and managed automatically by llmkit start.
  • Development Mode (Optional):
    • Want to tweak the code? Run the backend and frontend separately:
      • Backend: In the backend directory, run:
        bash
        cargo run
        
      • Frontend: In the ui directory, run:
        bash
        bun run dev  # or npm run dev
        

Next Steps

With LLMKit running, you’re ready to:

  • Explore the UI at http://localhost:3000 to manage prompts and settings.
  • Use the API at http://localhost:8000 for programmatic access (it’s OpenAI-compatible!).
  • Dive deeper with additional documentation (e.g., API Reference or Code Examples).
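
To try the API programmatically, the sketch below assumes LLMKit exposes the standard OpenAI chat-completions route (/v1/chat/completions) and that you address a prompt by name via the model field; both are assumptions, so check the API Reference for the exact routes and any required Authorization header:

```shell
# Hypothetical call to the OpenAI-compatible API; the endpoint path and
# "my-prompt" model name are assumptions, not confirmed routes.
BODY='{"model": "my-prompt", "messages": [{"role": "user", "content": "Hello!"}]}'
curl -s -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$BODY" \
  || echo "request failed - is LLMKit running?"
```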

Enjoy building with LLMKit! If you run into issues, check the repository’s README or open an issue on GitHub. Happy prompting!