Introduction to LLMKit
Welcome to LLMKit—a toolkit designed to make working with Large Language Models (LLMs) simpler and more powerful. Whether you’re building an AI-powered app or just experimenting with prompts, LLMKit helps you manage, test, and deploy prompts with ease. This page will give you a quick overview of what LLMKit offers, highlight its OpenAI API compatibility, and show you some code examples to get started with different prompt types.
What Makes LLMKit Special?
LLMKit is all about streamlining your experience with LLMs. Here are some key topics it tackles:
- Prompt Management: Keep your prompts organized, versioned, and easy to update.
- Testing & Evaluation: Run tests to see how your prompts perform and tweak them for better results.
- Cross-Provider Support: Use one API to connect with multiple LLM providers (like OpenAI, Anthropic, and more).
- Flexibility: Choose between static or dynamic prompts to suit your needs.
Our goal? To save you time, reduce complexity, and help you get the most out of LLMs—whether you’re a seasoned developer or just starting out.
OpenAI API Compatibility
LLMKit is fully compatible with OpenAI’s API, which means you can plug it into any project already using OpenAI’s client libraries—no major code changes required. Just update the API endpoint and key, and you’re set.
Why This Rocks
- Easy Switch: Already using OpenAI? Point your client to LLMKit and keep rolling.
- Provider Freedom: Swap between different LLM providers without rewriting your integration.
- No Learning Curve: Stick with the familiar OpenAI request and response format.
Here’s a quick setup example in Python:
```python
from openai import OpenAI

client = OpenAI(
    api_key="llmkit_yourkey",            # Your LLMKit API key
    base_url="http://localhost:8000/v1"  # LLMKit's endpoint
)
```
With this, you can use LLMKit’s features while keeping your existing OpenAI workflows intact.
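Because the formats match, the JSON body your client sends is the same one you'd send to OpenAI itself. A minimal sketch of that payload (the model name here is just a placeholder for one of your own LLMKit prompts):

```python
import json

# The request body mirrors OpenAI's chat completions schema.
# "STATIC-SYSTEM-CHAT" stands in for one of your prompt names.
payload = {
    "model": "STATIC-SYSTEM-CHAT",
    "messages": [
        {"role": "user", "content": "Tell me a fun fact!"}
    ],
}

# This is the JSON the client POSTs to {base_url}/chat/completions.
body = json.dumps(payload)
print(body)
```

Nothing LLMKit-specific appears in the body itself; the routing happens through the `model` name and the endpoint you point the client at.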
Prompt Types with Code Examples
LLMKit supports three main prompt types: static, dynamic system, and dynamic both. Below, I’ll explain each one and share a Python code example using the OpenAI client library.
Static Prompts
- What It Is: A fixed system message (the model’s instructions) paired with your user input. Simple and consistent.
- Use Case: Perfect for predictable responses, like a chatbot with a set personality.
Code Example:
```python
response = client.chat.completions.create(
    model="STATIC-SYSTEM-CHAT",
    messages=[{"role": "user", "content": "Tell me a fun fact!"}]
)
print(response.choices[0].message.content)
```
- Behind the Scenes: The system message might be something like “You are a fun, quirky assistant.” The model uses this every time, and you just provide the user message.
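Conceptually, the server just prepends that stored system message to whatever you send. A rough sketch of the message list the provider effectively receives (the system text is the illustrative example above, not LLMKit's actual stored prompt):

```python
# Illustrative only: the system prompt lives on the LLMKit server;
# this shows the message list the provider effectively receives.
stored_system = "You are a fun, quirky assistant."  # managed server-side

user_messages = [{"role": "user", "content": "Tell me a fun fact!"}]

effective_messages = [{"role": "system", "content": stored_system}] + user_messages
print(effective_messages)
```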
Dynamic System Prompts
- What It Is: A system message with placeholders (e.g., `{{ mood }}`) that you fill in with values for each request.
- Use Case: Great when you want to tweak the model's behavior, like switching tones or styles.
Code Example:
```python
response = client.chat.completions.create(
    model="DYNAMIC-SYSTEM-CHAT",
    messages=[
        # Values for the template's placeholders go in as JSON
        {"role": "system", "content": '{"mood": "playful", "length": "short"}'},
        {"role": "user", "content": "What’s the moon like?"}
    ]
)
print(response.choices[0].message.content)
```
- Behind the Scenes: The system message template could be "You are a {{ mood }} assistant who gives {{ length }} answers." Your JSON fills in the blanks.
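To see how the substitution works, here is a rough sketch of what happens server-side with a template like the one above (the `render` function below is a simplified stand-in for LLMKit's actual template engine, not its real implementation):

```python
import json
import re

def render(template: str, values: dict) -> str:
    # Replace each {{ name }} placeholder with the matching value;
    # unknown placeholders are left untouched.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(values.get(m.group(1), m.group(0))),
        template,
    )

# A system message template, plus the JSON you sent as the "system" content
template = "You are a {{ mood }} assistant who gives {{ length }} answers."
values = json.loads('{"mood": "playful", "length": "short"}')

print(render(template, values))
# → You are a playful assistant who gives short answers.
```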
Dynamic Both Prompts
- What It Is: Both the system and user messages have placeholders, giving you full control over the conversation.
- Use Case: Ideal for highly customized outputs, like tailored explanations or content generation.
Code Example:
```python
response = client.chat.completions.create(
    model="DYNAMIC-BOTH-ONESHOT",
    messages=[
        # No user message needed: the user template is stored server-side,
        # and these values fill the placeholders in both templates
        {"role": "system", "content": '{"topic": "cats", "style": "funny"}'}
    ]
)
print(response.choices[0].message.content)
```
- Behind the Scenes: System message: "You are an expert in {{ topic }}." User message: "Tell me something {{ style }} about it." Your values shape both parts.
Getting Started
Ready to try it out? Here’s how:
- Grab Your API Key: Find it in the LLMKit interface (e.g., `http://localhost:3000/settings`).
- Install the OpenAI Client: For Python, run `pip install openai`.
- Run the Examples: Swap in your API key and tweak the `model` names to match your setup.
For more details, check out the API Reference or Code Examples pages.
Why Use LLMKit?
LLMKit takes the headache out of prompt engineering by giving you tools to create, test, and deploy prompts efficiently. Its OpenAI compatibility and support for multiple providers mean you’re not locked into one ecosystem. Plus, with static and dynamic options, you can keep things simple or get as creative as you want.
Let’s make working with LLMs fun and straightforward—jump in and start experimenting!