The PromptKt Kotlin library gives you a programmatic API for document chunking, embeddings, prompt templates, multi-step agent pipelines, and MCP tool integration.
The repository is organized into four top-level Maven modules:
- promptkt/ — Core Kotlin library for AI API interactions
  - promptkt-core — LLM interfaces, document chunking, embeddings, prompt templates, pipelines
  - promptkt-provider-openai — OpenAI API provider (also supports compatible APIs)
  - promptkt-provider-gemini — Google Gemini provider (REST)
  - promptkt-provider-gemini-sdk — Google Gemini provider (official Java SDK)
  - promptkt-provider-sample — Template for adding new providers
- promptex/ — Extended execution and service layer
  - promptex-pips — Agent, tool, and pipeline logic
  - promptex-docs — Document management and RAG pipelines
  - promptex-mcp — MCP (Model Context Protocol) server and client
- promptrt/ — Command-line utilities (promptrt-cli)
- promptfx/ — JavaFX desktop application and sample plugin

Requirements: Java 17+ and Maven 3.9.3+.
```shell
# Build all modules in order
mvn -B install -DskipTests --file promptkt/pom.xml
mvn -B install -DskipTests --file promptex/pom.xml
mvn -B install -DskipTests --file promptrt/pom.xml
mvn -B install -DskipTests --file promptfx/pom.xml

# Run tests (excludes tests requiring API keys)
mvn -B test --file promptkt/pom.xml
```
Tests that require live API keys are tagged @Tag("openai") or @Tag("gemini") and are excluded from the default test run.
See the Wiki for how to run them.
The AiModelProvider interface (in promptkt-core) defines the core API for
text completion and chat. Provider implementations are in the promptkt-provider-* modules.
```kotlin
// Get a chat response from any configured AiModelProvider
val response = plugin.chat(listOf(
    ChatMessage(role = Role.System, content = "You are a helpful assistant."),
    ChatMessage(role = Role.User, content = "What is RAG?")
))
```
Prompts are defined as Mustache templates
in YAML prompt library files and loaded at runtime via PromptLibrary:
```kotlin
val prompt = PromptLibrary.lookupPrompt("my-prompt-id")
val filled = prompt.fill(mapOf("language" to "French", "text" to "Hello, world!"))
```
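Template filling here is plain Mustache substitution. A minimal self-contained sketch of the same idea (fillTemplate is a hypothetical stand-in for illustration, not the library's API, and it handles only simple {{var}} tags rather than full Mustache syntax):

```kotlin
// Naive Mustache-style substitution, illustrating what prompt.fill(...) does.
fun fillTemplate(template: String, vars: Map<String, String>): String =
    vars.entries.fold(template) { acc, (k, v) -> acc.replace("{{$k}}", v) }

fun main() {
    val template = "Translate the following text to {{language}}: {{text}}"
    // → "Translate the following text to French: Hello, world!"
    println(fillTemplate(template, mapOf("language" to "French", "text" to "Hello, world!")))
}
```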
Use the utilities in promptex-docs to load documents, chunk them, and compute embeddings
for a local vector store:
```kotlin
val chunks = DocumentChunker().chunk(file, chunkSize = 500)
val embeddings = embeddingPlugin.embedTexts(chunks.map { it.text })
// Store embeddings in LocalEmbeddingIndex for retrieval
```
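At its core, chunking just splits text into bounded pieces. A self-contained sketch of fixed-size character chunking (a simplification for illustration; real chunkers like DocumentChunker typically respect sentence or paragraph boundaries):

```kotlin
// Naive fixed-size chunker: splits text into pieces of at most chunkSize characters.
fun chunkText(text: String, chunkSize: Int): List<String> =
    text.chunked(chunkSize)

fun main() {
    val chunks = chunkText("The quick brown fox jumps over the lazy dog.", chunkSize = 20)
    chunks.forEach { println(it) }
}
```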
The document Q&A pipeline uses embeddings to retrieve relevant chunks and passes them as
context to the completion model. You can assemble the same pipeline in code using
promptex-pips:
```kotlin
val pipeline = DocQaPipeline(
    embeddingPlugin = embeddingPlugin,
    completionPlugin = completionPlugin,
    index = embeddingIndex
)
val answer = pipeline.ask("What does the paper say about transformers?")
```
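The retrieval step in such a pipeline is typically nearest-neighbor search over embedding vectors. A self-contained sketch of cosine-similarity ranking (helper names are illustrative; the real pipeline delegates this to the embedding index):

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two embedding vectors.
fun cosine(a: DoubleArray, b: DoubleArray): Double {
    val dot = a.zip(b).sumOf { (x, y) -> x * y }
    val norm = sqrt(a.sumOf { it * it }) * sqrt(b.sumOf { it * it })
    return if (norm == 0.0) 0.0 else dot / norm
}

// Indices of the k chunk embeddings closest to the query embedding.
fun topK(query: DoubleArray, chunkEmbeddings: List<DoubleArray>, k: Int): List<Int> =
    chunkEmbeddings.indices
        .sortedByDescending { cosine(query, chunkEmbeddings[it]) }
        .take(k)
```

The top-k chunks are then pasted into the prompt as context before calling the completion model.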
The promptex-pips module provides AiTask, ExecContext, and
WorkflowSolver abstractions for multi-step agentic workflows with tool use:
```kotlin
val task = MyCustomTask()
val context = ExecContext(monitor = myMonitor, resources = mapOf(...))
val result = task.execute(context)
```
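The underlying pattern is tasks executing against a shared context of named resources. A conceptual, self-contained sketch of that pattern (Context and Task here are illustrative stand-ins, not the promptex-pips types):

```kotlin
// Illustrative stand-ins for the task/context pattern; NOT the real promptex-pips API.
data class Context(val resources: Map<String, Any>)

fun interface Task {
    fun execute(context: Context): String
}

// A trivial task that reads a resource from the shared context.
val summarize = Task { ctx -> "Summarizing ${ctx.resources["document"]}" }

fun main() {
    println(summarize.execute(Context(mapOf("document" to "paper.pdf"))))
}
```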
The promptex-mcp module provides MCP (Model Context Protocol) server and client
implementations, allowing PromptFx to expose tools to external agents or consume external MCP servers.
To add a new AI provider (e.g. a locally hosted model or a different cloud API), use
promptkt-provider-sample as a starting template:
1. Copy promptkt-provider-sample/ to a new module directory.
2. Implement the AiModelProvider (and optionally EmbeddingPlugin / ImageGenerator) interfaces.
3. Register the implementation for service-loader discovery (META-INF/services/).
4. Add the module to promptfx/pom.xml to enable it in the UI.
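Service-loader registration means creating a resources file named after the fully qualified interface, containing the fully qualified name of your implementation class. A hypothetical registration (both names are illustrative; check promptkt-provider-sample for the actual package names), in a file like src/main/resources/META-INF/services/&lt;fully.qualified.AiModelProvider&gt;:

```
com.example.myprovider.MyModelProvider
```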
PromptFx supports custom view plugins via the NavigableWorkspaceView interface.
See promptfx-sample-view-plugin/ for a working example. Plugins are discovered at
runtime via the standard Java service-loader mechanism.
API keys are never hardcoded. Load them from:
- apikey.txt (OpenAI) or apikey-gemini.txt (Gemini) in the working directory.
- OPENAI_API_KEY environment variable for OpenAI.