ROK is a local AI workspace, not a chatbot clone
ROK is built to help you build, code, study, and think in one workspace. It runs on your machine, uses a local model by default through Ollama, and keeps session history in your browser. The goal is direct workflow support, not generic chatbot small talk.
Who Is This For
Developers
Explain code, inspect errors, iterate on drafts, and keep context in a persistent local session.
Students
Generate flashcards, summarize notes, and study in a focused workspace flow.
Tinkerers
Change models, tune behavior, and run everything in your own environment.
Privacy-focused users
Keep data local by default and expose the backend publicly only when you choose to.
Features
- Local AI responses via Ollama in local mode.
- Workspace-style interaction with editor panel, improve flow, copy/download actions.
- Streaming responses with stop control.
- Privacy-first defaults with local browser session storage.
- Model selection and per-session model memory.
- Attachment support for common text and code file types.
- Optional public access through tunnel endpoints.
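As an illustration of how an attachment filter like this can work, a backend might gate uploads on an extension allowlist. The extensions and function name below are hypothetical examples, not ROK's actual accepted list:

```python
from pathlib import Path

# Hypothetical allowlist; ROK's actual accepted types may differ.
TEXT_LIKE_EXTENSIONS = {".txt", ".md", ".py", ".js", ".ts", ".json", ".csv"}

def is_supported_attachment(filename: str) -> bool:
    """Return True if the file extension looks like a text/code type."""
    return Path(filename).suffix.lower() in TEXT_LIKE_EXTENSIONS
```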
Commands and Modes
ROK infers intent from prompt content and workspace context, then adapts its behavior accordingly.
Normal response mode
Default chat behavior for Q&A, drafting, and general assistance.
Story canvas mode
Long-form story prompts use a specialized canvas-style output flow.
Code explanation mode
Code-heavy prompts trigger technical interpretation and debugging-oriented responses.
Flashcard generation mode
Study prompts can generate concise flashcard-oriented outputs.
There is no slash-command requirement. Modes are inferred from prompt text and active workspace context.
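One minimal way to picture this kind of intent inference is a keyword heuristic over the prompt text. This is an illustrative sketch only; ROK's real classification logic, signals, and mode names may differ:

```python
def infer_mode(prompt: str) -> str:
    """Guess a workspace mode from prompt text (illustrative heuristic only)."""
    text = prompt.lower()
    if "flashcard" in text or "study" in text:
        return "flashcards"
    if "```" in prompt or "traceback" in text or "def " in prompt:
        return "code_explanation"
    if "story" in text and len(prompt) > 40:  # long-form story prompts
        return "story_canvas"
    return "normal"
```

A real implementation would also weigh active workspace context (open files, prior turns), not just the latest prompt.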
Philosophy
ROK exists to make local AI practical for real work, not to copy chat products. Local AI matters because control, privacy, and reliability improve when the core stack runs in your own environment. The long-term vision is a private AI workspace where writing, coding, and thinking happen in one place with transparent behavior and predictable data boundaries.
FAQ
Is ROK fully offline?
No. ROK requires an internet connection to fetch ROK models and to use tunnel endpoints.
Does ROK save my chat on a remote database?
By default, sessions are stored in browser local storage. Remote storage is not part of the default local mode.
Can I change models per session?
Yes. Model selection is available and remembered by session.
Security and Privacy
- CORS allowlist is enforced server-side on API routes.
- Rate limiting and session behavior controls are built into backend request handling.
- Treat tunnel URLs and auth tokens as sensitive secrets.
Roadmap
- Better docs automation and versioned release notes.
- Expanded workspace tooling and mode controls.
- Cleaner onboarding and one-command local bootstrap.
- Additional model/provider adapters with consistent behavior.
API Access
ROK API keys are manually reviewed during beta. Request access first, then send your key in the `X-ROK-API-Key` header on external requests.
API keys include per-key request limits. Once a key exceeds its assigned limit, `/api/chat` returns 429 with `{"error":"API key request limit exceeded"}`.
| Status | Meaning |
|---|---|
| 401 | Missing API key (X-ROK-API-Key not provided) |
| 403 | Invalid API key |
| 429 | Key is valid, but request limit is exhausted |
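A client that hits 429 can back off and retry rather than fail immediately. The helper below is a hedged sketch of exponential backoff; the retry count and delays are arbitrary choices, not ROK guidance:

```python
import time

def with_backoff(send_request, max_retries: int = 3, base_delay: float = 1.0):
    """Call send_request(); on a 429 status, wait and retry with growing delays.

    send_request is any zero-argument callable returning (status_code, body).
    Illustrative only; adapt to your HTTP client of choice.
    """
    for attempt in range(max_retries + 1):
        status, body = send_request()
        if status != 429:
            return status, body
        if attempt < max_retries:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return status, body
```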
JavaScript Example (with built-in token parser)
```javascript
async function runRokChat() {
  const apiBase = "https://rokapi.kyklos.online";
  const apiKey = "YOUR_ISSUED_API_KEY_HERE";

  const response = await fetch(`${apiBase}/api/chat`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-ROK-API-Key": apiKey
    },
    body: JSON.stringify({
      message: "Give me a quick 3-point summary of photosynthesis.", // example prompt
      history: []
    })
  });

  if (!response.ok) {
    let message = `Request failed (${response.status})`;
    try {
      const err = await response.json();
      if (err && (err.error || err.reply)) message = err.error || err.reply;
    } catch (_) {}
    throw new Error(message);
  }

  if (!response.body) throw new Error("Streaming body missing.");

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let pending = "";
  let fullText = "";

  // Each SSE event block holds one or more "data: {...}" lines.
  function parseEventBlock(block) {
    const lines = block.split("\n");
    for (const line of lines) {
      if (!line.startsWith("data:")) continue;
      const raw = line.slice(5).trim();
      if (!raw) continue;
      try {
        const payload = JSON.parse(raw);
        if (typeof payload.token === "string") {
          fullText += payload.token;
          console.log(payload.token); // live output
        }
      } catch (_) {}
    }
  }

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    pending += decoder.decode(value, { stream: true });
    // SSE events are separated by a blank line.
    const blocks = pending.split("\n\n");
    pending = blocks.pop() || "";
    for (const block of blocks) {
      parseEventBlock(block);
    }
  }

  if (pending.trim()) {
    parseEventBlock(pending);
  }

  return fullText;
}

runRokChat()
  .then((text) => console.log("Final output:\n", text))
  .catch((err) => console.error("ROK error:", err.message));
```
Python Example (with built-in token parser)
```python
import json
import requests

API_BASE = "https://rokapi.kyklos.online"
API_KEY = "YOUR_ISSUED_API_KEY_HERE"

headers = {
    "Content-Type": "application/json",
    "Accept": "text/event-stream",
    "X-ROK-API-Key": API_KEY,
}
payload = {
    "message": "Explain photosynthesis in 3 bullet points.",
    "history": [],
}

with requests.post(
    f"{API_BASE}/api/chat",
    headers=headers,
    json=payload,
    stream=True,
    timeout=(10, 300),
) as response:
    if response.status_code == 401:
        raise RuntimeError("API key required")
    if response.status_code == 403:
        raise RuntimeError("Invalid API key")
    if response.status_code == 429:
        raise RuntimeError("API key request limit exceeded")
    response.raise_for_status()

    full_text = []
    for raw_line in response.iter_lines(decode_unicode=True):
        if not raw_line or not raw_line.startswith("data:"):
            continue
        data = raw_line[5:].strip()
        if not data or data == "[DONE]":
            continue
        token = data
        try:
            event = json.loads(data)
            token = event.get("token") or event.get("response") or event.get("reply") or ""
        except json.JSONDecodeError:
            pass
        if token:
            print(token, end="", flush=True)
            full_text.append(token)

print("\n\nFinal output:\n" + "".join(full_text))
```
cURL Example (raw stream)
```shell
# Windows (cmd) line-continuation version:
curl -N -X POST "https://rokapi.kyklos.online/api/chat" ^
  -H "Content-Type: application/json" ^
  -H "Accept: text/event-stream" ^
  -H "X-ROK-API-Key: YOUR_ISSUED_API_KEY_HERE" ^
  -d "{\"message\":\"Summarize this in 3 bullets.\",\"history\":[]}"

# macOS/Linux line-continuation version:
curl -N -X POST "https://rokapi.kyklos.online/api/chat" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -H "X-ROK-API-Key: YOUR_ISSUED_API_KEY_HERE" \
  -d '{"message":"Summarize this in 3 bullets.","history":[]}'

# You will receive streaming SSE chunks in lines that begin with:
# data: {"token":"..."}
```
Contributing
Contributions are welcome. Start with docs improvements, reproducible bug reports, and small focused PRs. If you propose architecture changes, include request flow impact and migration steps.