Take a brand new Senso org from empty to fully populated self-improving knowledge system in 10 minutes. Researches the user's company from their website plus external sources, builds out the knowledge base with brand kit, content types, and tracking prompts, generates the first drafts, publishes sample citeables, kicks off GEO monitoring, and files a self-heal report with gap analysis. The first-run experience for any new Senso user. Use when the user runs Senso for the first time or says "set up Senso", "run onboarding", or "populate my knowledge base".
npx @senso-ai/shipables install senso-ai/senso-onboarding

In April 2026, Andrej Karpathy posted about LLMs as knowledge-base builders — dumping raw documents into a folder, having an LLM "compile" a structured wiki of markdown files with summaries and backlinks, then querying and enhancing it over time. Every query makes the wiki smarter. The wiki compounds. He closed with:
"I think there is room here for an incredible new product instead of a hacky collection of scripts."
That insight — continuously compounding knowledge — is the foundation of Senso. But Karpathy's original framing was a personal wiki: one person, their research, local markdown files, an LLM keeping it organized. This skill takes that same compounding loop and applies it to organizational knowledge.
| Personal Wiki | Senso |
|---|---|
| One person's research | A company's collective knowledge |
| Local markdown files | Cloud-hosted, versioned, vector-searchable KB |
| LLM reads its own summaries (~100 doc ceiling) | Semantic search with relevance scoring at any scale |
| Generic LLM markdown output | Brand-aligned content with voice, tone, writing rules |
| "Is this in my wiki?" | "Is this in my wiki AND does ChatGPT cite it?" |
| Ad-hoc health checks | Structured self-heal loop with gap analysis |
| Answers stay in the wiki | Answers can be published as citeables that AI models discover |
| No distribution | GEO monitoring tracks AI visibility across ChatGPT, Claude, Perplexity, Gemini |
The compounding principle is identical. The scope is bigger: your knowledge base isn't just for you — it feeds brand-aligned content, publishes discoverable citeables, and tracks how AI models represent your company to the world.
Here's the gap this skill fills. New Senso users install the CLI, get a working terminal, and then stare at an empty org. No brand kit. No knowledge base. No prompts. No content. No AI visibility tracking. Just commands and a blinking cursor.
The compounding loop only works once there's something to compound. An empty wiki doesn't get smarter with each query — it has nothing to build on. Most first-run flows hand you a toolbox and say "good luck building something."
This skill skips that entirely. It seeds the loop for you. Research the company, populate the KB, set up the brand voice, generate the first drafts, publish the first citeables, start tracking AI visibility — all in 10 minutes. By the end, the compounding flywheel is already spinning. Every future query, every new document, every heal pass strengthens a system that's already running.
You don't start at zero. You start at already working.
One command — the npx install above — takes a brand new Senso org from empty to fully populated.
By the end, the user sees a live, populated, self-improving knowledge system — not an empty product.
The same principle as senso-kb-builder: everything here is a living system — nothing is "set and done."
The KB, brand kit, content types, prompts, and published content are all interconnected. Every run strengthens every layer:
Never skip. Never delete. Always improve.
| DIY first-run | This skill |
|---|---|
| Upload one document, search it, done | Full system — KB + brand + content + GEO — live in 10 minutes |
| Empty brand kit | Fully populated from actual website research (all 6 fields) |
| No content templates | 4 templates ready (Blog Post, FAQ, Comparison, Case Study) |
| No prompts | 40 tracking questions across funnel stages, product lines, competitors, and buying questions |
| Zero content | 6+ drafts and 2-3 published citeables |
| No AI visibility tracking | GEO monitoring live across 4 models |
| No health check | Self-heal audit with 15+ search probes and filed report |
| Your job to remember what to do next | Heal report tells you exactly what to contribute next |
Every run produces the same measurable output — no silent failures, no skipped steps — ending with a self-heal report filed to /build-logs/ with gap analysis. The full list is in the guaranteed-output table below.

Activate this skill when the user runs Senso for the first time or says any of: "set up Senso", "run onboarding", "populate my knowledge base".
The user must have:
- A Senso account and API key (starts with `tgr_`)

The skill will handle CLI install and env var setup itself — see Phase -1 below.
Every senso command must include:
--output json --quiet
This skill handles the user's Senso API key. Follow these rules without exception:
- Never echo, log, or display the full key. When you must reference it, show only the first characters (`tgr_xxxxxx...`).
- Never write the key into project files. The shell profile (`.zshrc` / `.bashrc`) is the only place it's persisted, and that file is a user-home dotfile, not a project file.

Every onboarding run MUST produce exactly this:
| Output | Requirement |
|---|---|
| Folders | Exactly 7 (6 content folders + 1 build-logs folder) |
| Brand kit | 1 fully populated (all 6 fields, not empty) |
| Content types | Exactly 4 (Blog Post, FAQ, Comparison Page, Case Study) |
| Prompts | 40 across all funnel stages with product-line and competitor coverage |
| KB documents | 10-15 sorted into the 6 content folders |
| Drafts | Minimum 6, spread across the funnel stages |
| Published citeables | 2-3 to the org's citeables destination |
| GEO monitoring | All 4 models configured (chatgpt, claude, perplexity, gemini) |
| Self-heal report | 1 filed to /build-logs/ at the end |
Never skip. Never substitute. If a phase fails, report it but continue to the next. Partial success is better than no setup.
This skill runs end-to-end without stopping for confirmation gates. The user asked for setup — your job is to deliver it, not to keep asking permission. But you should talk with them the whole way, like a colleague walking them through it.
Write like a thoughtful teammate, not a wizard UI. Short sentences. First person. No corporate polish.
Do:

- Narrate what you're doing as you do it
- Share small observations about their company as you work
- End each phase with a short, concrete status line

Don't:

- Ask permission at every step (the Phase 0 confirmation is the one gate)
- Dump raw JSON or full command output at the user
- Pad the conversation with corporate filler
When you're processing something (reading a website, categorizing findings, picking drafts to publish), narrate the thought briefly:
"Reading your homepage... okay, so [COMPANY_NAME] is [summary]. I'll put this in
company-overviewalong with the About page."
"Your Series B page had some great metrics. I'll use that as the basis for a case study draft."
After research, show the user what you learned and let them correct you conversationally — not with a Y/N gate:
"Here's what I'm picking up about [COMPANY_NAME]: [summary]. Their main competitors look like [list]. If I'm missing anything important, tell me now — otherwise I'll keep building."
Wait a beat for user input. If they respond with corrections, incorporate them. If they say nothing or "looks good," keep going.
Batch generation can take a few minutes. Don't wait silently:
"Senso's writing your drafts now. One cool thing about how this works: each draft gets grounded in the docs I just ingested, so you'll see your actual product details show up in the content — not generic filler."
Never end with "✅ done!". End with specific next steps that make the work compound:
"Everything's live. Two things I'd do first: (1) read the drafts — some might need light edits. (2) check geo.senso.ai tomorrow — your first AI visibility results land in 24 hours."
Even without confirmation gates, the security rules above (key masking, org verification) still apply.
Start warm and direct. Don't list 9 phases — they don't care about the phase structure, they care about the outcome.
Say:
"Hey — let's get Senso set up for you. This takes about 10 minutes, and by the end you'll have a populated knowledge base, some published content, and AI visibility tracking running."
This is an active step, not an optional aside. Stop and wait for them to actually open the browser tab before continuing. Watching the system populate in real time is a huge part of the magic — don't skip it.
Say:
"Before we start, please open https://geo.senso.ai in a browser tab and keep it open alongside this terminal. As we go, you'll watch folders appear, drafts get written, and citeables get published in real time. It's the best way to see the system come to life.
Let me know once you've got it open — then I'll kick off the setup."
Wait for the user to confirm they have the browser open (responses like "open", "ready", "go", "done"). Only then proceed.
Once they confirm, say:
"Let's start by getting your environment ready."
Run:
senso --version 2>/dev/null || echo "not installed"
If not installed, install it without asking (this is onboarding — the user wants it installed):
npm install -g @senso-ai/cli
Say:
"Installing the Senso CLI... done. Version [X]."
"I need your Senso API key. It starts with
tgr_. Paste it here:"
Capture the key as USER_KEY. Never echo or log the full value back — when referring to the key in later output, show only tgr_xxxxx... (first 10 chars).
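When you need to reference the key back to the user, a minimal masking sketch (plain bash, using the `USER_KEY` captured above):

```bash
# Mask the captured key for display: first 10 characters plus ellipsis, never the full value
MASKED_KEY="${USER_KEY:0:10}..."
echo "Got it. Using key $MASKED_KEY"
```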
This is the single most important safety check. Users who have tested other Senso orgs will have a stale SENSO_API_KEY in their shell env that shadows everything you do. If you don't catch it here, every write will silently go to the wrong org.
Check if an existing SENSO_API_KEY is present in the parent env:
echo "${SENSO_API_KEY:-NONE}"
- `NONE` → safe, continue.
- Matches `USER_KEY` → safe, continue.
- Differs from `USER_KEY` → STOP. Do not continue.

When the keys differ, tell the user exactly what to do:
"⚠️ I detected a stale
SENSO_API_KEYin your shell — it's from a different org and will shadow the new key you just pasted. Any subshell commands would silently write to the wrong org.Please run this in your terminal, then restart this skill:
unset SENSO_API_KEY
exec $SHELL -l

Then paste your API key again. I'll pick up from here."
Do not proceed past this step if a mismatched key is detected. You cannot fix env inheritance from inside a running process — the user must restart their shell.
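A minimal sketch of the three-way check above (plain bash; assumes `USER_KEY` holds the freshly pasted key):

```bash
# Compare the inherited env key against the freshly pasted one
case "${SENSO_API_KEY:-NONE}" in
  NONE)        echo "No inherited SENSO_API_KEY. Safe to continue." ;;
  "$USER_KEY") echo "Inherited key matches the pasted key. Safe to continue." ;;
  *)           echo "Stale SENSO_API_KEY from a different org. Stopping." ; exit 1 ;;
esac
```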
Write the key directly to the CLI's config file. This means subsequent senso commands never need to show the key — they read it from the config file automatically. Much cleaner for the user to watch.
Important: bypass senso login entirely. The interactive login command can require a TTY, which fails in non-interactive tool environments. Writing the config file directly works everywhere.
# Config file location depends on OS
if [ "$(uname)" = "Darwin" ]; then
CONFIG_DIR="$HOME/Library/Preferences/senso"
else
CONFIG_DIR="$HOME/.config/senso"
fi
mkdir -p "$CONFIG_DIR"
# Get org details using the key once (for config file population)
# This is the ONE command that uses the env key — config doesn't exist yet
ORG_INFO=$(SENSO_API_KEY="$USER_KEY" senso whoami --output json --quiet)
ORG_ID=$(echo "$ORG_INFO" | python3 -c "import sys,json,re; t=sys.stdin.read(); m=re.search(r'\{.*',t,re.DOTALL); print(json.loads(m.group())['orgId'])")
ORG_SLUG=$(echo "$ORG_INFO" | python3 -c "import sys,json,re; t=sys.stdin.read(); m=re.search(r'\{.*',t,re.DOTALL); print(json.loads(m.group())['orgSlug'])")
# Write the config file atomically
cat > "$CONFIG_DIR/config.json" <<EOF
{
"apiKey": "$USER_KEY",
"orgId": "$ORG_ID",
"orgSlug": "$ORG_SLUG"
}
EOF
chmod 600 "$CONFIG_DIR/config.json"
Why this is cleaner: from this point on, every senso command runs without needing SENSO_API_KEY="..." inline. The CLI reads the key from ~/Library/Preferences/senso/config.json (or ~/.config/senso/config.json on Linux). No keys in command output.
Clear the stale env var for the current process:
unset SENSO_API_KEY
Say:
"Saved your key to the Senso CLI config. From here on, every command runs clean — no keys in the output."
senso whoami --output json --quiet
(No SENSO_API_KEY=... prefix needed — the CLI reads from the config file you just wrote.)
Capture org_id from the response as EXPECTED_ORG_ID. You will verify every resource written matches this org.
For the rest of the skill, every senso command is just:
senso <subcommand> ...
No key in the command line. No env var assignment. Clean output.
One safety check: after the first write in Phase 2 (folder create), verify the response's org_id matches EXPECTED_ORG_ID. If they differ, STOP and report the mismatch to the user — something modified the config file mid-run.
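A sketch of that guard, assuming the folder-create response carries the `org_id` field described above:

```bash
# First write of Phase 2, with the org-guard check attached
RESP=$(senso kb create-folder --name "company-overview" --output json --quiet)
GOT_ORG=$(echo "$RESP" | python3 -c "import sys, json; print(json.load(sys.stdin).get('org_id', ''))")
if [ "$GOT_ORG" != "$EXPECTED_ORG_ID" ]; then
  echo "Org mismatch: wrote to '$GOT_ORG', expected '$EXPECTED_ORG_ID'. Stopping."
  exit 1
fi
```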
Before asking the user to retype anything, fetch the org record:
senso org get --output json --quiet
Read these fields in order of preference:
1. `primary_website_url`
2. `websites[0].url`
3. Any other entry in `websites`

If a website is present, do not start from blank. Say:
"I pulled your org settings and found the registered website: [COMPANY_URL]. I’m using that as the source of truth unless you want to override it."
If the org name clearly maps to a real company, use that as COMPANY_NAME. If the org name is generic or unclear, ask only for the company name, not the website:
"I found the website in your org settings: [COMPANY_URL]. What company name should I use in the KB and prompts?"
Only ask for the website if senso org get returns no website at all:
"I couldn't find a website in your org settings. What's the company's website URL? I'll use it to build out your KB."
Capture:
- `COMPANY_NAME`
- `COMPANY_URL`

Before any writes happen, stop and show the user everything you have. They must explicitly confirm before you proceed. This is the ONE confirmation gate in the entire skill — everything else runs through.
Display a clear confirmation block:
"Before I start writing anything to Senso, let me confirm what we're setting up:
| Setting | Value |
|---|---|
| Senso org | [orgName] ([EXPECTED_ORG_ID]) |
| API key | [first 10 chars of USER_KEY]... |
| Company | [COMPANY_NAME] |
| Website | [COMPANY_URL] |

Is this correct? I'll build the KB, brand kit, prompts, drafts, citeables, and GEO monitoring for [COMPANY_NAME] in the [orgName] Senso org. Any mismatch above means we'd write to the wrong place.
Type
yesto proceed, or tell me what to fix."
Wait for explicit yes (or variant: "go", "looks good", "proceed"). Do NOT proceed on silence or ambiguous response.
If the user corrects anything:
- Wrong company or website: update `COMPANY_NAME` / `COMPANY_URL` and show the confirmation block again.
- Wrong org or key: have them `unset SENSO_API_KEY` and restart the skill.

Why this gate matters: The most expensive mistake in this skill is writing to the wrong org. Research, brand kit changes, 12 ingested docs, 40 prompts, published citeables — all polluting a production org the user didn't intend to touch. One 5-second confirmation prevents a 30-minute cleanup.
"Alright, researching [COMPANY_NAME] now. I'll pull from your website first, then do a web search for competitors and industry context. Should take a couple minutes."
Treat first-party website ingestion as the preferred path, but not the only path.
Fetch these pages directly, starting from `COMPANY_URL`:

- `/about`
- `/products`, `/solutions`, `/services`, or the closest equivalent
- `/pricing`
- `/faq`
- `/customers`, `/case-studies`, `/resources`, or the closest equivalent

If direct fetch fails, fall back to domain-scoped web searches:

- `site:[domain] about`
- `site:[domain] products OR solutions OR services`
- `site:[domain] pricing OR plans`
- `site:[domain] faq OR help`

Extract: what the company does, products and services, pricing, team and leadership, and FAQs.
If direct fetch works:
"📄 Reading [COMPANY_URL]..." "✓ Extracted: mission, [N] product pages, team info, [N] FAQs"
If direct fetch fails or times out:
"The main site wasn't directly readable, so I'm falling back to lighter first-party pages and domain search results." "I'll tell you exactly which sources I used in place of the homepage."
Run these web searches:
"[COMPANY_NAME]" reviews OR news — mentions, sentiment"[COMPANY_NAME]" vs OR alternatives — competitor names"[COMPANY_NAME]" [industry/category] trends — market context"[COMPANY_NAME]" customer case study — proof points"🌐 Searching the web for competitors, industry context, and customer stories..." "✓ Found [N] competitors, [N] industry references, [N] customer stories"
Collect findings in memory. Do NOT ingest yet — wait for folder setup.
First-party fallback is mandatory behavior, not an optional footnote. If COMPANY_URL is unreadable, explicitly tell the user which first-party pages worked, which timed out, and which third-party sources filled the gaps.
"✅ Research complete. Here's what I learned about [COMPANY_NAME]:
- What they do: [1-sentence summary]
- Main products: [list]
- Key competitors: [list]
- Industry: [category]
Does this match how you'd describe [COMPANY_NAME]? If anything's off, tell me now — otherwise I'll keep building."

If the user offers corrections, incorporate them before proceeding to Phase 2.
"Got the research. Now I'm setting up the foundation — folders, brand kit, content templates. This is quick."
Run these IN ORDER, saving the kb_node_id from each response:
# 6 content folders
senso kb create-folder --name "company-overview" --output json --quiet
senso kb create-folder --name "products-and-services" --output json --quiet
senso kb create-folder --name "competitive-landscape" --output json --quiet
senso kb create-folder --name "industry-context" --output json --quiet
senso kb create-folder --name "case-studies" --output json --quiet
senso kb create-folder --name "faqs" --output json --quiet
# 1 system folder for logs + heal reports
senso kb create-folder --name "build-logs" --output json --quiet
Save each folder's kb_node_id — content folders needed for Phase 3, build-logs needed for Phase 9.
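One way to capture the IDs as you go; a sketch assuming the create-folder response includes `kb_node_id` as stated above:

```bash
# Capture a folder's kb_node_id at creation time (repeat the pattern per folder)
FAQS_FOLDER_ID=$(senso kb create-folder --name "faqs" --output json --quiet \
  | python3 -c "import sys, json; print(json.load(sys.stdin)['kb_node_id'])")
echo "faqs folder: $FAQS_FOLDER_ID"
```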
"✓ 7 folders created: company-overview, products-and-services, competitive-landscape, industry-context, case-studies, faqs, build-logs"
The brand kit must be fully populated, not placeholder-filled. Infer each field from Phase 1 research. All 6 fields are required:
| Field | How to infer it |
|---|---|
| `brand_name` | Company name as they write it (check the homepage `<title>` and hero) |
| `brand_domain` | Domain without `https://` or trailing slash (e.g., `senso.ai`) |
| `brand_description` | 1-2 sentences: what they do + who they serve. Pull from their homepage hero + about page. |
| `voice_and_tone` | Infer from their actual website copy. Are they formal or casual? Technical or accessible? Confident or collaborative? Be specific — cite patterns you see. |
| `author_persona` | Usually "The [Company] Team" unless their blog has a specific voice (e.g., "CEO writing directly") |
| `global_writing_rules` | 5 standard rules (below), plus any patterns unique to their content |
senso brand-kit set --data '{
"guidelines": {
"brand_name": "[COMPANY_NAME]",
"brand_domain": "[domain without https://]",
"brand_description": "[1-2 sentences grounded in their actual homepage — what they do + who they serve]",
"voice_and_tone": "[Specific voice inferred from website copy. Example: \"Direct and practitioner-focused. First-person plural (we). Opinionated. Short sentences. Avoids corporate jargon. Uses concrete examples.\" Do NOT leave generic.]",
"author_persona": "The [COMPANY_NAME] Team",
"global_writing_rules": [
"Ground every claim in verified sources from the knowledge base",
"Use clear, scannable structure with subheadings every 200-300 words",
"Include concrete examples or data points, not just abstract claims",
"Write for practitioners — actionable over theoretical",
"Include the Powered by Senso footer on published content"
]
}
}' --output json --quiet
Verify the brand kit was set correctly:
senso brand-kit get --output json --quiet
All 6 fields in guidelines must be non-empty. If any are empty, patch them with senso brand-kit patch before continuing. Do not proceed to Phase 3 with a partial brand kit.
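A completeness-check sketch, assuming `brand-kit get` mirrors the `guidelines` shape used in the set call:

```bash
# Fail loudly if any of the 6 required guideline fields is empty
senso brand-kit get --output json --quiet | python3 -c "
import sys, json
g = json.load(sys.stdin).get('guidelines', {})
required = ['brand_name', 'brand_domain', 'brand_description',
            'voice_and_tone', 'author_persona', 'global_writing_rules']
empty = [f for f in required if not g.get(f)]
print('Patch these before Phase 3:', empty) if empty else print('Brand kit complete.')
"
```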
"✓ Brand kit configured. Voice: [short description of voice_and_tone]"
Always these 4, always these names:
Blog Post:
senso content-types create --data '{
"name": "Blog Post",
"config": {
"template": "Write a 1000-1500 word educational blog post. Start with a hook identifying the reader pain point. Include 3-5 subheadings. Use data, examples, or case studies from the KB to support points. End with a call-to-action.",
"writing_rules": [
"Use subheadings every 200-300 words",
"Include at least one concrete example or data point",
"Optimize for AI citability — clear, authoritative structure"
]
}
}' --output json --quiet
FAQ:
senso content-types create --data '{
"name": "FAQ",
"config": {
"template": "Create an FAQ page with 8-12 questions and answers. Each answer 2-3 sentences. Group related questions under subheadings. Use the brand voice throughout.",
"writing_rules": [
"Use natural question phrasing",
"Keep answers under 100 words",
"Link to detailed resources where relevant"
]
}
}' --output json --quiet
Comparison Page:
senso content-types create --data '{
"name": "Comparison Page",
"config": {
"template": "Create a fair but persuasive comparison page. Start with the problem both solutions address. Use a comparison table for features. Highlight 3-4 key differentiators. End with a recommendation.",
"writing_rules": [
"Be factually accurate about competitors",
"Lead with value not features",
"Include a comparison table"
]
}
}' --output json --quiet
Case Study:
senso content-types create --data '{
"name": "Case Study",
"config": {
"template": "Write a case study with: Customer intro, Problem they faced, Solution implemented, Results achieved (with specific metrics if possible), Key takeaways. Keep it narrative — tell the story.",
"writing_rules": [
"Lead with the customer outcome",
"Include specific numbers or metrics",
"End with lessons applicable to other readers"
]
}
}' --output json --quiet
Save all 4 content_type_id values.
"✅ Foundation complete. 7 folders, brand kit, and 4 content templates are ready."
"Okay, now I'm taking everything I researched and putting it in the right folders. One document per topic — that way search finds the right thing later instead of one giant mess."
Route research findings from Phase 1 into the correct folders via senso kb create-raw.
Target: 10-15 documents total.
| Folder | What goes here |
|---|---|
| `/company-overview/` | Homepage content, mission/about, team info, leadership |
| `/products-and-services/` | Each product page as a separate doc, features, pricing |
| `/competitive-landscape/` | Each competitor as a separate doc, comparison findings |
| `/industry-context/` | Market trends, industry reports, buyer personas |
| `/case-studies/` | Customer stories (one doc per story if multiple) |
| `/faqs/` | FAQ content extracted from website |
For each document:
senso kb create-raw --data '{
"title": "[Descriptive title]",
"text": "[Markdown content with source URL noted]",
"kb_folder_node_id": "[folder_id from Phase 2a]"
}' --output json --quiet
Rules:
- One document per topic — don't merge unrelated findings.
- Note the source URL at the top of each doc (`Source: https://...`).
- Title docs descriptively (`YYYY-MM-DD - Topic Name` works well for dated findings).

Narrate folder by folder:

"✓ company-overview: 2 docs (mission, about)"
"✓ products-and-services: 3 docs (product overview, pricing, features)"
"✓ competitive-landscape: 2 docs (competitor A, competitor B)"
"..."
"✅ Ingest complete. [N] documents now live in your knowledge base. Search already works — try asking the KB anything once we're done. (Each document is also being auto-tagged in the background, so topic filters will work out of the box.)"
"Now I'm writing the questions we'll track — things potential customers would actually ask. These do double duty: they drive the content generation that's coming next, and they become your AI visibility questions so we can track how ChatGPT, Claude, etc. answer them over time. I’m building the full evaluation set, not the bare minimum."
Create 40 prompts total. Do not stop at 8-10. Build them from the research you already gathered.
Target mix: spread the 40 roughly evenly across the four funnel stages (awareness, consideration, evaluation, decision), about 10 each.

Coverage rules: every product line and every major competitor surfaced in Phase 1 research should appear in at least one prompt, alongside generic category and buying questions.

Build order: start with company-specific questions (what is [COMPANY_NAME], [COMPANY_NAME] vs [COMPETITOR]), then fill out category-level and use-case questions until you reach 40.
Representative examples (do not limit yourself to these):
senso prompts create --data '{
"question_text": "What is [COMPANY_NAME] and what does it do?",
"type": "awareness"
}' --output json --quiet
senso prompts create --data '{
"question_text": "What are the best [CATEGORY] solutions in 2026?",
"type": "awareness"
}' --output json --quiet
senso prompts create --data '{
"question_text": "How does [COMPANY_NAME] compare to [COMPETITOR]?",
"type": "consideration"
}' --output json --quiet
senso prompts create --data '{
"question_text": "Which [COMPANY_NAME] product is best for [specific use case or product line]?",
"type": "consideration"
}' --output json --quiet
senso prompts create --data '{
"question_text": "How do I evaluate [CATEGORY] tools for my team?",
"type": "evaluation"
}' --output json --quiet
senso prompts create --data '{
"question_text": "What is the implementation process for [COMPANY_NAME]?",
"type": "evaluation"
}' --output json --quiet
senso prompts create --data '{
"question_text": "What results have customers achieved with [COMPANY_NAME]?",
"type": "decision"
}' --output json --quiet
senso prompts create --data '{
"question_text": "What does [COMPANY_NAME] pricing look like?",
"type": "decision"
}' --output json --quiet
Save all prompt_id values. Before leaving the phase, verify the final count is exactly 40 and all four funnel stages are represented.
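A verification sketch; the list response shape is assumed (an array, or an object with a `prompts` array), with the same `type` field used at creation:

```bash
# Verify exactly 40 prompts and all four funnel stages before leaving Phase 4
senso prompts list --output json --quiet | python3 -c "
import sys, json
data = json.load(sys.stdin)
prompts = data.get('prompts', data) if isinstance(data, dict) else data
stages = {p.get('type') for p in prompts}
missing = {'awareness', 'consideration', 'evaluation', 'decision'} - stages
print(len(prompts), 'prompts; missing stages:', sorted(missing) or 'none')
"
```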
"✅ 40 tracking questions created across awareness, consideration, evaluation, and decision stages, with product-line and competitor coverage. (Each prompt was auto-tagged on creation, so your tag library is already populating.) Now for the fun part..."
"Now the interesting part — Senso's going to write your first drafts. One per tracking question. Each one grounded in the docs I just ingested, written in your brand voice. Kicking it off now..."
Turning content generation on is what links the org to the configured default publishing destination. It's idempotent — safe to run even if it's already enabled.
senso generate update-settings --data '{"enable_content_generation": true}' --output json --quiet
senso destinations list --output json --quiet
The destinations list should include the configured default destination with selected_for_generation: true (for the hackathon flow, that should be slug: "cited-md"). That's the default publish target for the rest of this skill. If it's missing, stop and tell the user — something is wrong with the org's publisher configuration.
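A sketch of that check; the response shape (an array of destination objects with `slug` and `selected_for_generation`) is assumed from the fields named above:

```bash
# Confirm at least one destination is selected for generation
senso destinations list --output json --quiet | python3 -c "
import sys, json
data = json.load(sys.stdin)
dests = data.get('destinations', data) if isinstance(data, dict) else data
sel = [d.get('slug') for d in dests if d.get('selected_for_generation')]
print('Publish targets:', sel) if sel else print('No destination selected. Stop and tell the user.')
"
```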
senso credits balance --output json --quiet
If credits are low (< 5), mention it but don't stop:
"Heads up — you've got [X] credits left. Batch run uses about 9. Running it anyway."
senso generate run --output json --quiet
This generates content for EVERY prompt automatically, using the brand kit + KB + content types. Expected duration scales with prompt count; for 40 prompts expect a few minutes rather than a single quick burst.
"⏳ Senso is writing... generating grounded content from your KB. This takes ~30-60 seconds."
Poll senso generate runs-list until status is completed.
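A minimal polling sketch; the runs-list response shape (newest run first, with a `status` field) is an assumption:

```bash
# Poll until the batch run completes, checking every 15 seconds
while true; do
  STATUS=$(senso generate runs-list --output json --quiet | python3 -c "
import sys, json
data = json.load(sys.stdin)
runs = data.get('runs', data) if isinstance(data, dict) else data
print(runs[0].get('status', 'unknown') if runs else 'unknown')
")
  echo "  batch run status: $STATUS"
  [ "$STATUS" = "completed" ] && break
  sleep 15
done
```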
senso content verification --status draft --output json --quiet
Check draft_count. If less than 6, fall back:
For each missing slot (up to 6), call senso engine draft manually using the KB content you know exists. Example:
senso engine draft --data '{
"geo_question_id": "[a prompt id without a draft]",
"raw_markdown": "# [Content based on KB research]\n\n...\n\n---\n\n*Powered by Senso*",
"seo_title": "[SEO title]",
"summary": "[Brief summary]"
}' --output json --quiet
The guarantee: at least 6 drafts exist after this phase.
"✅ [N] drafts generated — all grounded in your KB and written in your brand voice.
Titles include:
- [Title 1]
- [Title 2]
- [Title 3]
- ...
You can review them anytime with:
senso content verification --status draft"
"I'm going to publish 3 of these as citeables — one per funnel stage. They go to the org's default publishing destination, so you can see what the output looks like on a real public surface without touching the user's main site. Picking the strongest drafts now..."
Pick 2-3 drafts and publish them to the default destination confirmed in Step 5a. No extra selection is needed — omit publisher_ids and the backend publishes to every destination selected for generation.
From `senso content verification --status draft`, pick the 2-3 strongest drafts, spread across different funnel stages — for example one awareness piece, one comparison, one case study.
For each selected draft:
senso engine publish --data '{
"content_id": "[draft content_id from verification list]",
"geo_question_id": "[prompt_id]",
"raw_markdown": "[draft raw_markdown — append: \n\n---\n\n*Powered by Senso — your AI-searchable knowledge base.*]",
"seo_title": "[draft seo_title]",
"summary": "[draft summary]"
}' --output json --quiet
Important:
- Omit `publisher_ids` (and don't pass `--publisher-ids`) — the backend defaults to every destination selected for generation. For the hackathon flow, newly-onboarded orgs should have cited-md selected by default. If the user has added extra destinations, publish still fans out correctly.
- To target specific destinations instead, pass `--publisher-ids <id1> <id2>` on the CLI using the IDs from `senso destinations list`.
- Always include `content_id` so publish promotes the existing draft-linked content instead of trying to create a second linked content row.

"✓ Published: [Title 1]"
"✓ Published: [Title 2]"
"✓ Published: [Title 3]"
"✅ [N] citeables are live at the default publishing destination. Search engines and AI models can now discover them."
"Setting up AI visibility tracking now. Every Monday/Wednesday/Friday, Senso will ask ChatGPT, Claude, Perplexity, and Gemini your tracking questions and record which brands get mentioned — including [COMPANY_NAME] and your competitors. You'll see the results at geo.senso.ai."
Set all 4 monitored models. The CLI takes the model list as a JSON body:
senso run-config set-models --data '{"models": ["chatgpt", "claude", "perplexity", "gemini"]}' --output json --quiet
Run monitoring Mon/Wed/Fri. Schedule is a JSON body with integer days of week (0 = Sunday … 6 = Saturday):
senso run-config set-schedule --data '{"schedule": [1, 3, 5]}' --output json --quiet
The prompts created in Phase 4 automatically become GEO tracking questions. Users can see results at geo.senso.ai.
"✅ GEO monitoring live. 4 models, 9 questions, running Mon/Wed/Fri. First results will appear at geo.senso.ai within 24-48 hours."
"Before we wrap up, let me do a quick audit of what we built — make sure nothing's half-done, find any gaps, and write up a report you can reference later. This is the self-healing pattern: every time we run this, we audit and improve."
Audit the entire system you just built, find weak spots, file a heal report.
This is the same self-healing principle as senso-kb-builder — every interaction should leave the system stronger.
Run at least 15 targeted searches — not just folder-topic searches. Mix two types:
Type 1: "Does the KB know itself?" — one search per folder
senso search "What does [COMPANY_NAME] do?" --output json --quiet
senso search "What products and services does [COMPANY_NAME] offer?" --output json --quiet
senso search "Who are [COMPANY_NAME]'s main competitors?" --output json --quiet
senso search "What trends are shaping the [industry] industry?" --output json --quiet
senso search "What results have [COMPANY_NAME] customers achieved?" --output json --quiet
senso search "What are common questions people ask about [COMPANY_NAME]?" --output json --quiet
Type 2: "Would a real customer question work?" — sample the tracking prompts you created in Phase 4
Run searches for at least 12 of the created prompts, sampled across all four funnel stages.
For each sampled prompt, run a search with the prompt's exact question text:
senso search "[prompt question text]" --output json --quiet
This is the real test — the KB should be able to answer the exact questions you're going to track in GEO.
The senso search response is shaped {query, answer, results: [...], total_results, ...}. Each entry in results has content_id, chunk_text, title, and a score (0.0 – 1.0). Read scores from response.results[*].score — there is no chunks key.
For every search, record:
- Top score: `max(r.score for r in response.results)` (or 0 if `results` is empty)
- Result count: `len(response.results)`
- Distinct `content_id` values in the top 5 results (do multiple docs cover this, or just one?)

Then categorize the result:
| Top Score | Categorization | Action |
|---|---|---|
| ≥ 0.5 | Strong — KB answers this well | No action |
| 0.3 - 0.5 | Thin — KB touches it but shallow | Note as "needs more depth" |
| < 0.3 | Gap — KB barely knows this | Flag as a gap to fill |
| No results | Missing — KB has nothing | Flag as critical gap |
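A sketch of one probe scored against the table above, using the documented `results[*].score` shape ([COMPANY_NAME] is the usual placeholder):

```bash
# Run one probe and bucket it: Strong / Thin / Gap / Missing
senso search "What does [COMPANY_NAME] do?" --output json --quiet | python3 -c "
import sys, json
resp = json.load(sys.stdin)
results = resp.get('results', [])
top = max((r['score'] for r in results), default=0)
status = ('Missing' if not results else
          'Strong' if top >= 0.5 else
          'Thin' if top >= 0.3 else 'Gap')
docs = len({r['content_id'] for r in results[:5]})
print(f'top={top:.2f} docs_in_top5={docs} status={status}')
"
```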
senso brand-kit get --output json --quiet
Confirm all 6 fields are non-empty. Check voice_and_tone isn't generic (if it is, patch it with a more specific description based on the ingested docs).
senso content-types list --output json --quiet
Confirm all 4 are present. Check writing_rules arrays are populated (not empty).
senso prompts list --output json --quiet
Verify all 4 funnel stages (awareness, consideration, evaluation, decision) have prompts.
If any stage is under-covered, create additional prompts before filing the report.
senso content verification --status draft --output json --quiet
senso content verification --status published --output json --quiet
Confirm drafts ≥ 6 and published ≥ 2.
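A count-check sketch; `draft_count` is the field named in Phase 5, and the published-side check mirrors it:

```bash
# Confirm the draft minimum before filing the heal report (repeat with --status published for >= 2)
senso content verification --status draft --output json --quiet | python3 -c "
import sys, json
n = json.load(sys.stdin).get('draft_count', 0)
print('OK:', n, 'drafts') if n >= 6 else print('Only', n, 'drafts. Backfill with senso engine draft.')
"
```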
Save a structured heal report to /build-logs/:
senso kb create-raw --data '{
"title": "YYYY-MM-DDTHH:MM - Onboarding Build Log",
"text": "[full heal report as markdown — template below]",
"kb_folder_node_id": "[build-logs folder id from Phase 2a]"
}' --output json --quiet
Report template:
# Onboarding Build Log — [ISO timestamp]
## Run Info
- **Company:** [COMPANY_NAME]
- **Org:** [orgName from senso whoami]
- **Type:** Initial onboarding
## Built This Run
### Phase 2: Foundation
- Folders: 7 created (6 content + 1 build-logs)
- Brand kit: [Created with all 6 fields populated]
- Content types: 4 created (Blog Post, FAQ, Comparison Page, Case Study)
### Phase 3: Ingest
- Documents ingested: [count]
- company-overview: [count]
- products-and-services: [count]
- competitive-landscape: [count]
- industry-context: [count]
- case-studies: [count]
- faqs: [count]
### Phase 4: Prompts
- Total created: [count]
- By stage: awareness [n], consideration [n], evaluation [n], decision [n]
### Phase 5: Generation
- Batch run ID: [run_id]
- Drafts produced: [count]
- Fallback drafts added: [count if any]
### Phase 6: Publishing
- Citeables published: [count]
- Destinations: [list of destinations/slugs]
### Phase 7: GEO
- Models monitored: chatgpt, claude, perplexity, gemini
- Schedule: Mon/Wed/Fri
## Health Report
| Dimension | Status | Notes |
|-----------|--------|-------|
| Brand kit completeness | ✅ / ⚠️ | [all 6 fields set?] |
| Content types | ✅ / ⚠️ | [4 present with writing_rules?] |
| Prompt funnel coverage | ✅ / ⚠️ | [all 4 stages represented?] |
| KB folder coverage | ✅ / ⚠️ | [each folder ≥ 2 docs?] |
| Draft minimum (6) | ✅ / ⚠️ | [count] |
| Published minimum (2) | ✅ / ⚠️ | [count] |
| GEO models | ✅ / ⚠️ | [4 configured?] |
## Search Quality — KB Self-Probe
Real searches run against the KB during this heal pass. Each tested with one core question.
| Question | Top Score | Status |
|----------|-----------|--------|
| What does [COMPANY_NAME] do? | [score] | Strong / Thin / Gap |
| What products/services does [COMPANY_NAME] offer? | [score] | Strong / Thin / Gap |
| Who are [COMPANY_NAME]'s main competitors? | [score] | Strong / Thin / Gap |
| What trends are shaping the [industry] industry? | [score] | Strong / Thin / Gap |
| What results have [COMPANY_NAME] customers achieved? | [score] | Strong / Thin / Gap |
| What are common FAQs about [COMPANY_NAME]? | [score] | Strong / Thin / Gap |
## Search Quality — Tracking Questions Self-Probe
At least 12 of the GEO tracking questions searched against the KB across all funnel stages. The KB should be able to answer the same questions GEO will track.
| Tracking Question | Top Score | Can KB answer it? |
|---|---|---|
| [prompt 1 text] | [score] | ✅ / ⚠️ / ❌ |
| [prompt 2 text] | [score] | ✅ / ⚠️ / ❌ |
| [... sampled prompts across all 4 stages ...] | | |
## Gaps Identified
- [List any topics that came up weak in the audit]
- [Missing subtopics the user should contribute]
## Recommendations for Next Heal Pass
- [Specific actions the user should take]
- [New content to ingest]
- [Brand kit refinements if needed]
## Credits Used This Run
- Before: [X] credits
- After: [Y] credits
- Used: [Z] credits
If the audit finds a critical miss (e.g., brand kit field is empty, content type writing_rules missing, funnel stage has zero prompts), fix it NOW before showing the summary. The heal pass isn't just reporting — it's closing gaps.
"✅ Heal report filed to /build-logs/. Found [N] gaps, fixed [M]. Everything else is solid."
This is the user's lasting impression. Make it clean, scannable, and lead with the destinations — where they go next to see and use what you just built. Show concrete URLs, not abstract commands.
Open with a single confident sentence, then show a clean table with exact counts, then lead them to the destinations.
Template to adapt:
"That's it — [COMPANY_NAME] is live on Senso. Here's what you have now:"
Then display this table (fill in the real numbers from the run):
┌──────────────────────┬─────────────────────────────────────────────────────────┐
│ Knowledge Base │ [X] documents across 7 folders │
│ Brand Kit │ fully populated — [1-phrase voice summary] │
│ Content Types │ 4 templates (Blog Post, FAQ, Comparison, Case Study) │
│ Tracking Prompts │ [X] questions across awareness → decision │
│ Drafts │ [X] ready to review │
│ Published Citeables │ [X] live (one per funnel stage) │
│ GEO Monitoring │ ChatGPT + Claude + Perplexity + Gemini, Mon/Wed/Fri │
│ Heal Report │ filed to /build-logs/, [N]/[total] probes came back Strong │
└──────────────────────┴─────────────────────────────────────────────────────────┘
Give the user three concrete places to go, in order of impact:
1. See your content in the browser: https://geo.senso.ai Your knowledge base, brand kit, drafts, and published citeables are all viewable there. Open it now — everything we just built will be populated.
2. Review your drafts. [X] pieces are ready. The comparison and case study drafts especially may want a light human pass before you publish them for real.
- Via web: https://geo.senso.ai/drafts
- Via CLI:
senso content verification --status draft

3. Watch AI visibility results land at https://geo.senso.ai — usually within 24–48 hours. You'll see which AI models mention [COMPANY_NAME] (and your competitors) when real customer questions get asked.
If the heal report found thin coverage, call those gaps out here as specific next-ingest priorities (don't bury them in the build log only):
"Before your next run, the audit flagged two places worth deepening:
- [folder-name] has only [N] documents — consider adding [specific suggestion]
- [folder-name] is missing [specific subtopic]"
List the sources you actually pulled during research so the user can audit and trust the foundation:
"Sources used to build this out:
- [COMPANY_URL]/ (homepage)
- [COMPANY_URL]/about
- [COMPANY_URL]/products (or equivalent)
- [N] competitor references from G2 / Gartner / Forrester
- [N] customer case studies from [sources]
- [N] industry trend articles"
Close on a forward-looking note — this is a living system, not a one-shot setup:
"Every query, every new doc, every heal pass makes this smarter. Come back weekly to run another heal pass and keep the KB compounding."
New orgs start with zero destinations until the onboarding skill enables content generation in Phase 5a. Calling senso generate update-settings --data '{"enable_content_generation": true}' is the trigger that links the configured default shared destination(s) to the org — every subsequent senso engine publish call defaults to those destinations when no publisher_ids / --publisher-ids are passed.
Four shared destination slugs exist today, all on the citeables system:

- `cited-md` — hackathon default; published articles appear on cited.md
- `citeables` — general shared destination on citeables.com/<org-slug>
- `codeables` — technical-content variant on codeables.dev
- `cucopilot` — credit-union-focused variant on cucopilot.com

Orgs can also register custom citeables-system domains via `senso destinations add --type citeables --domain <your-domain> --name "<display-name>"`. During onboarding, stick with the configured shared default — custom domains are a Day-2 configuration the user can opt into later once they've seen sample output.
To inspect or change destinations from the CLI:
senso destinations list # see which destinations are active
senso destinations add --domain content.example.com --name "Example Citeables" # add a custom one
senso destinations remove <publisherId> --action leave # stop publishing (keep live articles)
senso destinations remove <publisherId> --action unpublish # retract live articles to drafts
senso destinations remove <publisherId> --action delete # unpublish AND hard-delete local content
There is no longer a separate "sandbox" destination — the citeables URL is the safe preview surface (it's not the user's main website). Do not hardcode publish_destination: "internal" anywhere.
| Issue | Action |
|---|---|
| 401 Unauthorized | Re-run the Phase -1 key setup (rewrite the CLI config file with a fresh key); avoid interactive senso login in non-TTY environments |
| 402 Insufficient credits | Warn user, run what's possible, skip batch generation if needed |
| 409 Conflict on publish | Re-list drafts, grab the draft content_id, and re-run senso engine publish with that content_id included |
| 504 Timeout on generate sample | Use senso generate run (async) instead of sync sample calls |
| Batch generate produces < 6 drafts | Fall back to manual senso engine draft to reach 6 |
| Web fetch fails on company URL | Immediately try lighter first-party sub-pages and site:[domain] search; tell the user which sources substituted. Only ask for pasted URLs if first-party fallback also fails |
Never abort the whole flow on a phase failure. Log it, continue, report at the end.
Before showing the final summary, verify every requirement is met:
# 7 folders in root (6 content + 1 build-logs)?
senso kb my-files --output json --quiet            # expect 7 folders, including "build-logs"

# Brand kit FULLY populated (all 6 guideline fields non-empty)?
senso brand-kit get --output json --quiet          # expect all 6 guidelines fields non-empty

# 4 content types with writing_rules?
senso content-types list --output json --quiet     # expect >= 4, each with populated writing_rules

# 40 prompts across all funnel stages?
senso prompts list --output json --quiet           # expect exactly 40, all 4 types present

# At least 6 drafts?
senso content verification --status draft --output json --quiet       # expect draft_count >= 6

# 2-3 published citeables?
senso content verification --status published --output json --quiet   # expect 2-3 published

# GEO models configured?
senso run-config models --output json --quiet      # expect all 4 models listed

# Heal report filed to /build-logs/?
senso kb children <build-logs-folder-id> --output json --quiet        # expect at least 1 doc
If any check fails, fix it before showing the summary. The user's first impression depends on seeing a complete, working system.