Build or improve multimodal meeting copilots that watch the screen and listen to audio, using Railtracks for agent workflows and Assistant UI for live chat.
Install:

`npx @senso-ai/shipables install NoahLundSyrdal/multimodal-meeting-copilot`

Use this skill when building an AI copilot for presentations, meetings, or screen shares where the agent must understand both on-screen content and spoken context.
This skill is especially useful when the product needs to:
Split the system into two layers:
Keep the UI fast and interactive. Push heavier orchestration into Railtracks.
Use Railtracks for the workflows that need structure, validation, and memory:
Do not use Railtracks as a thin wrapper around a single model call if the real value is in orchestration.
Prefer a flow shape like:
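As a rough illustration of why orchestration beats a thin wrapper around one model call, the sketch below shows a flow with discrete steps, a validation gate, and shared memory between steps. It deliberately does not use the Railtracks API; the step and memory shapes (`Step`, `Memory`, `runFlow`) are hypothetical stand-ins.

```typescript
// Hypothetical flow sketch (NOT the Railtracks API): discrete steps with a
// validation gate and a shared memory object carried between them.

type Memory = { transcript: string[]; decisions: string[] };

type Step = (input: string, memory: Memory) => string;

// Step: record every utterance into memory.
const ingest: Step = (input, memory) => {
  memory.transcript.push(input);
  return input;
};

// Step: stand-in for a model call that extracts decisions from an utterance.
const extractDecision: Step = (input, memory) => {
  if (input.toLowerCase().includes("decided")) memory.decisions.push(input);
  return input;
};

// Run each input through the step chain, skipping invalid (empty) inputs.
function runFlow(steps: Step[], inputs: string[], memory: Memory): Memory {
  for (const input of inputs) {
    if (input.trim().length === 0) continue; // validation gate
    steps.reduce((acc, step) => step(acc, memory), input);
  }
  return memory;
}
```

The structure, validation, and memory live in the flow itself; a single model call could not accumulate decisions across a whole meeting this way.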
Use Assistant UI for the live copilot chat surface.
Keep the chat UX separate from the heavier agent workflows:
If possible, support both:
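One way to keep the chat surface thin is an adapter in the shape of assistant-ui's local-runtime `ChatModelAdapter` (a `run({ messages })` method returning `{ content: [{ type: "text", text }] }` — verify the exact shape against the current `@assistant-ui/react` docs before relying on it). The adapter does no orchestration itself; it only forwards the conversation to the agent backend, injected here as a `callAgent` function so the sketch stays self-contained.

```typescript
// Chat-surface sketch: the adapter forwards messages to the agent backend
// and wraps the reply; all heavy agent work lives behind `callAgent`.
// Types are simplified; check @assistant-ui/react for the real interface.

type Message = { role: "user" | "assistant"; content: string };

type RunResult = { content: { type: "text"; text: string }[] };

function makeChatAdapter(callAgent: (messages: Message[]) => Promise<string>) {
  return {
    async run({ messages }: { messages: Message[] }): Promise<RunResult> {
      const text = await callAgent(messages); // orchestration happens here
      return { content: [{ type: "text", text }] };
    },
  };
}
```

Because the adapter only forwards and wraps, the same backend can serve both the live chat surface and any other entry point without duplicating workflow logic.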
For meeting copilots, treat screen context and transcript context as equal inputs.
When building summaries, combine:
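Treating the two context streams as equal inputs can be as simple as merging them into one time-ordered record before summarization. A minimal sketch, with illustrative types (`ContextItem` and both function names are assumptions, not from the source):

```typescript
// Merge screen context and transcript context into a single
// time-ordered stream that a summarizer would consume.

type ContextItem = { source: "screen" | "transcript"; ts: number; text: string };

function mergeContext(screen: ContextItem[], transcript: ContextItem[]): ContextItem[] {
  // Neither stream outranks the other; ordering is purely by timestamp.
  return [...screen, ...transcript].sort((a, b) => a.ts - b.ts);
}

// Flatten the merged stream into summarizer input, keeping source labels
// so the model can attribute each line to the screen or the speaker.
function toSummaryInput(items: ContextItem[]): string {
  return items.map((i) => `${i.ts}s [${i.source}] ${i.text}`).join("\n");
}
```

Keeping the source label on each line lets the summary distinguish what was shown from what was said, instead of collapsing both into undifferentiated text.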
The strongest positioning for this class of product is not “AI note taker.”
Position it as:
For a concrete implementation based on CogniView, read references/cogniview-architecture.md.
Demo: CogniView demo video
