Build self-improving runtime security for autonomous AI agents — intercept actions, dispatch adversarial investigators, generate evolving scoring rules, and enforce deterministic block decisions with no LLM in the enforcement path.
Evaluates, scores, and systematically improves prompts in the codebase: identifies weak prompts, generates test cases, scores outputs, and proposes optimized versions. Use when the user says "improve this prompt", "why is the AI doing X", "eval my prompts", or "optimize the agent".