"The checklist we wish existed when we started"
The AI agent product market is moving faster than anyone's ability to evaluate it properly. New personas, skills, and guides get published every week. Some are genuinely valuable — built by people who've run these systems in production and know what breaks. Some are prompt wrappers with a price tag. Telling the difference isn't obvious if you're new to the space, and even experienced operators miss warning signs.
This guide exists because we've bought things that didn't deliver, built things from scratch that should have been available, and developed a sharp eye for the difference between products that were tested in the real world and products that were generated from a prompt and packaged. We're publishing it free because trust is built by giving away real value — not by optimizing the funnel.
The "Is this just a prompt?" litmus test is the single most useful question to ask about any AI agent product. A good persona or skill has specific, non-obvious configuration: prompts that encode hard-won production knowledge, edge cases that only emerge from real use, integration details that reflect actual setup experience. A bad product is a prompt you could have written in 10 minutes, wrapped in a landing page. The test gives you concrete signals for telling which one you're looking at.
Security is the most underappreciated dimension of this market. A skill that runs automated actions on your behalf — sending emails, browsing the web, executing code — is a significant trust surface. Malicious or careless skill design can exfiltrate data, execute unintended actions, or expose your system to prompt injection from external sources. The security red flags guide covers the specific patterns to look for: overly broad permission requests, instructions to ignore safety guidelines, obfuscated code in automation scripts, and configuration that bypasses normal safeguards. SkillScan, introduced in this version of the guide, automates that security analysis of skill files.
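To make those red flags concrete, here is a minimal sketch of what a first-pass automated screen can look like. The pattern names and regular expressions are illustrative assumptions, not SkillScan's actual rule set — a real scanner needs far more than keyword matching — but it shows how the four red-flag categories above translate into checkable signals:

```python
import re

# Hypothetical patterns for the four red-flag categories in the guide.
# These are illustrative only -- not SkillScan's real detection rules.
RED_FLAG_PATTERNS = {
    "broad permissions": re.compile(r"\b(allow[-_ ]?all|full[-_ ]access)\b", re.I),
    "safety bypass": re.compile(
        r"ignore (all )?(previous|safety) (instructions|guidelines)", re.I
    ),
    "obfuscated exec": re.compile(
        r"(eval|exec)\s*\(\s*(base64|codecs|bytes\.fromhex)", re.I
    ),
    "guardrail bypass": re.compile(
        r"(--no-sandbox|disable[-_ ]safe(?:ty|guards?))", re.I
    ),
}

def scan_skill_text(text: str) -> list[str]:
    """Return the names of any red-flag categories matched in a skill file."""
    return [name for name, pat in RED_FLAG_PATTERNS.items() if pat.search(text)]
```

Treat any hit as a reason to read the skill file line by line before installing, not as proof of malice — and treat a clean result as "nothing obvious", never as a guarantee.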
This guide is product-agnostic. It applies to products from AgentOrchard and from any other vendor in this market. We want buyers to be more sophisticated because a more sophisticated market produces better products. The guide that makes you a better buyer also makes us build better products — that's the alignment we're optimizing for.
Version 1 · March 2026 — Initial release. Written from direct experience building and evaluating AI agent products since 2025. Security red flags guide reflects observed patterns in the wild. SkillScan integration introduced in this version. Updated as the market evolves.
Get OpenClaw configured and your first agent running. Start here after reading the buying guide.
View product →
Host hardening and OpenClaw security configuration. Apply what you learned in the buying guide.
View product →
Build your own OpenClaw configuration from scratch. The technically-confident path to sovereignty.
View product →