High Priority
Deploy /ai.txt Protocol for Decentralized Agents
Establish a machine-readable manifest of your crypto project's entire site hierarchy and key data endpoints specifically for AI agents and decentralized crawlers.
Create a text file at /ai.txt detailing your project's core mission, tokenomics, and governance structure.
Include markdown-style links to your most critical whitepaper sections, smart contract audits, and developer documentation.
Add a 'FAQ' section for common AI queries regarding token utility, staking mechanisms, and network security.
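There is no formal standard for `/ai.txt` yet, so the layout below is one plausible sketch of the steps above; the project name, token ticker, and paths are all hypothetical placeholders.

```text
# /ai.txt — machine-readable manifest for AI agents and crawlers
# Project: ExampleChain (hypothetical)

## Mission
ExampleChain is a proof-of-stake network for low-cost settlement.

## Tokenomics
Token: EXC — used for staking, governance voting, and transaction fees.

## Key Resources
- Whitepaper (consensus): /whitepaper#consensus
- Smart contract audits: /security/audits
- Developer documentation: /docs/getting-started

## FAQ
Q: How are staking rewards calculated?
A: Rewards accrue per epoch, proportional to stake; see /docs/staking.
```

Plain text with simple headings keeps the file trivially parseable by any agent, whatever convention eventually wins out.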


High Priority
LLM Crawler Selective Indexing (e.g., ChatGPT, Perplexity)
Fine-tune which sections of your crypto project's website and documentation are prioritized for ingestion by general-purpose AI search engines.
Implement crawler rules in `robots.txt`: under `User-agent: GPTBot`, add `Allow: /docs/`, `Allow: /ecosystem/`, and `Allow: /tokenomics/`, plus `Disallow: /auth/` and `Disallow: /private-sales/`.
Verify crawler permissions and scope using each AI provider's webmaster tools or testing environment.
Monitor crawl frequency and data points accessed in your server logs to ensure AI bots are indexing authoritative project information, not ephemeral UI elements.
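Put together, the directives above form a short `robots.txt`. The stanza for Perplexity's crawler (`User-agent: PerplexityBot`) is shown as a parallel example; the paths are the ones from this checklist and should be adapted to your actual site hierarchy.

```text
# robots.txt — scope AI crawlers to authoritative project content
User-agent: GPTBot
Allow: /docs/
Allow: /ecosystem/
Allow: /tokenomics/
Disallow: /auth/
Disallow: /private-sales/

User-agent: PerplexityBot
Allow: /docs/
Allow: /tokenomics/
Disallow: /auth/
```

Note that `robots.txt` directives are honored voluntarily by well-behaved crawlers; sensitive paths like `/private-sales/` still need server-side access control.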
Medium Priority
Semantic On-Chain & Off-Chain Data Structure
Utilize semantic HTML and structured data formats (e.g., Schema.org for Crypto) to aid LLM crawlers in understanding the hierarchy and context of your project's information.
Wrap core protocol descriptions and whitepaper excerpts within `<article>` tags to signal primary content.
Use `<section>` with descriptive `aria-label` attributes for distinct components like 'Token Distribution', 'Roadmap Milestones', or 'Staking Rewards'.
Ensure all tabular data, especially token metrics and transaction summaries, uses proper `<table>` markup with `<thead>`, `<tbody>`, and `<th>` tags for structured data extraction.
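The three practices above combine into a markup skeleton like the following; the project name and allocation figures are hypothetical placeholders.

```html
<article>
  <h1>ExampleChain Protocol Overview</h1>
  <p>ExampleChain is a proof-of-stake network for low-cost settlement.</p>

  <section aria-label="Token Distribution">
    <h2>Token Distribution</h2>
    <table>
      <thead>
        <tr><th>Allocation</th><th>Share</th><th>Vesting</th></tr>
      </thead>
      <tbody>
        <tr><td>Community treasury</td><td>40%</td><td>48 months</td></tr>
        <tr><td>Core contributors</td><td>20%</td><td>36 months</td></tr>
        <tr><td>Ecosystem fund</td><td>40%</td><td>24 months</td></tr>
      </tbody>
    </table>
  </section>
</article>
```

The `<article>` wrapper signals primary content, the labeled `<section>` gives the crawler an explicit topic boundary, and the `<th>` headers let an extractor map each cell to its column name.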
High Priority
RAG-Ready Knowledge Base Snippet Optimization
Structure your crypto project's documentation and public data so that it can be efficiently 'chunked' and retrieved by Retrieval-Augmented Generation (RAG) pipelines for AI assistants.
Keep each documentation section a self-contained topical unit, ideally under 700 tokens, focused on a single concept like 'Validator Node Requirements' or 'Liquidity Mining Program Details'.
Avoid 'floating' context; explicitly state the primary subject (e.g., 'The XYZ Token') in section summaries and introductions.
Eliminate ambiguous pronouns ('it', 'they') and replace them with specific entities like 'the consensus mechanism', 'the governance token', or 'the staking contract'.
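The chunking guidance above can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline: it assumes markdown docs delimited by headings, approximates the 700-token budget by counting whitespace-separated words, and prefixes every chunk with its primary subject so no chunk depends on 'floating' context. The `chunk_by_heading` helper and the sample text are hypothetical.

```python
import re

MAX_TOKENS = 700  # approximate per-chunk budget, counted as words


def chunk_by_heading(markdown: str, subject: str) -> list[str]:
    """Split a markdown document at headings into single-topic chunks,
    each prefixed with an explicit subject line for RAG retrieval."""
    # Split before any line starting with 1-3 '#' characters.
    sections = re.split(r"\n(?=#{1,3} )", markdown.strip())
    chunks = []
    for section in sections:
        words = section.split()
        # Oversized sections are split into budget-sized pieces.
        for start in range(0, len(words), MAX_TOKENS):
            body = " ".join(words[start:start + MAX_TOKENS])
            chunks.append(f"Subject: {subject}\n{body}")
    return chunks


docs = """# Validator Node Requirements
The staking contract requires 32 tokens per validator node.

# Liquidity Mining Program Details
Rewards accrue per epoch to each registered liquidity pool."""

chunks = chunk_by_heading(docs, "The XYZ Token")
```

In a real pipeline the word count would be replaced by the tokenizer of the target embedding model, but the structural idea, one heading-delimited concept per chunk with an explicit subject, carries over directly.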