The first decentralized oracle network that enables Solana smart contracts to call LLMs on-chain with structured outputs. No polling. No complexity. Just clean, type-safe code.
$ cargo add coolrouter-cpi
Most projects have to implement polling to get responses back. We use advanced CPI techniques to invoke your callback automatically.
// Manual account management
let mut accounts = vec![
    AccountMeta::new(request_pda, false),
    AccountMeta::new(authority, true),
    AccountMeta::new_readonly(system_program, false),
    // ... dozens more accounts
];

// Manual serialization
let instruction_data = RequestData {
    request_id,
    provider,
    model,
    messages: serialize_messages(msgs)?,
}.try_to_vec()?;

// Create and invoke the CPI instruction
let ix = Instruction {
    program_id: oracle_program,
    accounts,
    data: instruction_data,
};
invoke(&ix, &account_infos)?;

// ❌ Now you need to implement polling
loop {
    let response = check_response(&request_pda)?;
    if response.is_ready {
        break;
    }
    // Sleep and retry...
    sleep(Duration::from_millis(500));
}

Manual account management & serialization
Complex polling logic required
High RPC costs from constant checking
Difficult error handling
use coolrouter_cpi::{create_llm_request, Message};

// One simple function call
create_llm_request(
    ctx.accounts.request_pda.to_account_info(),
    ctx.accounts.authority.to_account_info(),
    ctx.accounts.caller_program.to_account_info(),
    ctx.accounts.system_program.to_account_info(),
    ctx.accounts.coolrouter_program.key(),
    vec![ctx.accounts.callback_account.to_account_info()],
    "request_123".to_string(),
    "openai".to_string(),
    "gpt-4".to_string(),
    vec![Message {
        role: "user".to_string(),
        content: "Hello, AI!".to_string(),
    }],
)?;

// ✅ CoolRouter automatically invokes your callback
// No polling needed!

Clean, type-safe interface - one function call
Automatic callback invocation - zero polling
Minimal RPC costs - efficient design
Built-in error handling & validation
Save hundreds of lines of boilerplate code
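The snippet above shows the request side only. On the receiving side, your callback must decode the payload CoolRouter passes in. As a hedged sketch — the actual CoolRouter callback account layout is not documented here, so the fixed-width request id and length-prefixed content below are assumptions for illustration — a handler might unpack the raw bytes like this:

```rust
// Hypothetical payload layout (assumption, NOT CoolRouter's documented
// format): [16-byte request id][4-byte LE content length][UTF-8 content].
fn decode_callback(data: &[u8]) -> Option<(String, String)> {
    // Request id: fixed 16 bytes, treated as UTF-8 for simplicity.
    let request_id = String::from_utf8(data.get(..16)?.to_vec()).ok()?;
    // Content length: 4-byte little-endian prefix.
    let len_bytes: [u8; 4] = data.get(16..20)?.try_into().ok()?;
    let len = u32::from_le_bytes(len_bytes) as usize;
    // Content: exactly `len` UTF-8 bytes after the prefix.
    let content = String::from_utf8(data.get(20..20 + len)?.to_vec()).ok()?;
    Some((request_id, content))
}

fn main() {
    // Build a sample payload: "request_123" padded to 16 bytes.
    let mut payload = b"request_123\0\0\0\0\0".to_vec();
    let content = b"Hello from the oracle";
    payload.extend_from_slice(&(content.len() as u32).to_le_bytes());
    payload.extend_from_slice(content);

    let (id, text) = decode_callback(&payload).unwrap();
    println!("{} -> {}", id.trim_end_matches('\0'), text);
}
```

In a real program the decoded response would arrive inside your callback instruction's account data rather than a free-standing byte slice.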
View Full Documentation

Unlike traditional APIs, we validate every LLM response through our decentralized oracle network before delivering it to your smart contract.
Your smart contract calls create_llm_request(). The request is broadcast to multiple oracle nodes in our decentralized network.
Multiple nodes independently query the LLM provider. Responses are compared and voted on to ensure consistency and prevent manipulation.
Once consensus is reached, we automatically invoke your callback with the validated response. All in under 4 seconds.
Multi-node validation prevents single point of failure and response manipulation
No central authority - responses validated by distributed oracle network
Consensus voting completes in under 4 seconds - faster than most centralized APIs
Multiple providers and fallback mechanisms ensure 99.9% uptime
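The voting step above can be sketched in plain Rust. This is a simplified illustration only — it assumes nodes return raw response strings and that consensus means a strict majority of identical responses; the actual on-chain protocol is more involved:

```rust
use std::collections::HashMap;

// Simplified majority vote: a response wins only if more than half of
// the oracle nodes returned exactly the same bytes.
fn consensus(responses: &[&str]) -> Option<String> {
    let mut tally: HashMap<&str, usize> = HashMap::new();
    for r in responses {
        *tally.entry(r).or_insert(0) += 1;
    }
    tally
        .into_iter()
        .find(|(_, votes)| *votes * 2 > responses.len())
        .map(|(r, _)| r.to_string())
}

fn main() {
    // Two honest nodes agree; one deviating node is outvoted.
    println!("{:?}", consensus(&["42", "42", "tampered"])); // Some("42")
    // No strict majority: no response is delivered to the contract.
    println!("{:?}", consensus(&["a", "b", "c"])); // None
}
```

The strict-majority rule is what makes a single manipulated node harmless: its response simply never reaches your callback.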
Production-ready features that work out of the box. No complex configuration required.
Complete control over data privacy with customizable logging options
Your prompts and responses are never stored or logged permanently
Intelligent routing to the best model for your specific use case
Automatic failover and load balancing across 60+ providers
Deterministic outputs with exact reproducibility for testing
Sub-4-second response times with built-in performance monitoring
Pre-configured model settings for common use cases and patterns
Reduce costs and latency with intelligent prompt caching
Type-safe custom structs and JSON schema validation for responses
Enable LLMs to call functions and interact with external systems
Support for text, images, audio, and video inputs across models
Automatic message format conversion between different providers
Smart health checks and automatic provider switching for reliability
Give your AI access to real-time web data and current information
Guaranteed responses with automatic retry and fallback mechanisms
Simplified key management with built-in rotation and security
Track usage, costs, and performance metrics per application
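Prompt caching, mentioned in the list above, can be sketched in plain Rust. This is an illustration only — CoolRouter's actual cache keying and eviction policy are not specified here — keying cached completions on a hash of provider, model, and prompt:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Illustrative cache: identical (provider, model, prompt) triples reuse
// the stored completion instead of paying for another LLM round trip.
struct PromptCache {
    entries: HashMap<u64, String>,
}

impl PromptCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    fn key(provider: &str, model: &str, prompt: &str) -> u64 {
        let mut h = DefaultHasher::new();
        (provider, model, prompt).hash(&mut h);
        h.finish()
    }

    // Returns the cached completion, or computes and stores it via `fetch`.
    fn get_or_fetch(
        &mut self,
        provider: &str,
        model: &str,
        prompt: &str,
        fetch: impl FnOnce() -> String,
    ) -> String {
        let k = Self::key(provider, model, prompt);
        self.entries.entry(k).or_insert_with(fetch).clone()
    }
}

fn main() {
    let mut cache = PromptCache::new();
    let mut provider_calls = 0;
    for _ in 0..3 {
        let reply = cache.get_or_fetch("openai", "gpt-4", "Hello, AI!", || {
            provider_calls += 1; // only runs on a cache miss
            "Hi there!".to_string()
        });
        println!("{reply}");
    }
    // Three requests, but only the first one hits the provider.
    println!("provider calls: {provider_calls}");
}
```

A production cache would also bound its size and expire entries, but the miss-once, hit-thereafter behavior is the core of the cost and latency savings.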
One unified interface to access GPT-4, Claude, Gemini, Llama, and 500+ other models. Switch providers with a single line of code.
// OpenAI GPT-4
create_llm_request(..., "openai", "gpt-4", messages)?;

// Anthropic Claude
create_llm_request(..., "anthropic", "claude-3-opus", messages)?;

// Google Gemini
create_llm_request(..., "google", "gemini-pro", messages)?;

// Local Llama via Replicate
create_llm_request(..., "replicate", "llama-3-70b", messages)?;

Join developers building next-generation smart contracts with on-chain AI capabilities. Start with our comprehensive documentation and examples.