Now live on Solana Testnet

AI-Powered Smart Contracts
On Solana Blockchain

The first decentralized oracle network that enables Solana smart contracts to call LLMs on-chain with structured outputs. No polling. No complexity. Just clean, type-safe code.

$ cargo add coolrouter-cpi
60+
Active Providers
OpenAI, Anthropic, Google & more
500+
AI Models
GPT-4, Claude, Llama, Gemini & more
<4s
Response Time
Even with consensus voting
100%
Decentralized
Oracle network validation

The Cleanest DevX in the Solana Ecosystem

Most oracle integrations force you to poll on-chain state for responses. CoolRouter uses cross-program invocation (CPI) to call your callback automatically the moment a validated response is ready.

Without CoolRouter

manual_polling.rs
// Manual account management
let accounts = vec![
  AccountMeta::new(request_pda, false),
  AccountMeta::new(authority, true),
  AccountMeta::new_readonly(system_program, false),
  // ... dozens more accounts
];

// Manual serialization (Borsh)
let instruction_data = RequestData {
  request_id,
  provider,
  model,
  messages: serialize_messages(msgs)?,
}.try_to_vec()?;

// Build the CPI instruction by hand
let ix = Instruction {
  program_id: oracle_program,
  accounts,
  data: instruction_data,
};

invoke(&ix, &account_infos)?;

// ❌ Now your off-chain client has to poll for the response
loop {
  let response = check_response(&request_pda)?;
  if response.is_ready {
    break;
  }
  // Sleep and retry, burning RPC calls...
  std::thread::sleep(Duration::from_millis(500));
}

Manual account management & serialization

Complex polling logic required

High RPC costs from constant checking

Difficult error handling

With CoolRouter

coolrouter.rs
use coolrouter_cpi::{create_llm_request, Message};

// One simple function call
create_llm_request(
  ctx.accounts.request_pda.to_account_info(),
  ctx.accounts.authority.to_account_info(),
  ctx.accounts.caller_program.to_account_info(),
  ctx.accounts.system_program.to_account_info(),
  ctx.accounts.coolrouter_program.key(),
  vec![ctx.accounts.callback_account.to_account_info()],
  "request_123".to_string(),
  "openai".to_string(),
  "gpt-4".to_string(),
  vec![Message {
    role: "user".to_string(),
    content: "Hello, AI!".to_string(),
  }],
)?;

// ✅ CoolRouter automatically invokes your callback
// No polling needed!

Clean, type-safe interface - one function call

Automatic callback invocation - zero polling

Minimal RPC costs - efficient design

Built-in error handling & validation

Save hundreds of lines of boilerplate code
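Because CoolRouter invokes the callback for you, your program only needs a handler for the validated response. Here is a minimal sketch in plain Rust; the function name, argument shape, and the `request_` ID convention are all hypothetical, not the actual CPI interface:

```rust
// Hypothetical callback handler that the oracle would invoke once
// consensus is reached. In a real Anchor program this would be an
// instruction handler that persists the response to an account.
pub fn handle_llm_response(request_id: &str, response: &str) -> Result<String, String> {
    // Reject callbacks for requests this program never made.
    if !request_id.starts_with("request_") {
        return Err(format!("unknown request id: {}", request_id));
    }
    // Here we just return a confirmation string instead of writing state.
    Ok(format!("{} -> {}", request_id, response))
}
```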

View Full Documentation

Decentralized Oracle Network with Consensus Voting

Unlike traditional APIs, we validate every LLM response through our decentralized oracle network before delivering it to your smart contract.

1. Request Broadcast

Your smart contract calls create_llm_request(). The request is broadcast to multiple oracle nodes in our decentralized network.

2. Consensus Voting

Multiple nodes independently query the LLM provider. Responses are compared and voted on to ensure consistency and prevent manipulation.

3. Validated Callback

Once consensus is reached, we automatically invoke your callback with the validated response. All in under 4 seconds.
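The voting step above can be sketched as a simple majority rule: accept a response only when more than half of the nodes returned the same answer. This illustrates the idea only; it is not CoolRouter's actual consensus protocol:

```rust
use std::collections::HashMap;

// Strict-majority vote over node responses: returns the agreed answer,
// or None if no response was returned by more than half of the nodes.
fn consensus(responses: &[&str]) -> Option<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for &r in responses {
        *counts.entry(r).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .find(|&(_, n)| n * 2 > responses.len())
        .map(|(r, _)| r.to_string())
}
```

With three nodes, two matching responses are enough to reach consensus; a three-way split yields no result and the request would be retried.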

Why Our Oracle Network Matters

Secure & Tamper-Proof

Multi-node validation prevents single point of failure and response manipulation

Truly Decentralized

No central authority - responses validated by distributed oracle network

Lightning Fast

Consensus voting completes in under 4 seconds - faster than most centralized APIs

Reliable & Available

Multiple providers and fallback mechanisms ensure 99.9% uptime

Everything You Need to Build AI-Powered dApps

Production-ready features that work out of the box. No complex configuration required.

Privacy and Logging

Complete control over data privacy with customizable logging options

Zero Data Retention (ZDR)

Your prompts and responses are never stored or logged permanently

Model Routing

Intelligent routing to the best model for your specific use case

Provider Routing

Automatic failover and load balancing across 60+ providers

Exacto Variant

Deterministic outputs with exact reproducibility for testing

Latency and Performance

Sub-4-second response times with built-in performance monitoring

Presets

Pre-configured model settings for common use cases and patterns

Prompt Caching

Reduce costs and latency with intelligent prompt caching

Structured Outputs

Type-safe custom structs and JSON schema validation for responses
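To illustrate the type-safe idea: the model's reply is parsed into a custom struct instead of handled as free text. The struct, the `label,score` wire format, and the parser below are hypothetical stand-ins for the SDK's real schema support:

```rust
// Hypothetical typed response for a sentiment-analysis prompt.
#[derive(Debug, PartialEq)]
struct Sentiment {
    label: String,
    score: u8, // 0-100 confidence
}

// Tiny parser for an illustrative "label,score" format, standing in
// for real JSON-schema validation of the model output.
fn parse_sentiment(raw: &str) -> Option<Sentiment> {
    let (label, score) = raw.split_once(',')?;
    Some(Sentiment {
        label: label.trim().to_string(),
        score: score.trim().parse().ok()?,
    })
}
```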

Tool Calling

Enable LLMs to call functions and interact with external systems

Multimodal

Support for text, images, audio, and video inputs across models

Message Transforms

Automatic message format conversion between different providers

Uptime Optimization

Smart health checks and automatic provider switching for reliability

Web Search

Give your AI access to real-time web data and current information

Zero Completion Insurance

Guaranteed responses with automatic retry and fallback mechanisms

Provisioning API Keys

Simplified key management with built-in rotation and security

App Attribution

Track usage, costs, and performance metrics per application
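The failover behavior behind Provider Routing and Uptime Optimization can be sketched as: try each provider in priority order and take the first success. A toy illustration, not the network's actual routing policy:

```rust
// Illustrative failover: `call` stands in for a real provider query
// that may fail (None) or succeed (Some(response)).
fn route_with_failover<F>(providers: &[&str], mut call: F) -> Option<String>
where
    F: FnMut(&str) -> Option<String>,
{
    // Try each provider in priority order; stop at the first success.
    providers.iter().copied().find_map(|p| call(p))
}
```

For example, if the first provider is down, the request transparently falls through to the next one in the list.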

60+ Providers • 500+ Models

Access Every Major AI Provider

One unified interface to access GPT-4, Claude, Gemini, Llama, and 500+ other models. Switch providers with a single line of code.

OpenAI
Anthropic
Google
Meta
Mistral AI
Cohere
Together AI
Replicate
HuggingFace
AI21 Labs
Groq
Perplexity
Anyscale
Deepinfra
Fireworks AI
Lepton AI
Modal
Baseten
OctoAI
Cloudflare
Amazon Bedrock
Azure OpenAI
& 38 more...
Switch providers in one line
// OpenAI GPT-4
create_llm_request(..., "openai", "gpt-4", messages)?;

// Anthropic Claude
create_llm_request(..., "anthropic", "claude-3-opus", messages)?;

// Google Gemini
create_llm_request(..., "google", "gemini-pro", messages)?;

// Llama 3 via Replicate
create_llm_request(..., "replicate", "llama-3-70b", messages)?;

Ready to Build the Future of AI-Powered dApps?

Join developers building next-generation smart contracts with on-chain AI capabilities. Start with our comprehensive documentation and examples.