AI Provider Setup Guide

Choose Your AI Provider - OpenAI, Anthropic, or Local Models

The dal ai commands support multiple AI providers. Choose what works best for you:


Option 1: OpenAI (GPT-4, GPT-3.5)

Best for: Production use, highest quality responses

Setup

  1. Get an API key from the OpenAI dashboard (https://platform.openai.com/api-keys)

  2. Set Environment Variable

    export OPENAI_API_KEY="sk-proj-..."
    
    # Add to your shell config for persistence
    echo 'export OPENAI_API_KEY="sk-proj-..."' >> ~/.zshrc
    source ~/.zshrc
  3. Use DAL

    dal ai code "Create a DeFi lending protocol"
    dal ai explain mycontract.dal
    dal ai audit token.dal
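Before running the commands above, it can help to confirm the key is actually exported in the current shell. This is a generic shell check, not a DAL feature:

```shell
# Check that the key is visible to child processes like dal
if [ -z "${OPENAI_API_KEY:-}" ]; then
  echo "OPENAI_API_KEY is not set in this shell" >&2
else
  # Print only the length, never the key itself
  echo "OPENAI_API_KEY is set (${#OPENAI_API_KEY} characters)"
fi
```

If the first branch fires even though you exported the key, you are likely in a different shell session than the one where you ran export.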

Configuration

# Choose model (default: gpt-4)
export OPENAI_MODEL="gpt-4"          # Most capable
export OPENAI_MODEL="gpt-3.5-turbo"  # Faster, cheaper

Pricing

Usage is billed per token. See the Cost Comparison section below for rough monthly estimates, and OpenAI's pricing page for current per-token rates.

Option 2: Anthropic (Claude)

Best for: Long context, detailed analysis, code review

Setup

  1. Get an API key from the Anthropic Console (https://console.anthropic.com)

  2. Set Environment Variable

    export ANTHROPIC_API_KEY="sk-ant-..."
    
    # Add to your shell config for persistence
    echo 'export ANTHROPIC_API_KEY="sk-ant-..."' >> ~/.zshrc
    source ~/.zshrc
  3. Use DAL

    dal ai code "Create a token contract"
    dal ai review complex_system.dal
    dal ai audit defi_protocol.dal

Configuration

# Choose model (default: claude-3-5-sonnet-20241022)
export ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"  # Most capable
export ANTHROPIC_MODEL="claude-3-opus-20240229"     # Highest intelligence
export ANTHROPIC_MODEL="claude-3-haiku-20240307"    # Fastest, cheapest

Pricing

Usage is billed per token. See the Cost Comparison section below for rough monthly estimates, and Anthropic's pricing page for current per-token rates.


Option 3: Local Models (Ollama, LM Studio)

Best for: Privacy, offline use, free unlimited usage

Setup with Ollama

  1. Install Ollama

    # macOS
    brew install ollama
    
    # Linux
    curl -fsSL https://ollama.com/install.sh | sh
    
    # Windows
    # Download from https://ollama.com/download
  2. Start Ollama Server

    ollama serve
  3. Download a Model

    # Code generation models
    ollama pull codellama         # 7B, good for code
    ollama pull deepseek-coder    # Excellent for coding
    ollama pull phind-codellama   # Optimized for code
    
    # General purpose models
    ollama pull llama2            # Good all-around
    ollama pull mistral           # Fast and capable
    ollama pull llama3            # Latest, most capable
    
    # Small/fast models
    ollama pull tinyllama         # Very fast
  4. Configure DAL

    export DAL_AI_ENDPOINT="http://localhost:11434/api/generate"
    export DAL_AI_MODEL="codellama"  # or deepseek-coder, llama2, etc.
    
    # Add to shell config
    echo 'export DAL_AI_ENDPOINT="http://localhost:11434/api/generate"' >> ~/.zshrc
    echo 'export DAL_AI_MODEL="codellama"' >> ~/.zshrc
    source ~/.zshrc
  5. Use DAL (Offline!)

    dal ai code "Create a REST API"
    dal ai explain myfile.dal
    dal ai test contract.dal
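Before pointing DAL at the server, you can sanity-check the endpoint value. The /api/generate path is Ollama's standard generation endpoint; the curl liveness check is left commented so the snippet is safe to paste without a running server:

```shell
# Derive the server root from the endpoint DAL will use
endpoint="${DAL_AI_ENDPOINT:-http://localhost:11434/api/generate}"
root="${endpoint%/api/generate}"
echo "Ollama server root: $root"

# Liveness check (uncomment with `ollama serve` running):
# curl -sf "$root" >/dev/null && echo "ollama is up"
```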

Model Recommendations

Model            Size   Speed    Quality    Best For
codellama        7B     Fast     Good       Code generation
deepseek-coder   6.7B   Fast     Excellent  Code, best quality
llama3           8B     Medium   Excellent  General purpose
mistral          7B     Fast     Good       General purpose
phind-codellama  34B    Slow     Excellent  Complex code (needs GPU)

Pricing

Free. Local models run entirely on your own hardware, so there are no per-request charges; the only costs are compute and electricity.


Using Multiple Providers

You can configure all three and DAL will choose automatically:

# Set all three
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export DAL_AI_ENDPOINT="http://localhost:11434/api/generate"

# Priority order: OpenAI > Anthropic > Local > Fallback
dal ai code "test"  # Uses OpenAI (first priority)

Override Priority

# Use Anthropic instead of OpenAI
OPENAI_API_KEY="" dal ai code "test"

# Use local model
OPENAI_API_KEY="" ANTHROPIC_API_KEY="" dal ai code "test"

# Force fallback mode (no AI)
OPENAI_API_KEY="" ANTHROPIC_API_KEY="" DAL_AI_ENDPOINT="" dal ai code "test"
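The priority order above can be mirrored in a small shell helper, which is handy for checking which provider a given environment would select. dal_provider is an illustrative function for this guide, not part of DAL:

```shell
# Mirror DAL's provider priority: OpenAI > Anthropic > Local > Fallback
dal_provider() {
  if [ -n "${OPENAI_API_KEY:-}" ]; then
    echo "openai"
  elif [ -n "${ANTHROPIC_API_KEY:-}" ]; then
    echo "anthropic"
  elif [ -n "${DAL_AI_ENDPOINT:-}" ]; then
    echo "local"
  else
    echo "fallback"
  fi
}

# Example: OpenAI key cleared, Anthropic key set
out=$(OPENAI_API_KEY="" ANTHROPIC_API_KEY="sk-ant-test" dal_provider)
echo "$out"  # prints: anthropic
```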

Switching Providers

Team A: OpenAI Only

# .env file
OPENAI_API_KEY=sk-proj-...

Team B: Anthropic Only

# .env file
ANTHROPIC_API_KEY=sk-ant-...

Team C: Local Only (Privacy-focused)

# .env file
DAL_AI_ENDPOINT=http://localhost:11434/api/generate
DAL_AI_MODEL=codellama
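A .env file is not loaded automatically by the shell. One common way to load it is `set -a` before sourcing, which exports every variable the file defines. The sample .env below is created only for the demo; in practice the file already exists and is listed in .gitignore:

```shell
# Create a throwaway .env for the demo
printf 'OPENAI_API_KEY=sk-demo\n' > .env

set -a      # export every variable defined while sourcing
. ./.env
set +a

echo "$OPENAI_API_KEY"
rm .env     # clean up the demo file
```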

Individual Developer: Mix and Match

# Use OpenAI for code generation (fast)
export OPENAI_API_KEY="sk-..."

# But use local for security audits (private)
dal ai code "Create API"           # Uses OpenAI
OPENAI_API_KEY="" dal ai audit contract.dal  # Uses local (private)
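The one-off override above can be wrapped in a shell function so the private variant is easier to type. dal_private is an illustrative name for this sketch, not a DAL command:

```shell
# Run dal with cloud keys cleared, forcing the local provider
dal_private() {
  OPENAI_API_KEY="" ANTHROPIC_API_KEY="" dal "$@"
}

# Usage (with a local endpoint configured):
#   dal_private ai audit contract.dal
command -v dal_private   # confirms the function is defined
```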

Testing Your Setup

Test OpenAI

export OPENAI_API_KEY="your-key"
dal ai code "print hello world"

Test Anthropic

export ANTHROPIC_API_KEY="your-key"
dal ai code "print hello world"

Test Local

ollama serve
export DAL_AI_ENDPOINT="http://localhost:11434/api/generate"
export DAL_AI_MODEL="codellama"
dal ai code "print hello world"

Troubleshooting

"No API key found"

Set at least one provider in the current shell — OPENAI_API_KEY, ANTHROPIC_API_KEY, or DAL_AI_ENDPOINT — then re-run the command.

"OpenAI API error 401"

The key is invalid, expired, or revoked. Check OPENAI_API_KEY for typos or stray whitespace, and generate a new key if needed.

"Connection refused" (Local)

The Ollama server is not running. Start it with ollama serve and confirm DAL_AI_ENDPOINT points at http://localhost:11434/api/generate.

"Rate limit exceeded"

You are sending requests faster than your plan allows. Wait and retry, reduce request frequency, or switch to a local model, which has no rate limits.

Cost Comparison

Small Project (100 requests/day)

Provider   Model           Monthly Cost
OpenAI     GPT-3.5         ~$5
OpenAI     GPT-4           ~$50
Anthropic  Claude Haiku    ~$2
Anthropic  Claude Sonnet   ~$15
Anthropic  Claude Opus     ~$75
Local      Any             $0

Medium Project (1000 requests/day)

Provider   Model           Monthly Cost
OpenAI     GPT-3.5         ~$50
OpenAI     GPT-4           ~$500
Anthropic  Claude Haiku    ~$20
Anthropic  Claude Sonnet   ~$150
Local      Any             $0

Recommendation

Start with a local model (free) while learning, use GPT-3.5 or Claude Haiku for routine day-to-day work, and reserve GPT-4 or Claude Sonnet/Opus for complex generation and audits where output quality matters most.


Quick Reference

# OpenAI
export OPENAI_API_KEY="sk-proj-..."
export OPENAI_MODEL="gpt-4"  # optional

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
export ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"  # optional

# Local (Ollama)
export DAL_AI_ENDPOINT="http://localhost:11434/api/generate"
export DAL_AI_MODEL="codellama"  # optional

# Test
dal ai code "hello world"

Security Best Practices

  1. Never commit API keys to git

    echo ".env" >> .gitignore
    echo "*.key" >> .gitignore
  2. Use environment variables, not hard-coded keys

    # Good
    export OPENAI_API_KEY="sk-..."
    
    # Bad - NEVER do this
    # let api_key = "sk-..."
  3. Rotate keys regularly

  4. Use separate keys for dev/prod

    # Development
    export OPENAI_API_KEY="sk-dev-..."
    
    # Production
    export OPENAI_API_KEY="sk-prod-..."
  5. Monitor usage
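Item 4 above can be scripted so the right key is selected automatically. DAL_ENV, OPENAI_API_KEY_DEV, and OPENAI_API_KEY_PROD are naming conventions of this sketch, not settings DAL knows about:

```shell
# Demo values; in practice these would come from a secrets manager
OPENAI_API_KEY_DEV="sk-dev-demo"
OPENAI_API_KEY_PROD="sk-prod-demo"

# Pick the key for the current environment (defaults to dev)
case "${DAL_ENV:-dev}" in
  prod) export OPENAI_API_KEY="$OPENAI_API_KEY_PROD" ;;
  *)    export OPENAI_API_KEY="$OPENAI_API_KEY_DEV" ;;
esac
echo "$OPENAI_API_KEY"
```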


Support

Need Help?

Can't get API keys?

Use a local model (Option 3 above) — Ollama requires no account and no key, and works fully offline.

Want to contribute?