# All the Ways to Configure AI in DAL

DAL supports five configuration methods for AI providers. Choose the one that fits your workflow.
## Method 1: Environment Variables

Best for: quick setup, CI/CD, temporary configuration.

```bash
# OpenAI
export OPENAI_API_KEY="sk-proj-..."
export OPENAI_MODEL="gpt-4"                          # optional

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
export ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"  # optional

# Local (Ollama)
export DAL_AI_ENDPOINT="http://localhost:11434/api/generate"
export DAL_AI_MODEL="codellama"                      # optional

# Advanced settings
export DAL_AI_TEMPERATURE="0.7"                      # optional
export DAL_AI_MAX_TOKENS="2000"                      # optional
export DAL_AI_TIMEOUT="30"                           # optional

# Use DAL
dal ai code "Create a token contract"
```

## Method 2: Config File

Best for: project-specific settings, team collaboration, persistent configuration.
Create `.dal/ai_config.toml` in your project:

```toml
# Provider: openai, anthropic, local, or custom
provider = "openai"

# API credentials
api_key = "sk-proj-..."

# Model selection
model = "gpt-4"

# Generation parameters
temperature = 0.7
max_tokens = 2000
timeout_seconds = 30
```

**OpenAI project:**
```toml
# .dal/ai_config.toml
provider = "openai"
openai_model = "gpt-4"
temperature = 0.7
max_tokens = 2000
```

**Anthropic project:**
```toml
# .dal/ai_config.toml
provider = "anthropic"
anthropic_model = "claude-3-5-sonnet-20241022"
temperature = 0.8
max_tokens = 4000
```

**Local development:**
```toml
# .dal/ai_config.toml
provider = "local"
endpoint = "http://localhost:11434/api/generate"
model = "codellama"
temperature = 0.7
```

DAL looks for a config file in these locations:

- `.dal/ai_config.toml` (project-specific)
- `dal_config.toml` (project root)
- `.dalconfig` (project root)
- `~/.dal/config.toml` (user global)

```bash
# Just use DAL - it loads config automatically!
dal ai code "Create a REST API"
dal ai explain myfile.dal

# Environment variables take precedence over the config file,
# so you can override it when needed
OPENAI_MODEL="gpt-3.5-turbo" dal ai code "test"
```

**NEVER commit API keys!**
```bash
# Add to .gitignore
echo ".dal/ai_config.toml" >> .gitignore

# Or use a template without keys
cp .dal/ai_config.toml .dal/ai_config.toml.template
# Remove the api_key line from the template
# Commit the template, ignore the actual config
```

## Method 3: Runtime Configuration (DAL Code)

Best for: dynamic configuration, user preferences, app-specific settings.
```
// Configure OpenAI at runtime
ai.configure_openai("sk-proj-...", "gpt-4")

// Or Anthropic
ai.configure_anthropic("sk-ant-...", "claude-3-5-sonnet-20241022")

// Or a local model
ai.configure_local("http://localhost:11434/api/generate", "codellama")

// Now generate text
let code = ai.generate_text("Create a function that adds two numbers")
print(code)

// Full configuration object
let config = {
    provider: "openai",
    api_key: "sk-proj-...",
    model: "gpt-4",
    temperature: 0.8,
    max_tokens: 3000,
    timeout_seconds: 60
}
ai.set_ai_config(config)

// Use it
let result = ai.generate_text("Complex prompt...")

// Check what's configured
let config = ai.get_ai_config()
print("Provider: " + config.provider)
print("Model: " + config.model)
```
## Method 4: .env File

Best for: development, keeping secrets out of version control.

Create a `.env` file in the project root:

```bash
# .env
OPENAI_API_KEY=sk-proj-...
OPENAI_MODEL=gpt-4
DAL_AI_TEMPERATURE=0.7
DAL_AI_MAX_TOKENS=2000
```

Add it to `.gitignore`:

```bash
echo ".env" >> .gitignore
```

```bash
# DAL automatically picks up environment variables
dal ai code "Create a token"
```

Commit a template without real keys:

```bash
# .env.template (commit this)
OPENAI_API_KEY=your-key-here
OPENAI_MODEL=gpt-4
DAL_AI_TEMPERATURE=0.7

# Developers copy and fill in:
# cp .env.template .env
# Then edit .env with the real API key
```

## Method 5: Hybrid Approach

Best for: production apps, flexibility, fallback support.
Priority (highest to lowest):

1. Runtime configuration (`ai.configure_*`)
2. Environment variables (`export VAR=...`)
3. Config file (`.dal/ai_config.toml`)
4. Defaults / fallback
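The fallback chain for a single setting can be sketched in plain POSIX shell. This is an illustration, not DAL itself: `config_file_model` is a hypothetical stand-in for a value parsed from `.dal/ai_config.toml`, and `"gpt-4"` stands in for the built-in default.

```shell
#!/bin/sh
# Sketch of the precedence chain for one setting (the model name):
# env var if set and non-empty, else config-file value, else default.
config_file_model="codellama"   # pretend this was read from the config file

model="${OPENAI_MODEL:-${config_file_model:-gpt-4}}"
echo "resolved model: $model"
```

With `OPENAI_MODEL` unset, the config-file value wins; exporting `OPENAI_MODEL` beforehand makes the environment win, mirroring steps 2-4 of the list above.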
**Development:**

```toml
# .dal/ai_config.toml
provider = "local"
endpoint = "http://localhost:11434/api/generate"
model = "codellama"
```

**CI/CD:**

```bash
# GitHub Actions / GitLab CI
export OPENAI_API_KEY="${{ secrets.OPENAI_API_KEY }}"
export OPENAI_MODEL="gpt-3.5-turbo"  # Cheaper for tests
```

**Production:**

```bash
# Production environment
export ANTHROPIC_API_KEY="${ANTHROPIC_KEY}"
export ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"
export DAL_AI_MAX_TOKENS="4000"
```

**User override:**

```bash
# The user can always override
OPENAI_MODEL="gpt-4" dal ai code "important task"
```

## Comparison

| Method | Ease of Use | Persistence | Team Friendly | Security | Best For |
|---|---|---|---|---|---|
| Env Vars | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐ | ⭐⭐⭐ | Quick testing |
| Config File | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | Projects |
| Runtime | ⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐ | Apps |
| .env File | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Development |
| Hybrid | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Production |
## Example Workflows

```bash
# Quick setup with env vars
export OPENAI_API_KEY="sk-..."
dal ai code "Create a DeFi protocol"

# Switch to local for experimentation
export OPENAI_API_KEY=""
export DAL_AI_ENDPOINT="http://localhost:11434/api/generate"
dal ai code "Experiment with ideas"
```

**Setup once:**
```bash
# Create a config template
cat > .dal/ai_config.toml.template <<EOF
provider = "openai"
# api_key = "YOUR-KEY-HERE"
model = "gpt-4"
temperature = 0.7
EOF

# Commit the template
git add .dal/ai_config.toml.template .gitignore
git commit -m "Add AI config template"
```

**Each developer:**

```bash
# Copy and configure
cp .dal/ai_config.toml.template .dal/ai_config.toml
nano .dal/ai_config.toml  # Add your API key

# Use DAL
dal ai code "Team project feature"
```

**Project structure:**
```
my-project/
├── .dal/
│   ├── ai_config.toml           # Git-ignored, real keys
│   └── ai_config.toml.template  # Committed, no keys
├── .env                         # Git-ignored
├── .env.template                # Committed
└── deploy/
    ├── dev.env                  # Dev environment
    ├── staging.env              # Staging environment
    └── prod.env                 # Production environment
```
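A short script can scaffold this layout; this is a sketch, and the paths simply mirror the tree above (adjust the project name to taste). Only the files holding real keys go into `.gitignore`; the templates stay committed.

```shell
#!/bin/sh
# Scaffold the project layout shown above (sketch; paths from the tree).
mkdir -p my-project/.dal my-project/deploy
cd my-project || exit 1

touch .dal/ai_config.toml .dal/ai_config.toml.template
touch .env .env.template
touch deploy/dev.env deploy/staging.env deploy/prod.env

# Ignore only the secret-bearing files; templates remain committed
printf '%s\n' ".dal/ai_config.toml" ".env" >> .gitignore
```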
**Development:**

```toml
# .dal/ai_config.toml
provider = "local"
endpoint = "http://localhost:11434/api/generate"
```

**Staging:**

```bash
# staging.env
export OPENAI_API_KEY="${STAGING_OPENAI_KEY}"
export OPENAI_MODEL="gpt-3.5-turbo"
export DAL_AI_MAX_TOKENS="1000"
```

**Production:**

```bash
# prod.env (loaded by Kubernetes/Docker)
export ANTHROPIC_API_KEY="${PROD_ANTHROPIC_KEY}"
export ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"
export DAL_AI_MAX_TOKENS="4000"
export DAL_AI_TIMEOUT="60"
```

## How Configuration Resolution Works

DAL checks configuration in this order:
1. Runtime configuration (`ai.configure_*`)
   ↓ if not set...
2. Environment variables
   ↓ if not set...
3. Config file (`.dal/ai_config.toml`)
   ↓ if not found...
4. Defaults (fallback to basic mode)
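For debugging, you can approximate steps 2-4 from the shell (runtime configuration lives inside a DAL program, so a shell script cannot see it). This is a hypothetical helper, not a DAL command; the variable names are the real ones from this guide, and the file paths come from the config-file locations list.

```shell
#!/bin/sh
# Report which configuration source DAL would likely pick,
# following the resolution order above (steps 2-4 only).
if [ -n "$OPENAI_API_KEY" ] || [ -n "$ANTHROPIC_API_KEY" ] || [ -n "$DAL_AI_ENDPOINT" ]; then
    source="environment variables"
elif [ -f .dal/ai_config.toml ] || [ -f dal_config.toml ] || \
     [ -f .dalconfig ] || [ -f "$HOME/.dal/config.toml" ]; then
    source="config file"
else
    source="defaults (basic mode)"
fi
echo "Would use: $source"
```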
**Example: the environment variable beats the config file**

```toml
# .dal/ai_config.toml says: use codellama locally
provider = "local"
model = "codellama"
```

```bash
# But an environment variable overrides it
export OPENAI_API_KEY="sk-..."

# Result: uses OpenAI (the env var wins)
dal ai code "test"

# Force local again by blanking the key
OPENAI_API_KEY="" dal ai code "test"  # Now uses local
```

## Environment Variable Reference

| Variable | Purpose | Example |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | `sk-proj-...` |
| `OPENAI_MODEL` | OpenAI model | `gpt-4` |
| `ANTHROPIC_API_KEY` | Anthropic API key | `sk-ant-...` |
| `ANTHROPIC_MODEL` | Anthropic model | `claude-3-5-sonnet-20241022` |
| `DAL_AI_ENDPOINT` | Local model endpoint | `http://localhost:11434/api/generate` |
| `DAL_AI_MODEL` | Local model name | `codellama` |
| `DAL_AI_TEMPERATURE` | Generation temperature (0-1) | `0.7` |
| `DAL_AI_MAX_TOKENS` | Max tokens to generate | `2000` |
| `DAL_AI_TIMEOUT` | Request timeout (seconds) | `30` |
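A preflight check built from the variables in this table can catch a missing provider before you invoke `dal`. This is an optional sketch, not part of DAL; it only inspects the environment.

```shell
#!/bin/sh
# Warn if none of the provider variables from the table are set.
ok=0
[ -n "$OPENAI_API_KEY" ]    && { echo "Provider: OpenAI (${OPENAI_MODEL:-default model})"; ok=1; }
[ -n "$ANTHROPIC_API_KEY" ] && { echo "Provider: Anthropic (${ANTHROPIC_MODEL:-default model})"; ok=1; }
[ -n "$DAL_AI_ENDPOINT" ]   && { echo "Provider: local (${DAL_AI_MODEL:-default model})"; ok=1; }
[ "$ok" -eq 1 ] || echo "No AI provider configured - set one of the variables above" >&2
```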
## Config File Reference

```toml
# Required
provider = "openai"               # openai, anthropic, local, custom

# Provider-specific (one of these)
api_key = "sk-..."                # For OpenAI/Anthropic
endpoint = "http://localhost"     # For local

# Model selection
model = "gpt-4"                   # Provider-specific model name
openai_model = "gpt-4"            # Alias for model (when provider = "openai")
anthropic_model = "claude-3-..."  # Alias for model (when provider = "anthropic")
local_model = "codellama"         # Alias for model (when provider = "local")

# Generation parameters
temperature = 0.7                 # Float: 0-1 (creativity)
max_tokens = 2000                 # Integer: tokens to generate
timeout_seconds = 30              # Integer: request timeout
```

## Runtime API Reference
```
// Quick configuration
ai.configure_openai(api_key, model?)
ai.configure_anthropic(api_key, model?)
ai.configure_local(endpoint, model?)

// Full configuration
ai.set_ai_config({
    provider: "openai",
    api_key: "sk-...",
    model: "gpt-4",
    temperature: 0.7,
    max_tokens: 2000,
    timeout_seconds: 30
})

// Get current config
let config = ai.get_ai_config()
```
## Quick Start

**OpenAI:**

```bash
export OPENAI_API_KEY="sk-proj-..."
dal ai code "hello world"
```

**Local (Ollama):**

```bash
ollama serve
export DAL_AI_ENDPOINT="http://localhost:11434/api/generate"
dal ai code "hello world"
```

**Team project:**

```bash
# Create config
cat > .dal/ai_config.toml <<EOF
provider = "openai"
model = "gpt-4"
EOF

# Share the template
git add .dal/ai_config.toml.template
```

**Production:** use Method 5 (Hybrid).
## Troubleshooting

**Which configuration is being used?** Add debug logging:

```
let config = ai.get_ai_config()
print("Provider: " + config.provider)
print("Model: " + config.model)
```

Or run with debug output:

```bash
RUST_LOG=debug dal ai code "test"
```

**Config file not found?** Check these locations in order:

```bash
ls .dal/ai_config.toml  # Project-specific
ls dal_config.toml      # Project root
ls .dalconfig           # Project root
ls ~/.dal/config.toml   # User global
```

**Environment variables not working?** Check if they are set:
```bash
echo $OPENAI_API_KEY
echo $DAL_AI_ENDPOINT
```

Make sure the variable is exported; a plain assignment is visible only to the current shell, not to the `dal` child process:

```bash
# Wrong: not visible to dal
OPENAI_API_KEY="sk-..."

# Right: exported to child processes
export OPENAI_API_KEY="sk-..."
```

## Summary

DAL gives you maximum flexibility: environment variables for quick setup, a config file for per-project persistence, runtime calls for dynamic apps, `.env` files for development, and the hybrid approach for production.

Recommendation: start with environment variables for quick experiments, commit a key-free config template for your team, and combine a config file with environment overrides (Method 5) in production.