NOMOS supports multiple LLM providers, allowing you to choose the best model for your use case.

Supported Providers

OpenAI

from nomos.llms import OpenAI

llm = OpenAI(model="gpt-4o-mini")
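Hosted providers authenticate with API keys, typically read from environment variables: the OpenAI SDK uses OPENAI_API_KEY, and Anthropic, Gemini, and Mistral have their own equivalents (for example ANTHROPIC_API_KEY and MISTRAL_API_KEY). A minimal sketch, assuming NOMOS delegates authentication to the underlying provider SDK:

import os

# Assumption: NOMOS passes authentication through to the official OpenAI SDK,
# which reads the standard OPENAI_API_KEY environment variable.
os.environ["OPENAI_API_KEY"] = "sk-..."  # better: export this in your shell instead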

Anthropic

from nomos.llms import Anthropic

llm = Anthropic(model="claude-3-5-sonnet-20241022")

Google Gemini

from nomos.llms import Gemini

llm = Gemini(model="gemini-2.0-flash-exp")

Mistral AI

from nomos.llms import Mistral

llm = Mistral(model="ministral-8b-latest")

Ollama (Local Models)

from nomos.llms import Ollama

llm = Ollama(model="llama3.3")

Ollama runs models on your own machine, so the Ollama server must be running and the model pulled locally (for example, ollama pull llama3.3) before the agent can use it.

HuggingFace

from nomos.llms import HuggingFace

llm = HuggingFace(model="meta-llama/Meta-Llama-3-8B-Instruct")

YAML Configuration

You can specify LLM configuration in your YAML config file:

llm:
  provider: openai
  model: gpt-4o-mini
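The same keys work for the other providers. Assuming the provider value mirrors the class names shown above (check your NOMOS version's docs if unsure), an Ollama setup would look like:

llm:
  provider: ollama
  model: llama3.3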

Advanced Configuration

Custom Parameters

You can pass additional parameters to LLM providers:

from nomos.llms import OpenAI

llm = OpenAI(
    model="gpt-4o-mini",
    temperature=0.7,  # sampling randomness: lower is more deterministic
    max_tokens=1000,  # cap on response length
    top_p=0.9,        # nucleus sampling threshold
)

YAML Advanced Configuration

llm:
  provider: openai
  model: gpt-4o-mini
  temperature: 0.7
  max_tokens: 1000
  top_p: 0.9

Troubleshooting

Error Handling

NOMOS includes built-in error handling and retry mechanisms:

name: my-agent
llm:
  provider: openai
  model: gpt-4o-mini
max_errors: 3  # Retry up to 3 times on LLM errors
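Conceptually, max_errors bounds a retry loop around each model call. The sketch below illustrates the idea only; it is not NOMOS's actual implementation:

def call_with_retries(llm_call, max_errors=3):
    """Retry an LLM call up to max_errors times (illustration only)."""
    for attempt in range(1, max_errors + 1):
        try:
            return llm_call()
        except Exception:  # real code would catch provider-specific errors
            if attempt == max_errors:
                raise  # out of retries: surface the last error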

Performance Tips

Choose the Right Model

Use smaller models for simple tasks to reduce latency and costs.

Configure Temperature

Use lower values (0.1-0.3) for more consistent, deterministic responses.

Set Max Tokens

Limit response length to control costs and latency.

Use Local Models

Use Ollama during development or when data privacy is important.
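Putting these tips together, a latency- and cost-conscious setup might look like the following (values are illustrative):

llm:
  provider: openai
  model: gpt-4o-mini  # small, fast model for simple tasks
  temperature: 0.2    # low temperature for consistent responses
  max_tokens: 500     # cap response length to control cost and latency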

Model Documentation

For the most up-to-date list of available models, refer to the official documentation:
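OpenAI: https://platform.openai.com/docs/models
Anthropic: https://docs.anthropic.com/en/docs/about-claude/models
Google Gemini: https://ai.google.dev/gemini-api/docs/models
Mistral AI: https://docs.mistral.ai/getting-started/models/
Ollama: https://ollama.com/library
HuggingFace: https://huggingface.co/models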