Ai
[Screenshot: Ai - Worker Configuration Interface]
1. Overview and Purpose
The AI worker generates text using various AI models from providers like OpenAI, Anthropic, Google, and Groq. It processes prompts, input text, documents, and chat history to produce intelligent responses. This worker supports multiple model providers and allows for contextual conversations with document-based knowledge.
2. Configuration Parameters
- temperature: Controls the randomness of the AI output (0 for deterministic output; higher values produce more varied, creative responses)
- model: Specifies the AI model in the format "provider/model-id" (e.g., "openai/gpt-4", "anthropic/claude-3")
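As an illustration of how a caller might guard the temperature parameter before configuring the worker, here is a minimal sketch; normalizeTemperature and the 0–2 upper bound are assumptions for the example, not part of the worker's API:

```javascript
// Hypothetical guard: clamp temperature into a conventional 0-2 range.
// The worker or provider may accept a different range; check their docs.
function normalizeTemperature(temperature) {
  if (typeof temperature !== "number" || Number.isNaN(temperature)) {
    return 0; // fall back to deterministic output
  }
  return Math.min(Math.max(temperature, 0), 2);
}

console.log(normalizeTemperature(0.7)); // 0.7
console.log(normalizeTemperature(5));   // 2
```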
3. Input/Output Handles
- prompt: Input handle - accepts the system prompt that defines the AI's behavior and instructions
- input: Input handle - accepts the user input or question to be processed
- documents: Input handle - accepts vector documents that provide context for the AI response
- history: Input handle - accepts chat history to maintain conversation context
- answer: Output handle - returns the generated AI response text
4. Usage Examples with Code
```javascript
// Configure the AI worker with OpenAI GPT-4
const aiWorker = {
  parameters: {
    temperature: 0.7,
    model: "openai/gpt-4"
  },
  fields: {
    prompt: { value: "You are a helpful assistant that answers questions based on provided context." },
    input: { value: "What are the key benefits of renewable energy?" },
    documents: { value: relevantDocs },
    history: { value: previousChatHistory }
  }
}
```
5. Integration Examples
The AI worker serves as a central component in conversational workflows, often following document retrieval workers to provide context-aware responses. It integrates seamlessly with chat interfaces and can be chained with other workers for complex AI-powered automation.
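As a concrete sketch of that retrieval-then-answer pattern, the snippet below assembles an AI worker configuration from the output of an upstream retrieval step. buildAiWorker is a hypothetical helper written for this example; only the parameters/fields shape mirrors the worker's documented configuration:

```javascript
// Hypothetical helper: package a question plus retrieved documents into
// the parameters/fields shape used by the AI worker.
function buildAiWorker(question, documents, history = []) {
  return {
    parameters: { temperature: 0, model: "openai/gpt-4" },
    fields: {
      prompt: { value: "Answer using only the provided context." },
      input: { value: question },
      documents: { value: documents },
      history: { value: history }
    }
  };
}

// An upstream document-retrieval worker would supply `documents`:
const worker = buildAiWorker("What is solar net metering?", [{ text: "..." }]);
```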
6. Best Practices
- Set temperature to 0 for consistent, deterministic outputs in production workflows
- Use clear, specific system prompts to guide AI behavior and output format
- Provide relevant documents as context to improve response accuracy and relevance
- Include chat history for multi-turn conversations to maintain context continuity
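To make the history and token-limit advice concrete, here is one possible way to trim chat history before passing it to the history handle. trimHistory and the ~4-characters-per-token heuristic are assumptions for this sketch, not part of the worker:

```javascript
// Sketch: keep only the most recent messages that fit a rough token budget.
// Uses the common ~4 characters per token heuristic; real tokenizers differ.
function trimHistory(history, tokenBudget) {
  const kept = [];
  let used = 0;
  // Walk backwards so the newest messages are kept first.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = Math.ceil(history[i].content.length / 4);
    if (used + cost > tokenBudget) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```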
7. Troubleshooting Tips
- Ensure API keys are properly configured for your chosen model provider
- Verify the model format follows "provider/model-id" pattern exactly
- Check that the selected model is available and supported by the provider
- Monitor token limits when using large documents or extensive chat history
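A quick format check can catch a malformed model string before a request is sent. isValidModelFormat is a hypothetical helper, and the exact character set each provider allows is an assumption here:

```javascript
// Hypothetical sanity check for the "provider/model-id" pattern.
// The accepted characters are an assumption; adjust for your providers.
function isValidModelFormat(model) {
  return /^[a-z0-9-]+\/[\w.:-]+$/i.test(model);
}

console.log(isValidModelFormat("openai/gpt-4"));        // true
console.log(isValidModelFormat("gpt-4"));               // false (missing provider)
```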
