Flows

A Flow is a YAML-defined orchestration graph that describes how messages and data move through your digital worker. Flows are stored in .ygtc files and define:

  • Nodes - Individual processing steps (WASM components)
  • Edges - Connections between nodes
  • Triggers - What starts the flow
  • Conditions - Branching logic
flows/hello.ygtc
```yaml
name: hello_world
version: "1.0"
description: A simple greeting flow

# Define the nodes (processing steps)
nodes:
  - id: greet
    type: reply
    config:
      message: "Hello! How can I help you today?"

# Define what triggers this flow
triggers:
  - type: message
    pattern: "hello|hi|hey"
    target: greet
```

Nodes are the building blocks of a flow. Each node represents a WASM component that processes data:

```yaml
nodes:
  - id: unique_node_id
    type: node_type      # Component type
    config:              # Component-specific configuration
      key: value
    next: next_node_id   # Optional: next node to execute
```
| Type | Purpose |
| --- | --- |
| `reply` | Send a message back to the user |
| `llm` | Call an LLM (OpenAI, etc.) |
| `template` | Render a Handlebars template |
| `script` | Execute a Rhai script |
| `branch` | Conditional branching |
| `http` | Make HTTP requests |
| `state` | Manage session state |
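As one example of a node type not shown elsewhere on this page, a `script` node runs an embedded Rhai snippet. The sketch below is illustrative only: the `script` and `output` config keys are assumptions, not confirmed schema, so check the node reference for the real shape.

```yaml
nodes:
  - id: compute_total
    type: script
    config:
      # Hypothetical keys: inline Rhai source plus a variable name for the result
      script: |
        let total = price * quantity;
        total
      output: "total"
    next: format_response
```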

Edges connect nodes together. They can be implicit (via `next`) or explicit:

```yaml
nodes:
  - id: start
    type: template
    config:
      template: "Processing your request..."
    next: process

  - id: process
    type: llm
    config:
      model: "gpt-4"
      prompt: "{{message}}"
    next: respond

  - id: respond
    type: reply
    config:
      message: "{{llm_response}}"
```
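The example above uses the implicit form. For the explicit form, a top-level `edges:` section might look like the following sketch; the `from`/`to` key names are an assumption, not confirmed schema.

```yaml
nodes:
  - id: start
    type: template
    config:
      template: "Processing your request..."
  - id: process
    type: llm
    config:
      model: "gpt-4"
      prompt: "{{message}}"

# Hypothetical explicit edge list, equivalent to `next: process` on the start node
edges:
  - from: start
    to: process
```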

Triggers define what starts a flow:

```yaml
triggers:
  # Message pattern trigger
  - type: message
    pattern: "order|purchase|buy"
    target: handle_order

  # Default trigger (catch-all)
  - type: default
    target: fallback_handler

  # Event trigger
  - type: event
    event_type: "user.created"
    target: welcome_user
```

Use `branch` nodes for conditional logic:

```yaml
nodes:
  - id: check_intent
    type: branch
    config:
      conditions:
        - expression: "intent == 'greeting'"
          next: greet_user
        - expression: "intent == 'help'"
          next: show_help
        - expression: "intent == 'order'"
          next: process_order
      default: fallback

  - id: greet_user
    type: reply
    config:
      message: "Hello! Nice to meet you!"

  - id: show_help
    type: reply
    config:
      message: "Here's what I can help you with..."

  - id: fallback
    type: reply
    config:
      message: "I'm not sure I understand. Can you rephrase?"
```

Flows can read and write session state:

```yaml
nodes:
  - id: save_name
    type: state
    config:
      action: set
      key: "user_name"
      value: "{{extracted_name}}"
    next: confirm

  - id: get_name
    type: state
    config:
      action: get
      key: "user_name"
      output: "stored_name"
    next: greet_by_name
```

Integrate with LLMs for AI-powered responses:

```yaml
nodes:
  - id: analyze
    type: llm
    config:
      model: "gpt-4"
      system_prompt: |
        You are a helpful customer service agent.
        Extract the user's intent and any relevant entities.
      prompt: "User message: {{message}}"
      output_format: json
    next: process_result
```

Use Handlebars templates for dynamic content:

```yaml
nodes:
  - id: format_response
    type: template
    config:
      template: |
        Hi {{user_name}}!
        Here's your order summary:
        {{#each items}}
        - {{name}}: ${{price}}
        {{/each}}
        Total: ${{total}}
    next: send_response
```

Validate your flows before deployment:

```bash
greentic-flow doctor ./flows/

# Or with the GTC CLI
gtc flow validate ./flows/hello.ygtc
```
  1. Keep flows focused - One flow per user intent or workflow
  2. Use meaningful IDs - Node IDs should describe their purpose
  3. Document with comments - Add descriptions to complex flows
  4. Test incrementally - Validate after each change
  5. Version your flows - Use semantic versioning
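Applied to a flow header, the documentation and versioning practices might look like the following sketch (the flow name, changelog comment, and version bump are invented for illustration):

```yaml
# Handles password-reset requests end to end.
# Changelog:
#   1.1.0 - added rate-limiting branch
name: password_reset
version: "1.1.0"
description: Guide a user through resetting their password
```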
flows/customer_service.ygtc
```yaml
name: customer_service
version: "1.0"
description: Handle customer inquiries with AI assistance

nodes:
  # Analyze the incoming message
  - id: analyze_intent
    type: llm
    config:
      model: "gpt-4"
      system_prompt: |
        Classify the customer's intent into one of:
        - greeting
        - order_status
        - product_question
        - complaint
        - other
        Respond with JSON: {"intent": "...", "confidence": 0.0-1.0}
      prompt: "{{message}}"
      output_format: json
    next: route_intent

  # Route based on intent
  - id: route_intent
    type: branch
    config:
      conditions:
        - expression: "intent.intent == 'greeting'"
          next: handle_greeting
        - expression: "intent.intent == 'order_status'"
          next: handle_order_status
        - expression: "intent.intent == 'complaint'"
          next: handle_complaint
      default: handle_general

  # Handle greeting
  - id: handle_greeting
    type: reply
    config:
      message: "Hello! Welcome to our support. How can I help you today?"

  # Handle order status
  - id: handle_order_status
    type: http
    config:
      method: GET
      url: "https://api.example.com/orders/{{order_id}}"
    next: format_order_response

  - id: format_order_response
    type: template
    config:
      template: |
        Your order #{{order_id}} is currently: {{status}}
        Expected delivery: {{delivery_date}}
    next: send_order_response

  - id: send_order_response
    type: reply
    config:
      message: "{{formatted_response}}"

  # Handle complaints with escalation
  - id: handle_complaint
    type: reply
    config:
      message: "I'm sorry to hear that. Let me connect you with a specialist who can help resolve this."
    next: escalate_to_human

  - id: escalate_to_human
    type: event
    config:
      event_type: "escalation.requested"
      payload:
        reason: "complaint"
        conversation_id: "{{session_id}}"

  # General handler
  - id: handle_general
    type: llm
    config:
      model: "gpt-4"
      system_prompt: "You are a helpful customer service agent. Be friendly and concise."
      prompt: "{{message}}"
    next: send_general_response

  - id: send_general_response
    type: reply
    config:
      message: "{{llm_response}}"

triggers:
  - type: message
    target: analyze_intent
```