Compare commits

...

7 Commits

47 changed files with 9310 additions and 4098 deletions

View File

@@ -9,14 +9,14 @@ This project simulates a village economy with autonomous AI agents. Each agent h
 ### Features
 - **Agent-based simulation**: Multiple AI agents with different professions
+- **GOAP AI system**: Goal-Oriented Action Planning for intelligent agent behavior
 - **Vital stats system**: Energy, Hunger, Thirst, and Heat with passive decay
 - **Market economy**: Order book system for trading resources
 - **Day/Night cycle**: 10 day steps + 1 night step per day
-- **Maslow-priority AI**: Agents prioritize survival over economic activities
-- **Real-time visualization**: Pygame frontend showing agents and their states
+- **Real-time visualization**: Web-based frontend showing agents and their states
 - **Agent movement**: Agents visually move to different locations based on their actions
 - **Action indicators**: Visual feedback showing what each agent is doing
-- **Settings panel**: Adjust simulation parameters with sliders
+- **GOAP Debug Panel**: View agent planning and decision-making in real-time
 - **Detailed logging**: All simulation steps are logged for analysis

 ## Architecture
@@ -28,11 +28,13 @@ villsim/
 │   ├── config.py          # Centralized configuration
 │   ├── api/               # REST API endpoints
 │   ├── core/              # Game logic (engine, world, market, AI, logger)
+│   │   └── goap/          # GOAP AI system (planner, actions, goals)
 │   └── domain/            # Data models (agent, resources, actions)
-├── frontend/              # Pygame visualizer
-│   ├── main.py            # Entry point
-│   ├── client.py          # HTTP client
-│   └── renderer/          # Drawing components (map, agents, UI, settings)
+├── web_frontend/          # Web-based visualizer
+│   ├── index.html         # Main application
+│   ├── goap_debug.html    # GOAP debugging view
+│   └── src/               # JavaScript modules (scenes, API client)
+├── tools/                 # Analysis and optimization scripts
 ├── logs/                  # Simulation log files (created on run)
 ├── docs/design/           # Design documents
 ├── requirements.txt
@@ -79,40 +81,25 @@ The server will start at `http://localhost:8000`. You can access:
 - API docs: `http://localhost:8000/docs`
 - Health check: `http://localhost:8000/health`

-### Start the Frontend Visualizer
-Open another terminal and run:
+### Start the Web Frontend
+Open the web frontend by opening `web_frontend/index.html` in a web browser, or serve it with a local HTTP server:
 ```bash
-python -m frontend.main
+cd web_frontend
+python -m http.server 8080
 ```
-A Pygame window will open showing the simulation.
+Then navigate to `http://localhost:8080` in your browser.

 ## Controls

-| Key | Action |
-|-----|--------|
-| `SPACE` | Advance one turn (manual mode) |
-| `R` | Reset simulation |
-| `M` | Toggle between MANUAL and AUTO mode |
-| `S` | Open/close settings panel |
-| `ESC` | Close settings or quit |
+The web frontend provides buttons for:
+- **Step**: Advance one turn (manual mode)
+- **Auto/Manual**: Toggle between automatic and manual mode
+- **Reset**: Reset simulation

-Hover over agents to see detailed information.
+Click on agents to see detailed information. Use the GOAP debug panel (`goap_debug.html`) to inspect agent planning.

-## Settings Panel
-Press `S` to open the settings panel where you can adjust:
-- **Agent Stats**: Max values and decay rates for energy, hunger, thirst, heat
-- **World Settings**: Grid size, initial agent count, day length
-- **Action Costs**: Energy costs for hunting, gathering, etc.
-- **Resource Effects**: How much stats are restored by consuming resources
-- **Market Settings**: Price adjustment timing and rates
-- **Simulation Speed**: Auto-step interval
-Changes require clicking "Apply & Restart" to take effect.

 ## Logging
@@ -189,12 +176,14 @@ Action indicators above agents show:
 - Movement animation when traveling
 - Dotted line to destination

-### AI Priority System
-1. **Critical needs** (stat < 20%): Consume, buy, or gather resources
-2. **Energy management**: Rest if too tired
-3. **Economic activity**: Sell excess inventory, buy needed materials
-4. **Routine work**: Perform profession-specific tasks
+### AI System (GOAP)
+The simulation uses Goal-Oriented Action Planning (GOAP) for intelligent agent behavior:
+1. **Goals**: Agents have weighted goals (Survive, Maintain Heat, Build Wealth, etc.)
+2. **Actions**: Agents can perform actions with preconditions and effects
+3. **Planning**: A* search finds optimal action sequences to satisfy goals
+4. **Personality**: Each agent has unique traits affecting goal weights and decisions
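
To make the planning step concrete, here is a minimal, self-contained sketch of the GOAP idea described above: actions with preconditions and effects, expanded with a cheapest-first (A*-without-heuristic) search over dictionary world states. The class and field names are illustrative only, not the project's actual `backend/core/goap` API.

```python
from dataclasses import dataclass
from heapq import heappush, heappop

@dataclass
class Action:
    name: str
    cost: float
    preconditions: dict  # key/value pairs the state must already match
    effects: dict        # key/value pairs applied when the action runs

def satisfies(state: dict, conditions: dict) -> bool:
    return all(state.get(k) == v for k, v in conditions.items())

def plan(start: dict, goal: dict, actions: list[Action], max_iterations: int = 50):
    """Cheapest-first search for an action sequence that satisfies the goal."""
    frontier = [(0.0, 0, start, [])]  # (cost so far, tiebreaker, state, action names)
    counter = 0
    for _ in range(max_iterations):
        if not frontier:
            break
        cost, _, state, seq = heappop(frontier)
        if satisfies(state, goal):
            return seq
        for action in actions:
            if satisfies(state, action.preconditions):
                counter += 1
                new_state = {**state, **action.effects}
                heappush(frontier, (cost + action.cost, counter, new_state, seq + [action.name]))
    return None  # no plan found within the iteration budget

# A hungry agent with no food plans to gather first, then eat.
actions = [
    Action("GatherBerries", 2.0, {"has_food": False}, {"has_food": True}),
    Action("EatFood", 1.0, {"has_food": True}, {"hungry": False, "has_food": False}),
]
print(plan({"hungry": True, "has_food": False}, {"hungry": False}, actions))
# ['GatherBerries', 'EatFood']
```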
## Development
@@ -202,9 +191,10 @@ Action indicators above agents show:
 - **Config** (`backend/config.py`): Centralized configuration with dataclasses
 - **Domain Layer** (`backend/domain/`): Pure data models
-- **Core Layer** (`backend/core/`): Game logic, AI, market, logging
+- **Core Layer** (`backend/core/`): Game logic, market, logging
+- **GOAP AI** (`backend/core/goap/`): Goal-oriented action planning system
 - **API Layer** (`backend/api/`): FastAPI routes and schemas
-- **Frontend** (`frontend/`): Pygame visualization client
+- **Web Frontend** (`web_frontend/`): Browser-based visualization

 ### Analyzing Logs
@@ -227,7 +217,7 @@ with open("logs/sim_20260118_123456.jsonl") as f:
 - Agent reproduction
 - Skill progression
 - Persistent save/load
-- Web-based frontend alternative
+- Unity frontend integration

 ## License

View File

@@ -315,3 +315,107 @@ def load_config_from_file():
except Exception as e:
raise HTTPException(status_code=500, detail=f"Failed to load config: {str(e)}")
# ============== GOAP Debug Endpoints ==============
@router.get(
"/goap/debug/{agent_id}",
summary="Get GOAP debug info for an agent",
description="Returns detailed GOAP decision-making info including goals, actions, and plans.",
)
def get_agent_goap_debug(agent_id: str):
"""Get GOAP debug information for a specific agent."""
engine = get_engine()
agent = engine.world.get_agent(agent_id)
if agent is None:
raise HTTPException(status_code=404, detail=f"Agent {agent_id} not found")
if not agent.is_alive():
raise HTTPException(status_code=400, detail=f"Agent {agent_id} is not alive")
from backend.core.goap.debug import get_goap_debug_info
debug_info = get_goap_debug_info(
agent=agent,
market=engine.market,
step_in_day=engine.world.step_in_day,
day_steps=engine.world.config.day_steps,
is_night=engine.world.is_night(),
)
return debug_info.to_dict()
@router.get(
"/goap/debug",
summary="Get GOAP debug info for all agents",
description="Returns GOAP decision-making info for all living agents.",
)
def get_all_goap_debug():
"""Get GOAP debug information for all living agents."""
engine = get_engine()
from backend.core.goap.debug import get_all_agents_goap_debug
debug_infos = get_all_agents_goap_debug(
agents=engine.world.agents,
market=engine.market,
step_in_day=engine.world.step_in_day,
day_steps=engine.world.config.day_steps,
is_night=engine.world.is_night(),
)
return {
"agents": [info.to_dict() for info in debug_infos],
"count": len(debug_infos),
"current_turn": engine.world.current_turn,
"is_night": engine.world.is_night(),
}
@router.get(
"/goap/goals",
summary="Get all GOAP goals",
description="Returns a list of all available GOAP goals.",
)
def get_goap_goals():
"""Get all available GOAP goals."""
from backend.core.goap import get_all_goals
goals = get_all_goals()
return {
"goals": [
{
"name": g.name,
"type": g.goal_type.value,
"max_plan_depth": g.max_plan_depth,
}
for g in goals
],
"count": len(goals),
}
@router.get(
"/goap/actions",
summary="Get all GOAP actions",
description="Returns a list of all available GOAP actions.",
)
def get_goap_actions():
"""Get all available GOAP actions."""
from backend.core.goap import get_all_actions
actions = get_all_actions()
return {
"actions": [
{
"name": a.name,
"action_type": a.action_type.value,
"target_resource": a.target_resource.value if a.target_resource else None,
}
for a in actions
],
"count": len(actions),
}
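
A quick way to poke at the new debug endpoints once the server is running. The base URL matches the README; the agent id is a placeholder, and the paths assume the router is mounted without an extra prefix.

```python
import requests

BASE = "http://localhost:8000"  # assumes no extra router prefix; adjust if needed

# Registered GOAP goals and actions
print(requests.get(f"{BASE}/goap/goals").json()["count"])
print(requests.get(f"{BASE}/goap/actions").json()["count"])

# Planning/decision info for all living agents
print(requests.get(f"{BASE}/goap/debug").json()["count"])

# ...or for a single agent (replace the id with one from your running simulation)
resp = requests.get(f"{BASE}/goap/debug/agent_0")
print(resp.status_code, resp.json())
```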

View File

@@ -55,9 +55,31 @@ class AgentResponse(BaseModel):
inventory: list[ResourceSchema]
money: int
is_alive: bool
is_corpse: bool = False
can_act: bool
current_action: AgentActionSchema
last_action_result: str
death_turn: int = -1
death_reason: str = ""
# Age system
age: int = 25
max_age: int = 70
age_category: str = "prime"
birth_day: int = 0
generation: int = 0
parent_ids: list[str] = []
children_count: int = 0
# Age modifiers
skill_modifier: float = 1.0
energy_cost_modifier: float = 1.0
learning_modifier: float = 1.0
# Personality and skills
personality: dict = {}
skills: dict = {}
actions_performed: dict = {}
total_trades: int = 0
total_money_earned: int = 0
action_history: list = []
# ============== Market Schemas ==============
@@ -119,6 +141,7 @@ class StatisticsSchema(BaseModel):
living_agents: int
total_agents_spawned: int
total_agents_died: int
total_births: int = 0
total_money_in_circulation: int
professions: dict[str, int]
# Wealth inequality metrics
@@ -127,6 +150,16 @@
richest_agent: int = 0
poorest_agent: int = 0
gini_coefficient: float = 0.0
# Age demographics
age_distribution: dict[str, int] = {}
avg_age: float = 0.0
oldest_agent: int = 0
youngest_agent: int = 0
generations: dict[int, int] = {}
# Death statistics
deaths_by_cause: dict[str, int] = {}
# Village storage
village_storage: dict[str, int] = {}
class ActionLogSchema(BaseModel):
@@ -142,10 +175,12 @@ class TurnLogSchema(BaseModel):
turn: int
agent_actions: list[ActionLogSchema]
deaths: list[str]
births: list[str] = []
trades: list[dict]
resources_produced: dict[str, int] = {}
resources_consumed: dict[str, int] = {}
resources_spoiled: dict[str, int] = {}
day_events: dict = {}
class ResourceStatsSchema(BaseModel):

View File

@@ -109,7 +109,10 @@ class EconomyConfig:
"""
# How much agents value money vs energy
# Higher = agents see money as more valuable (trade more)
-energy_to_money_ratio: float = 1.5  # 1 energy ≈ 1.5 coins
+energy_to_money_ratio: float = 150  # 1 energy ≈ 150 coins
# Minimum price floor for any market transaction
min_price: int = 100
# How strongly agents desire wealth (0-1)
# Higher = agents will prioritize building wealth
@@ -121,48 +124,258 @@
buy_efficiency_threshold: float = 0.7
# Minimum wealth target - agents want at least this much money
-min_wealth_target: int = 50
+min_wealth_target: int = 5000
# Price adjustment limits
max_price_markup: float = 2.0  # Maximum price = 2x base value
min_price_discount: float = 0.5  # Minimum price = 50% of base value
@dataclass
class AIConfig:
"""Configuration for AI decision-making system."""
# Maximum A* iterations for GOAP planner
goap_max_iterations: int = 50
# Maximum plan depth (number of actions in a plan)
goap_max_plan_depth: int = 3
# Fall back to reactive planning if GOAP fails to find a plan
reactive_fallback: bool = True
# Use BDI (Belief-Desire-Intention) instead of pure GOAP
# BDI adds persistent beliefs, long-term desires, and plan commitment
use_bdi: bool = False
@dataclass
class BDIConfig:
"""Configuration for BDI (Belief-Desire-Intention) reasoning system.
BDI extends GOAP with:
- Persistent beliefs (memory of past events)
- Long-term desires (personality-driven motivations)
- Committed intentions (plan persistence)
"""
# Timeslicing: how often agents run full deliberation
# 1 = every turn, 3 = every 3rd turn (staggered by agent ID)
thinking_interval: int = 1
# Maximum consecutive action failures before replanning
max_consecutive_failures: int = 2
# Priority multiplier needed to switch from current intention
# 1.5 = new goal must be 50% higher priority to cause a switch
priority_switch_threshold: float = 1.5
# Memory system settings
memory_max_events: int = 50 # Max events to remember
memory_decay_rate: float = 0.1 # How fast memories fade
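
# Illustration (not part of this commit): with thinking_interval = 3, an agent runs a
# full BDI deliberation only when (current_turn % 3) == (hash(agent.id) % 3), so roughly
# a third of the agents deliberate on any given turn (see should_deliberate further below).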
@dataclass
class RedisConfig:
"""Configuration for optional Redis state storage.
Redis enables:
- Persistent state across restarts
- Decoupled UI polling (web clients read independently)
- Distributed access (multiple simulation instances)
"""
enabled: bool = False
host: str = "localhost"
port: int = 6379
db: int = 0
password: Optional[str] = None
prefix: str = "villsim:"
ttl_seconds: int = 3600 # 1 hour default TTL
@dataclass
class AgeConfig:
"""Configuration for the age and lifecycle system.
Age affects skills, energy costs, and creates birth/death cycles.
Age is measured in "years" where 1 year = 1 simulation day.
Population is controlled by economy:
- Birth rate scales with village prosperity (food availability)
- Parents transfer wealth to children at birth and death
"""
# Starting age range for initial agents
min_start_age: int = 18
max_start_age: int = 35
# Age category thresholds
young_age_threshold: int = 25 # Below this = young
prime_age_start: int = 25 # Prime age begins
prime_age_end: int = 50 # Prime age ends
old_age_threshold: int = 50 # Above this = old
# Lifespan
base_max_age: int = 75 # Base maximum age
max_age_variance: int = 10 # ± variance for max age
age_per_day: int = 1 # How many "years" per sim day
# Birth mechanics - economy controlled
birth_cooldown_days: int = 20 # Days after birth before can birth again
min_birth_age: int = 20 # Minimum age to give birth
max_birth_age: int = 45 # Maximum age to give birth
birth_base_chance: float = 0.02 # Base chance of birth per day
birth_prosperity_multiplier: float = 3.0 # Max multiplier based on food abundance
birth_food_requirement: int = 60 # Min hunger to attempt birth
birth_energy_requirement: int = 25 # Min energy to attempt birth
# Wealth transfer
birth_wealth_transfer: float = 0.25 # Parent gives 25% wealth to child at birth
inheritance_enabled: bool = True # Children inherit from dead parents
child_start_age: int = 18 # Age children start at (adult)
# Age modifiers for YOUNG agents (learning phase)
young_skill_multiplier: float = 0.8 # Skills are 80% effective
young_learning_multiplier: float = 1.4 # Learn 40% faster
young_energy_cost_multiplier: float = 0.85 # 15% less energy cost
# Age modifiers for PRIME agents (peak performance)
prime_skill_multiplier: float = 1.0
prime_learning_multiplier: float = 1.0
prime_energy_cost_multiplier: float = 1.0
# Age modifiers for OLD agents (wisdom but frailty)
old_skill_multiplier: float = 1.15 # Skills 15% more effective (wisdom)
old_learning_multiplier: float = 0.6 # Learn 40% slower
old_energy_cost_multiplier: float = 1.2 # 20% more energy cost
old_max_energy_multiplier: float = 0.75 # 25% less max energy
old_decay_multiplier: float = 1.15 # 15% faster stat decay
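
# Illustration (not part of this commit): with the defaults above, a parent holding
# 1,000 coins passes roughly 1,000 * birth_wealth_transfer = 250 coins to the newborn,
# which enters the simulation already adult at child_start_age = 18.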
@dataclass
class StorageConfig:
"""Configuration for resource storage limits.
Limits the total resources that can exist in the village economy.
"""
# Village-wide storage limits per resource type
village_meat_limit: int = 100
village_berries_limit: int = 150
village_water_limit: int = 200
village_wood_limit: int = 200
village_hide_limit: int = 80
village_clothes_limit: int = 50
# Market limits
market_order_limit_per_agent: int = 5 # Max active orders per agent
market_total_order_limit: int = 500 # Max total market orders
@dataclass
class SinksConfig:
"""Configuration for resource sinks (ways resources leave the economy).
These create pressure to keep producing resources rather than hoarding.
"""
# Daily decay of village storage (percentage)
daily_village_decay_rate: float = 0.02 # 2% of stored resources decay daily
# Money tax (redistributed or removed)
daily_tax_rate: float = 0.01 # 1% wealth tax per day
# Random events
random_event_chance: float = 0.05 # 5% chance of event per day
fire_event_resource_loss: float = 0.1 # 10% resources lost in fire
theft_event_money_loss: float = 0.05 # 5% money stolen
# Maintenance costs
clothes_maintenance_per_day: int = 1 # Clothes degrade 1 durability/day
fire_wood_cost_per_night: int = 1 # Wood consumed to stay warm at night
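
# Illustration (not part of this commit): with the defaults above, a village store of
# 100 berries loses about 100 * 0.02 = 2 units per day, and an agent holding 1,000 coins
# pays about 1,000 * 0.01 = 10 coins of wealth tax per day.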
@dataclass
class PerformanceConfig:
"""Configuration for performance optimization.
Controls logging and memory usage to keep simulation fast at high turn counts.
"""
# Logging control
logging_enabled: bool = False # Enable file logging (disable for speed)
detailed_logging: bool = False # Enable verbose per-agent logging
async_logging: bool = True # Use non-blocking background logging
log_flush_interval: int = 50 # Flush logs every N turns (not every turn)
# Memory management
max_turn_logs: int = 100 # Keep only last N turn logs in memory
# Statistics calculation frequency
stats_update_interval: int = 10 # Update expensive stats every N turns
# State storage
state_storage_enabled: bool = True # Enable state snapshotting
@dataclass
class SimulationConfig:
"""Master configuration containing all sub-configs."""
performance: PerformanceConfig = field(default_factory=PerformanceConfig)
agent_stats: AgentStatsConfig = field(default_factory=AgentStatsConfig)
resources: ResourceConfig = field(default_factory=ResourceConfig)
actions: ActionConfig = field(default_factory=ActionConfig)
world: WorldConfig = field(default_factory=WorldConfig)
market: MarketConfig = field(default_factory=MarketConfig)
economy: EconomyConfig = field(default_factory=EconomyConfig)
ai: AIConfig = field(default_factory=AIConfig)
bdi: BDIConfig = field(default_factory=BDIConfig)
redis: RedisConfig = field(default_factory=RedisConfig)
age: AgeConfig = field(default_factory=AgeConfig)
storage: StorageConfig = field(default_factory=StorageConfig)
sinks: SinksConfig = field(default_factory=SinksConfig)
# Simulation control
auto_step_interval: float = 1.0  # Seconds between auto steps
def to_dict(self) -> dict:
"""Convert to dictionary."""
-return {
+result = {
"performance": asdict(self.performance),
"ai": asdict(self.ai),
"bdi": asdict(self.bdi),
"agent_stats": asdict(self.agent_stats),
"resources": asdict(self.resources),
"actions": asdict(self.actions),
"world": asdict(self.world),
"market": asdict(self.market),
"economy": asdict(self.economy),
"age": asdict(self.age),
"storage": asdict(self.storage),
"sinks": asdict(self.sinks),
"auto_step_interval": self.auto_step_interval,
}
# Handle redis separately due to Optional field
redis_dict = asdict(self.redis)
result["redis"] = redis_dict
return result
@classmethod
def from_dict(cls, data: dict) -> "SimulationConfig":
"""Create from dictionary."""
# Handle redis config specially due to Optional password
redis_data = data.get("redis", {})
if redis_data.get("password") is None:
redis_data["password"] = None
return cls(
performance=PerformanceConfig(**data.get("performance", {})),
ai=AIConfig(**data.get("ai", {})),
bdi=BDIConfig(**data.get("bdi", {})),
redis=RedisConfig(**redis_data),
agent_stats=AgentStatsConfig(**data.get("agent_stats", {})),
resources=ResourceConfig(**data.get("resources", {})),
actions=ActionConfig(**data.get("actions", {})),
world=WorldConfig(**data.get("world", {})),
market=MarketConfig(**data.get("market", {})),
economy=EconomyConfig(**data.get("economy", {})),
age=AgeConfig(**data.get("age", {})),
storage=StorageConfig(**data.get("storage", {})),
sinks=SinksConfig(**data.get("sinks", {})),
auto_step_interval=data.get("auto_step_interval", 1.0),
)
@@ -253,3 +466,14 @@ def _reset_all_caches() -> None:
except ImportError:
pass
try:
from backend.core.ai import reset_ai_config_cache
reset_ai_config_cache()
except ImportError:
pass
try:
from backend.core.storage import reset_state_store
reset_state_store()
except ImportError:
pass
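
For orientation, a rough sketch of how the rescaled economy values combine into a sale price, mirroring the fair-value formula used by `_calculate_sell_price` further down this diff (the `energy_cost` figure is made up for the example):

```python
energy_to_money_ratio = 150  # EconomyConfig.energy_to_money_ratio
min_price = 100              # EconomyConfig.min_price (price floor)

energy_cost = 3              # hypothetical energy cost of producing one unit
fair_value = max(min_price, int(round(energy_cost * energy_to_money_ratio)))
print(fair_value)  # 450 coins, comfortably above the 100-coin floor
```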

View File

@@ -3,7 +3,6 @@
from .world import World, TimeOfDay
from .market import Order, OrderBook
from .engine import GameEngine, SimulationMode
-from .ai import AgentAI
from .logger import SimulationLogger, get_simulation_logger
__all__ = [
@@ -13,7 +12,6 @@ __all__ = [
"OrderBook",
"GameEngine",
"SimulationMode",
-"AgentAI",
"SimulationLogger",
"get_simulation_logger",
]

File diff suppressed because it is too large

View File

@@ -0,0 +1,50 @@
"""BDI (Belief-Desire-Intention) module for agent AI.
This module provides a BDI architecture that wraps the existing GOAP planner,
enabling:
- Persistent beliefs (memory of past events)
- Long-term desires (personality-driven motivations)
- Committed intentions (plan persistence)
Main entry points:
- get_bdi_decision(): Get an AI decision using BDI reasoning
- reset_bdi_state(): Reset all agent BDI state (on simulation reset)
"""
from backend.core.bdi.belief import BeliefBase, MemoryEvent
from backend.core.bdi.desire import Desire, DesireType, DesireManager
from backend.core.bdi.intention import (
Intention,
IntentionManager,
CommitmentStrategy,
)
from backend.core.bdi.bdi_agent import (
BDIAgentAI,
AIDecision,
TradeItem,
get_bdi_decision,
reset_bdi_state,
remove_agent_bdi_state,
)
__all__ = [
# Belief system
"BeliefBase",
"MemoryEvent",
# Desire system
"Desire",
"DesireType",
"DesireManager",
# Intention system
"Intention",
"IntentionManager",
"CommitmentStrategy",
# BDI Agent
"BDIAgentAI",
"AIDecision",
"TradeItem",
# Entry points
"get_bdi_decision",
"reset_bdi_state",
"remove_agent_bdi_state",
]

View File

@@ -0,0 +1,630 @@
"""BDI Agent AI that wraps GOAP planning with BDI reasoning.
This module provides the main BDI-based AI decision maker that:
1. Maintains persistent beliefs about the world
2. Manages desires based on personality
3. Commits to intentions (plans) and executes them
4. Uses GOAP planning to generate action sequences
Performance optimizations:
- Timeslicing: full BDI cycle only runs periodically
- Plan persistence: reuses plans across turns
- Cached belief updates: skips unchanged data
"""
from dataclasses import dataclass, field
from typing import Optional, TYPE_CHECKING
from backend.domain.action import ActionType
from backend.domain.resources import ResourceType
from backend.domain.personality import get_trade_price_modifier
from backend.core.bdi.belief import BeliefBase
from backend.core.bdi.desire import DesireManager
from backend.core.bdi.intention import IntentionManager
from backend.core.goap.planner import GOAPPlanner, ReactivePlanner
from backend.core.goap.goals import get_all_goals
from backend.core.goap.actions import get_all_actions
if TYPE_CHECKING:
from backend.domain.agent import Agent
from backend.core.market import OrderBook
from backend.core.goap.goal import Goal
from backend.core.goap.action import GOAPAction
from backend.core.goap.planner import Plan
@dataclass
class TradeItem:
"""A single item to buy/sell in a trade."""
order_id: str
resource_type: ResourceType
quantity: int
price_per_unit: int
@dataclass
class AIDecision:
"""A decision made by the AI for an agent."""
action: ActionType
target_resource: Optional[ResourceType] = None
order_id: Optional[str] = None
quantity: int = 1
price: int = 0
reason: str = ""
trade_items: list[TradeItem] = field(default_factory=list)
adjust_order_id: Optional[str] = None
new_price: Optional[int] = None
# GOAP/BDI-specific fields
goal_name: str = ""
plan_length: int = 0
bdi_info: dict = field(default_factory=dict)
def to_dict(self) -> dict:
return {
"action": self.action.value,
"target_resource": self.target_resource.value if self.target_resource else None,
"order_id": self.order_id,
"quantity": self.quantity,
"price": self.price,
"reason": self.reason,
"trade_items": [
{
"order_id": t.order_id,
"resource_type": t.resource_type.value,
"quantity": t.quantity,
"price_per_unit": t.price_per_unit,
}
for t in self.trade_items
],
"adjust_order_id": self.adjust_order_id,
"new_price": self.new_price,
"goal_name": self.goal_name,
"plan_length": self.plan_length,
"bdi_info": self.bdi_info,
}
class BDIAgentAI:
"""BDI-based AI decision maker that wraps GOAP planning.
The BDI cycle:
1. Update beliefs from sensors (agent state, market)
2. Update desires based on beliefs and personality
3. Check if current intention should continue
4. If needed, generate new plan via GOAP
5. Execute next action from intention
Performance features:
- Timeslicing: full deliberation only every N turns
- Plan persistence: reuse plans across turns
- Reactive fallback: simple decisions when not deliberating
"""
# Class-level cache for planners (shared across instances)
_planner_cache: Optional[GOAPPlanner] = None
_reactive_cache: Optional[ReactivePlanner] = None
_goals_cache: Optional[list] = None
_actions_cache: Optional[list] = None
def __init__(
self,
agent: "Agent",
market: "OrderBook",
step_in_day: int = 1,
day_steps: int = 10,
current_turn: int = 0,
is_night: bool = False,
# Persistent BDI state (passed in for continuity)
beliefs: Optional[BeliefBase] = None,
desires: Optional[DesireManager] = None,
intentions: Optional[IntentionManager] = None,
):
self.agent = agent
self.market = market
self.step_in_day = step_in_day
self.day_steps = day_steps
self.current_turn = current_turn
self.is_night = is_night
# Initialize or use existing BDI components
self.beliefs = beliefs or BeliefBase()
self.desires = desires or DesireManager(agent.personality)
self.intentions = intentions or IntentionManager.from_personality(agent.personality)
# Update beliefs from current state
self.beliefs.update_from_sensors(
agent=agent,
market=market,
step_in_day=step_in_day,
day_steps=day_steps,
current_turn=current_turn,
is_night=is_night,
)
# Update desires from beliefs
self.desires.update_from_beliefs(self.beliefs)
# Get cached planners and goals/actions
self.planner = self._get_planner()
self.reactive_planner = self._get_reactive_planner()
self.goals = self._get_goals()
self.actions = self._get_actions()
# Personality shortcuts
self.p = agent.personality
self.skills = agent.skills
@classmethod
def _get_planner(cls) -> GOAPPlanner:
"""Get cached GOAP planner."""
if cls._planner_cache is None:
from backend.config import get_config
config = get_config()
ai_config = config.ai
cls._planner_cache = GOAPPlanner(
max_iterations=ai_config.goap_max_iterations,
)
return cls._planner_cache
@classmethod
def _get_reactive_planner(cls) -> ReactivePlanner:
"""Get cached reactive planner."""
if cls._reactive_cache is None:
cls._reactive_cache = ReactivePlanner()
return cls._reactive_cache
@classmethod
def _get_goals(cls) -> list:
"""Get cached goals list."""
if cls._goals_cache is None:
cls._goals_cache = get_all_goals()
return cls._goals_cache
@classmethod
def _get_actions(cls) -> list:
"""Get cached actions list."""
if cls._actions_cache is None:
cls._actions_cache = get_all_actions()
return cls._actions_cache
@classmethod
def reset_caches(cls) -> None:
"""Reset all caches (call after config reload)."""
cls._planner_cache = None
cls._reactive_cache = None
cls._goals_cache = None
cls._actions_cache = None
def should_deliberate(self) -> bool:
"""Check if this agent should run full BDI deliberation this turn.
Timeslicing: not every agent deliberates every turn.
Agents are staggered based on their ID hash.
"""
from backend.config import get_config
config = get_config()
# Get thinking interval from config (default to 1 = every turn)
bdi_config = getattr(config, 'bdi', None)
thinking_interval = getattr(bdi_config, 'thinking_interval', 1) if bdi_config else 1
if thinking_interval <= 1:
return True # Deliberate every turn
# Stagger agents across turns
agent_hash = hash(self.agent.id) % thinking_interval
return (self.current_turn % thinking_interval) == agent_hash
def decide(self) -> AIDecision:
"""Make a decision using BDI reasoning with GOAP planning.
Decision flow:
1. Night time: mandatory sleep
2. Check if should deliberate (timeslicing)
3. If deliberating: run full BDI cycle
4. If not: continue current intention or reactive fallback
"""
# Night time - mandatory sleep
if self.is_night:
return AIDecision(
action=ActionType.SLEEP,
reason="Night time: sleeping",
goal_name="Sleep",
bdi_info={"mode": "night"},
)
# Check if we should run full deliberation
if self.should_deliberate():
return self._deliberate()
else:
return self._continue_or_react()
def _deliberate(self) -> AIDecision:
"""Run full BDI deliberation cycle."""
# Filter goals by desires
filtered_goals = self.desires.filter_goals_by_desire(self.goals, self.beliefs)
# Check if we should reconsider current intention
should_replan = self.intentions.should_reconsider(
beliefs=self.beliefs,
desire_manager=self.desires,
available_goals=filtered_goals,
)
if not should_replan and self.intentions.has_intention():
# Continue with current intention
action = self.intentions.get_next_action()
if action:
return self._convert_to_decision(
goap_action=action,
goal=self.intentions.current_intention.goal,
plan=self.intentions.current_intention.plan,
mode="continue",
)
# Need to plan for a goal
world_state = self.beliefs.to_world_state()
plan = self.planner.plan_for_goals(
initial_state=world_state,
goals=filtered_goals,
available_actions=self.actions,
)
if plan and not plan.is_empty:
# Commit to new intention
self.intentions.commit_to_plan(
goal=plan.goal,
plan=plan,
current_turn=self.current_turn,
)
goap_action = plan.first_action
return self._convert_to_decision(
goap_action=goap_action,
goal=plan.goal,
plan=plan,
mode="new_plan",
)
# Fallback to reactive planning
return self._reactive_fallback()
def _continue_or_react(self) -> AIDecision:
"""Continue current intention or use reactive fallback (no deliberation)."""
if self.intentions.has_intention():
action = self.intentions.get_next_action()
if action:
return self._convert_to_decision(
goap_action=action,
goal=self.intentions.current_intention.goal,
plan=self.intentions.current_intention.plan,
mode="timeslice_continue",
)
# No intention, use reactive fallback
return self._reactive_fallback()
def _reactive_fallback(self) -> AIDecision:
"""Use reactive planning when no intention exists."""
world_state = self.beliefs.to_world_state()
best_action = self.reactive_planner.select_best_action(
state=world_state,
goals=self.goals,
available_actions=self.actions,
)
if best_action:
return self._convert_to_decision(
goap_action=best_action,
goal=None,
plan=None,
mode="reactive",
)
# Ultimate fallback - rest
return AIDecision(
action=ActionType.REST,
reason="No valid action found, resting",
bdi_info={"mode": "fallback"},
)
def _convert_to_decision(
self,
goap_action: "GOAPAction",
goal: Optional["Goal"],
plan: Optional["Plan"],
mode: str = "deliberate",
) -> AIDecision:
"""Convert a GOAP action to an AIDecision with proper parameters."""
action_type = goap_action.action_type
target_resource = goap_action.target_resource
# Build reason string
if goal:
reason = f"{goal.name}: {goap_action.name}"
else:
reason = f"Reactive: {goap_action.name}"
# BDI debug info
bdi_info = {
"mode": mode,
"dominant_desire": self.desires.dominant_desire.value if self.desires.dominant_desire else None,
"commitment": self.intentions.commitment_strategy.value,
"has_intention": self.intentions.has_intention(),
}
# Handle different action types
if action_type == ActionType.CONSUME:
return AIDecision(
action=action_type,
target_resource=target_resource,
reason=reason,
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
bdi_info=bdi_info,
)
elif action_type == ActionType.TRADE:
return self._create_trade_decision(goap_action, goal, plan, reason, bdi_info)
elif action_type in [ActionType.HUNT, ActionType.GATHER, ActionType.CHOP_WOOD,
ActionType.GET_WATER, ActionType.WEAVE]:
return AIDecision(
action=action_type,
target_resource=target_resource,
reason=reason,
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
bdi_info=bdi_info,
)
elif action_type == ActionType.BUILD_FIRE:
return AIDecision(
action=action_type,
target_resource=ResourceType.WOOD,
reason=reason,
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
bdi_info=bdi_info,
)
elif action_type in [ActionType.REST, ActionType.SLEEP]:
return AIDecision(
action=action_type,
reason=reason,
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
bdi_info=bdi_info,
)
# Default case
return AIDecision(
action=action_type,
target_resource=target_resource,
reason=reason,
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
bdi_info=bdi_info,
)
def _create_trade_decision(
self,
goap_action: "GOAPAction",
goal: Optional["Goal"],
plan: Optional["Plan"],
reason: str,
bdi_info: dict,
) -> AIDecision:
"""Create a trade decision with actual market parameters."""
target_resource = goap_action.target_resource
action_name = goap_action.name.lower()
if "buy" in action_name:
# Find the best order to buy from
order = self.market.get_cheapest_order(target_resource)
if order and order.seller_id != self.agent.id:
# Check trust for this seller
trust = self.beliefs.get_trade_trust(order.seller_id)
# Skip distrusted sellers if we're picky
if trust < -0.5 and self.p.price_sensitivity > 1.2:
# Try next cheapest? For now, fall back to gathering
return self._create_gather_fallback(target_resource, reason, goal, plan, bdi_info)
# Calculate quantity to buy
# Use max(1, ...) to avoid division by zero
can_afford = self.agent.money // max(1, order.price_per_unit)
space = self.agent.inventory_space()
quantity = min(2, can_afford, space, order.quantity)
if quantity > 0:
return AIDecision(
action=ActionType.TRADE,
target_resource=target_resource,
order_id=order.id,
quantity=quantity,
price=order.price_per_unit,
reason=f"{reason} @ {order.price_per_unit}c",
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
bdi_info=bdi_info,
)
# Can't buy - fallback to gathering
return self._create_gather_fallback(target_resource, reason, goal, plan, bdi_info)
elif "sell" in action_name:
# Create a sell order
quantity_available = self.agent.get_resource_count(target_resource)
# Calculate minimum to keep
min_keep = self._get_min_keep(target_resource)
quantity_to_sell = min(3, quantity_available - min_keep)
if quantity_to_sell > 0:
price = self._calculate_sell_price(target_resource)
return AIDecision(
action=ActionType.TRADE,
target_resource=target_resource,
quantity=quantity_to_sell,
price=price,
reason=f"{reason} @ {price}c",
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
bdi_info=bdi_info,
)
# Invalid trade action - rest
return AIDecision(
action=ActionType.REST,
reason="Trade not possible",
bdi_info=bdi_info,
)
def _create_gather_fallback(
self,
resource_type: ResourceType,
reason: str,
goal: Optional["Goal"],
plan: Optional["Plan"],
bdi_info: dict,
) -> AIDecision:
"""Create a gather action as fallback when buying isn't possible."""
action_map = {
ResourceType.WATER: ActionType.GET_WATER,
ResourceType.BERRIES: ActionType.GATHER,
ResourceType.MEAT: ActionType.HUNT,
ResourceType.WOOD: ActionType.CHOP_WOOD,
}
action = action_map.get(resource_type, ActionType.GATHER)
return AIDecision(
action=action,
target_resource=resource_type,
reason=f"{reason} (gathering instead)",
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
bdi_info=bdi_info,
)
def _get_min_keep(self, resource_type: ResourceType) -> int:
"""Get minimum quantity to keep for survival."""
# Adjusted by hoarding rate from desires
hoarding_mult = 0.5 + self.p.hoarding_rate
base_min = {
ResourceType.WATER: 2,
ResourceType.MEAT: 1,
ResourceType.BERRIES: 2,
ResourceType.WOOD: 1,
ResourceType.HIDE: 0,
}
return int(base_min.get(resource_type, 1) * hoarding_mult)
def _calculate_sell_price(self, resource_type: ResourceType) -> int:
"""Calculate sell price based on fair value and market conditions."""
from backend.core.ai import get_energy_cost
from backend.config import get_config
config = get_config()
economy = getattr(config, 'economy', None)
energy_to_money_ratio = getattr(economy, 'energy_to_money_ratio', 150) if economy else 150
min_price = getattr(economy, 'min_price', 100) if economy else 100
energy_cost = get_energy_cost(resource_type)
fair_value = max(min_price, int(round(energy_cost * energy_to_money_ratio)))
# Apply trading skill
sell_modifier = get_trade_price_modifier(self.skills.trading, is_buying=False)
# Get market signal
signal = self.market.get_market_signal(resource_type)
if signal == "sell": # Scarcity
price = int(round(fair_value * 1.3 * sell_modifier))
elif signal == "hold":
price = int(round(fair_value * sell_modifier))
else: # Surplus
cheapest = self.market.get_cheapest_order(resource_type)
if cheapest and cheapest.seller_id != self.agent.id:
# Undercut, but respect floor (80% of fair value or min_price)
floor_price = max(min_price, int(round(fair_value * 0.8)))
price = max(floor_price, cheapest.price_per_unit - 1)
else:
price = int(round(fair_value * 0.8 * sell_modifier))
return max(min_price, price)
def record_action_result(self, success: bool, action_type: str) -> None:
"""Record the result of an action for learning and intention tracking."""
# Update intention
self.intentions.advance_intention(success)
# Update beliefs/memory
if success:
self.beliefs.record_successful_action(action_type)
else:
self.beliefs.record_failed_action(action_type)
# Persistent BDI state storage for agents
_agent_bdi_state: dict[str, tuple[BeliefBase, DesireManager, IntentionManager]] = {}
def get_bdi_decision(
agent: "Agent",
market: "OrderBook",
step_in_day: int = 1,
day_steps: int = 10,
current_turn: int = 0,
is_night: bool = False,
) -> AIDecision:
"""Get a BDI-based AI decision for an agent.
This is the main entry point for the BDI AI system.
It maintains persistent BDI state for each agent.
"""
# Get or create persistent BDI state
if agent.id not in _agent_bdi_state:
beliefs = BeliefBase()
desires = DesireManager(agent.personality)
intentions = IntentionManager.from_personality(agent.personality)
_agent_bdi_state[agent.id] = (beliefs, desires, intentions)
else:
beliefs, desires, intentions = _agent_bdi_state[agent.id]
# Create AI instance with persistent state
ai = BDIAgentAI(
agent=agent,
market=market,
step_in_day=step_in_day,
day_steps=day_steps,
current_turn=current_turn,
is_night=is_night,
beliefs=beliefs,
desires=desires,
intentions=intentions,
)
return ai.decide()
def reset_bdi_state() -> None:
"""Reset all BDI state (call on simulation reset)."""
global _agent_bdi_state
_agent_bdi_state.clear()
BDIAgentAI.reset_caches()
def remove_agent_bdi_state(agent_id: str) -> None:
"""Remove BDI state for a specific agent (call on agent death)."""
_agent_bdi_state.pop(agent_id, None)
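
If the engine calls into this module, the entry point would be used roughly as below; the `engine` attributes mirror those referenced by the debug endpoints earlier in this diff, so treat this as a sketch rather than the exact integration code.

```python
from backend.core.bdi import get_bdi_decision, remove_agent_bdi_state

def decide_for_agent(engine, agent):
    """Sketch: ask the BDI layer what this agent should do next."""
    decision = get_bdi_decision(
        agent=agent,
        market=engine.market,
        step_in_day=engine.world.step_in_day,
        day_steps=engine.world.config.day_steps,
        current_turn=engine.world.current_turn,
        is_night=engine.world.is_night(),
    )
    return decision  # AIDecision: .action, .target_resource, .reason, .bdi_info, ...

# When an agent dies, drop its persistent beliefs/desires/intentions:
# remove_agent_bdi_state(agent.id)
```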

backend/core/bdi/belief.py (new file, +401)
View File

@@ -0,0 +1,401 @@
"""Belief System for BDI agents.
The BeliefBase maintains persistent state about the world from the agent's
perspective, including:
- Current sensory data (vitals, inventory, market)
- Memory of past events (failed trades, good hunting spots)
- Dirty flags for efficient updates
This replaces the transient WorldState creation with a persistent belief system.
"""
from dataclasses import dataclass, field
from typing import TYPE_CHECKING, Optional
from collections import deque
if TYPE_CHECKING:
from backend.domain.agent import Agent
from backend.core.market import OrderBook
@dataclass
class MemoryEvent:
"""A remembered event that may influence future decisions."""
event_type: str # "trade_failed", "hunt_failed", "good_deal", etc.
turn: int
data: dict = field(default_factory=dict)
relevance: float = 1.0 # Decays over time
@dataclass
class BeliefBase:
"""Persistent belief system for a BDI agent.
Maintains both current perceptions and memories of past events.
Uses dirty flags to avoid unnecessary recomputation.
"""
# Current perception state (cached WorldState fields)
thirst_pct: float = 1.0
hunger_pct: float = 1.0
heat_pct: float = 1.0
energy_pct: float = 1.0
# Resource counts
water_count: int = 0
food_count: int = 0
meat_count: int = 0
berries_count: int = 0
wood_count: int = 0
hide_count: int = 0
# Inventory state
has_clothes: bool = False
inventory_space: int = 0
inventory_full: bool = False
# Economic state
money: int = 0
is_wealthy: bool = False
# Market beliefs (what we believe about market availability)
can_buy_water: bool = False
can_buy_food: bool = False
can_buy_meat: bool = False
can_buy_berries: bool = False
can_buy_wood: bool = False
water_market_price: int = 0
food_market_price: int = 0
wood_market_price: int = 0
# Time beliefs
is_night: bool = False
is_evening: bool = False
step_in_day: int = 0
day_steps: int = 10
current_turn: int = 0
# Personality (cached from agent, rarely changes)
wealth_desire: float = 0.5
hoarding_rate: float = 0.5
risk_tolerance: float = 0.5
market_affinity: float = 0.5
is_trader: bool = False
gather_preference: float = 1.0
hunt_preference: float = 1.0
trade_preference: float = 1.0
# Skills
hunting_skill: float = 1.0
gathering_skill: float = 1.0
trading_skill: float = 1.0
# Thresholds (from config)
critical_threshold: float = 0.25
low_threshold: float = 0.45
# Calculated urgencies
thirst_urgency: float = 0.0
hunger_urgency: float = 0.0
heat_urgency: float = 0.0
energy_urgency: float = 0.0
# === BDI Extensions ===
# Memory system - stores past events that influence decisions
memories: deque = field(default_factory=lambda: deque(maxlen=50))
# Track failed actions for this resource type (turn -> count)
failed_hunts: int = 0
failed_gathers: int = 0
failed_trades: int = 0
# Track successful trades with agents (agent_id -> positive count)
trusted_traders: dict = field(default_factory=dict)
# Track failed trades with agents (agent_id -> negative count)
distrusted_traders: dict = field(default_factory=dict)
# Dirty flags for optimization
_vitals_dirty: bool = True
_inventory_dirty: bool = True
_market_dirty: bool = True
_last_update_turn: int = -1
def update_from_sensors(
self,
agent: "Agent",
market: "OrderBook",
step_in_day: int = 1,
day_steps: int = 10,
current_turn: int = 0,
is_night: bool = False,
) -> None:
"""Update beliefs from current agent and market state.
Uses dirty flags to skip unchanged data.
"""
from backend.domain.resources import ResourceType
from backend.config import get_config
self.current_turn = current_turn
self.step_in_day = step_in_day
self.day_steps = day_steps
self.is_night = is_night
self.is_evening = step_in_day >= day_steps - 2
# Always update vitals (they change every turn)
stats = agent.stats
self.thirst_pct = stats.thirst / stats.MAX_THIRST
self.hunger_pct = stats.hunger / stats.MAX_HUNGER
self.heat_pct = stats.heat / stats.MAX_HEAT
self.energy_pct = stats.energy / stats.MAX_ENERGY
# Update inventory
self.water_count = agent.get_resource_count(ResourceType.WATER)
self.meat_count = agent.get_resource_count(ResourceType.MEAT)
self.berries_count = agent.get_resource_count(ResourceType.BERRIES)
self.wood_count = agent.get_resource_count(ResourceType.WOOD)
self.hide_count = agent.get_resource_count(ResourceType.HIDE)
self.food_count = self.meat_count + self.berries_count
self.has_clothes = agent.has_clothes()
self.inventory_space = agent.inventory_space()
self.inventory_full = agent.inventory_full()
self.money = agent.money
# Update personality (cached, rarely changes)
self.wealth_desire = agent.personality.wealth_desire
self.hoarding_rate = agent.personality.hoarding_rate
self.risk_tolerance = agent.personality.risk_tolerance
self.market_affinity = agent.personality.market_affinity
self.gather_preference = agent.personality.gather_preference
self.hunt_preference = agent.personality.hunt_preference
self.trade_preference = agent.personality.trade_preference
# Skills
self.hunting_skill = agent.skills.hunting
self.gathering_skill = agent.skills.gathering
self.trading_skill = agent.skills.trading
# Market availability
def get_market_info(resource_type: ResourceType) -> tuple[bool, int]:
order = market.get_cheapest_order(resource_type)
if order and order.seller_id != agent.id and agent.money >= order.price_per_unit:
return True, order.price_per_unit
return False, 0
self.can_buy_water, self.water_market_price = get_market_info(ResourceType.WATER)
self.can_buy_meat, meat_price = get_market_info(ResourceType.MEAT)
self.can_buy_berries, berries_price = get_market_info(ResourceType.BERRIES)
self.can_buy_wood, self.wood_market_price = get_market_info(ResourceType.WOOD)
self.can_buy_food = self.can_buy_meat or self.can_buy_berries
food_price = min(
meat_price if self.can_buy_meat else float('inf'),
berries_price if self.can_buy_berries else float('inf')
)
self.food_market_price = int(food_price) if food_price != float('inf') else 0
# Wealth calculation
config = get_config()
economy_config = getattr(config, 'economy', None)
min_wealth_target = getattr(economy_config, 'min_wealth_target', 5000) if economy_config else 5000
wealth_target = int(min_wealth_target * (0.5 + self.wealth_desire))
self.is_wealthy = self.money >= wealth_target
# Trader check
self.is_trader = self.trade_preference > 1.3 and self.market_affinity > 0.5
# Config thresholds
agent_config = config.agent_stats
self.critical_threshold = agent_config.critical_threshold
self.low_threshold = 0.45
# Calculate urgencies
self._calculate_urgencies()
# Decay old memories
self._decay_memories()
self._last_update_turn = current_turn
def _calculate_urgencies(self) -> None:
"""Calculate urgency values for each vital stat."""
def calc_urgency(pct: float, critical: float, low: float) -> float:
if pct >= low:
return 0.0
elif pct >= critical:
return 1.0 - (pct - critical) / (low - critical)
else:
return 1.0 + (critical - pct) / critical * 2.0
self.thirst_urgency = calc_urgency(self.thirst_pct, self.critical_threshold, self.low_threshold)
self.hunger_urgency = calc_urgency(self.hunger_pct, self.critical_threshold, self.low_threshold)
self.heat_urgency = calc_urgency(self.heat_pct, self.critical_threshold, self.low_threshold)
if self.energy_pct < 0.25:
self.energy_urgency = 2.0
elif self.energy_pct < 0.40:
self.energy_urgency = 1.0
else:
self.energy_urgency = 0.0
def _decay_memories(self) -> None:
"""Decay memory relevance over time."""
for memory in self.memories:
age = self.current_turn - memory.turn
# Older memories become less relevant (reciprocal falloff, floored at 0.1)
memory.relevance = max(0.1, 1.0 / (1.0 + age * 0.1))
def add_memory(self, event_type: str, data: dict = None) -> None:
"""Add a new memory event."""
self.memories.append(MemoryEvent(
event_type=event_type,
turn=self.current_turn,
data=data or {},
relevance=1.0,
))
def record_failed_action(self, action_type: str) -> None:
"""Record a failed action for learning."""
if action_type == "hunt":
self.failed_hunts += 1
self.add_memory("hunt_failed")
elif action_type == "gather":
self.failed_gathers += 1
self.add_memory("gather_failed")
elif action_type == "trade":
self.failed_trades += 1
self.add_memory("trade_failed")
def record_successful_action(self, action_type: str) -> None:
"""Record a successful action, reducing failure counts."""
if action_type == "hunt":
self.failed_hunts = max(0, self.failed_hunts - 1)
elif action_type == "gather":
self.failed_gathers = max(0, self.failed_gathers - 1)
elif action_type == "trade":
self.failed_trades = max(0, self.failed_trades - 1)
def record_trade_partner(self, partner_id: str, success: bool) -> None:
"""Track trade relationship with another agent."""
if success:
self.trusted_traders[partner_id] = self.trusted_traders.get(partner_id, 0) + 1
# Reduce distrust if they've been bad before
if partner_id in self.distrusted_traders:
self.distrusted_traders[partner_id] = max(0, self.distrusted_traders[partner_id] - 1)
else:
self.distrusted_traders[partner_id] = self.distrusted_traders.get(partner_id, 0) + 1
def get_trade_trust(self, partner_id: str) -> float:
"""Get trust level for a trade partner (-1 to 1)."""
trust = self.trusted_traders.get(partner_id, 0)
distrust = self.distrusted_traders.get(partner_id, 0)
total = trust + distrust
if total == 0:
return 0.0 # Unknown partner
return (trust - distrust) / total
def has_critical_need(self) -> bool:
"""Check if any vital stat is critical (requires immediate attention)."""
return (
self.thirst_urgency >= 2.0 or
self.hunger_urgency >= 2.0 or
self.heat_urgency >= 2.0 or
self.energy_urgency >= 2.0
)
def get_most_urgent_need(self) -> Optional[str]:
"""Get the most urgent vital need, if any."""
urgencies = {
"thirst": self.thirst_urgency,
"hunger": self.hunger_urgency,
"heat": self.heat_urgency,
"energy": self.energy_urgency,
}
max_urgency = max(urgencies.values())
if max_urgency < 0.5:
return None
return max(urgencies, key=urgencies.get)
def to_world_state(self):
"""Convert beliefs to a WorldState for GOAP planner compatibility."""
from backend.core.goap.world_state import WorldState
return WorldState(
thirst_pct=self.thirst_pct,
hunger_pct=self.hunger_pct,
heat_pct=self.heat_pct,
energy_pct=self.energy_pct,
water_count=self.water_count,
food_count=self.food_count,
meat_count=self.meat_count,
berries_count=self.berries_count,
wood_count=self.wood_count,
hide_count=self.hide_count,
has_clothes=self.has_clothes,
inventory_space=self.inventory_space,
inventory_full=self.inventory_full,
money=self.money,
is_wealthy=self.is_wealthy,
can_buy_water=self.can_buy_water,
can_buy_food=self.can_buy_food,
can_buy_meat=self.can_buy_meat,
can_buy_berries=self.can_buy_berries,
can_buy_wood=self.can_buy_wood,
water_market_price=self.water_market_price,
food_market_price=self.food_market_price,
wood_market_price=self.wood_market_price,
is_night=self.is_night,
is_evening=self.is_evening,
step_in_day=self.step_in_day,
day_steps=self.day_steps,
wealth_desire=self.wealth_desire,
hoarding_rate=self.hoarding_rate,
risk_tolerance=self.risk_tolerance,
market_affinity=self.market_affinity,
is_trader=self.is_trader,
gather_preference=self.gather_preference,
hunt_preference=self.hunt_preference,
trade_preference=self.trade_preference,
hunting_skill=self.hunting_skill,
gathering_skill=self.gathering_skill,
trading_skill=self.trading_skill,
critical_threshold=self.critical_threshold,
low_threshold=self.low_threshold,
)
def to_dict(self) -> dict:
"""Convert to dictionary for debugging/logging."""
return {
"vitals": {
"thirst": round(self.thirst_pct, 2),
"hunger": round(self.hunger_pct, 2),
"heat": round(self.heat_pct, 2),
"energy": round(self.energy_pct, 2),
},
"urgencies": {
"thirst": round(self.thirst_urgency, 2),
"hunger": round(self.hunger_urgency, 2),
"heat": round(self.heat_urgency, 2),
"energy": round(self.energy_urgency, 2),
},
"inventory": {
"water": self.water_count,
"meat": self.meat_count,
"berries": self.berries_count,
"wood": self.wood_count,
"hide": self.hide_count,
"space": self.inventory_space,
},
"economy": {
"money": self.money,
"is_wealthy": self.is_wealthy,
},
"memory": {
"failed_hunts": self.failed_hunts,
"failed_gathers": self.failed_gathers,
"failed_trades": self.failed_trades,
"memory_count": len(self.memories),
},
}
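
A small standalone sketch of how the memory and trust pieces of `BeliefBase` behave, using only what is defined above (the partner id is a placeholder):

```python
from backend.core.bdi.belief import BeliefBase

beliefs = BeliefBase()
beliefs.current_turn = 5

# Two good trades and one bad one with the same partner
beliefs.record_trade_partner("agent_7", success=True)
beliefs.record_trade_partner("agent_7", success=True)
beliefs.record_trade_partner("agent_7", success=False)
print(round(beliefs.get_trade_trust("agent_7"), 2))  # (2 - 1) / 3 -> 0.33

# Failed hunts accumulate and leave memory events behind
beliefs.record_failed_action("hunt")
beliefs.record_failed_action("hunt")
print(beliefs.failed_hunts, len(beliefs.memories))  # 2 2
```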

backend/core/bdi/desire.py (new file, +347)
View File

@@ -0,0 +1,347 @@
"""Desire System for BDI agents.
Desires represent high-level, long-term motivations that persist across turns.
Unlike GOAP goals (which are immediate targets), desires are personality-driven
and influence which goals get activated.
Key concepts:
- Desires are weighted by personality traits
- Desires can be satisfied temporarily but return
- Desires influence goal selection, not direct actions
"""
from dataclasses import dataclass
from enum import Enum
from typing import TYPE_CHECKING, Optional
if TYPE_CHECKING:
from backend.core.bdi.belief import BeliefBase
from backend.domain.personality import PersonalityTraits
from backend.core.goap.goal import Goal
class DesireType(Enum):
"""Types of high-level desires."""
SURVIVAL = "survival" # Stay alive (thirst, hunger, heat, energy)
ACCUMULATE_WEALTH = "wealth" # Build up money reserves
STOCK_RESOURCES = "stock" # Hoard resources for security
MASTER_PROFESSION = "mastery" # Improve skills in preferred activity
SOCIAL_STANDING = "social" # Gain reputation through trade
COMFORT = "comfort" # Maintain clothes, warmth, rest
@dataclass
class Desire:
"""A high-level motivation that influences goal selection.
Desires have:
- A base intensity (how strongly the agent wants this)
- A satisfaction level (0-1, how fulfilled is this desire currently)
- A personality weight (how much this agent's personality cares)
"""
desire_type: DesireType
base_intensity: float = 1.0 # Base importance (0-2)
satisfaction: float = 0.5 # Current fulfillment (0-1)
personality_weight: float = 1.0 # Multiplier from personality
# Persistence tracking
turns_pursued: int = 0 # How long we've been chasing this
turns_since_progress: int = 0 # Turns since we made progress
@property
def effective_intensity(self) -> float:
"""Calculate current desire intensity considering satisfaction."""
# Desire is stronger when unsatisfied
unsatisfied = 1.0 - self.satisfaction
# Apply personality weight
intensity = self.base_intensity * self.personality_weight * (0.5 + unsatisfied)
# Reduce intensity if pursued too long without progress (boredom)
if self.turns_since_progress > 10:
boredom_factor = max(0.3, 1.0 - (self.turns_since_progress - 10) * 0.05)
intensity *= boredom_factor
return intensity
def update_satisfaction(self, new_satisfaction: float) -> None:
"""Update satisfaction level and track progress."""
if new_satisfaction > self.satisfaction:
self.turns_since_progress = 0 # Made progress!
else:
self.turns_since_progress += 1
self.satisfaction = max(0.0, min(1.0, new_satisfaction))
self.turns_pursued += 1
def reset_pursuit(self) -> None:
"""Reset pursuit tracking (when switching to different desire)."""
self.turns_pursued = 0
self.turns_since_progress = 0
class DesireManager:
"""Manages an agent's desires and converts them to active goals.
The DesireManager:
1. Maintains a set of desires weighted by personality
2. Updates desire satisfaction based on beliefs
3. Selects which desires should drive current goals
4. Filters/prioritizes GOAP goals based on active desires
"""
def __init__(self, personality: "PersonalityTraits"):
"""Initialize desires based on personality."""
self.desires: dict[DesireType, Desire] = {}
self._init_desires(personality)
# Track which desire is currently dominant
self.dominant_desire: Optional[DesireType] = None
self.stubbornness: float = 0.5 # How reluctant to switch desires
def _init_desires(self, personality: "PersonalityTraits") -> None:
"""Initialize desires with personality-based weights."""
# Survival - everyone has this, but risk-averse agents weight it higher
self.desires[DesireType.SURVIVAL] = Desire(
desire_type=DesireType.SURVIVAL,
base_intensity=2.0, # Always high base
personality_weight=1.5 - personality.risk_tolerance * 0.5,
)
# Accumulate Wealth - driven by wealth_desire
self.desires[DesireType.ACCUMULATE_WEALTH] = Desire(
desire_type=DesireType.ACCUMULATE_WEALTH,
base_intensity=1.0,
personality_weight=0.5 + personality.wealth_desire,
)
# Stock Resources - driven by hoarding_rate
self.desires[DesireType.STOCK_RESOURCES] = Desire(
desire_type=DesireType.STOCK_RESOURCES,
base_intensity=0.8,
personality_weight=0.5 + personality.hoarding_rate,
)
# Master Profession - driven by strongest preference
max_pref = max(
personality.hunt_preference,
personality.gather_preference,
personality.trade_preference,
)
self.desires[DesireType.MASTER_PROFESSION] = Desire(
desire_type=DesireType.MASTER_PROFESSION,
base_intensity=0.6,
personality_weight=max_pref * 0.5,
)
# Social Standing - driven by market_affinity and trade_preference
self.desires[DesireType.SOCIAL_STANDING] = Desire(
desire_type=DesireType.SOCIAL_STANDING,
base_intensity=0.5,
personality_weight=personality.market_affinity * personality.trade_preference * 0.5,
)
# Comfort - moderate for everyone
self.desires[DesireType.COMFORT] = Desire(
desire_type=DesireType.COMFORT,
base_intensity=0.7,
personality_weight=1.0 - personality.risk_tolerance * 0.3,
)
# Calculate stubbornness from personality
# High hoarding + low risk tolerance = stubborn
self.stubbornness = (personality.hoarding_rate + (1.0 - personality.risk_tolerance)) / 2
def update_from_beliefs(self, beliefs: "BeliefBase") -> None:
"""Update desire satisfaction based on current beliefs."""
# Survival satisfaction based on vitals
min_vital = min(
beliefs.thirst_pct,
beliefs.hunger_pct,
beliefs.heat_pct,
beliefs.energy_pct,
)
self.desires[DesireType.SURVIVAL].update_satisfaction(min_vital)
# Wealth satisfaction based on money relative to wealthy threshold
if beliefs.is_wealthy:
self.desires[DesireType.ACCUMULATE_WEALTH].update_satisfaction(0.8)
else:
# Scale satisfaction by money (rough approximation)
wealth_sat = min(1.0, beliefs.money / 10000)
self.desires[DesireType.ACCUMULATE_WEALTH].update_satisfaction(wealth_sat)
# Stock satisfaction based on inventory
total_resources = (
beliefs.water_count +
beliefs.food_count +
beliefs.wood_count
)
stock_sat = min(1.0, total_resources / 15) # 15 resources = satisfied
self.desires[DesireType.STOCK_RESOURCES].update_satisfaction(stock_sat)
# Mastery satisfaction based on skill levels
max_skill = max(
beliefs.hunting_skill,
beliefs.gathering_skill,
beliefs.trading_skill,
)
mastery_sat = (max_skill - 1.0) / 1.0 # 0 at skill 1.0, 1 at skill 2.0
self.desires[DesireType.MASTER_PROFESSION].update_satisfaction(max(0, mastery_sat))
# Social satisfaction - harder to measure, use trade success
trade_success = max(0, 5 - beliefs.failed_trades) / 5
self.desires[DesireType.SOCIAL_STANDING].update_satisfaction(trade_success)
# Comfort satisfaction
comfort_factors = [
beliefs.energy_pct,
beliefs.heat_pct,
1.0 if beliefs.has_clothes else 0.5,
]
comfort_sat = sum(comfort_factors) / len(comfort_factors)
self.desires[DesireType.COMFORT].update_satisfaction(comfort_sat)
def get_active_desires(self) -> list[Desire]:
"""Get desires sorted by effective intensity (highest first)."""
return sorted(
self.desires.values(),
key=lambda d: d.effective_intensity,
reverse=True,
)
def should_switch_desire(self, new_dominant: DesireType) -> bool:
"""Determine if we should switch dominant desire.
Stubborn agents require a larger intensity difference to switch.
"""
if self.dominant_desire is None:
return True
if new_dominant == self.dominant_desire:
return False
current = self.desires[self.dominant_desire]
new = self.desires[new_dominant]
# Calculate required intensity difference based on stubbornness
# Stubbornness 0.5 = need 20% more intensity to switch
# Stubbornness 1.0 = need 50% more intensity to switch
required_difference = 1.0 + self.stubbornness * 0.5
return new.effective_intensity > current.effective_intensity * required_difference
def update_dominant_desire(self) -> DesireType:
"""Update and return the currently dominant desire."""
active = self.get_active_desires()
if not active:
return DesireType.SURVIVAL
top_desire = active[0].desire_type
if self.should_switch_desire(top_desire):
# Reset pursuit on old desire if switching
if self.dominant_desire and self.dominant_desire != top_desire:
self.desires[self.dominant_desire].reset_pursuit()
self.dominant_desire = top_desire
return self.dominant_desire
def filter_goals_by_desire(
self,
goals: list["Goal"],
beliefs: "BeliefBase",
) -> list["Goal"]:
"""Filter and re-prioritize goals based on active desires.
This is where desires influence GOAP goal selection:
- Goals aligned with dominant desire get priority boost
- Goals conflicting with desires get reduced priority
"""
# Update dominant desire
dominant = self.update_dominant_desire()
# Get world state for goal priority calculation
world_state = beliefs.to_world_state()
# Calculate goal priorities with desire modifiers
goal_priorities: list[tuple["Goal", float]] = []
for goal in goals:
base_priority = goal.get_priority(world_state)
# Apply desire-based modifiers
modifier = self._get_goal_desire_modifier(goal.name, dominant, beliefs)
adjusted_priority = base_priority * modifier
goal_priorities.append((goal, adjusted_priority))
# Sort by adjusted priority
goal_priorities.sort(key=lambda x: x[1], reverse=True)
# Return goals (the priorities will be recalculated by planner,
# but we've filtered/ordered them by our preferences)
return [g for g, _ in goal_priorities]
def _get_goal_desire_modifier(
self,
goal_name: str,
dominant_desire: DesireType,
beliefs: "BeliefBase",
) -> float:
"""Get priority modifier for a goal based on dominant desire."""
goal_lower = goal_name.lower()
# Map goals to desires they serve
goal_desire_map = {
# Survival goals
"satisfy thirst": DesireType.SURVIVAL,
"satisfy hunger": DesireType.SURVIVAL,
"maintain heat": DesireType.SURVIVAL,
"restore energy": DesireType.SURVIVAL,
# Wealth goals
"build wealth": DesireType.ACCUMULATE_WEALTH,
"sell excess": DesireType.ACCUMULATE_WEALTH,
"find deals": DesireType.ACCUMULATE_WEALTH,
"trader arbitrage": DesireType.ACCUMULATE_WEALTH,
# Stock goals
"stock water": DesireType.STOCK_RESOURCES,
"stock food": DesireType.STOCK_RESOURCES,
"stock wood": DesireType.STOCK_RESOURCES,
# Comfort goals
"get clothes": DesireType.COMFORT,
"sleep": DesireType.COMFORT,
}
# Find which desire this goal serves
goal_desire = None
for gn, desire in goal_desire_map.items():
if gn in goal_lower:
goal_desire = desire
break
if goal_desire is None:
return 1.0 # No modifier for unknown goals
# Boost if aligned with dominant desire
if goal_desire == dominant_desire:
return 1.3 + self.desires[dominant_desire].effective_intensity * 0.2
# Survival always gets a baseline boost if critical
if goal_desire == DesireType.SURVIVAL and beliefs.has_critical_need():
return 2.0 # Critical needs override other desires
# Slight reduction for non-aligned goals
return 0.9
def to_dict(self) -> dict:
"""Convert to dictionary for debugging/logging."""
return {
"dominant_desire": self.dominant_desire.value if self.dominant_desire else None,
"stubbornness": round(self.stubbornness, 2),
"desires": {
d.desire_type.value: {
"intensity": round(d.effective_intensity, 2),
"satisfaction": round(d.satisfaction, 2),
"personality_weight": round(d.personality_weight, 2),
"turns_pursued": d.turns_pursued,
}
for d in self.desires.values()
},
}
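As a quick illustration of the intensity formula in `Desire.effective_intensity` above, here is a minimal, self-contained sketch of the same arithmetic with made-up values (the real dataclass and its fields live in `backend/core/bdi/desire.py`; the numbers below are hypothetical):

```python
# Mirrors Desire.effective_intensity above; values are invented for illustration.
def effective_intensity(base: float, weight: float, satisfaction: float,
                        turns_since_progress: int) -> float:
    unsatisfied = 1.0 - satisfaction
    intensity = base * weight * (0.5 + unsatisfied)
    # Boredom: after 10 stalled turns, decay 5% per extra turn, floored at 30%
    if turns_since_progress > 10:
        intensity *= max(0.3, 1.0 - (turns_since_progress - 10) * 0.05)
    return intensity

# A hoarder whose STOCK_RESOURCES desire is mostly unsatisfied...
print(effective_intensity(0.8, 1.4, 0.2, 0))    # 1.456
# ...loses interest after 20 turns without progress (boredom factor 0.5)
print(effective_intensity(0.8, 1.4, 0.2, 20))   # 0.728
```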

View File

@ -0,0 +1,289 @@
"""Intention System for BDI agents.
Intentions represent committed plans that the agent is executing.
Unlike desires (motivations) or goals (targets), intentions are
concrete action sequences the agent has decided to pursue.
Key concepts:
- Intention persistence: agents stick to plans unless interrupted
- Commitment strategies: different levels of plan commitment
- Plan monitoring: detecting when a plan becomes invalid
"""
from dataclasses import dataclass
from enum import Enum
from typing import TYPE_CHECKING, Optional
if TYPE_CHECKING:
from backend.core.bdi.belief import BeliefBase
from backend.core.bdi.desire import DesireManager
from backend.core.goap.goal import Goal
from backend.core.goap.action import GOAPAction
from backend.core.goap.planner import Plan
class CommitmentStrategy(Enum):
"""How strongly an agent commits to their current intention."""
REACTIVE = "reactive" # Replan every turn (no commitment)
CAUTIOUS = "cautious" # Replan if priorities shift significantly
DETERMINED = "determined" # Stick to plan unless it becomes impossible
STUBBORN = "stubborn" # Only abandon for critical interrupts
@dataclass
class Intention:
"""A committed plan that the agent is executing.
Tracks the goal, plan, and execution progress.
"""
goal: "Goal" # The goal we're pursuing
plan: "Plan" # The plan to achieve it
start_turn: int # When we committed
actions_completed: int = 0 # How many actions done
last_action_success: bool = True # Did last action succeed?
consecutive_failures: int = 0 # Failures in a row
@property
def current_action(self) -> Optional["GOAPAction"]:
"""Get the next action to execute."""
if self.plan.is_empty:
return None
remaining = self.plan.actions[self.actions_completed:]
return remaining[0] if remaining else None
@property
def is_complete(self) -> bool:
"""Check if the plan has been fully executed."""
return self.actions_completed >= len(self.plan.actions)
@property
def remaining_actions(self) -> int:
"""Number of actions left in the plan."""
return max(0, len(self.plan.actions) - self.actions_completed)
def advance(self, success: bool) -> None:
"""Mark current action as executed and advance."""
self.last_action_success = success
if success:
self.actions_completed += 1
self.consecutive_failures = 0
else:
self.consecutive_failures += 1
class IntentionManager:
"""Manages the agent's current intention and commitment.
Responsibilities:
- Maintain the current intention (goal + plan)
- Decide when to continue vs. replan
- Handle plan failure and recovery
"""
def __init__(self, commitment_strategy: CommitmentStrategy = CommitmentStrategy.CAUTIOUS):
self.current_intention: Optional[Intention] = None
self.commitment_strategy = commitment_strategy
# Tracking
self.intentions_completed: int = 0
self.intentions_abandoned: int = 0
self.total_actions_executed: int = 0
# Thresholds for reconsideration
self.max_consecutive_failures: int = 2
self.priority_switch_threshold: float = 1.5 # New goal must be 1.5x priority
@classmethod
def from_personality(cls, personality) -> "IntentionManager":
"""Create an IntentionManager with commitment strategy based on personality."""
# Derive commitment from personality traits
# High hoarding + low risk tolerance = stubborn
commitment_score = (
personality.hoarding_rate * 0.4 +
(1.0 - personality.risk_tolerance) * 0.4 +
(1.0 - personality.market_affinity) * 0.2
)
if commitment_score > 0.7:
strategy = CommitmentStrategy.STUBBORN
elif commitment_score > 0.5:
strategy = CommitmentStrategy.DETERMINED
elif commitment_score > 0.3:
strategy = CommitmentStrategy.CAUTIOUS
else:
strategy = CommitmentStrategy.REACTIVE
return cls(commitment_strategy=strategy)
def has_intention(self) -> bool:
"""Check if we have an active intention."""
return (
self.current_intention is not None and
not self.current_intention.is_complete
)
def get_next_action(self) -> Optional["GOAPAction"]:
"""Get the next action from current intention."""
if not self.has_intention():
return None
return self.current_intention.current_action
def should_reconsider(
self,
beliefs: "BeliefBase",
desire_manager: "DesireManager",
available_goals: list["Goal"],
) -> bool:
"""Determine if we should reconsider our current intention.
This implements the commitment strategy logic.
"""
# No intention = definitely need to plan
if not self.has_intention():
return True
intention = self.current_intention
# Check for critical interrupts (always reconsider)
if beliefs.has_critical_need():
# But only if current intention isn't already addressing it
urgent_need = beliefs.get_most_urgent_need()
if not self._intention_addresses_need(intention, urgent_need):
return True
# Check for too many failures
if intention.consecutive_failures >= self.max_consecutive_failures:
return True
# Strategy-specific logic
if self.commitment_strategy == CommitmentStrategy.REACTIVE:
return True # Always replan
elif self.commitment_strategy == CommitmentStrategy.CAUTIOUS:
# Reconsider if a significantly better goal is available
return self._better_goal_available(
beliefs, desire_manager, available_goals,
threshold=self.priority_switch_threshold
)
elif self.commitment_strategy == CommitmentStrategy.DETERMINED:
# Only reconsider for much better goals or if plan is failing
return (
intention.consecutive_failures > 0 or
self._better_goal_available(
beliefs, desire_manager, available_goals,
threshold=2.0 # Need 2x priority to switch
)
)
elif self.commitment_strategy == CommitmentStrategy.STUBBORN:
# Only abandon for critical needs or impossible plans
return (
beliefs.has_critical_need() or
intention.consecutive_failures >= self.max_consecutive_failures
)
return False
def _intention_addresses_need(self, intention: Intention, need: str) -> bool:
"""Check if current intention addresses a vital need."""
if not intention or not need:
return False
goal_name = intention.goal.name.lower()
need_map = {
"thirst": "thirst",
"hunger": "hunger",
"heat": "heat",
"energy": "energy",
}
return need_map.get(need, "") in goal_name
def _better_goal_available(
self,
beliefs: "BeliefBase",
desire_manager: "DesireManager",
available_goals: list["Goal"],
threshold: float,
) -> bool:
"""Check if there's a significantly better goal available."""
if not self.has_intention():
return True
current_goal = self.current_intention.goal
world_state = beliefs.to_world_state()
current_priority = current_goal.get_priority(world_state)
# Get desire-filtered goals
filtered_goals = desire_manager.filter_goals_by_desire(available_goals, beliefs)
for goal in filtered_goals:
if goal.name == current_goal.name:
continue
goal_priority = goal.get_priority(world_state)
if goal_priority > current_priority * threshold:
return True
return False
def commit_to_plan(self, goal: "Goal", plan: "Plan", current_turn: int) -> None:
"""Commit to a new intention."""
# Track abandoned intention
if self.has_intention():
self.intentions_abandoned += 1
self.current_intention = Intention(
goal=goal,
plan=plan,
start_turn=current_turn,
)
def advance_intention(self, success: bool) -> None:
"""Record action execution and advance the intention."""
if not self.has_intention():
return
self.current_intention.advance(success)
self.total_actions_executed += 1
# Check if intention is complete
if self.current_intention.is_complete:
self.intentions_completed += 1
self.current_intention = None
def abandon_intention(self, reason: str = "unknown") -> None:
"""Abandon the current intention."""
if self.has_intention():
self.intentions_abandoned += 1
self.current_intention = None
def get_plan_progress(self) -> dict:
"""Get current plan execution progress."""
if not self.has_intention():
return {"has_intention": False}
intention = self.current_intention
return {
"has_intention": True,
"goal": intention.goal.name,
"actions_completed": intention.actions_completed,
"remaining_actions": intention.remaining_actions,
"consecutive_failures": intention.consecutive_failures,
"turns_active": 0, # Would need current_turn to calculate
}
def to_dict(self) -> dict:
"""Convert to dictionary for debugging/logging."""
return {
"commitment_strategy": self.commitment_strategy.value,
"has_intention": self.has_intention(),
"current_goal": self.current_intention.goal.name if self.has_intention() else None,
"plan_progress": self.get_plan_progress(),
"stats": {
"intentions_completed": self.intentions_completed,
"intentions_abandoned": self.intentions_abandoned,
"total_actions": self.total_actions_executed,
},
}
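To make the `from_personality` mapping above concrete, here is a standalone sketch of the same scoring with an invented trait set (the trait values are hypothetical, not taken from the simulation's defaults):

```python
from enum import Enum

class CommitmentStrategy(Enum):
    REACTIVE = "reactive"
    CAUTIOUS = "cautious"
    DETERMINED = "determined"
    STUBBORN = "stubborn"

def strategy_from_traits(hoarding_rate: float, risk_tolerance: float,
                         market_affinity: float) -> CommitmentStrategy:
    # Same weighting and thresholds as IntentionManager.from_personality above
    score = (hoarding_rate * 0.4
             + (1.0 - risk_tolerance) * 0.4
             + (1.0 - market_affinity) * 0.2)
    if score > 0.7:
        return CommitmentStrategy.STUBBORN
    if score > 0.5:
        return CommitmentStrategy.DETERMINED
    if score > 0.3:
        return CommitmentStrategy.CAUTIOUS
    return CommitmentStrategy.REACTIVE

# A risk-averse hoarder (hypothetical traits): 0.36 + 0.28 + 0.08 = 0.72 -> STUBBORN
print(strategy_from_traits(hoarding_rate=0.9, risk_tolerance=0.3, market_affinity=0.6))
```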

View File

@ -30,21 +30,26 @@ class TurnLog:
turn: int
agent_actions: list[dict] = field(default_factory=list)
deaths: list[str] = field(default_factory=list)
+ births: list[str] = field(default_factory=list)
trades: list[dict] = field(default_factory=list)
# Resource tracking for this turn
resources_produced: dict = field(default_factory=dict)
resources_consumed: dict = field(default_factory=dict)
resources_spoiled: dict = field(default_factory=dict)
+ # New day events
+ day_events: dict = field(default_factory=dict)
def to_dict(self) -> dict:
return {
"turn": self.turn,
"agent_actions": self.agent_actions,
"deaths": self.deaths,
+ "births": self.births,
"trades": self.trades,
"resources_produced": self.resources_produced,
"resources_consumed": self.resources_consumed,
"resources_spoiled": self.resources_spoiled,
+ "day_events": self.day_events,
}
@ -171,21 +176,15 @@ class GameEngine:
money=agent.money,
)
- if self.world.is_night():
- # Force sleep at night
- decision = AIDecision(
- action=ActionType.SLEEP,
- reason="Night time: sleeping",
- )
- else:
- # Pass time info so AI can prepare for night
- decision = get_ai_decision(
- agent,
- self.market,
- step_in_day=self.world.step_in_day,
- day_steps=self.world.config.day_steps,
- current_turn=current_turn,
- )
+ # GOAP AI handles night time automatically
+ decision = get_ai_decision(
+ agent,
+ self.market,
+ step_in_day=self.world.step_in_day,
+ day_steps=self.world.config.day_steps,
+ current_turn=current_turn,
+ is_night=self.world.is_night(),
+ )
decisions.append((agent, decision))
@ -283,28 +282,54 @@ class GameEngine:
# End turn logging
self.logger.end_turn()
- # 8. Advance time
- self.world.advance_time()
+ # 8. Advance time (returns True if new day started)
+ new_day = self.world.advance_time()
- # 9. Check win/lose conditions (count only truly living agents, not corpses)
+ # 9. Process new day events (aging, births, sinks)
+ if new_day:
+ day_events = self._process_new_day(turn_log)
+ turn_log.day_events = day_events
+ # 10. Check win/lose conditions (count only truly living agents, not corpses)
if len(self.world.get_living_agents()) == 0:
self.is_running = False
self.logger.close()
+ # Keep turn_logs bounded to prevent memory growth
+ max_logs = get_config().performance.max_turn_logs
self.turn_logs.append(turn_log)
+ if len(self.turn_logs) > max_logs:
+ # Remove oldest logs, keep only recent ones
+ self.turn_logs = self.turn_logs[-max_logs:]
return turn_log
def _mark_dead_agents(self, current_turn: int) -> list[Agent]:
- """Mark agents who just died as corpses. Returns list of newly dead agents."""
+ """Mark agents who just died as corpses. Returns list of newly dead agents.
+ Also processes inheritance - distributing wealth to children.
+ """
newly_dead = []
for agent in self.world.agents:
if not agent.is_alive() and not agent.is_corpse():
- # Agent just died this turn
- cause = agent.stats.get_critical_stat() or "unknown"
+ # Determine cause of death
+ if agent.is_too_old():
+ cause = "age"
+ else:
+ cause = agent.stats.get_critical_stat() or "unknown"
+ # Process inheritance BEFORE marking dead (while inventory still accessible)
+ inheritance = self.world.process_inheritance(agent)
+ if inheritance.get("beneficiaries"):
+ self.logger.log_event("inheritance", inheritance)
agent.mark_dead(current_turn, cause)
# Clear their action to show death state
agent.current_action.action_type = "dead"
agent.current_action.message = f"Died: {cause}"
+ # Record death statistics
+ self.world.record_death(agent, cause)
newly_dead.append(agent)
return newly_dead
@ -318,10 +343,150 @@ class GameEngine:
for agent in to_remove:
self.world.agents.remove(agent)
- self.world.total_agents_died += 1
+ # Remove from index as well
+ self.world._agent_index.pop(agent.id, None)
+ # Note: death was already recorded in _mark_dead_agents
return to_remove
def _process_new_day(self, turn_log: TurnLog) -> dict:
"""Process all new-day events: aging, births, resource sinks.
Called when a new simulation day starts.
"""
events = {
"day": self.world.current_day,
"births": [],
"age_deaths": [],
"taxes_collected": 0,
"storage_decay": {},
"random_events": [],
}
sinks_config = get_config().sinks
age_config = get_config().age
# 1. Age all living agents
for agent in self.world.get_living_agents():
agent.age_one_day()
# 2. Check for age-related deaths (after aging)
current_turn = self.world.current_turn
for agent in self.world.agents:
if not agent.is_corpse() and agent.is_too_old() and not agent.is_alive():
# Will be caught by _mark_dead_agents in the next turn
pass
# 3. Process potential births
for agent in list(self.world.get_living_agents()): # Copy list since we modify it
if agent.can_give_birth(self.world.current_day):
child = self.world.spawn_child(agent)
if child:
birth_info = {
"parent_id": agent.id,
"parent_name": agent.name,
"child_id": child.id,
"child_name": child.name,
}
events["births"].append(birth_info)
turn_log.births.append(child.name)
self.logger.log_event("birth", birth_info)
# 4. Apply daily money tax (wealth redistribution/removal)
if sinks_config.daily_tax_rate > 0:
total_taxes = 0
for agent in self.world.get_living_agents():
tax = int(agent.money * sinks_config.daily_tax_rate)
if tax > 0:
agent.money -= tax
total_taxes += tax
events["taxes_collected"] = total_taxes
# 5. Apply village storage decay (resources spoil over time)
if sinks_config.daily_village_decay_rate > 0:
decay_rate = sinks_config.daily_village_decay_rate
for agent in self.world.get_living_agents():
for resource in agent.inventory[:]: # Copy list to allow modification
# Random chance for each resource to decay
if random.random() < decay_rate:
decay_amount = max(1, int(resource.quantity * decay_rate))
resource.quantity -= decay_amount
res_type = resource.type.value
events["storage_decay"][res_type] = events["storage_decay"].get(res_type, 0) + decay_amount
# Track as spoiled
turn_log.resources_spoiled[res_type] = turn_log.resources_spoiled.get(res_type, 0) + decay_amount
self.resource_stats["spoiled"][res_type] = self.resource_stats["spoiled"].get(res_type, 0) + decay_amount
if resource.quantity <= 0:
agent.inventory.remove(resource)
# 6. Random events (fires, theft, etc.)
if random.random() < sinks_config.random_event_chance:
event = self._generate_random_event(events, turn_log)
if event:
events["random_events"].append(event)
return events
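The daily sinks above are simple proportional drains; a rough worked example with hypothetical rates (the real values come from `get_config().sinks`, and storage decay additionally only fires with probability `decay_rate` per resource stack):

```python
# Hypothetical sink values, for illustration only.
daily_tax_rate = 0.02
money = 1_250
tax = int(money * daily_tax_rate)   # 25 removed from the agent's money this day
print(tax)

daily_village_decay_rate = 0.05
quantity = 8
decay_amount = max(1, int(quantity * daily_village_decay_rate))  # at least 1 unit spoils
print(decay_amount)                 # 1
```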
def _generate_random_event(self, events: dict, turn_log: TurnLog) -> Optional[dict]:
"""Generate a random village event (disaster, theft, etc.)."""
sinks_config = get_config().sinks
living_agents = self.world.get_living_agents()
if not living_agents:
return None
event_types = ["fire", "theft", "blessing"]
event_type = random.choice(event_types)
event_info = {"type": event_type, "affected": []}
if event_type == "fire":
# Fire destroys some resources from random agents
num_affected = max(1, len(living_agents) // 5) # 20% of agents affected
affected_agents = random.sample(living_agents, min(num_affected, len(living_agents)))
for agent in affected_agents:
for resource in agent.inventory[:]:
loss = int(resource.quantity * sinks_config.fire_event_resource_loss)
if loss > 0:
resource.quantity -= loss
res_type = resource.type.value
turn_log.resources_spoiled[res_type] = turn_log.resources_spoiled.get(res_type, 0) + loss
if resource.quantity <= 0:
agent.inventory.remove(resource)
event_info["affected"].append(agent.name)
elif event_type == "theft":
# Some money is stolen from wealthy agents
wealthy_agents = [a for a in living_agents if a.money > 1000]
if wealthy_agents:
victim = random.choice(wealthy_agents)
stolen = int(victim.money * sinks_config.theft_event_money_loss)
victim.money -= stolen
event_info["affected"].append(victim.name)
event_info["amount_stolen"] = stolen
elif event_type == "blessing":
# Good harvest - some agents get bonus resources
lucky_agent = random.choice(living_agents)
from backend.domain.resources import Resource, ResourceType
bonus_type = random.choice([ResourceType.BERRIES, ResourceType.WOOD])
bonus = Resource(type=bonus_type, quantity=random.randint(2, 5), created_turn=self.world.current_turn)
lucky_agent.add_to_inventory(bonus)
event_info["affected"].append(lucky_agent.name)
event_info["bonus"] = f"+{bonus.quantity} {bonus_type.value}"
if event_info["affected"]:
self.logger.log_event("random_event", event_info)
return event_info
return None
def _execute_action(self, agent: Agent, decision: AIDecision) -> Optional[ActionResult]:
"""Execute an action for an agent."""
action = decision.action
@ -400,9 +565,17 @@ class GameEngine:
- Gathering skill affects gather output
- Woodcutting skill affects wood output
- Skills improve with use
+ Age modifies:
+ - Energy costs (young use less, old use more)
+ - Skill effectiveness (young less effective, old more effective "wisdom")
+ - Learning rate (young learn faster, old learn slower)
"""
- # Check energy
- energy_cost = abs(config.energy_cost)
+ # Calculate age-modified energy cost
+ base_energy_cost = abs(config.energy_cost)
+ energy_cost_modifier = agent.get_energy_cost_modifier()
+ energy_cost = max(1, int(base_energy_cost * energy_cost_modifier))
if not agent.spend_energy(energy_cost):
return ActionResult(
action_type=action,
@ -430,16 +603,20 @@ class GameEngine:
# Get relevant skill for this action
skill_name = self._get_skill_for_action(action)
skill_value = getattr(agent.skills, skill_name, 1.0) if skill_name else 1.0
- skill_modifier = get_action_skill_modifier(skill_value)
- # Check success chance (modified by skill)
+ # Apply age-based skill modifier (young less effective, old more effective)
+ age_skill_modifier = agent.get_skill_modifier()
+ skill_modifier = get_action_skill_modifier(skill_value) * age_skill_modifier
+ # Check success chance (modified by skill and age)
# Higher skill = higher effective success chance
effective_success_chance = min(0.98, config.success_chance * skill_modifier)
if random.random() > effective_success_chance:
# Record action attempt (skill still improves on failure, just less)
agent.record_action(action.value)
if skill_name:
- agent.skills.improve(skill_name, 0.005) # Small improvement on failure
+ learning_modifier = agent.get_learning_modifier()
+ agent.skills.improve(skill_name, 0.005, learning_modifier) # Small improvement on failure
return ActionResult(
action_type=action,
success=False,
@ -447,14 +624,21 @@ class GameEngine:
message="Action failed",
)
- # Generate output (modified by skill for quantity)
+ # Generate output (modified by skill and age for quantity)
resources_gained = []
if config.output_resource:
+ # Check storage limit before producing
+ res_type = config.output_resource.value
+ storage_available = self.world.get_storage_available(res_type)
# Skill affects output quantity
base_quantity = random.randint(config.min_output, config.max_output)
quantity = max(config.min_output, int(base_quantity * skill_modifier))
+ # Limit by storage
+ quantity = min(quantity, storage_available)
if quantity > 0:
resource = Resource(
type=config.output_resource,
@ -469,10 +653,15 @@ class GameEngine:
created_turn=self.world.current_turn,
))
- # Secondary output (e.g., hide from hunting) - also affected by skill
+ # Secondary output (e.g., hide from hunting) - also affected by skill and storage
if config.secondary_output:
+ res_type = config.secondary_output.value
+ storage_available = self.world.get_storage_available(res_type)
base_quantity = random.randint(config.secondary_min, config.secondary_max)
quantity = max(0, int(base_quantity * skill_modifier))
+ quantity = min(quantity, storage_available)
if quantity > 0:
resource = Resource(
type=config.secondary_output,
@ -487,10 +676,11 @@ class GameEngine:
created_turn=self.world.current_turn,
))
- # Record action and improve skill
+ # Record action and improve skill (modified by age learning rate)
agent.record_action(action.value)
if skill_name:
- agent.skills.improve(skill_name, 0.015) # Skill improves with successful use
+ learning_modifier = agent.get_learning_modifier()
+ agent.skills.improve(skill_name, 0.015, learning_modifier) # Skill improves with successful use
# Build success message with details
gained_str = ", ".join(f"+{r.quantity} {r.type.value}" for r in resources_gained)
@ -571,13 +761,15 @@ class GameEngine:
if seller:
seller.money += result.total_paid
seller.record_trade(result.total_paid)
- seller.skills.improve("trading", 0.02) # Seller skill improves
+ seller.skills.improve("trading", 0.02, seller.get_learning_modifier()) # Seller skill improves
- agent.spend_energy(abs(config.energy_cost))
+ # Age-modified energy cost for trading
+ energy_cost = max(1, int(abs(config.energy_cost) * agent.get_energy_cost_modifier()))
+ agent.spend_energy(energy_cost)
- # Record buyer's trade and improve skill
+ # Record buyer's trade and improve skill (with age learning modifier)
agent.record_action("trade")
- agent.skills.improve("trading", 0.01) # Buyer skill improves less
+ agent.skills.improve("trading", 0.01, agent.get_learning_modifier()) # Buyer skill improves less
return ActionResult(
action_type=ActionType.TRADE,
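The age hooks used in the hunks above (`get_energy_cost_modifier`, `get_skill_modifier`, `get_learning_modifier`) are methods on `Agent` whose curves are not shown in this diff. Purely to illustrate how the engine applies them, a hedged sketch with invented modifier values:

```python
# Hypothetical numbers only: the real modifiers come from Agent and the age config.
base_energy_cost = 7                 # e.g. hunting
energy_cost_modifier = 1.25          # an elderly agent pays more energy (assumed value)
energy_cost = max(1, int(base_energy_cost * energy_cost_modifier))
print(energy_cost)                   # 8

base_gain = 0.015                    # skill gain on a successful action
learning_modifier = 0.6              # older agents learn more slowly (assumed value)
# The engine passes the modifier through to skills.improve(name, gain, modifier);
# assuming improve() scales the gain by the modifier:
print(round(base_gain * learning_modifier, 4))   # 0.009
```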

View File

@ -0,0 +1,39 @@
"""GOAP (Goal-Oriented Action Planning) module for agent decision making.
This module provides a GOAP-based AI system where agents:
1. Evaluate their current world state
2. Select the most relevant goal based on priorities
3. Plan a sequence of actions to achieve that goal
4. Execute the first action in the plan
Key components:
- WorldState: Dictionary-like representation of agent/world state
- Goal: Goals with dynamic priority calculation
- GOAPAction: Actions with preconditions and effects
- Planner: A* search for finding optimal action sequences
"""
from .world_state import WorldState
from .goal import Goal, GoalType
from .action import GOAPAction
from .planner import GOAPPlanner
from .goals import SURVIVAL_GOALS, ECONOMIC_GOALS, get_all_goals
from .actions import get_all_actions, get_action_by_type
from .debug import GOAPDebugInfo, get_goap_debug_info, get_all_agents_goap_debug
__all__ = [
'WorldState',
'Goal',
'GoalType',
'GOAPAction',
'GOAPPlanner',
'SURVIVAL_GOALS',
'ECONOMIC_GOALS',
'get_all_goals',
'get_all_actions',
'get_action_by_type',
'GOAPDebugInfo',
'get_goap_debug_info',
'get_all_agents_goap_debug',
]
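The loop described in this docstring (evaluate state, pick a goal, plan, execute the first action) can be illustrated with a self-contained toy. This is not the package's API: states are plain dicts and the "planner" is a tiny depth-limited forward search instead of the real A* planner, just to show the goal -> plan -> act shape.

```python
from itertools import product

# Toy action set: preconditions, effects and costs as plain data.
ACTIONS = {
    "get_water": {"pre": lambda s: s["energy"] >= 1, "eff": {"water": +1, "energy": -1}, "cost": 1.0},
    "drink":     {"pre": lambda s: s["water"] >= 1,  "eff": {"water": -1, "thirst": +50}, "cost": 0.5},
    "rest":      {"pre": lambda s: True,             "eff": {"energy": +2}, "cost": 2.0},
}

def apply(state, eff):
    new = dict(state)
    for key, delta in eff.items():
        new[key] = new.get(key, 0) + delta
    return new

def plan(state, goal, depth=3):
    """Return the cheapest action sequence (up to `depth` steps) that satisfies `goal`."""
    best = None
    for seq in product(ACTIONS, repeat=depth):
        s, cost, ok, steps = dict(state), 0.0, True, []
        for name in seq:
            action = ACTIONS[name]
            if not action["pre"](s):
                ok = False
                break
            s, cost = apply(s, action["eff"]), cost + action["cost"]
            steps.append(name)
            if goal(s):
                break
        if ok and goal(s) and (best is None or cost < best[1]):
            best = (steps, cost)
    return best[0] if best else []

state = {"water": 0, "energy": 2, "thirst": 30}
print(plan(state, goal=lambda s: s["thirst"] >= 80))  # ['get_water', 'drink']
```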

419
backend/core/goap/action.py Normal file
View File

@ -0,0 +1,419 @@
"""GOAP Action definitions.
Actions are the building blocks of plans. Each action has:
- Preconditions: What must be true for the action to be valid
- Effects: How the action changes the world state
- Cost: How expensive the action is (for planning)
"""
from dataclasses import dataclass, field
from typing import Callable, Optional, TYPE_CHECKING
from backend.domain.action import ActionType
from backend.domain.resources import ResourceType
from .world_state import WorldState
if TYPE_CHECKING:
from backend.domain.agent import Agent
from backend.core.market import OrderBook
@dataclass
class GOAPAction:
"""A GOAP action that can be part of a plan.
Actions transform the world state. The planner uses preconditions
and effects to search for valid action sequences.
Attributes:
name: Human-readable name
action_type: The underlying ActionType to execute
target_resource: Optional resource this action targets
preconditions: Function that checks if action is valid in a state
effects: Function that returns the expected effects on state
cost: Function that calculates action cost (lower = preferred)
get_decision_params: Function to get parameters for AIDecision
"""
name: str
action_type: ActionType
target_resource: Optional[ResourceType] = None
# Functions that evaluate in context of world state
preconditions: Callable[[WorldState], bool] = field(default=lambda s: True)
effects: Callable[[WorldState], dict] = field(default=lambda s: {})
cost: Callable[[WorldState], float] = field(default=lambda s: 1.0)
# For generating the actual decision
get_decision_params: Optional[Callable[[WorldState, "Agent", "OrderBook"], dict]] = None
def is_valid(self, state: WorldState) -> bool:
"""Check if this action can be performed in the given state."""
return self.preconditions(state)
def apply(self, state: WorldState) -> WorldState:
"""Apply this action's effects to a state, returning a new state.
This is used by the planner for forward search.
"""
new_state = state.copy()
effects = self.effects(state)
for key, value in effects.items():
if hasattr(new_state, key):
if isinstance(value, (int, float)):
# For numeric values, handle both absolute and relative changes
current = getattr(new_state, key)
if isinstance(current, bool):
setattr(new_state, key, bool(value))
else:
setattr(new_state, key, value)
else:
setattr(new_state, key, value)
# Recalculate urgencies
new_state._calculate_urgencies()
return new_state
def get_cost(self, state: WorldState) -> float:
"""Get the cost of this action in the given state."""
return self.cost(state)
def __repr__(self) -> str:
resource = f"({self.target_resource.value})" if self.target_resource else ""
return f"GOAPAction({self.name}{resource})"
def __hash__(self) -> int:
return hash((self.name, self.action_type, self.target_resource))
def __eq__(self, other) -> bool:
if not isinstance(other, GOAPAction):
return False
return (self.name == other.name and
self.action_type == other.action_type and
self.target_resource == other.target_resource)
def create_consume_action(
resource_type: ResourceType,
stat_name: str,
stat_increase: float,
secondary_stat: Optional[str] = None,
secondary_increase: float = 0.0,
) -> GOAPAction:
"""Factory for creating consume resource actions."""
count_name = f"{resource_type.value}_count" if resource_type != ResourceType.BERRIES else "berries_count"
if resource_type == ResourceType.MEAT:
count_name = "meat_count"
elif resource_type == ResourceType.WATER:
count_name = "water_count"
# Map stat name to pct name
pct_name = f"{stat_name}_pct"
secondary_pct = f"{secondary_stat}_pct" if secondary_stat else None
def preconditions(state: WorldState) -> bool:
count = getattr(state, count_name, 0)
return count > 0
def effects(state: WorldState) -> dict:
result = {}
current = getattr(state, pct_name)
result[pct_name] = min(1.0, current + stat_increase)
if secondary_pct:
current_sec = getattr(state, secondary_pct)
result[secondary_pct] = min(1.0, current_sec + secondary_increase)
# Reduce resource count
current_count = getattr(state, count_name)
result[count_name] = max(0, current_count - 1)
# Update food count if consuming food
if resource_type in [ResourceType.MEAT, ResourceType.BERRIES]:
result["food_count"] = max(0, state.food_count - 1)
return result
def cost(state: WorldState) -> float:
# Consuming is very cheap - 0 energy cost
return 0.5
return GOAPAction(
name=f"Consume {resource_type.value}",
action_type=ActionType.CONSUME,
target_resource=resource_type,
preconditions=preconditions,
effects=effects,
cost=cost,
)
def create_gather_action(
action_type: ActionType,
resource_type: ResourceType,
energy_cost: float,
expected_output: int,
success_chance: float = 1.0,
) -> GOAPAction:
"""Factory for creating resource gathering actions."""
count_name = f"{resource_type.value}_count"
if resource_type == ResourceType.BERRIES:
count_name = "berries_count"
elif resource_type == ResourceType.MEAT:
count_name = "meat_count"
def preconditions(state: WorldState) -> bool:
# Need enough energy and inventory space
energy_needed = abs(energy_cost) / 50.0 # Convert to percentage
return state.energy_pct >= energy_needed + 0.05 and state.inventory_space > 0
def effects(state: WorldState) -> dict:
result = {}
# Spend energy
energy_spent = abs(energy_cost) / 50.0
result["energy_pct"] = max(0, state.energy_pct - energy_spent)
# Gain resources (adjusted for success chance)
effective_output = int(expected_output * success_chance)
current = getattr(state, count_name)
result[count_name] = current + effective_output
# Update food count if gathering food
if resource_type in [ResourceType.MEAT, ResourceType.BERRIES]:
result["food_count"] = state.food_count + effective_output
# Update inventory space
result["inventory_space"] = max(0, state.inventory_space - effective_output)
return result
def cost(state: WorldState) -> float:
# Calculate cost based on efficiency (energy per unit of food)
food_per_action = expected_output * success_chance
if food_per_action > 0:
base_cost = abs(energy_cost) / food_per_action * 0.5
else:
base_cost = abs(energy_cost) / 5.0
# Adjust for success chance (penalize unreliable actions slightly)
if success_chance < 1.0:
base_cost *= 1.0 + (1.0 - success_chance) * 0.3
# STRONG profession specialization effect for gathering
if action_type == ActionType.GATHER:
# Compare gather_preference to other preferences
# Specialists get big discounts, generalists pay penalty
other_prefs = (state.hunt_preference + state.trade_preference) / 2
relative_strength = state.gather_preference / max(0.1, other_prefs)
# relative_strength > 1.0 means gathering is your specialty
# relative_strength < 1.0 means you're NOT a gatherer
if relative_strength >= 1.0:
# Specialist discount: up to 50% off
preference_modifier = 1.0 / relative_strength
else:
# Non-specialist penalty: up to 3x cost
preference_modifier = 1.0 + (1.0 - relative_strength) * 2.0
base_cost *= preference_modifier
# Skill reduces cost further (experienced = efficient)
# skill 0: no bonus, skill 1.0: 40% discount
skill_modifier = 1.0 - state.gathering_skill * 0.4
base_cost *= skill_modifier
return base_cost
return GOAPAction(
name=f"{action_type.value}",
action_type=action_type,
target_resource=resource_type,
preconditions=preconditions,
effects=effects,
cost=cost,
)
def create_buy_action(resource_type: ResourceType) -> GOAPAction:
"""Factory for creating market buy actions."""
can_buy_name = f"can_buy_{resource_type.value}"
if resource_type in [ResourceType.MEAT, ResourceType.BERRIES]:
can_buy_name = "can_buy_food" # Simplified - we check specific later
count_name = f"{resource_type.value}_count"
if resource_type == ResourceType.BERRIES:
count_name = "berries_count"
elif resource_type == ResourceType.MEAT:
count_name = "meat_count"
price_name = f"{resource_type.value}_market_price"
if resource_type in [ResourceType.MEAT, ResourceType.BERRIES]:
price_name = "food_market_price"
def preconditions(state: WorldState) -> bool:
# Check specific availability
if resource_type == ResourceType.MEAT:
can_buy = state.can_buy_meat
elif resource_type == ResourceType.BERRIES:
can_buy = state.can_buy_berries
else:
can_buy = getattr(state, f"can_buy_{resource_type.value}", False)
return can_buy and state.inventory_space > 0
def effects(state: WorldState) -> dict:
result = {}
# Get price
if resource_type == ResourceType.MEAT:
price = state.food_market_price
elif resource_type == ResourceType.BERRIES:
price = state.food_market_price
else:
price = getattr(state, price_name, 10)
# Spend money
result["money"] = state.money - price
# Gain resource
current = getattr(state, count_name)
result[count_name] = current + 1
# Update food count if buying food
if resource_type in [ResourceType.MEAT, ResourceType.BERRIES]:
result["food_count"] = state.food_count + 1
# Spend small energy
result["energy_pct"] = max(0, state.energy_pct - 0.02)
# Update inventory
result["inventory_space"] = max(0, state.inventory_space - 1)
return result
def cost(state: WorldState) -> float:
# Trading cost is low (1 energy)
base_cost = 0.2
# MILD profession effect for trading (everyone should be able to trade)
# Traders get a bonus, but non-traders shouldn't be heavily penalized
# (trading benefits the whole economy)
other_prefs = (state.hunt_preference + state.gather_preference) / 2
relative_strength = state.trade_preference / max(0.1, other_prefs)
if relative_strength >= 1.0:
# Specialist discount: up to 40% off for dedicated traders
preference_modifier = max(0.6, 1.0 / relative_strength)
else:
# Mild non-specialist penalty: up to 50% cost increase
preference_modifier = 1.0 + (1.0 - relative_strength) * 0.5
base_cost *= preference_modifier
# Skill reduces cost (experienced traders are efficient)
# skill 0: no bonus, skill 1.0: 40% discount
skill_modifier = 1.0 - state.trading_skill * 0.4
base_cost *= skill_modifier
# Market affinity still has mild effect
base_cost *= (1.2 - state.market_affinity * 0.4)
# Check if it's a good deal
if resource_type == ResourceType.MEAT:
price = state.food_market_price
elif resource_type == ResourceType.BERRIES:
price = state.food_market_price
else:
price = getattr(state, price_name, 100)
# Higher price = higher cost (scaled for 100-500g price range)
# At fair value (~150g), multiplier is ~1.5x
# At min price (100g), multiplier is ~1.33x
base_cost *= (1.0 + price / 300.0)
return base_cost
return GOAPAction(
name=f"Buy {resource_type.value}",
action_type=ActionType.TRADE,
target_resource=resource_type,
preconditions=preconditions,
effects=effects,
cost=cost,
)
def create_rest_action() -> GOAPAction:
"""Create the rest action."""
def preconditions(state: WorldState) -> bool:
return state.energy_pct < 0.9 # Only rest if not full
def effects(state: WorldState) -> dict:
# Rest restores energy (12 out of 50 = 0.24)
return {
"energy_pct": min(1.0, state.energy_pct + 0.24),
}
def cost(state: WorldState) -> float:
# Resting is cheap but we prefer productive actions
return 2.0
return GOAPAction(
name="Rest",
action_type=ActionType.REST,
preconditions=preconditions,
effects=effects,
cost=cost,
)
def create_build_fire_action() -> GOAPAction:
"""Create the build fire action."""
def preconditions(state: WorldState) -> bool:
return state.wood_count > 0 and state.energy_pct >= 0.1
def effects(state: WorldState) -> dict:
return {
"heat_pct": min(1.0, state.heat_pct + 0.20), # Fire gives 20 heat out of 100
"wood_count": max(0, state.wood_count - 1),
"energy_pct": max(0, state.energy_pct - 0.08), # 4 energy cost
}
def cost(state: WorldState) -> float:
# Building fire is relatively cheap when we have wood
return 1.5
return GOAPAction(
name="Build Fire",
action_type=ActionType.BUILD_FIRE,
target_resource=ResourceType.WOOD,
preconditions=preconditions,
effects=effects,
cost=cost,
)
def create_sleep_action() -> GOAPAction:
"""Create the sleep action (for night)."""
def preconditions(state: WorldState) -> bool:
return state.is_night
def effects(state: WorldState) -> dict:
return {
"energy_pct": 1.0, # Full energy restore
}
def cost(state: WorldState) -> float:
return 0.0 # Sleep is mandatory at night
return GOAPAction(
name="Sleep",
action_type=ActionType.SLEEP,
preconditions=preconditions,
effects=effects,
cost=cost,
)
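The factories above (`create_consume_action`, `create_gather_action`, `create_buy_action`, ...) all follow the same shape: the resource and config values are captured in closures for preconditions, effects and cost. A stripped-down, dependency-free sketch of that pattern (not the real `GOAPAction`, which also carries an `ActionType` and decision parameters):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToyAction:
    name: str
    preconditions: Callable[[dict], bool] = field(default=lambda s: True)
    effects: Callable[[dict], dict] = field(default=lambda s: {})
    cost: Callable[[dict], float] = field(default=lambda s: 1.0)

def make_consume(resource: str, stat: str, gain: float) -> ToyAction:
    # Closures capture `resource`, `stat` and `gain`, like the factories above
    return ToyAction(
        name=f"Consume {resource}",
        preconditions=lambda s: s.get(resource, 0) > 0,
        effects=lambda s: {resource: s[resource] - 1, stat: min(1.0, s[stat] + gain)},
        cost=lambda s: 0.5,
    )

drink = make_consume("water", "thirst_pct", 0.5)
state = {"water": 2, "thirst_pct": 0.3}
print(drink.preconditions(state))   # True
print(drink.effects(state))         # {'water': 1, 'thirst_pct': 0.8}
```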

View File

@ -0,0 +1,399 @@
"""Predefined GOAP actions for agents.
Actions are organized by category:
- Consume actions: Use resources from inventory
- Gather actions: Produce resources
- Trade actions: Buy/sell on market
- Utility actions: Rest, sleep, build fire
"""
from typing import Optional, TYPE_CHECKING
from backend.domain.action import ActionType, ACTION_CONFIG
from backend.domain.resources import ResourceType
from .world_state import WorldState
from .action import (
GOAPAction,
create_consume_action,
create_gather_action,
create_buy_action,
create_rest_action,
create_build_fire_action,
create_sleep_action,
)
if TYPE_CHECKING:
from backend.domain.agent import Agent
from backend.core.market import OrderBook
def _get_action_configs():
"""Get action configurations from config."""
return ACTION_CONFIG
# =============================================================================
# CONSUME ACTIONS
# =============================================================================
def _create_drink_water() -> GOAPAction:
"""Drink water to restore thirst."""
return create_consume_action(
resource_type=ResourceType.WATER,
stat_name="thirst",
stat_increase=0.50, # 50 thirst out of 100
)
def _create_eat_meat() -> GOAPAction:
"""Eat meat to restore hunger (primary food source)."""
return create_consume_action(
resource_type=ResourceType.MEAT,
stat_name="hunger",
stat_increase=0.35, # 35 hunger
secondary_stat="energy",
secondary_increase=0.24, # 12 energy
)
def _create_eat_berries() -> GOAPAction:
"""Eat berries to restore hunger and some thirst."""
return create_consume_action(
resource_type=ResourceType.BERRIES,
stat_name="hunger",
stat_increase=0.10, # 10 hunger
secondary_stat="thirst",
secondary_increase=0.04, # 4 thirst
)
CONSUME_ACTIONS = [
_create_drink_water(),
_create_eat_meat(),
_create_eat_berries(),
]
# =============================================================================
# GATHER ACTIONS
# =============================================================================
def _create_get_water() -> GOAPAction:
"""Get water from the river."""
config = _get_action_configs()[ActionType.GET_WATER]
return create_gather_action(
action_type=ActionType.GET_WATER,
resource_type=ResourceType.WATER,
energy_cost=config.energy_cost,
expected_output=1,
success_chance=1.0,
)
def _create_gather_berries() -> GOAPAction:
"""Gather berries (safe, reliable)."""
config = _get_action_configs()[ActionType.GATHER]
expected = (config.min_output + config.max_output) // 2
return create_gather_action(
action_type=ActionType.GATHER,
resource_type=ResourceType.BERRIES,
energy_cost=config.energy_cost,
expected_output=expected,
success_chance=1.0,
)
def _create_hunt() -> GOAPAction:
"""Hunt for meat (risky, high reward).
Hunt should be attractive because:
- Meat gives much more hunger than berries (35 vs 10)
- Meat also gives energy (12)
- You also get hide for clothes
Cost is balanced against gathering:
- Hunt: -7 energy, 70% success, 2-5 meat + 0-2 hide
- Gather: -3 energy, 100% success, 2-4 berries
Effective food per energy:
- Hunt: 3.5 meat avg * 0.7 = 2.45 meat = 2.45 * 35 hunger = 85.75 hunger for 7 energy = 12.25 hunger/energy
- Gather: 3 berries avg * 1.0 = 3 berries = 3 * 10 hunger = 30 hunger for 3 energy = 10 hunger/energy
So hunting is actually MORE efficient per energy for hunger! The cost should reflect this.
"""
config = _get_action_configs()[ActionType.HUNT]
expected = (config.min_output + config.max_output) // 2
# Custom preconditions for hunting
def preconditions(state: WorldState) -> bool:
# Need more energy for hunting (but not excessively so)
energy_needed = abs(config.energy_cost) / 50.0 + 0.05
return state.energy_pct >= energy_needed and state.inventory_space >= 2
def effects(state: WorldState) -> dict:
# Account for success chance
effective_meat = int(round(expected * config.success_chance))
effective_hide = int(round(1 * config.success_chance)) # Average hide
energy_spent = abs(config.energy_cost) / 50.0
return {
"energy_pct": max(0, state.energy_pct - energy_spent),
"meat_count": state.meat_count + effective_meat,
"food_count": state.food_count + effective_meat,
"hide_count": state.hide_count + effective_hide,
"inventory_space": max(0, state.inventory_space - effective_meat - effective_hide),
}
def cost(state: WorldState) -> float:
# Hunt should be comparable to gather when considering value:
# - Hunt gives 3.5 meat avg (35 hunger each) = 122.5 hunger value
# - Gather gives 3 berries avg (10 hunger each) = 30 hunger value
# Hunt is 4x more valuable for hunger! So cost can be higher but not 4x.
# Base cost similar to gather
base_cost = 0.6
# Success chance penalty (small)
if config.success_chance < 1.0:
base_cost *= 1.0 + (1.0 - config.success_chance) * 0.2
# STRONG profession specialization effect for hunting
# Compare hunt_preference to other preferences
other_prefs = (state.gather_preference + state.trade_preference) / 2
relative_strength = state.hunt_preference / max(0.1, other_prefs)
# relative_strength > 1.0 means hunting is your specialty
if relative_strength >= 1.0:
# Specialist discount: up to 50% off
preference_modifier = 1.0 / relative_strength
else:
# Non-specialist penalty: up to 3x cost
preference_modifier = 1.0 + (1.0 - relative_strength) * 2.0
base_cost *= preference_modifier
# Skill reduces cost further (experienced hunters are efficient)
# skill 0: no bonus, skill 1.0: 40% discount
skill_modifier = 1.0 - state.hunting_skill * 0.4
base_cost *= skill_modifier
# Risk tolerance still has mild effect
risk_modifier = 1.0 + (0.5 - state.risk_tolerance) * 0.15
base_cost *= risk_modifier
# Big bonus if we have no meat - prioritize getting some
if state.meat_count == 0:
base_cost *= 0.6
# Bonus if low on food in general
if state.food_count < 2:
base_cost *= 0.8
return base_cost
return GOAPAction(
name="Hunt",
action_type=ActionType.HUNT,
target_resource=ResourceType.MEAT,
preconditions=preconditions,
effects=effects,
cost=cost,
)
def _create_chop_wood() -> GOAPAction:
"""Chop wood for fires."""
config = _get_action_configs()[ActionType.CHOP_WOOD]
expected = (config.min_output + config.max_output) // 2
return create_gather_action(
action_type=ActionType.CHOP_WOOD,
resource_type=ResourceType.WOOD,
energy_cost=config.energy_cost,
expected_output=expected,
success_chance=config.success_chance,
)
def _create_weave_clothes() -> GOAPAction:
"""Craft clothes from hide."""
config = _get_action_configs()[ActionType.WEAVE]
def preconditions(state: WorldState) -> bool:
return (
state.hide_count >= 1 and
not state.has_clothes and
state.energy_pct >= abs(config.energy_cost) / 50.0 + 0.05
)
def effects(state: WorldState) -> dict:
return {
"has_clothes": True,
"hide_count": state.hide_count - 1,
"energy_pct": max(0, state.energy_pct - abs(config.energy_cost) / 50.0),
}
def cost(state: WorldState) -> float:
return abs(config.energy_cost) / 3.0
return GOAPAction(
name="Weave Clothes",
action_type=ActionType.WEAVE,
target_resource=ResourceType.CLOTHES,
preconditions=preconditions,
effects=effects,
cost=cost,
)
GATHER_ACTIONS = [
_create_get_water(),
_create_gather_berries(),
_create_hunt(),
_create_chop_wood(),
_create_weave_clothes(),
]
# =============================================================================
# TRADE ACTIONS
# =============================================================================
def _create_buy_water() -> GOAPAction:
"""Buy water from the market."""
return create_buy_action(ResourceType.WATER)
def _create_buy_meat() -> GOAPAction:
"""Buy meat from the market."""
return create_buy_action(ResourceType.MEAT)
def _create_buy_berries() -> GOAPAction:
"""Buy berries from the market."""
return create_buy_action(ResourceType.BERRIES)
def _create_buy_wood() -> GOAPAction:
"""Buy wood from the market."""
return create_buy_action(ResourceType.WOOD)
def _create_sell_action(resource_type: ResourceType, min_keep: int = 1) -> GOAPAction:
"""Factory for creating sell actions."""
count_name = f"{resource_type.value}_count"
if resource_type == ResourceType.BERRIES:
count_name = "berries_count"
elif resource_type == ResourceType.MEAT:
count_name = "meat_count"
def preconditions(state: WorldState) -> bool:
current = getattr(state, count_name)
return current > min_keep and state.energy_pct >= 0.05
def effects(state: WorldState) -> dict:
# Estimate we'll get a reasonable price (around min_price from config)
# This is approximate - actual execution will get real prices
estimated_price = 150 # Better estimate than min_price (fair value)
current = getattr(state, count_name)
sell_qty = min(3, current - min_keep) # Sell up to 3, keep minimum
result = {
"money": state.money + estimated_price * sell_qty,
count_name: current - sell_qty,
"inventory_space": state.inventory_space + sell_qty,
"energy_pct": max(0, state.energy_pct - 0.02),
}
# Update food count if selling food
if resource_type in [ResourceType.MEAT, ResourceType.BERRIES]:
result["food_count"] = state.food_count - sell_qty
return result
def cost(state: WorldState) -> float:
# Selling has low cost - everyone should be able to sell excess
base_cost = 1.0
# MILD profession effect for selling (everyone should be able to trade)
other_prefs = (state.hunt_preference + state.gather_preference) / 2
relative_strength = state.trade_preference / max(0.1, other_prefs)
if relative_strength >= 1.0:
# Specialist discount: up to 40% off for dedicated traders
preference_modifier = max(0.6, 1.0 / relative_strength)
else:
# Mild non-specialist penalty: up to 50% cost increase
preference_modifier = 1.0 + (1.0 - relative_strength) * 0.5
base_cost *= preference_modifier
# Skill reduces cost (experienced traders know the market)
# skill 0: no bonus, skill 1.0: 40% discount
skill_modifier = 1.0 - state.trading_skill * 0.4
base_cost *= skill_modifier
# Hoarders reluctant to sell (mild effect)
base_cost *= (0.8 + state.hoarding_rate * 0.4)
return base_cost
return GOAPAction(
name=f"Sell {resource_type.value}",
action_type=ActionType.TRADE,
target_resource=resource_type,
preconditions=preconditions,
effects=effects,
cost=cost,
)
TRADE_ACTIONS = [
_create_buy_water(),
_create_buy_meat(),
_create_buy_berries(),
_create_buy_wood(),
_create_sell_action(ResourceType.WATER, min_keep=2),
_create_sell_action(ResourceType.MEAT, min_keep=1),
_create_sell_action(ResourceType.BERRIES, min_keep=2),
_create_sell_action(ResourceType.WOOD, min_keep=1),
_create_sell_action(ResourceType.HIDE, min_keep=0),
]
# =============================================================================
# UTILITY ACTIONS
# =============================================================================
UTILITY_ACTIONS = [
create_rest_action(),
create_build_fire_action(),
create_sleep_action(),
]
# =============================================================================
# ALL ACTIONS
# =============================================================================
def get_all_actions() -> list[GOAPAction]:
"""Get all available GOAP actions."""
return CONSUME_ACTIONS + GATHER_ACTIONS + TRADE_ACTIONS + UTILITY_ACTIONS
def get_action_by_type(action_type: ActionType) -> list[GOAPAction]:
"""Get all GOAP actions of a specific type."""
all_actions = get_all_actions()
return [a for a in all_actions if a.action_type == action_type]
def get_action_by_name(name: str) -> Optional[GOAPAction]:
"""Get a specific action by name."""
all_actions = get_all_actions()
for action in all_actions:
if action.name == name:
return action
return None
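# Illustrative sketch (not part of this module): listing every currently valid
# trade action for a given WorldState via the registry helpers above.
def _valid_trade_actions(state: WorldState) -> list[GOAPAction]:
    return [a for a in get_action_by_type(ActionType.TRADE) if a.is_valid(state)]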

258
backend/core/goap/debug.py Normal file
View File

@ -0,0 +1,258 @@
"""GOAP Debug utilities for visualization and analysis.
Provides detailed information about GOAP decision-making for debugging
and visualization purposes.
"""
from dataclasses import dataclass, field
from typing import Optional, TYPE_CHECKING
from .world_state import WorldState, create_world_state
from .goal import Goal
from .action import GOAPAction
from .planner import GOAPPlanner, ReactivePlanner, Plan
from .goals import get_all_goals
from .actions import get_all_actions
if TYPE_CHECKING:
from backend.domain.agent import Agent
from backend.core.market import OrderBook
@dataclass
class GoalDebugInfo:
"""Debug information for a single goal."""
name: str
goal_type: str
priority: float
is_satisfied: bool
is_selected: bool = False
def to_dict(self) -> dict:
return {
"name": self.name,
"goal_type": self.goal_type,
"priority": round(self.priority, 2),
"is_satisfied": self.is_satisfied,
"is_selected": self.is_selected,
}
@dataclass
class ActionDebugInfo:
"""Debug information for a single action."""
name: str
action_type: str
target_resource: Optional[str]
is_valid: bool
cost: float
is_in_plan: bool = False
plan_order: int = -1
def to_dict(self) -> dict:
return {
"name": self.name,
"action_type": self.action_type,
"target_resource": self.target_resource,
"is_valid": self.is_valid,
"cost": round(self.cost, 2),
"is_in_plan": self.is_in_plan,
"plan_order": self.plan_order,
}
@dataclass
class PlanDebugInfo:
"""Debug information for the current plan."""
goal_name: str
actions: list[str]
total_cost: float
plan_length: int
def to_dict(self) -> dict:
return {
"goal_name": self.goal_name,
"actions": self.actions,
"total_cost": round(self.total_cost, 2),
"plan_length": self.plan_length,
}
@dataclass
class GOAPDebugInfo:
"""Complete GOAP debug information for an agent."""
agent_id: str
agent_name: str
world_state: dict
goals: list[GoalDebugInfo]
actions: list[ActionDebugInfo]
current_plan: Optional[PlanDebugInfo]
selected_action: Optional[str]
decision_reason: str
def to_dict(self) -> dict:
return {
"agent_id": self.agent_id,
"agent_name": self.agent_name,
"world_state": self.world_state,
"goals": [g.to_dict() for g in self.goals],
"actions": [a.to_dict() for a in self.actions],
"current_plan": self.current_plan.to_dict() if self.current_plan else None,
"selected_action": self.selected_action,
"decision_reason": self.decision_reason,
}
def get_goap_debug_info(
agent: "Agent",
market: "OrderBook",
step_in_day: int = 1,
day_steps: int = 10,
is_night: bool = False,
) -> GOAPDebugInfo:
"""Get detailed GOAP debug information for an agent.
This function performs the same planning as the actual AI,
but captures detailed information about the decision process.
"""
# Create world state
state = create_world_state(
agent=agent,
market=market,
step_in_day=step_in_day,
day_steps=day_steps,
is_night=is_night,
)
# Get goals and actions
all_goals = get_all_goals()
all_actions = get_all_actions()
# Evaluate all goals
goal_infos = []
selected_goal = None
selected_plan = None
# Sort by priority
goals_with_priority = []
for goal in all_goals:
priority = goal.priority(state)
satisfied = goal.satisfied(state)
goals_with_priority.append((goal, priority, satisfied))
goals_with_priority.sort(key=lambda x: x[1], reverse=True)
# Try planning for each goal
planner = GOAPPlanner(max_iterations=50)
for goal, priority, satisfied in goals_with_priority:
if priority > 0 and not satisfied:
plan = planner.plan(state, goal, all_actions)
if plan and not plan.is_empty:
selected_goal = goal
selected_plan = plan
break
# Build goal debug info
for goal, priority, satisfied in goals_with_priority:
info = GoalDebugInfo(
name=goal.name,
goal_type=goal.goal_type.value,
priority=priority,
is_satisfied=satisfied,
is_selected=(goal == selected_goal),
)
goal_infos.append(info)
# Build action debug info
action_infos = []
plan_action_names = []
if selected_plan:
plan_action_names = [a.name for a in selected_plan.actions]
for action in all_actions:
is_valid = action.is_valid(state)
cost = action.get_cost(state) if is_valid else float('inf')
in_plan = action.name in plan_action_names
order = plan_action_names.index(action.name) if in_plan else -1
info = ActionDebugInfo(
name=action.name,
action_type=action.action_type.value,
target_resource=action.target_resource.value if action.target_resource else None,
is_valid=is_valid,
cost=cost if cost != float('inf') else -1,
is_in_plan=in_plan,
plan_order=order,
)
action_infos.append(info)
# Sort actions: plan actions first (by order), then valid actions, then invalid
action_infos.sort(key=lambda a: (
0 if a.is_in_plan else 1,
a.plan_order if a.is_in_plan else 999,
0 if a.is_valid else 1,
a.cost if a.cost >= 0 else 9999,
))
# Build plan debug info
plan_info = None
if selected_plan:
plan_info = PlanDebugInfo(
goal_name=selected_plan.goal.name,
actions=[a.name for a in selected_plan.actions],
total_cost=selected_plan.total_cost,
plan_length=len(selected_plan.actions),
)
# Determine selected action and reason
selected_action = None
reason = "No plan found"
if is_night:
selected_action = "Sleep"
reason = "Night time: sleeping"
elif selected_plan and selected_plan.first_action:
selected_action = selected_plan.first_action.name
reason = f"{selected_plan.goal.name}: {selected_action}"
else:
# Fallback to reactive planning
reactive_planner = ReactivePlanner()
best_action = reactive_planner.select_best_action(state, all_goals, all_actions)
if best_action:
selected_action = best_action.name
reason = f"Reactive: {best_action.name}"
# Mark the reactive action in the action list
for action_info in action_infos:
if action_info.name == best_action.name:
action_info.is_in_plan = True
action_info.plan_order = 0
return GOAPDebugInfo(
agent_id=agent.id,
agent_name=agent.name,
world_state=state.to_dict(),
goals=goal_infos,
actions=action_infos,
current_plan=plan_info,
selected_action=selected_action,
decision_reason=reason,
)
def get_all_agents_goap_debug(
agents: list["Agent"],
market: "OrderBook",
step_in_day: int = 1,
day_steps: int = 10,
is_night: bool = False,
) -> list[GOAPDebugInfo]:
"""Get GOAP debug info for all agents."""
return [
get_goap_debug_info(agent, market, step_in_day, day_steps, is_night)
for agent in agents
if agent.is_alive()
]
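# Illustrative sketch (not part of this module): how an API layer might serialise
# this data for the web frontend's GOAP debug panel; `agents` and `market` are
# assumed to come from the running engine.
def goap_debug_payload(agents, market) -> list[dict]:
    return [info.to_dict() for info in get_all_agents_goap_debug(agents, market)]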

185
backend/core/goap/goal.py Normal file
View File

@ -0,0 +1,185 @@
"""Goal definitions for GOAP planning.
Goals represent what an agent wants to achieve. Each goal has:
- A name/type for identification
- A condition that checks if the goal is satisfied
- A priority function that determines how important the goal is
- Optional target state values for the planner
"""
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, Optional
from .world_state import WorldState
class GoalType(Enum):
"""Types of goals agents can pursue."""
# Survival goals - highest priority when needed
SATISFY_THIRST = "satisfy_thirst"
SATISFY_HUNGER = "satisfy_hunger"
MAINTAIN_HEAT = "maintain_heat"
RESTORE_ENERGY = "restore_energy"
# Resource goals - medium priority
STOCK_WATER = "stock_water"
STOCK_FOOD = "stock_food"
STOCK_WOOD = "stock_wood"
GET_CLOTHES = "get_clothes"
# Economic goals - lower priority but persistent
BUILD_WEALTH = "build_wealth"
SELL_EXCESS = "sell_excess"
FIND_DEALS = "find_deals"
TRADER_ARBITRAGE = "trader_arbitrage"
# Night behavior
SLEEP = "sleep"
@dataclass
class Goal:
"""A goal that an agent can pursue.
Goals are the driving force of GOAP. The planner searches for
action sequences that transform the current world state into
one where the goal condition is satisfied.
Attributes:
goal_type: The type of goal (for identification)
name: Human-readable name
is_satisfied: Function that checks if goal is achieved in a state
get_priority: Function that calculates goal priority (higher = more important)
target_state: Optional dict of state values the goal aims for
max_plan_depth: Maximum actions to plan for this goal
"""
goal_type: GoalType
name: str
is_satisfied: Callable[[WorldState], bool]
get_priority: Callable[[WorldState], float]
target_state: dict = field(default_factory=dict)
max_plan_depth: int = 3
def satisfied(self, state: WorldState) -> bool:
"""Check if goal is satisfied in the given state."""
return self.is_satisfied(state)
def priority(self, state: WorldState) -> float:
"""Get the priority of this goal in the given state."""
return self.get_priority(state)
def __repr__(self) -> str:
return f"Goal({self.name})"
def create_survival_goal(
goal_type: GoalType,
name: str,
stat_name: str,
target_pct: float = 0.6,
base_priority: float = 10.0,
) -> Goal:
"""Factory for creating survival-related goals.
Survival goals have high priority when the relevant stat is low.
Priority scales with urgency.
"""
urgency_name = f"{stat_name}_urgency"
pct_name = f"{stat_name}_pct"
def is_satisfied(state: WorldState) -> bool:
return getattr(state, pct_name) >= target_pct
def get_priority(state: WorldState) -> float:
urgency = getattr(state, urgency_name)
pct = getattr(state, pct_name)
if urgency <= 0:
return 0.0 # No need to pursue this goal
# Priority increases with urgency
# Critical urgency (>1.0) gives very high priority
priority = base_priority * urgency
# Extra boost when critical
if pct < state.critical_threshold:
priority *= 2.0
return priority
return Goal(
goal_type=goal_type,
name=name,
is_satisfied=is_satisfied,
get_priority=get_priority,
target_state={pct_name: target_pct},
max_plan_depth=2, # Survival should be quick
)
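# Worked example of the priority curve above (illustrative, not part of this file):
# with base_priority=10 and the WorldState defaults (critical_threshold=0.25,
# low_threshold=0.45), a stat at pct=0.30 gives
#   urgency  = 1 - (0.30 - 0.25) / (0.45 - 0.25) = 0.75,  priority = 10 * 0.75 = 7.5;
# at pct=0.15 (below critical)
#   urgency  = 1 + (0.25 - 0.15) / 0.25 * 2      = 1.8,   and the critical boost
#   doubles it: priority = 10 * 1.8 * 2 = 36.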
def create_resource_stock_goal(
goal_type: GoalType,
name: str,
resource_name: str,
target_count: int,
base_priority: float = 5.0,
) -> Goal:
"""Factory for creating resource stockpiling goals.
Resource goals have moderate priority and aim to maintain reserves.
"""
count_name = f"{resource_name}_count"
def is_satisfied(state: WorldState) -> bool:
return getattr(state, count_name) >= target_count
def get_priority(state: WorldState) -> float:
current = getattr(state, count_name)
if current >= target_count:
return 0.0 # Already have enough
# Priority based on how far from target
deficit = target_count - current
priority = base_priority * (deficit / target_count)
# Lower priority if survival is urgent
max_urgency = max(state.thirst_urgency, state.hunger_urgency, state.heat_urgency)
if max_urgency > 0.5:
priority *= 0.5
# Hoarders prioritize stockpiling more
priority *= (0.8 + state.hoarding_rate * 0.4)
# Evening boost - stock up before night
if state.is_evening:
priority *= 1.5
return priority
return Goal(
goal_type=goal_type,
name=name,
is_satisfied=is_satisfied,
get_priority=get_priority,
target_state={count_name: target_count},
max_plan_depth=3,
)
def create_economic_goal(
goal_type: GoalType,
name: str,
is_satisfied: Callable[[WorldState], bool],
get_priority: Callable[[WorldState], float],
) -> Goal:
"""Factory for creating economic/trading goals."""
return Goal(
goal_type=goal_type,
name=name,
is_satisfied=is_satisfied,
get_priority=get_priority,
max_plan_depth=2,
)

411
backend/core/goap/goals.py Normal file
View File

@ -0,0 +1,411 @@
"""Predefined goals for GOAP agents.
Goals are organized into categories:
- Survival goals: Immediate needs (thirst, hunger, heat, energy)
- Resource goals: Building reserves
- Economic goals: Trading and wealth building
"""
from .goal import Goal, GoalType, create_survival_goal, create_resource_stock_goal, create_economic_goal
from .world_state import WorldState
# =============================================================================
# SURVIVAL GOALS
# =============================================================================
def _create_satisfy_thirst_goal() -> Goal:
"""Satisfy immediate thirst."""
return create_survival_goal(
goal_type=GoalType.SATISFY_THIRST,
name="Satisfy Thirst",
stat_name="thirst",
target_pct=0.5, # Want to get to 50%
base_priority=15.0, # Highest base priority - thirst is most dangerous
)
def _create_satisfy_hunger_goal() -> Goal:
"""Satisfy immediate hunger."""
return create_survival_goal(
goal_type=GoalType.SATISFY_HUNGER,
name="Satisfy Hunger",
stat_name="hunger",
target_pct=0.5,
base_priority=12.0,
)
def _create_maintain_heat_goal() -> Goal:
"""Maintain body heat."""
return create_survival_goal(
goal_type=GoalType.MAINTAIN_HEAT,
name="Maintain Heat",
stat_name="heat",
target_pct=0.5,
base_priority=10.0,
)
def _create_restore_energy_goal() -> Goal:
"""Restore energy when low."""
def is_satisfied(state: WorldState) -> bool:
return state.energy_pct >= 0.4
def get_priority(state: WorldState) -> float:
if state.energy_pct >= 0.4:
return 0.0
# Priority increases as energy decreases
urgency = (0.4 - state.energy_pct) / 0.4
# But not if we have more urgent survival needs
max_vital_urgency = max(state.thirst_urgency, state.hunger_urgency, state.heat_urgency)
if max_vital_urgency > 1.5:
# Critical survival need - don't rest
return 0.0
base_priority = 6.0 * urgency
# Evening boost - want energy for night
if state.is_evening:
base_priority *= 1.5
return base_priority
return Goal(
goal_type=GoalType.RESTORE_ENERGY,
name="Restore Energy",
is_satisfied=is_satisfied,
get_priority=get_priority,
target_state={"energy_pct": 0.6},
max_plan_depth=1, # Just rest
)
SURVIVAL_GOALS = [
_create_satisfy_thirst_goal(),
_create_satisfy_hunger_goal(),
_create_maintain_heat_goal(),
_create_restore_energy_goal(),
]
# =============================================================================
# RESOURCE GOALS
# =============================================================================
def _create_stock_water_goal() -> Goal:
"""Maintain water reserves."""
def is_satisfied(state: WorldState) -> bool:
target = int(2 * (0.5 + state.hoarding_rate))
return state.water_count >= target
def get_priority(state: WorldState) -> float:
target = int(2 * (0.5 + state.hoarding_rate))
if state.water_count >= target:
return 0.0
deficit = target - state.water_count
base_priority = 4.0 * (deficit / max(1, target))
# Lower if urgent survival needs
if max(state.thirst_urgency, state.hunger_urgency) > 1.0:
base_priority *= 0.3
# Evening boost
if state.is_evening:
base_priority *= 2.0
return base_priority
return Goal(
goal_type=GoalType.STOCK_WATER,
name="Stock Water",
is_satisfied=is_satisfied,
get_priority=get_priority,
max_plan_depth=2,
)
def _create_stock_food_goal() -> Goal:
"""Maintain food reserves (meat + berries)."""
def is_satisfied(state: WorldState) -> bool:
target = int(3 * (0.5 + state.hoarding_rate))
return state.food_count >= target
def get_priority(state: WorldState) -> float:
target = int(3 * (0.5 + state.hoarding_rate))
if state.food_count >= target:
return 0.0
deficit = target - state.food_count
base_priority = 4.0 * (deficit / max(1, target))
# Lower if urgent survival needs
if max(state.thirst_urgency, state.hunger_urgency) > 1.0:
base_priority *= 0.3
# Evening boost
if state.is_evening:
base_priority *= 2.0
# Risk-takers may prefer hunting (more food per action)
base_priority *= (0.8 + state.risk_tolerance * 0.4)
return base_priority
return Goal(
goal_type=GoalType.STOCK_FOOD,
name="Stock Food",
is_satisfied=is_satisfied,
get_priority=get_priority,
max_plan_depth=2,
)
def _create_stock_wood_goal() -> Goal:
"""Maintain wood reserves for fires."""
def is_satisfied(state: WorldState) -> bool:
target = int(2 * (0.5 + state.hoarding_rate))
return state.wood_count >= target
def get_priority(state: WorldState) -> float:
target = int(2 * (0.5 + state.hoarding_rate))
if state.wood_count >= target:
return 0.0
deficit = target - state.wood_count
base_priority = 3.0 * (deficit / max(1, target))
# Higher priority if heat is becoming an issue
if state.heat_urgency > 0.5:
base_priority *= 1.5
# Lower if urgent survival needs
if max(state.thirst_urgency, state.hunger_urgency) > 1.0:
base_priority *= 0.3
return base_priority
return Goal(
goal_type=GoalType.STOCK_WOOD,
name="Stock Wood",
is_satisfied=is_satisfied,
get_priority=get_priority,
max_plan_depth=2,
)
def _create_get_clothes_goal() -> Goal:
"""Get clothes for heat protection."""
def is_satisfied(state: WorldState) -> bool:
return state.has_clothes
def get_priority(state: WorldState) -> float:
if state.has_clothes:
return 0.0
# Only pursue if we have hide
if state.hide_count < 1:
return 0.0
base_priority = 2.0
# Higher if heat is an issue
if state.heat_urgency > 0.3:
base_priority *= 1.5
return base_priority
return Goal(
goal_type=GoalType.GET_CLOTHES,
name="Get Clothes",
is_satisfied=is_satisfied,
get_priority=get_priority,
max_plan_depth=1,
)
RESOURCE_GOALS = [
_create_stock_water_goal(),
_create_stock_food_goal(),
_create_stock_wood_goal(),
_create_get_clothes_goal(),
]
# =============================================================================
# ECONOMIC GOALS
# =============================================================================
def _create_build_wealth_goal() -> Goal:
"""Accumulate money through trading."""
def is_satisfied(state: WorldState) -> bool:
return state.is_wealthy
def get_priority(state: WorldState) -> float:
if state.is_wealthy:
return 0.0
# Base priority scaled by wealth desire
base_priority = 2.0 * state.wealth_desire
# Only when survival is stable
max_urgency = max(state.thirst_urgency, state.hunger_urgency, state.heat_urgency)
if max_urgency > 0.5:
return 0.0
# Traders prioritize wealth more
if state.is_trader:
base_priority *= 2.0
return base_priority
return create_economic_goal(
goal_type=GoalType.BUILD_WEALTH,
name="Build Wealth",
is_satisfied=is_satisfied,
get_priority=get_priority,
)
def _create_sell_excess_goal() -> Goal:
"""Sell excess resources on the market."""
def is_satisfied(state: WorldState) -> bool:
# Satisfied if inventory is not getting full
return state.inventory_space > 3
def get_priority(state: WorldState) -> float:
if state.inventory_space > 5:
return 0.0 # Plenty of space
# Priority increases as inventory fills
fullness = 1.0 - (state.inventory_space / 12.0)
base_priority = 3.0 * fullness
# Low hoarders sell more readily
base_priority *= (1.5 - state.hoarding_rate)
# Only when survival is stable
max_urgency = max(state.thirst_urgency, state.hunger_urgency)
if max_urgency > 0.5:
base_priority *= 0.5
return base_priority
return create_economic_goal(
goal_type=GoalType.SELL_EXCESS,
name="Sell Excess",
is_satisfied=is_satisfied,
get_priority=get_priority,
)
def _create_find_deals_goal() -> Goal:
"""Find good deals on the market."""
def is_satisfied(state: WorldState) -> bool:
# This goal is never fully "satisfied" - it's opportunistic
return False
def get_priority(state: WorldState) -> float:
# Only pursue if we have money and market access
if state.money < 10:
return 0.0
# Check if there are deals available
has_deals = state.can_buy_water or state.can_buy_food or state.can_buy_wood
if not has_deals:
return 0.0
# Base priority from market affinity
base_priority = 2.0 * state.market_affinity
# Only when survival is stable
max_urgency = max(state.thirst_urgency, state.hunger_urgency)
if max_urgency > 0.5:
return 0.0
# Need inventory space
if state.inventory_space < 2:
return 0.0
return base_priority
return create_economic_goal(
goal_type=GoalType.FIND_DEALS,
name="Find Deals",
is_satisfied=is_satisfied,
get_priority=get_priority,
)
def _create_trader_arbitrage_goal() -> Goal:
"""Trader-specific arbitrage goal (buy low, sell high)."""
def is_satisfied(state: WorldState) -> bool:
return False # Always looking for opportunities
def get_priority(state: WorldState) -> float:
# Only for traders
if not state.is_trader:
return 0.0
# Need capital to trade
if state.money < 20:
return 1.0 # Low priority - need to sell something first
# Base priority for traders
base_priority = 5.0
# Only when survival is stable
max_urgency = max(state.thirst_urgency, state.hunger_urgency, state.heat_urgency)
if max_urgency > 0.3:
base_priority *= 0.5
return base_priority
return create_economic_goal(
goal_type=GoalType.TRADER_ARBITRAGE,
name="Trader Arbitrage",
is_satisfied=is_satisfied,
get_priority=get_priority,
)
def _create_sleep_goal() -> Goal:
"""Sleep at night."""
def is_satisfied(state: WorldState) -> bool:
return not state.is_night # Satisfied when it's not night
def get_priority(state: WorldState) -> float:
if not state.is_night:
return 0.0
# Highest priority at night
return 100.0
return Goal(
goal_type=GoalType.SLEEP,
name="Sleep",
is_satisfied=is_satisfied,
get_priority=get_priority,
max_plan_depth=1,
)
ECONOMIC_GOALS = [
_create_build_wealth_goal(),
_create_sell_excess_goal(),
_create_find_deals_goal(),
_create_trader_arbitrage_goal(),
_create_sleep_goal(),
]
def get_all_goals() -> list[Goal]:
"""Get all available goals."""
return SURVIVAL_GOALS + RESOURCE_GOALS + ECONOMIC_GOALS
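# Illustrative sketch (not part of this module): ranking the goal set for a given
# world state, as the planner and the debug view both do. `state` is assumed to be
# a WorldState built via create_world_state().
def rank_goals(state: WorldState) -> list[tuple[str, float]]:
    ranked = [(g.name, g.priority(state)) for g in get_all_goals()]
    return sorted(ranked, key=lambda item: item[1], reverse=True)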

View File

@ -0,0 +1,411 @@
"""GOAP-based AI decision system for agents.
This module provides the main interface for GOAP-based decision making
using Goal-Oriented Action Planning.
"""
from dataclasses import dataclass, field
from typing import Optional, TYPE_CHECKING
from backend.domain.action import ActionType
from backend.domain.resources import ResourceType
from backend.domain.personality import get_trade_price_modifier
from .world_state import WorldState, create_world_state
from .goal import Goal
from .action import GOAPAction
from .planner import GOAPPlanner, ReactivePlanner, Plan
from .goals import get_all_goals
from .actions import get_all_actions
if TYPE_CHECKING:
from backend.domain.agent import Agent
from backend.core.market import OrderBook
@dataclass
class TradeItem:
"""A single item to buy/sell in a trade."""
order_id: str
resource_type: ResourceType
quantity: int
price_per_unit: int
@dataclass
class AIDecision:
"""A decision made by the AI for an agent."""
action: ActionType
target_resource: Optional[ResourceType] = None
order_id: Optional[str] = None
quantity: int = 1
price: int = 0
reason: str = ""
trade_items: list[TradeItem] = field(default_factory=list)
adjust_order_id: Optional[str] = None
new_price: Optional[int] = None
# GOAP-specific fields
goal_name: str = ""
plan_length: int = 0
def to_dict(self) -> dict:
return {
"action": self.action.value,
"target_resource": self.target_resource.value if self.target_resource else None,
"order_id": self.order_id,
"quantity": self.quantity,
"price": self.price,
"reason": self.reason,
"trade_items": [
{
"order_id": t.order_id,
"resource_type": t.resource_type.value,
"quantity": t.quantity,
"price_per_unit": t.price_per_unit,
}
for t in self.trade_items
],
"adjust_order_id": self.adjust_order_id,
"new_price": self.new_price,
"goal_name": self.goal_name,
"plan_length": self.plan_length,
}
class GOAPAgentAI:
"""GOAP-based AI decision maker for agents.
This uses goal-oriented action planning to select actions:
1. Build world state from agent and market
2. Evaluate all goals and their priorities
3. Use planner to find action sequence for best goal
4. Return the first action as the decision
Falls back to reactive single-action selection when the planner finds no plan.
"""
def __init__(
self,
agent: "Agent",
market: "OrderBook",
step_in_day: int = 1,
day_steps: int = 10,
current_turn: int = 0,
is_night: bool = False,
):
self.agent = agent
self.market = market
self.step_in_day = step_in_day
self.day_steps = day_steps
self.current_turn = current_turn
self.is_night = is_night
# Build world state
self.state = create_world_state(
agent=agent,
market=market,
step_in_day=step_in_day,
day_steps=day_steps,
is_night=is_night,
)
# Initialize planners
self.planner = GOAPPlanner(max_iterations=50)
self.reactive_planner = ReactivePlanner()
# Get available goals and actions
self.goals = get_all_goals()
self.actions = get_all_actions()
# Personality shortcuts
self.p = agent.personality
self.skills = agent.skills
def decide(self) -> AIDecision:
"""Make a decision using GOAP planning.
Decision flow:
1. Force sleep if night
2. Try to find a plan for the highest priority goal
3. If no plan found, use reactive selection
4. Convert GOAP action to AIDecision with proper parameters
"""
# Night time - mandatory sleep
if self.is_night:
return AIDecision(
action=ActionType.SLEEP,
reason="Night time: sleeping",
goal_name="Sleep",
)
# Try GOAP planning
plan = self.planner.plan_for_goals(
initial_state=self.state,
goals=self.goals,
available_actions=self.actions,
)
if plan and not plan.is_empty:
# We have a plan - execute first action
goap_action = plan.first_action
return self._convert_to_decision(
goap_action=goap_action,
goal=plan.goal,
plan=plan,
)
# Fallback to reactive selection
best_action = self.reactive_planner.select_best_action(
state=self.state,
goals=self.goals,
available_actions=self.actions,
)
if best_action:
return self._convert_to_decision(
goap_action=best_action,
goal=None,
plan=None,
)
# Ultimate fallback - rest
return AIDecision(
action=ActionType.REST,
reason="No valid action found, resting",
)
def _convert_to_decision(
self,
goap_action: GOAPAction,
goal: Optional[Goal],
plan: Optional[Plan],
) -> AIDecision:
"""Convert a GOAP action to an AIDecision with proper parameters.
This handles the translation from abstract GOAP actions to
concrete decisions with order IDs, prices, etc.
"""
action_type = goap_action.action_type
target_resource = goap_action.target_resource
# Build reason string
if goal:
reason = f"{goal.name}: {goap_action.name}"
else:
reason = f"Reactive: {goap_action.name}"
# Handle different action types
if action_type == ActionType.CONSUME:
return AIDecision(
action=action_type,
target_resource=target_resource,
reason=reason,
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
)
elif action_type == ActionType.TRADE:
return self._create_trade_decision(goap_action, goal, plan, reason)
elif action_type in [ActionType.HUNT, ActionType.GATHER, ActionType.CHOP_WOOD,
ActionType.GET_WATER, ActionType.WEAVE]:
return AIDecision(
action=action_type,
target_resource=target_resource,
reason=reason,
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
)
elif action_type == ActionType.BUILD_FIRE:
return AIDecision(
action=action_type,
target_resource=ResourceType.WOOD,
reason=reason,
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
)
elif action_type in [ActionType.REST, ActionType.SLEEP]:
return AIDecision(
action=action_type,
reason=reason,
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
)
# Default case
return AIDecision(
action=action_type,
target_resource=target_resource,
reason=reason,
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
)
def _create_trade_decision(
self,
goap_action: GOAPAction,
goal: Optional[Goal],
plan: Optional[Plan],
reason: str,
) -> AIDecision:
"""Create a trade decision with actual market parameters.
This translates abstract "Buy X" or "Sell X" actions into
concrete decisions with order IDs, prices, and quantities.
"""
target_resource = goap_action.target_resource
action_name = goap_action.name.lower()
if "buy" in action_name:
# Find the best order to buy from
order = self.market.get_cheapest_order(target_resource)
if order and order.seller_id != self.agent.id:
# Calculate quantity to buy
can_afford = self.agent.money // max(1, order.price_per_unit)
space = self.agent.inventory_space()
quantity = min(2, can_afford, space, order.quantity)
if quantity > 0:
return AIDecision(
action=ActionType.TRADE,
target_resource=target_resource,
order_id=order.id,
quantity=quantity,
price=order.price_per_unit,
reason=f"{reason} @ {order.price_per_unit}c",
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
)
# Can't buy - fallback to gathering
return self._create_gather_fallback(target_resource, reason, goal, plan)
elif "sell" in action_name:
# Create a sell order
quantity_available = self.agent.get_resource_count(target_resource)
# Calculate minimum to keep
min_keep = self._get_min_keep(target_resource)
quantity_to_sell = min(3, quantity_available - min_keep)
if quantity_to_sell > 0:
price = self._calculate_sell_price(target_resource)
return AIDecision(
action=ActionType.TRADE,
target_resource=target_resource,
quantity=quantity_to_sell,
price=price,
reason=f"{reason} @ {price}c",
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
)
# Invalid trade action - rest
return AIDecision(
action=ActionType.REST,
reason="Trade not possible",
)
def _create_gather_fallback(
self,
resource_type: ResourceType,
reason: str,
goal: Optional[Goal],
plan: Optional[Plan],
) -> AIDecision:
"""Create a gather action as fallback when buying isn't possible."""
# Map resource to gather action
action_map = {
ResourceType.WATER: ActionType.GET_WATER,
ResourceType.BERRIES: ActionType.GATHER,
ResourceType.MEAT: ActionType.HUNT,
ResourceType.WOOD: ActionType.CHOP_WOOD,
}
action = action_map.get(resource_type, ActionType.GATHER)
return AIDecision(
action=action,
target_resource=resource_type,
reason=f"{reason} (gathering instead)",
goal_name=goal.name if goal else "",
plan_length=len(plan.actions) if plan else 0,
)
def _get_min_keep(self, resource_type: ResourceType) -> int:
"""Get minimum quantity to keep for survival."""
# Adjusted by hoarding rate
hoarding_mult = 0.5 + self.p.hoarding_rate
base_min = {
ResourceType.WATER: 2,
ResourceType.MEAT: 1,
ResourceType.BERRIES: 2,
ResourceType.WOOD: 1,
ResourceType.HIDE: 0,
}
return int(base_min.get(resource_type, 1) * hoarding_mult)
def _calculate_sell_price(self, resource_type: ResourceType) -> int:
"""Calculate sell price based on fair value and market conditions."""
# Get energy cost to produce
from backend.core.ai import get_energy_cost
from backend.config import get_config
config = get_config()
economy = getattr(config, 'economy', None)
energy_to_money_ratio = getattr(economy, 'energy_to_money_ratio', 150) if economy else 150
min_price = getattr(economy, 'min_price', 100) if economy else 100
energy_cost = get_energy_cost(resource_type)
fair_value = max(min_price, int(energy_cost * energy_to_money_ratio))
# Apply trading skill
sell_modifier = get_trade_price_modifier(self.skills.trading, is_buying=False)
# Get market signal
signal = self.market.get_market_signal(resource_type)
if signal == "sell": # Scarcity
price = int(fair_value * 1.3 * sell_modifier)
elif signal == "hold":
price = int(fair_value * sell_modifier)
else: # Surplus
cheapest = self.market.get_cheapest_order(resource_type)
if cheapest and cheapest.seller_id != self.agent.id:
price = max(min_price, cheapest.price_per_unit - 1)
else:
price = int(fair_value * 0.8 * sell_modifier)
return max(min_price, price)
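# Worked example of the pricing above (illustrative; the energy cost value is an
# assumption, not taken from config): with min_price=100, energy_to_money_ratio=150,
# an assumed energy_cost of 1.0 and a neutral sell_modifier of 1.0:
#   fair_value = max(100, int(1.0 * 150)) = 150
#   "sell" signal (scarcity) -> int(150 * 1.3) = 195
#   "hold" signal            -> 150
#   surplus, cheapest competing order at 140 -> max(100, 140 - 1) = 139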
def get_goap_decision(
agent: "Agent",
market: "OrderBook",
step_in_day: int = 1,
day_steps: int = 10,
current_turn: int = 0,
is_night: bool = False,
) -> AIDecision:
"""Convenience function to get a GOAP-based AI decision for an agent.
This is the main entry point for the GOAP AI system.
"""
ai = GOAPAgentAI(
agent=agent,
market=market,
step_in_day=step_in_day,
day_steps=day_steps,
current_turn=current_turn,
is_night=is_night,
)
return ai.decide()
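# Illustrative sketch (not part of this module): how the engine's turn loop might
# call into the GOAP AI; the engine attribute and method names used here are
# assumptions, not taken from the engine code.
#   for agent in engine.world.living_agents():
#       decision = get_goap_decision(
#           agent=agent,
#           market=engine.market,
#           step_in_day=engine.step_in_day,
#           day_steps=engine.day_steps,
#           is_night=engine.is_night,
#       )
#       engine.execute(agent, decision)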

View File

@ -0,0 +1,335 @@
"""GOAP Planner using A* search.
The planner finds optimal action sequences to achieve goals.
It uses A* search with the goal condition as the target.
"""
import heapq
from dataclasses import dataclass, field
from typing import Optional
from .world_state import WorldState
from .goal import Goal
from .action import GOAPAction
@dataclass(order=True)
class PlanNode:
"""A node in the planning search tree."""
f_cost: float # Total cost (g + h)
g_cost: float = field(compare=False) # Cost so far
state: WorldState = field(compare=False)
action: Optional[GOAPAction] = field(compare=False, default=None)
parent: Optional["PlanNode"] = field(compare=False, default=None)
depth: int = field(compare=False, default=0)
@dataclass
class Plan:
"""A plan is a sequence of actions to achieve a goal."""
goal: Goal
actions: list[GOAPAction]
total_cost: float
expected_final_state: WorldState
@property
def first_action(self) -> Optional[GOAPAction]:
"""Get the first action to execute."""
return self.actions[0] if self.actions else None
@property
def is_empty(self) -> bool:
return len(self.actions) == 0
def __repr__(self) -> str:
action_names = " -> ".join(a.name for a in self.actions)
return f"Plan({self.goal.name}: {action_names} [cost={self.total_cost:.1f}])"
class GOAPPlanner:
"""A* planner for GOAP.
Finds the lowest-cost sequence of actions that transforms
the current world state into one where the goal is satisfied.
"""
def __init__(self, max_iterations: int = 100):
self.max_iterations = max_iterations
def plan(
self,
initial_state: WorldState,
goal: Goal,
available_actions: list[GOAPAction],
) -> Optional[Plan]:
"""Find an action sequence to achieve the goal.
Uses A* search where:
- g(n) = accumulated action costs
- h(n) = heuristic estimate of remaining distance to the goal's target state
  (0 when the goal defines no target, which makes the search behave like Dijkstra's)
Returns None if no plan found within iteration limit.
"""
# Check if goal is already satisfied
if goal.satisfied(initial_state):
return Plan(
goal=goal,
actions=[],
total_cost=0.0,
expected_final_state=initial_state,
)
# Priority queue for A*
open_set: list[PlanNode] = []
start_node = PlanNode(
f_cost=0.0,
g_cost=0.0,
state=initial_state,
action=None,
parent=None,
depth=0,
)
heapq.heappush(open_set, start_node)
# Track visited states to avoid cycles
# We use a simplified state hash for efficiency
visited: set[tuple] = set()
iterations = 0
while open_set and iterations < self.max_iterations:
iterations += 1
# Get node with lowest f_cost
current = heapq.heappop(open_set)
# Check depth limit
if current.depth >= goal.max_plan_depth:
continue
# Create state hash for cycle detection
state_hash = self._hash_state(current.state)
if state_hash in visited:
continue
visited.add(state_hash)
# Try each action
for action in available_actions:
# Check if action is valid in current state
if not action.is_valid(current.state):
continue
# Apply action to get new state
new_state = action.apply(current.state)
# Calculate costs
action_cost = action.get_cost(current.state)
g_cost = current.g_cost + action_cost
h_cost = self._heuristic(new_state, goal)
f_cost = g_cost + h_cost
# Create new node
new_node = PlanNode(
f_cost=f_cost,
g_cost=g_cost,
state=new_state,
action=action,
parent=current,
depth=current.depth + 1,
)
# Check if goal is satisfied
if goal.satisfied(new_state):
# Reconstruct and return plan
return self._reconstruct_plan(new_node, goal)
# Add to open set
heapq.heappush(open_set, new_node)
# No plan found
return None
def plan_for_goals(
self,
initial_state: WorldState,
goals: list[Goal],
available_actions: list[GOAPAction],
) -> Optional[Plan]:
"""Find the best plan among multiple goals.
Selects the highest-priority goal that has a valid plan,
considering both goal priority and plan cost.
"""
# Sort goals by priority (highest first)
sorted_goals = sorted(goals, key=lambda g: g.priority(initial_state), reverse=True)
best_plan: Optional[Plan] = None
best_score = float('-inf')
for goal in sorted_goals:
priority = goal.priority(initial_state)
# Skip low-priority goals if we already have a good plan
if priority <= 0:
continue
if best_plan and priority < best_score * 0.5:
# Remaining goals are much lower priority than the current best - stop searching
break
plan = self.plan(initial_state, goal, available_actions)
if plan:
# Score = priority / (cost + 1)
# Higher priority and lower cost = better
score = priority / (plan.total_cost + 1.0)
if score > best_score:
best_score = score
best_plan = plan
return best_plan
def _hash_state(self, state: WorldState) -> tuple:
"""Create a hashable representation of key state values.
We don't hash everything - just the values that matter for planning.
"""
return (
round(state.thirst_pct, 1),
round(state.hunger_pct, 1),
round(state.heat_pct, 1),
round(state.energy_pct, 1),
state.water_count,
state.food_count,
state.wood_count,
state.money // 10, # Bucket money
)
def _heuristic(self, state: WorldState, goal: Goal) -> float:
"""Estimate cost to reach goal from state.
For now, we use a simple heuristic based on the distance
from current state values to goal target values.
"""
if not goal.target_state:
return 0.0
h = 0.0
for key, target in goal.target_state.items():
if hasattr(state, key):
current = getattr(state, key)
if isinstance(current, (int, float)) and isinstance(target, (int, float)):
diff = abs(target - current)
h += diff
return h
def _reconstruct_plan(self, final_node: PlanNode, goal: Goal) -> Plan:
"""Reconstruct the action sequence from the final node."""
actions = []
node = final_node
while node.parent is not None:
if node.action:
actions.append(node.action)
node = node.parent
actions.reverse()
return Plan(
goal=goal,
actions=actions,
total_cost=final_node.g_cost,
expected_final_state=final_node.state,
)
class ReactivePlanner:
"""A simpler reactive planner for immediate needs.
Sometimes we don't need full planning - we just need to
pick the best immediate action. This planner evaluates
single actions against goals.
"""
def select_best_action(
self,
state: WorldState,
goals: list[Goal],
available_actions: list[GOAPAction],
) -> Optional[GOAPAction]:
"""Select the single best action to take right now.
Evaluates each valid action and scores it based on how well
it progresses toward high-priority goals.
"""
best_action: Optional[GOAPAction] = None
best_score = float('-inf')
for action in available_actions:
if not action.is_valid(state):
continue
score = self._score_action(state, action, goals)
if score > best_score:
best_score = score
best_action = action
return best_action
def _score_action(
self,
state: WorldState,
action: GOAPAction,
goals: list[Goal],
) -> float:
"""Score an action based on its contribution to goals."""
# Apply action to get expected new state
new_state = action.apply(state)
action_cost = action.get_cost(state)
total_score = 0.0
for goal in goals:
priority = goal.priority(state)
if priority <= 0:
continue
# Check if this action helps with the goal
was_satisfied = goal.satisfied(state)
now_satisfied = goal.satisfied(new_state)
if now_satisfied and not was_satisfied:
# Action satisfies the goal - big bonus!
total_score += priority * 10.0
elif not was_satisfied:
# Check if we made progress
# This is a simplified check based on urgencies
old_urgency = self._get_goal_urgency(goal, state)
new_urgency = self._get_goal_urgency(goal, new_state)
if new_urgency < old_urgency:
improvement = old_urgency - new_urgency
total_score += priority * improvement * 5.0
# Subtract cost
total_score -= action_cost
return total_score
def _get_goal_urgency(self, goal: Goal, state: WorldState) -> float:
"""Get the urgency related to a goal."""
# Map goal types to state urgencies
from .goal import GoalType
urgency_map = {
GoalType.SATISFY_THIRST: state.thirst_urgency,
GoalType.SATISFY_HUNGER: state.hunger_urgency,
GoalType.MAINTAIN_HEAT: state.heat_urgency,
GoalType.RESTORE_ENERGY: state.energy_urgency,
}
return urgency_map.get(goal.goal_type, 0.0)
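# Worked example of plan_for_goals scoring (illustrative, not part of this file):
# score = priority / (total_cost + 1), so a plan for a priority-12 goal costing 2.0
# scores 12 / 3 = 4.0, while a plan for a priority-4 goal costing 1.0 scores
# 4 / 2 = 2.0 - the higher-priority plan wins even though it costs more.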

View File

@ -0,0 +1,319 @@
"""World State representation for GOAP planning.
The WorldState is a symbolic representation of the agent's current situation,
used by the planner to reason about actions and goals.
"""
from dataclasses import dataclass, field
from typing import TYPE_CHECKING, Optional
if TYPE_CHECKING:
from backend.domain.agent import Agent
from backend.core.market import OrderBook
@dataclass
class WorldState:
"""Symbolic representation of the world from an agent's perspective.
This captures all relevant state needed for GOAP planning:
- Agent vital stats (as percentages 0-1)
- Resource counts in inventory
- Market availability
- Economic state
- Time of day
The state uses normalized values (0-1) for stats to make
threshold comparisons easy and consistent.
"""
# Vital stats as percentages (0.0 to 1.0)
thirst_pct: float = 1.0
hunger_pct: float = 1.0
heat_pct: float = 1.0
energy_pct: float = 1.0
# Resource counts in inventory
water_count: int = 0
food_count: int = 0 # meat + berries
meat_count: int = 0
berries_count: int = 0
wood_count: int = 0
hide_count: int = 0
# Inventory state
has_clothes: bool = False
inventory_space: int = 0
inventory_full: bool = False
# Economic state
money: int = 0
is_wealthy: bool = False # Has comfortable money reserves
# Market availability (can we buy these?)
can_buy_water: bool = False
can_buy_food: bool = False
can_buy_meat: bool = False
can_buy_berries: bool = False
can_buy_wood: bool = False
water_market_price: int = 0
food_market_price: int = 0 # Cheapest of meat/berries
wood_market_price: int = 0
# Time state
is_night: bool = False
is_evening: bool = False # Near end of day
step_in_day: int = 0
day_steps: int = 10
# Agent personality shortcuts (affect goal priorities)
wealth_desire: float = 0.5
hoarding_rate: float = 0.5
risk_tolerance: float = 0.5
market_affinity: float = 0.5
is_trader: bool = False
# Profession preferences (0.5-1.5 range, higher = more preferred)
gather_preference: float = 1.0
hunt_preference: float = 1.0
trade_preference: float = 1.0
# Skill levels (0.0-1.0, higher = more skilled)
hunting_skill: float = 0.0
gathering_skill: float = 0.0
trading_skill: float = 0.0
# Critical thresholds (from config)
critical_threshold: float = 0.25
low_threshold: float = 0.45
# Calculated urgencies (how urgent is each need?)
thirst_urgency: float = 0.0
hunger_urgency: float = 0.0
heat_urgency: float = 0.0
energy_urgency: float = 0.0
def __post_init__(self):
"""Calculate urgencies after initialization."""
self._calculate_urgencies()
def _calculate_urgencies(self):
"""Calculate urgency values for each vital stat.
Urgency is 0 when stat is full, and increases as stat decreases.
Urgency > 1.0 when in critical range.
"""
# Urgency increases as stat decreases
# 0.0 = no urgency, 1.0 = needs attention, 2.0+ = critical
def calc_urgency(pct: float, critical: float, low: float) -> float:
if pct >= low:
return 0.0
elif pct >= critical:
# Linear increase from 0 to 1 as we go from low to critical
return 1.0 - (pct - critical) / (low - critical)
else:
# Steeper linear increase below critical (reaches 3.0 as the stat hits zero)
return 1.0 + (critical - pct) / critical * 2.0
self.thirst_urgency = calc_urgency(self.thirst_pct, self.critical_threshold, self.low_threshold)
self.hunger_urgency = calc_urgency(self.hunger_pct, self.critical_threshold, self.low_threshold)
self.heat_urgency = calc_urgency(self.heat_pct, self.critical_threshold, self.low_threshold)
# Energy urgency is different - we care about absolute level for work
if self.energy_pct < 0.25:
self.energy_urgency = 2.0
elif self.energy_pct < 0.40:
self.energy_urgency = 1.0
else:
self.energy_urgency = 0.0
def copy(self) -> "WorldState":
"""Create a copy of this world state."""
return WorldState(
thirst_pct=self.thirst_pct,
hunger_pct=self.hunger_pct,
heat_pct=self.heat_pct,
energy_pct=self.energy_pct,
water_count=self.water_count,
food_count=self.food_count,
meat_count=self.meat_count,
berries_count=self.berries_count,
wood_count=self.wood_count,
hide_count=self.hide_count,
has_clothes=self.has_clothes,
inventory_space=self.inventory_space,
inventory_full=self.inventory_full,
money=self.money,
is_wealthy=self.is_wealthy,
can_buy_water=self.can_buy_water,
can_buy_food=self.can_buy_food,
can_buy_meat=self.can_buy_meat,
can_buy_berries=self.can_buy_berries,
can_buy_wood=self.can_buy_wood,
water_market_price=self.water_market_price,
food_market_price=self.food_market_price,
wood_market_price=self.wood_market_price,
is_night=self.is_night,
is_evening=self.is_evening,
step_in_day=self.step_in_day,
day_steps=self.day_steps,
wealth_desire=self.wealth_desire,
hoarding_rate=self.hoarding_rate,
risk_tolerance=self.risk_tolerance,
market_affinity=self.market_affinity,
is_trader=self.is_trader,
gather_preference=self.gather_preference,
hunt_preference=self.hunt_preference,
trade_preference=self.trade_preference,
hunting_skill=self.hunting_skill,
gathering_skill=self.gathering_skill,
trading_skill=self.trading_skill,
critical_threshold=self.critical_threshold,
low_threshold=self.low_threshold,
)
def to_dict(self) -> dict:
"""Convert to dictionary for debugging/logging."""
return {
"vitals": {
"thirst": round(self.thirst_pct, 2),
"hunger": round(self.hunger_pct, 2),
"heat": round(self.heat_pct, 2),
"energy": round(self.energy_pct, 2),
},
"urgencies": {
"thirst": round(self.thirst_urgency, 2),
"hunger": round(self.hunger_urgency, 2),
"heat": round(self.heat_urgency, 2),
"energy": round(self.energy_urgency, 2),
},
"inventory": {
"water": self.water_count,
"meat": self.meat_count,
"berries": self.berries_count,
"wood": self.wood_count,
"hide": self.hide_count,
"space": self.inventory_space,
},
"economy": {
"money": self.money,
"is_wealthy": self.is_wealthy,
},
"market": {
"can_buy_water": self.can_buy_water,
"can_buy_food": self.can_buy_food,
"can_buy_wood": self.can_buy_wood,
},
"time": {
"is_night": self.is_night,
"is_evening": self.is_evening,
"step": self.step_in_day,
},
}
def create_world_state(
agent: "Agent",
market: "OrderBook",
step_in_day: int = 1,
day_steps: int = 10,
is_night: bool = False,
) -> WorldState:
"""Create a WorldState from an agent and market.
This is the main factory function for creating world states.
It extracts all relevant information from the agent and market.
"""
from backend.domain.resources import ResourceType
from backend.config import get_config
config = get_config()
agent_config = config.agent_stats
economy_config = getattr(config, 'economy', None)
stats = agent.stats
# Calculate stat percentages
thirst_pct = stats.thirst / stats.MAX_THIRST
hunger_pct = stats.hunger / stats.MAX_HUNGER
heat_pct = stats.heat / stats.MAX_HEAT
energy_pct = stats.energy / stats.MAX_ENERGY
# Get resource counts
water_count = agent.get_resource_count(ResourceType.WATER)
meat_count = agent.get_resource_count(ResourceType.MEAT)
berries_count = agent.get_resource_count(ResourceType.BERRIES)
wood_count = agent.get_resource_count(ResourceType.WOOD)
hide_count = agent.get_resource_count(ResourceType.HIDE)
food_count = meat_count + berries_count
# Check market availability
def get_market_info(resource_type: ResourceType) -> tuple[bool, int]:
"""Get market availability and price for a resource."""
order = market.get_cheapest_order(resource_type)
if order and order.seller_id != agent.id and agent.money >= order.price_per_unit:
return True, order.price_per_unit
return False, 0
can_buy_water, water_price = get_market_info(ResourceType.WATER)
can_buy_meat, meat_price = get_market_info(ResourceType.MEAT)
can_buy_berries, berries_price = get_market_info(ResourceType.BERRIES)
can_buy_wood, wood_price = get_market_info(ResourceType.WOOD)
# Can buy food if we can buy either meat or berries
can_buy_food = can_buy_meat or can_buy_berries
food_price = min(
meat_price if can_buy_meat else float('inf'),
berries_price if can_buy_berries else float('inf')
)
food_price = food_price if food_price != float('inf') else 0
# Wealth calculation
min_wealth_target = getattr(economy_config, 'min_wealth_target', 50) if economy_config else 50
wealth_target = int(min_wealth_target * (0.5 + agent.personality.wealth_desire))
is_wealthy = agent.money >= wealth_target
# Trader check
is_trader = agent.personality.trade_preference > 1.3 and agent.personality.market_affinity > 0.5
# Evening check (last 2 steps before night)
is_evening = step_in_day >= day_steps - 2
return WorldState(
thirst_pct=thirst_pct,
hunger_pct=hunger_pct,
heat_pct=heat_pct,
energy_pct=energy_pct,
water_count=water_count,
food_count=food_count,
meat_count=meat_count,
berries_count=berries_count,
wood_count=wood_count,
hide_count=hide_count,
has_clothes=agent.has_clothes(),
inventory_space=agent.inventory_space(),
inventory_full=agent.inventory_full(),
money=agent.money,
is_wealthy=is_wealthy,
can_buy_water=can_buy_water,
can_buy_food=can_buy_food,
can_buy_meat=can_buy_meat,
can_buy_berries=can_buy_berries,
can_buy_wood=can_buy_wood,
water_market_price=water_price,
food_market_price=int(food_price),
wood_market_price=wood_price,
is_night=is_night,
is_evening=is_evening,
step_in_day=step_in_day,
day_steps=day_steps,
wealth_desire=agent.personality.wealth_desire,
hoarding_rate=agent.personality.hoarding_rate,
risk_tolerance=agent.personality.risk_tolerance,
market_affinity=agent.personality.market_affinity,
is_trader=is_trader,
gather_preference=agent.personality.gather_preference,
hunt_preference=agent.personality.hunt_preference,
trade_preference=agent.personality.trade_preference,
hunting_skill=agent.skills.hunting,
gathering_skill=agent.skills.gathering,
trading_skill=agent.skills.trading,
critical_threshold=agent_config.critical_threshold,
low_threshold=0.45, # Could also be in config
)
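# Illustrative sketch (not part of this module): building a state near the end of
# the day and inspecting the derived fields; `agent` and `market` are assumed to
# come from the running engine, and the urgency value shown is just an example.
#   state = create_world_state(agent, market, step_in_day=8, day_steps=10)
#   state.is_evening                  # True - within the last two day steps
#   state.to_dict()["urgencies"]      # e.g. {"thirst": 0.75, ...} when thirst_pct is 0.30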

View File

@ -1,7 +1,17 @@
"""Simulation logger for detailed step-by-step logging.""" """Simulation logger for detailed step-by-step logging.
Performance-optimized with non-blocking I/O:
- Logging can be disabled or reduced via config
- File writes happen in a background thread (producer-consumer pattern)
- Agent lookups use O(1) dict instead of O(n) list search
- No in-memory accumulation of all entries
"""
import json import json
import logging import logging
import threading
import queue
import atexit
from dataclasses import dataclass, field, asdict from dataclasses import dataclass, field, asdict
from datetime import datetime from datetime import datetime
from pathlib import Path from pathlib import Path
@ -57,62 +67,250 @@ class TurnLogEntry:
} }
class AsyncLogWriter:
"""Background thread that handles file I/O asynchronously.
Uses a producer-consumer pattern to decouple log generation
from file writing, preventing I/O from blocking the simulation.
"""
def __init__(self, max_queue_size: int = 1000):
self._queue: queue.Queue = queue.Queue(maxsize=max_queue_size)
self._stop_event = threading.Event()
self._thread: Optional[threading.Thread] = None
self._files: dict[str, TextIO] = {}
self._lock = threading.Lock()
def start(self) -> None:
"""Start the background writer thread."""
if self._thread is not None and self._thread.is_alive():
return
self._stop_event.clear()
self._thread = threading.Thread(target=self._writer_loop, daemon=True)
self._thread.start()
def stop(self, timeout: float = 2.0) -> None:
"""Stop the background writer thread and flush remaining items."""
self._stop_event.set()
if self._thread is not None:
self._thread.join(timeout=timeout)
self._thread = None
# Process any remaining items in the queue
self._drain_queue()
# Close all files
with self._lock:
for f in self._files.values():
try:
f.close()
except Exception:
pass
self._files.clear()
def _writer_loop(self) -> None:
"""Main loop for the background writer thread."""
while not self._stop_event.is_set():
try:
# Wait for items with timeout to allow stop checks
item = self._queue.get(timeout=0.1)
self._process_item(item)
self._queue.task_done()
except queue.Empty:
continue
except Exception as e:
# Log errors but don't crash the thread
logging.getLogger("simulation").warning(f"Async log writer error: {e}")
def _process_item(self, item: dict) -> None:
"""Process a single log item."""
action = item.get("action")
if action == "open":
self._open_file(item["path"], item["file_id"])
elif action == "write":
self._write_to_file(item["file_id"], item["data"])
elif action == "flush":
self._flush_file(item.get("file_id"))
elif action == "close":
self._close_file(item["file_id"])
def _open_file(self, path: str, file_id: str) -> None:
"""Open a file for writing."""
with self._lock:
if file_id not in self._files:
self._files[file_id] = open(path, "w", encoding="utf-8")
def _write_to_file(self, file_id: str, data: str) -> None:
"""Write data to a file."""
with self._lock:
f = self._files.get(file_id)
if f:
f.write(data)
def _flush_file(self, file_id: Optional[str] = None) -> None:
"""Flush file(s) to disk."""
with self._lock:
if file_id:
f = self._files.get(file_id)
if f:
f.flush()
else:
for f in self._files.values():
f.flush()
def _close_file(self, file_id: str) -> None:
"""Close a file."""
with self._lock:
f = self._files.pop(file_id, None)
if f:
f.close()
def _drain_queue(self) -> None:
"""Process all remaining items in the queue."""
while True:
try:
item = self._queue.get_nowait()
self._process_item(item)
self._queue.task_done()
except queue.Empty:
break
def enqueue(self, item: dict) -> bool:
"""Add an item to the write queue.
Returns False if queue is full (item dropped).
"""
try:
self._queue.put_nowait(item)
return True
except queue.Full:
return False
def open_file(self, path: str, file_id: str) -> None:
"""Queue a file open operation."""
self.enqueue({"action": "open", "path": path, "file_id": file_id})
def write(self, file_id: str, data: str) -> None:
"""Queue a write operation."""
self.enqueue({"action": "write", "file_id": file_id, "data": data})
def flush(self, file_id: Optional[str] = None) -> None:
"""Queue a flush operation."""
self.enqueue({"action": "flush", "file_id": file_id})
def close_file(self, file_id: str) -> None:
"""Queue a file close operation."""
self.enqueue({"action": "close", "file_id": file_id})
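# Illustrative sketch (not part of this diff): the producer side only enqueues
# small dicts; all file I/O happens on the background thread.
#   writer = AsyncLogWriter()
#   writer.start()
#   writer.open_file("logs/example.jsonl", file_id="json_example")
#   writer.write("json_example", '{"type": "demo"}\n')
#   writer.flush("json_example")
#   writer.stop()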
class SimulationLogger: class SimulationLogger:
"""Logger that dumps detailed simulation data to files.""" """Logger that dumps detailed simulation data to files.
Performance optimized:
- Logging can be disabled entirely via config
- File writes happen in background thread (non-blocking)
- Agent lookups use O(1) dict instead of O(n) list search
- No in-memory accumulation of all entries
"""
def __init__(self, log_dir: str = "logs"): def __init__(self, log_dir: str = "logs"):
self.log_dir = Path(log_dir) self.log_dir = Path(log_dir)
self.log_dir.mkdir(exist_ok=True)
# Create session-specific log file # Load performance config
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") from backend.config import get_config
self.session_file = self.log_dir / f"sim_{timestamp}.jsonl" perf_config = get_config().performance
self.summary_file = self.log_dir / f"sim_{timestamp}_summary.txt" self.logging_enabled = perf_config.logging_enabled
self.detailed_logging = perf_config.detailed_logging
self.flush_interval = perf_config.log_flush_interval
# File handles # Get async logging config (default to True if available)
self.async_logging = getattr(perf_config, 'async_logging', True)
# Async writer (only created if logging enabled)
self._async_writer: Optional[AsyncLogWriter] = None
# File IDs for async writing
self._json_file_id: Optional[str] = None
self._summary_file_id: Optional[str] = None
# Fallback: synchronous file handles (used if async disabled)
self._json_file: Optional[TextIO] = None self._json_file: Optional[TextIO] = None
self._summary_file: Optional[TextIO] = None self._summary_file: Optional[TextIO] = None
# Also set up standard Python logging # Standard Python logging (minimal overhead even when enabled)
self.logger = logging.getLogger("simulation") self.logger = logging.getLogger("simulation")
self.logger.setLevel(logging.DEBUG) self.logger.setLevel(logging.WARNING) # Only warnings by default
# File handler for detailed logs # Current turn tracking
file_handler = logging.FileHandler(self.log_dir / f"sim_{timestamp}.log")
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(logging.Formatter(
"%(asctime)s | %(levelname)s | %(message)s"
))
self.logger.addHandler(file_handler)
# Console handler for important events
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)
console_handler.setFormatter(logging.Formatter(
"%(asctime)s | %(message)s", datefmt="%H:%M:%S"
))
self.logger.addHandler(console_handler)
self._entries: list[TurnLogEntry] = []
self._current_entry: Optional[TurnLogEntry] = None self._current_entry: Optional[TurnLogEntry] = None
# O(1) lookup for agent entries by ID
self._agent_entry_map: dict[str, AgentLogEntry] = {}
# Turn counter for flush batching
self._turns_since_flush = 0
# Stats
self._items_queued = 0
self._items_dropped = 0
def start_session(self, config: dict) -> None: def start_session(self, config: dict) -> None:
"""Start a new logging session.""" """Start a new logging session."""
self._json_file = open(self.session_file, "w") if not self.logging_enabled:
self._summary_file = open(self.summary_file, "w") return
# Write config as first line self.log_dir.mkdir(exist_ok=True)
self._json_file.write(json.dumps({"type": "config", "data": config}) + "\n")
self._json_file.flush()
self._summary_file.write(f"Simulation Session Started: {datetime.now()}\n") # Create session-specific log file paths
self._summary_file.write("=" * 60 + "\n\n") timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
self._summary_file.flush() json_path = self.log_dir / f"sim_{timestamp}.jsonl"
summary_path = self.log_dir / f"sim_{timestamp}_summary.txt"
self.logger.info(f"Logging session started: {self.session_file}") if self.async_logging:
# Use async writer
self._async_writer = AsyncLogWriter()
self._async_writer.start()
self._json_file_id = f"json_{timestamp}"
self._summary_file_id = f"summary_{timestamp}"
self._async_writer.open_file(str(json_path), self._json_file_id)
self._async_writer.open_file(str(summary_path), self._summary_file_id)
# Write initial data
self._async_writer.write(
self._json_file_id,
json.dumps({"type": "config", "data": config}) + "\n"
)
self._async_writer.write(
self._summary_file_id,
f"Simulation Session Started: {datetime.now()}\n" + "=" * 60 + "\n\n"
)
else:
# Use synchronous file handles
self._json_file = open(json_path, "w")
self._summary_file = open(summary_path, "w")
self._json_file.write(json.dumps({"type": "config", "data": config}) + "\n")
self._summary_file.write(f"Simulation Session Started: {datetime.now()}\n")
self._summary_file.write("=" * 60 + "\n\n")
if self.detailed_logging:
# Set up file handler for detailed logs
file_handler = logging.FileHandler(self.log_dir / f"sim_{timestamp}.log")
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(logging.Formatter(
"%(asctime)s | %(levelname)s | %(message)s"
))
self.logger.addHandler(file_handler)
self.logger.setLevel(logging.DEBUG)
def start_turn(self, turn: int, day: int, step_in_day: int, time_of_day: str) -> None: def start_turn(self, turn: int, day: int, step_in_day: int, time_of_day: str) -> None:
"""Start logging a new turn.""" """Start logging a new turn."""
if not self.logging_enabled:
return
self._current_entry = TurnLogEntry( self._current_entry = TurnLogEntry(
turn=turn, turn=turn,
day=day, day=day,
@ -120,7 +318,10 @@ class SimulationLogger:
time_of_day=time_of_day, time_of_day=time_of_day,
timestamp=datetime.now().isoformat(), timestamp=datetime.now().isoformat(),
) )
self.logger.debug(f"Turn {turn} started (Day {day}, Step {step_in_day}, {time_of_day})") self._agent_entry_map.clear()
if self.detailed_logging:
self.logger.debug(f"Turn {turn} started (Day {day}, Step {step_in_day}, {time_of_day})")
def log_agent_before( def log_agent_before(
self, self,
@ -133,39 +334,41 @@ class SimulationLogger:
money: int, money: int,
) -> None: ) -> None:
"""Log agent state before action.""" """Log agent state before action."""
if self._current_entry is None: if not self.logging_enabled or self._current_entry is None:
return return
# Create placeholder entry # Create entry and add to both list and map
entry = AgentLogEntry( entry = AgentLogEntry(
agent_id=agent_id, agent_id=agent_id,
agent_name=agent_name, agent_name=agent_name,
profession=profession, profession=profession,
position=position.copy(), position=position,
stats_before=stats.copy(), stats_before=stats,
stats_after={}, stats_after={},
decision={}, decision={},
action_result={}, action_result={},
inventory_before=inventory.copy(), inventory_before=inventory,
inventory_after=[], inventory_after=[],
money_before=money, money_before=money,
money_after=money, money_after=money,
) )
self._current_entry.agent_entries.append(entry) self._current_entry.agent_entries.append(entry)
self._agent_entry_map[agent_id] = entry
def log_agent_decision(self, agent_id: str, decision: dict) -> None: def log_agent_decision(self, agent_id: str, decision: dict) -> None:
"""Log agent's AI decision.""" """Log agent's AI decision."""
if self._current_entry is None: if not self.logging_enabled or self._current_entry is None:
return return
for entry in self._current_entry.agent_entries: # O(1) lookup instead of O(n) search
if entry.agent_id == agent_id: entry = self._agent_entry_map.get(agent_id)
entry.decision = decision.copy() if entry:
entry.decision = decision
if self.detailed_logging:
self.logger.debug( self.logger.debug(
f" {entry.agent_name}: decided to {decision.get('action', '?')} " f" {entry.agent_name}: decided to {decision.get('action', '?')} "
f"- {decision.get('reason', '')}" f"- {decision.get('reason', '')}"
) )
break
def log_agent_after( def log_agent_after(
self, self,
@ -177,70 +380,80 @@ class SimulationLogger:
action_result: dict, action_result: dict,
) -> None: ) -> None:
"""Log agent state after action.""" """Log agent state after action."""
if self._current_entry is None: if not self.logging_enabled or self._current_entry is None:
return return
for entry in self._current_entry.agent_entries: # O(1) lookup instead of O(n) search
if entry.agent_id == agent_id: entry = self._agent_entry_map.get(agent_id)
entry.stats_after = stats.copy() if entry:
entry.inventory_after = inventory.copy() entry.stats_after = stats
entry.money_after = money entry.inventory_after = inventory
entry.position = position.copy() entry.money_after = money
entry.action_result = action_result.copy() entry.position = position
break entry.action_result = action_result
def log_market_state(self, orders_before: list, orders_after: list) -> None: def log_market_state(self, orders_before: list, orders_after: list) -> None:
"""Log market state.""" """Log market state."""
if self._current_entry is None: if not self.logging_enabled or self._current_entry is None:
return return
self._current_entry.market_orders_before = orders_before self._current_entry.market_orders_before = orders_before
self._current_entry.market_orders_after = orders_after self._current_entry.market_orders_after = orders_after
def log_trade(self, trade: dict) -> None: def log_trade(self, trade: dict) -> None:
"""Log a trade transaction.""" """Log a trade transaction."""
if self._current_entry is None: if not self.logging_enabled or self._current_entry is None:
return return
self._current_entry.trades.append(trade) self._current_entry.trades.append(trade)
self.logger.debug(f" Trade: {trade.get('message', 'Unknown trade')}") if self.detailed_logging:
self.logger.debug(f" Trade: {trade.get('message', 'Unknown trade')}")
def log_death(self, agent_name: str, cause: str) -> None: def log_death(self, agent_name: str, cause: str) -> None:
"""Log an agent death.""" """Log an agent death."""
if self._current_entry is None: if not self.logging_enabled or self._current_entry is None:
return return
self._current_entry.deaths.append({"name": agent_name, "cause": cause}) self._current_entry.deaths.append({"name": agent_name, "cause": cause})
# Always log deaths even without detailed logging
self.logger.info(f" DEATH: {agent_name} died from {cause}") self.logger.info(f" DEATH: {agent_name} died from {cause}")
def log_event(self, event_type: str, event_data: dict) -> None:
"""Log a general event (births, random events, etc.)."""
if not self.logging_enabled or self._current_entry is None:
return
if event_type == "birth":
self.logger.info(
f" BIRTH: {event_data.get('child_name', '?')} born to {event_data.get('parent_name', '?')}"
)
elif event_type == "random_event" and self.detailed_logging:
self.logger.info(
f" EVENT: {event_data.get('type', '?')} affecting {event_data.get('affected', [])}"
)
elif self.detailed_logging:
self.logger.debug(f" Event [{event_type}]: {event_data}")
def log_statistics(self, stats: dict) -> None: def log_statistics(self, stats: dict) -> None:
"""Log end-of-turn statistics.""" """Log end-of-turn statistics."""
if self._current_entry is None: if not self.logging_enabled or self._current_entry is None:
return return
self._current_entry.statistics = stats.copy() self._current_entry.statistics = stats
def end_turn(self) -> None: def end_turn(self) -> None:
"""Finish logging the current turn and write to file.""" """Finish logging the current turn and write to file."""
if self._current_entry is None: if not self.logging_enabled or self._current_entry is None:
return return
self._entries.append(self._current_entry) entry = self._current_entry
# Write to JSON lines file # Prepare data
if self._json_file: json_data = json.dumps({"type": "turn", "data": entry.to_dict()}) + "\n"
self._json_file.write(
json.dumps({"type": "turn", "data": self._current_entry.to_dict()}) + "\n"
)
self._json_file.flush()
# Write summary summary_lines = [f"Turn {entry.turn} | Day {entry.day} Step {entry.step_in_day} ({entry.time_of_day})\n"]
if self._summary_file:
entry = self._current_entry
self._summary_file.write(
f"Turn {entry.turn} | Day {entry.day} Step {entry.step_in_day} ({entry.time_of_day})\n"
)
if self.detailed_logging:
for agent in entry.agent_entries: for agent in entry.agent_entries:
action = agent.decision.get("action", "?") action = agent.decision.get("action", "?")
result = "" if agent.action_result.get("success", False) else "" result = "+" if agent.action_result.get("success", False) else "-"
self._summary_file.write( summary_lines.append(
f" [{agent.agent_name}] {action} {result} | " f" [{agent.agent_name}] {action} {result} | "
f"E:{agent.stats_after.get('energy', '?')} " f"E:{agent.stats_after.get('energy', '?')} "
f"H:{agent.stats_after.get('hunger', '?')} " f"H:{agent.stats_after.get('hunger', '?')} "
@ -248,30 +461,88 @@ class SimulationLogger:
f"${agent.money_after}\n" f"${agent.money_after}\n"
) )
if entry.deaths: if entry.deaths:
for death in entry.deaths: for death in entry.deaths:
self._summary_file.write(f" 💀 {death['name']} died: {death['cause']}\n") summary_lines.append(f" X {death['name']} died: {death['cause']}\n")
self._summary_file.write("\n") summary_lines.append("\n")
self._summary_file.flush() summary_data = "".join(summary_lines)
self.logger.debug(f"Turn {self._current_entry.turn} completed") if self.async_logging and self._async_writer:
# Non-blocking write
self._async_writer.write(self._json_file_id, json_data)
self._async_writer.write(self._summary_file_id, summary_data)
self._items_queued += 2
else:
# Synchronous write
if self._json_file:
self._json_file.write(json_data)
if self._summary_file:
self._summary_file.write(summary_data)
# Batched flush - only flush every N turns
self._turns_since_flush += 1
if self._turns_since_flush >= self.flush_interval:
self._flush_files()
self._turns_since_flush = 0
# Clear current entry (don't accumulate in memory)
self._current_entry = None self._current_entry = None
self._agent_entry_map.clear()
def _flush_files(self) -> None:
"""Flush file buffers to disk."""
if self.async_logging and self._async_writer:
self._async_writer.flush()
else:
if self._json_file:
self._json_file.flush()
if self._summary_file:
self._summary_file.flush()
def close(self) -> None: def close(self) -> None:
"""Close log files.""" """Close log files."""
if self._json_file: if self.async_logging and self._async_writer:
self._json_file.close() # Write final message before closing
self._json_file = None if self._summary_file_id:
if self._summary_file: self._async_writer.write(
self._summary_file.write(f"\nSession ended: {datetime.now()}\n") self._summary_file_id,
self._summary_file.close() f"\nSession ended: {datetime.now()}\n"
self._summary_file = None )
self.logger.info("Logging session closed")
# Close files
if self._json_file_id:
self._async_writer.close_file(self._json_file_id)
if self._summary_file_id:
self._async_writer.close_file(self._summary_file_id)
# Stop the writer thread
self._async_writer.stop()
self._async_writer = None
else:
if self._json_file:
self._json_file.close()
self._json_file = None
if self._summary_file:
self._summary_file.write(f"\nSession ended: {datetime.now()}\n")
self._summary_file.close()
self._summary_file = None
def get_entries(self) -> list[TurnLogEntry]: def get_entries(self) -> list[TurnLogEntry]:
"""Get all logged entries.""" """Get all logged entries.
return self._entries.copy()
Note: Always returns an empty list now; entries are no longer accumulated in memory for performance.
"""
return []
def get_stats(self) -> dict:
"""Get logging statistics."""
return {
"logging_enabled": self.logging_enabled,
"async_logging": self.async_logging,
"items_queued": self._items_queued,
"items_dropped": self._items_dropped,
}
# Global logger instance # Global logger instance
@ -294,3 +565,11 @@ def reset_simulation_logger() -> SimulationLogger:
_logger = SimulationLogger() _logger = SimulationLogger()
return _logger return _logger
# Ensure logger is closed on exit
@atexit.register
def _cleanup_logger():
global _logger
if _logger:
_logger.close()
_logger = None
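Put together, one turn of logging from the engine's side looks roughly like the sketch below. The method names come from the class above; the module path, argument values, and the surrounding engine code are illustrative assumptions.

```python
# Illustrative only: how the engine might drive SimulationLogger for one turn.
# Module path and argument values are assumptions; all calls no-op when
# performance.logging_enabled is false.
from backend.core.logger import SimulationLogger  # assumed module path

logger = SimulationLogger(log_dir="logs")
logger.start_session(config={"agents": 10, "day_steps": 10})

logger.start_turn(turn=1, day=1, step_in_day=1, time_of_day="day")
logger.log_agent_before(
    agent_id="a1", agent_name="Ana", profession="hunter",
    position={"x": 0, "y": 0}, stats={"energy": 80, "hunger": 70},
    inventory=[], money=100,
)
logger.log_agent_decision("a1", {"action": "hunt", "reason": "food running low"})
logger.log_agent_after(
    agent_id="a1", position={"x": 3, "y": 1},
    stats={"energy": 72, "hunger": 66}, inventory=[], money=100,
    action_result={"success": True},
)
logger.end_turn()  # queued write (async) or batched flush (sync)
logger.close()     # also triggered by the atexit hook above
```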

View File

@ -40,7 +40,7 @@ class Order:
seller_id: str = "" seller_id: str = ""
resource_type: ResourceType = ResourceType.BERRIES resource_type: ResourceType = ResourceType.BERRIES
quantity: int = 1 quantity: int = 1
price_per_unit: int = 1 price_per_unit: int = 100 # Default to min_price from config
created_turn: int = 0 created_turn: int = 0
status: OrderStatus = OrderStatus.ACTIVE status: OrderStatus = OrderStatus.ACTIVE
@ -62,8 +62,9 @@ class Order:
def apply_discount(self, percentage: float = 0.1) -> None: def apply_discount(self, percentage: float = 0.1) -> None:
"""Apply a discount to the price.""" """Apply a discount to the price."""
reduction = max(1, int(self.price_per_unit * percentage)) min_price = _get_min_price()
self.price_per_unit = max(1, self.price_per_unit - reduction) reduction = max(1, int(round(self.price_per_unit * percentage)))
self.price_per_unit = max(min_price, self.price_per_unit - reduction)
def adjust_price(self, new_price: int, current_turn: int) -> bool: def adjust_price(self, new_price: int, current_turn: int) -> bool:
"""Adjust the order's price. Returns True if successful.""" """Adjust the order's price. Returns True if successful."""
@ -124,6 +125,14 @@ def _get_market_config():
return get_config().market return get_config().market
def _get_min_price() -> int:
"""Get minimum price floor from economy config."""
from backend.config import get_config
config = get_config()
economy = getattr(config, 'economy', None)
return getattr(economy, 'min_price', 100) if economy else 100
@dataclass @dataclass
class OrderBook: class OrderBook:
"""Central market order book with supply/demand tracking. """Central market order book with supply/demand tracking.
@ -145,9 +154,9 @@ class OrderBook:
DISCOUNT_RATE: float = 0.12 DISCOUNT_RATE: float = 0.12
# Supply/demand thresholds # Supply/demand thresholds
LOW_SUPPLY_THRESHOLD: int = 3 # Less than this = scarcity LOW_SUPPLY_THRESHOLD: int = 10 # Less than this = scarcity
HIGH_SUPPLY_THRESHOLD: int = 10 # More than this = surplus HIGH_SUPPLY_THRESHOLD: int = 50 # More than this = surplus
DEMAND_DECAY: float = 0.95 # How fast demand score decays per turn DEMAND_DECAY: float = 0.99 # How fast demand score decays per turn
def __post_init__(self): def __post_init__(self):
"""Initialize price history and load config values.""" """Initialize price history and load config values."""
@ -257,7 +266,8 @@ class OrderBook:
total_cost = actual_quantity * order.price_per_unit total_cost = actual_quantity * order.price_per_unit
if buyer_money < total_cost: if buyer_money < total_cost:
# Try to buy what they can afford # Try to buy what they can afford
actual_quantity = buyer_money // order.price_per_unit # Use max(1, ...) to avoid division by zero, though price is min 100
actual_quantity = buyer_money // max(1, order.price_per_unit)
if actual_quantity <= 0: if actual_quantity <= 0:
return TradeResult( return TradeResult(
success=False, success=False,
@ -285,6 +295,9 @@ class OrderBook:
# Record sale for price history (we need current_turn but don't have it here) # Record sale for price history (we need current_turn but don't have it here)
# The turn will be passed via the _record_sale call from engine # The turn will be passed via the _record_sale call from engine
self.trade_history.append(result) self.trade_history.append(result)
# Keep trade history bounded to prevent memory growth
if len(self.trade_history) > 1000:
self.trade_history = self.trade_history[-500:]
return result return result
def execute_multi_buy( def execute_multi_buy(
@ -364,7 +377,7 @@ class OrderBook:
# Use average sale price as reference if available # Use average sale price as reference if available
reference_price = base_price reference_price = base_price
if history.avg_sale_price > 0: if history.avg_sale_price > 0:
reference_price = int((base_price + history.avg_sale_price) / 2) reference_price = int(round((base_price + history.avg_sale_price) / 2))
# Adjust based on supply/demand # Adjust based on supply/demand
if ratio < 0.7: # Scarcity - raise price if ratio < 0.7: # Scarcity - raise price
@ -375,8 +388,8 @@ class OrderBook:
else: else:
price_multiplier = 1.0 price_multiplier = 1.0
suggested = int(reference_price * price_multiplier) suggested = int(round(reference_price * price_multiplier))
return max(1, suggested) return max(_get_min_price(), suggested)
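As a rough sanity check of the suggestion logic above, here is the arithmetic for one scarcity case. The 1.3 multiplier is a stand-in for the scarcity branch elided from this diff; only the blending, rounding, and price floor are taken from the code.

```python
# Back-of-the-envelope illustration of suggested pricing; the 1.3 scarcity
# multiplier is an assumed placeholder, not the configured value.
base_price = 120
avg_sale_price = 180
reference_price = int(round((base_price + avg_sale_price) / 2))  # 150

ratio = 0.5                      # supply/demand ratio; < 0.7 means scarcity
price_multiplier = 1.3 if ratio < 0.7 else 1.0

suggested = int(round(reference_price * price_multiplier))       # 195
min_price = 100                  # economy.min_price floor
print(max(min_price, suggested))                                 # 195
```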
def adjust_order_price(self, order_id: str, seller_id: str, new_price: int, current_turn: int) -> bool: def adjust_order_price(self, order_id: str, seller_id: str, new_price: int, current_turn: int) -> bool:
"""Adjust the price of an existing order. Returns True if successful.""" """Adjust the price of an existing order. Returns True if successful."""

450
backend/core/storage.py Normal file
View File

@ -0,0 +1,450 @@
"""State storage abstraction for simulation state.
Provides a unified interface for storing and retrieving simulation state,
with implementations for:
- In-memory storage (default, fast but not persistent)
- Redis storage (optional, enables decoupled UI polling and persistence)
This allows the simulation loop to snapshot state without blocking,
and enables external systems (like a web UI) to poll state independently.
"""
import json
import time
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional, Dict
@dataclass
class StateSnapshot:
"""A point-in-time snapshot of simulation state."""
turn: int
timestamp: float
data: dict
def to_json(self) -> str:
"""Convert to JSON string."""
return json.dumps({
"turn": self.turn,
"timestamp": self.timestamp,
"data": self.data,
})
@classmethod
def from_json(cls, json_str: str) -> "StateSnapshot":
"""Create from JSON string."""
obj = json.loads(json_str)
return cls(
turn=obj["turn"],
timestamp=obj["timestamp"],
data=obj["data"],
)
class StateStore(ABC):
"""Abstract interface for state storage.
Implementations should be thread-safe for concurrent read/write.
"""
@abstractmethod
def save_state(self, key: str, snapshot: StateSnapshot) -> bool:
"""Save a state snapshot.
Args:
key: Unique key for this state (e.g., "world", "market", "agent_123")
snapshot: The state snapshot to save
Returns:
True if saved successfully
"""
pass
@abstractmethod
def get_state(self, key: str) -> Optional[StateSnapshot]:
"""Get the latest state snapshot for a key.
Args:
key: The state key to retrieve
Returns:
The snapshot if found, None otherwise
"""
pass
@abstractmethod
def get_all_states(self, prefix: str = "") -> Dict[str, StateSnapshot]:
"""Get all states matching a prefix.
Args:
prefix: Key prefix to filter by (empty = all)
Returns:
Dict mapping keys to snapshots
"""
pass
@abstractmethod
def delete_state(self, key: str) -> bool:
"""Delete a state snapshot.
Args:
key: The state key to delete
Returns:
True if deleted (or didn't exist)
"""
pass
@abstractmethod
def clear_all(self) -> None:
"""Clear all stored states."""
pass
@abstractmethod
def is_healthy(self) -> bool:
"""Check if the store is healthy and accessible."""
pass
class MemoryStateStore(StateStore):
"""In-memory state storage (default implementation).
Fast but not persistent across restarts.
Thread-safe using a simple lock.
"""
def __init__(self, max_entries: int = 1000):
"""Initialize memory store.
Args:
max_entries: Maximum number of entries to keep (LRU eviction)
"""
import threading
self._data: Dict[str, StateSnapshot] = {}
self._lock = threading.Lock()
self._max_entries = max_entries
self._access_order: list[str] = [] # For LRU tracking
def save_state(self, key: str, snapshot: StateSnapshot) -> bool:
with self._lock:
# LRU eviction if at capacity
if len(self._data) >= self._max_entries and key not in self._data:
oldest = self._access_order.pop(0) if self._access_order else None
if oldest:
self._data.pop(oldest, None)
self._data[key] = snapshot
# Update access order
if key in self._access_order:
self._access_order.remove(key)
self._access_order.append(key)
return True
def get_state(self, key: str) -> Optional[StateSnapshot]:
with self._lock:
snapshot = self._data.get(key)
if snapshot and key in self._access_order:
# Update access order for LRU
self._access_order.remove(key)
self._access_order.append(key)
return snapshot
def get_all_states(self, prefix: str = "") -> Dict[str, StateSnapshot]:
with self._lock:
if not prefix:
return dict(self._data)
return {k: v for k, v in self._data.items() if k.startswith(prefix)}
def delete_state(self, key: str) -> bool:
with self._lock:
self._data.pop(key, None)
if key in self._access_order:
self._access_order.remove(key)
return True
def clear_all(self) -> None:
with self._lock:
self._data.clear()
self._access_order.clear()
def is_healthy(self) -> bool:
return True
class RedisStateStore(StateStore):
"""Redis-backed state storage.
Enables:
- Persistent state across restarts
- Decoupled UI polling (web clients can read state independently)
- Distributed access (multiple simulation instances)
Requires redis-py: pip install redis
"""
def __init__(
self,
host: str = "localhost",
port: int = 6379,
db: int = 0,
password: Optional[str] = None,
prefix: str = "villsim:",
ttl_seconds: int = 3600, # 1 hour default TTL
):
"""Initialize Redis store.
Args:
host: Redis server host
port: Redis server port
db: Redis database number
password: Redis password (if required)
prefix: Key prefix for all keys (for namespacing)
ttl_seconds: Time-to-live for entries (0 = no expiry)
"""
self._prefix = prefix
self._ttl = ttl_seconds
self._client = None
self._connection_params = {
"host": host,
"port": port,
"db": db,
"password": password,
"decode_responses": True,
}
self._connect()
def _connect(self) -> None:
"""Establish connection to Redis."""
try:
import redis
self._client = redis.Redis(**self._connection_params)
# Test connection
self._client.ping()
except ImportError:
raise ImportError(
"Redis support requires the 'redis' package. "
"Install with: pip install redis"
)
except Exception as e:
self._client = None
raise ConnectionError(f"Failed to connect to Redis: {e}")
def _make_key(self, key: str) -> str:
"""Create full Redis key with prefix."""
return f"{self._prefix}{key}"
def save_state(self, key: str, snapshot: StateSnapshot) -> bool:
if not self._client:
return False
try:
full_key = self._make_key(key)
data = snapshot.to_json()
if self._ttl > 0:
self._client.setex(full_key, self._ttl, data)
else:
self._client.set(full_key, data)
return True
except Exception:
return False
def get_state(self, key: str) -> Optional[StateSnapshot]:
if not self._client:
return None
try:
full_key = self._make_key(key)
data = self._client.get(full_key)
if data:
return StateSnapshot.from_json(data)
return None
except Exception:
return None
def get_all_states(self, prefix: str = "") -> Dict[str, StateSnapshot]:
if not self._client:
return {}
try:
pattern = self._make_key(prefix + "*")
keys = self._client.keys(pattern)
result = {}
for full_key in keys:
# Remove our prefix to get the original key
key = full_key[len(self._prefix):]
data = self._client.get(full_key)
if data:
result[key] = StateSnapshot.from_json(data)
return result
except Exception:
return {}
def delete_state(self, key: str) -> bool:
if not self._client:
return False
try:
full_key = self._make_key(key)
self._client.delete(full_key)
return True
except Exception:
return False
def clear_all(self) -> None:
if not self._client:
return
try:
pattern = self._make_key("*")
keys = self._client.keys(pattern)
if keys:
self._client.delete(*keys)
except Exception:
pass
def is_healthy(self) -> bool:
if not self._client:
return False
try:
self._client.ping()
return True
except Exception:
return False
def publish_state_update(self, channel: str, key: str) -> None:
"""Publish a state update notification (for real-time subscribers).
This can be used for WebSocket-style updates where clients
subscribe to state changes.
"""
if not self._client:
return
try:
self._client.publish(
f"{self._prefix}updates:{channel}",
json.dumps({"key": key, "timestamp": time.time()})
)
except Exception:
pass
class StubStateStore(StateStore):
"""No-op state store for when storage is disabled.
All operations succeed but don't actually store anything.
"""
def save_state(self, key: str, snapshot: StateSnapshot) -> bool:
return True
def get_state(self, key: str) -> Optional[StateSnapshot]:
return None
def get_all_states(self, prefix: str = "") -> Dict[str, StateSnapshot]:
return {}
def delete_state(self, key: str) -> bool:
return True
def clear_all(self) -> None:
pass
def is_healthy(self) -> bool:
return True
# Global state store instance
_state_store: Optional[StateStore] = None
def get_state_store() -> StateStore:
"""Get the global state store instance.
Creates a store based on config:
- If Redis is configured and available, uses Redis
- Otherwise falls back to in-memory storage
"""
global _state_store
if _state_store is None:
_state_store = _create_state_store()
return _state_store
def _create_state_store() -> StateStore:
"""Create the appropriate state store based on config."""
from backend.config import get_config
config = get_config()
# Check for Redis config
redis_config = getattr(config, 'redis', None)
if redis_config and getattr(redis_config, 'enabled', False):
try:
store = RedisStateStore(
host=getattr(redis_config, 'host', 'localhost'),
port=getattr(redis_config, 'port', 6379),
db=getattr(redis_config, 'db', 0),
password=getattr(redis_config, 'password', None),
prefix=getattr(redis_config, 'prefix', 'villsim:'),
ttl_seconds=getattr(redis_config, 'ttl_seconds', 3600),
)
if store.is_healthy():
return store
except Exception:
# Fall through to memory store
pass
# Check if storage is disabled
perf_config = getattr(config, 'performance', None)
if perf_config and not getattr(perf_config, 'state_storage_enabled', True):
return StubStateStore()
# Default to memory store
return MemoryStateStore()
def reset_state_store() -> None:
"""Reset the global state store."""
global _state_store
if _state_store:
_state_store.clear_all()
_state_store = None
def save_simulation_state(turn: int, state_data: dict) -> bool:
"""Convenience function to save simulation state.
Args:
turn: Current simulation turn
state_data: Full state dict (world, market, agents, etc.)
Returns:
True if saved successfully
"""
store = get_state_store()
snapshot = StateSnapshot(
turn=turn,
timestamp=time.time(),
data=state_data,
)
return store.save_state("simulation:current", snapshot)
def get_simulation_state() -> Optional[StateSnapshot]:
"""Convenience function to get current simulation state."""
store = get_state_store()
return store.get_state("simulation:current")
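For readers skimming the new module, typical usage looks like the sketch below; the snapshot data is made up, and by default the global store falls back to `MemoryStateStore` when Redis is not configured.

```python
# Illustrative usage of backend/core/storage.py (data values are made up).
import time

from backend.core.storage import (
    MemoryStateStore,
    StateSnapshot,
    get_simulation_state,
    save_simulation_state,
)

# Engine side: snapshot the world once per turn without blocking the loop.
save_simulation_state(turn=42, state_data={"day": 5, "living_agents": 12})

# Reader side (e.g. the web API): poll the latest snapshot independently.
snapshot = get_simulation_state()
if snapshot:
    print(snapshot.turn, snapshot.data)

# The stores can also be used directly, bypassing the global instance.
store = MemoryStateStore(max_entries=100)
store.save_state("market", StateSnapshot(turn=42, timestamp=time.time(), data={}))
print(store.get_state("market"))
```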

View File

@ -3,6 +3,8 @@
The world spawns diverse agents with varied personality traits, The world spawns diverse agents with varied personality traits,
skills, and starting conditions to create emergent professions skills, and starting conditions to create emergent professions
and class inequality. and class inequality.
Now includes age-based lifecycle with birth and death by old age.
""" """
import random import random
@ -15,6 +17,7 @@ from backend.domain.personality import (
PersonalityTraits, Skills, PersonalityTraits, Skills,
generate_random_personality, generate_random_skills generate_random_personality, generate_random_skills
) )
from backend.domain.resources import ResourceType
class TimeOfDay(Enum): class TimeOfDay(Enum):
@ -65,9 +68,31 @@ class World:
step_in_day: int = 0 step_in_day: int = 0
time_of_day: TimeOfDay = TimeOfDay.DAY time_of_day: TimeOfDay = TimeOfDay.DAY
# Agent index for O(1) lookups by ID
_agent_index: dict = field(default_factory=dict)
# Statistics # Statistics
total_agents_spawned: int = 0 total_agents_spawned: int = 0
total_agents_died: int = 0 total_agents_died: int = 0
total_births: int = 0
total_deaths_by_age: int = 0
total_deaths_by_starvation: int = 0
total_deaths_by_thirst: int = 0
total_deaths_by_cold: int = 0
# Village-wide storage tracking (for storage limits)
village_storage: dict = field(default_factory=lambda: {
"meat": 0,
"berries": 0,
"water": 0,
"wood": 0,
"hide": 0,
"clothes": 0,
})
# Cached statistics (updated periodically for performance)
_cached_stats: Optional[dict] = field(default=None)
_stats_cache_turn: int = field(default=-1)
def spawn_agent( def spawn_agent(
self, self,
@ -76,6 +101,9 @@ class World:
position: Optional[Position] = None, position: Optional[Position] = None,
archetype: Optional[str] = None, archetype: Optional[str] = None,
starting_money: Optional[int] = None, starting_money: Optional[int] = None,
age: Optional[int] = None,
generation: int = 0,
parent_ids: Optional[list[str]] = None,
) -> Agent: ) -> Agent:
"""Spawn a new agent in the world with unique personality. """Spawn a new agent in the world with unique personality.
@ -85,6 +113,9 @@ class World:
position: Starting position (random if None) position: Starting position (random if None)
archetype: Personality archetype ("hunter", "gatherer", "trader", etc.) archetype: Personality archetype ("hunter", "gatherer", "trader", etc.)
starting_money: Starting money (random with inequality if None) starting_money: Starting money (random with inequality if None)
age: Starting age (random within config range if None)
generation: 0 for initial spawn, 1+ for born in simulation
parent_ids: IDs of parent agents (for lineage tracking)
""" """
if position is None: if position is None:
position = Position( position = Position(
@ -92,14 +123,21 @@ class World:
y=random.randint(0, self.config.height - 1), y=random.randint(0, self.config.height - 1),
) )
# Generate unique personality and skills # Get age config for age calculation
from backend.config import get_config
age_config = get_config().age
# Calculate starting age
if age is None:
age = random.randint(age_config.min_start_age, age_config.max_start_age)
# Generate unique personality and skills (skills influenced by age)
personality = generate_random_personality(archetype) personality = generate_random_personality(archetype)
skills = generate_random_skills(personality) skills = generate_random_skills(personality, age=age)
# Variable starting money for class inequality # Variable starting money for class inequality
# Some agents start with more, some with less # Some agents start with more, some with less
if starting_money is None: if starting_money is None:
from backend.config import get_config
base_money = get_config().world.starting_money base_money = get_config().world.starting_money
# Random multiplier: 0.3x to 2.0x base money # Random multiplier: 0.3x to 2.0x base money
# This creates natural class inequality # This creates natural class inequality
@ -116,18 +154,189 @@ class World:
personality=personality, personality=personality,
skills=skills, skills=skills,
money=starting_money, money=starting_money,
age=age,
birth_day=self.current_day,
generation=generation,
parent_ids=parent_ids or [],
) )
self.agents.append(agent) self.agents.append(agent)
self._agent_index[agent.id] = agent # Maintain index for O(1) lookups
self.total_agents_spawned += 1 self.total_agents_spawned += 1
return agent return agent
def spawn_child(self, parent: Agent) -> Optional[Agent]:
"""Spawn a new agent as a child of an existing agent.
Birth chance is controlled by village prosperity (food abundance).
Parent transfers wealth to child at birth.
Returns the new agent or None if birth conditions not met.
"""
from backend.config import get_config
age_config = get_config().age
# Check birth eligibility
if not parent.can_give_birth(self.current_day):
return None
# Calculate economy-based birth chance
# More food in village = higher birth rate
# But even in hard times, some births occur (base chance always applies)
prosperity = self.calculate_prosperity()
# Prosperity boosts birth rate: base_chance * (1 + prosperity * multiplier)
# At prosperity=0: birth_chance = base_chance
# At prosperity=1: birth_chance = base_chance * (1 + multiplier)
birth_chance = age_config.birth_base_chance * (1 + prosperity * age_config.birth_prosperity_multiplier)
birth_chance = min(0.20, birth_chance) # Cap at 20%
if random.random() > birth_chance:
return None
# Birth happens! Child spawns near parent
child_pos = Position(
x=parent.position.x + random.uniform(-1, 1),
y=parent.position.y + random.uniform(-1, 1),
)
# Clamp to world bounds
child_pos.x = max(0, min(self.config.width - 1, child_pos.x))
child_pos.y = max(0, min(self.config.height - 1, child_pos.y))
# Child personality is generated randomly; the archetype is not inherited from the parent
child_archetype = None # Random, not determined by parent
# Wealth transfer: parent gives portion of their wealth to child
wealth_transfer = age_config.birth_wealth_transfer
child_money = int(parent.money * wealth_transfer)
parent.money -= child_money
# Ensure child has minimum viable wealth
min_child_money = int(get_config().world.starting_money * 0.3)
child_money = max(child_money, min_child_money)
# Child starts at configured age (adult)
child_age = age_config.child_start_age
child = self.spawn_agent(
name=f"Child_{self.total_agents_spawned + 1}",
position=child_pos,
archetype=child_archetype,
starting_money=child_money,
age=child_age,
generation=parent.generation + 1,
parent_ids=[parent.id],
)
# Parent also transfers some food to child
self._transfer_resources_to_child(parent, child)
# Record birth for parent
parent.record_birth(self.current_day, child.id)
self.total_births += 1
return child
def _transfer_resources_to_child(self, parent: Agent, child: Agent) -> None:
"""Transfer some resources from parent to child at birth."""
# Transfer 1 of each food type parent has (if available)
for res_type in [ResourceType.MEAT, ResourceType.BERRIES, ResourceType.WATER]:
if parent.has_resource(res_type, 1):
parent.remove_from_inventory(res_type, 1)
from backend.domain.resources import Resource
child.add_to_inventory(Resource(
type=res_type,
quantity=1,
created_turn=self.current_turn,
))
def calculate_prosperity(self) -> float:
"""Calculate village prosperity (0.0 to 1.0) based on food abundance.
Higher prosperity = more births allowed.
This creates population cycles tied to resource availability.
"""
self.update_village_storage()
from backend.config import get_config
storage_config = get_config().storage
# Calculate how full each food storage is
meat_ratio = self.village_storage.get("meat", 0) / max(1, storage_config.village_meat_limit)
berries_ratio = self.village_storage.get("berries", 0) / max(1, storage_config.village_berries_limit)
water_ratio = self.village_storage.get("water", 0) / max(1, storage_config.village_water_limit)
# Average food abundance (weighted: meat most valuable)
prosperity = (meat_ratio * 0.4 + berries_ratio * 0.3 + water_ratio * 0.3)
return min(1.0, max(0.0, prosperity))
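A worked example of the prosperity-to-birth-chance chain used by `spawn_child` and `calculate_prosperity` above; the storage limits, base chance, and multiplier are placeholder values standing in for the real config.

```python
# Worked example only: config numbers below are assumed placeholders.
meat_ratio = 20 / 50       # village meat vs. its storage limit   -> 0.40
berries_ratio = 30 / 100   #                                      -> 0.30
water_ratio = 80 / 100     #                                      -> 0.80
prosperity = meat_ratio * 0.4 + berries_ratio * 0.3 + water_ratio * 0.3  # 0.49

birth_base_chance = 0.02
birth_prosperity_multiplier = 3.0
birth_chance = birth_base_chance * (1 + prosperity * birth_prosperity_multiplier)
print(min(0.20, birth_chance))  # ~0.049: roughly a 5% chance per eligible parent per day
```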
def process_inheritance(self, dead_agent: Agent) -> dict:
"""Process inheritance when an agent dies.
Wealth and resources are distributed to living children.
If there are no children, wealth is distributed to the poorest villagers (an estate-tax effect).
Returns dict with inheritance details.
"""
from backend.config import get_config
age_config = get_config().age
if not age_config.inheritance_enabled:
return {"enabled": False}
inheritance_info = {
"enabled": True,
"deceased": dead_agent.name,
"total_money": dead_agent.money,
"total_resources": sum(r.quantity for r in dead_agent.inventory),
"beneficiaries": [],
}
# Find living children
living_children = []
for child_id in dead_agent.children_ids:
child = self.get_agent(child_id)
if child and child.is_alive():
living_children.append(child)
if living_children:
# Distribute equally among children
money_per_child = dead_agent.money // len(living_children)
for child in living_children:
child.money += money_per_child
inheritance_info["beneficiaries"].append({
"name": child.name,
"money": money_per_child,
})
# Distribute resources (round-robin)
for i, resource in enumerate(dead_agent.inventory):
recipient = living_children[i % len(living_children)]
recipient.add_to_inventory(resource)
else:
# No children - distribute to the poorest villagers (estate tax effect)
living = self.get_living_agents()
if living:
# Give money to poorest villagers
poorest = sorted(living, key=lambda a: a.money)[:3]
if poorest:
money_each = dead_agent.money // len(poorest)
for villager in poorest:
villager.money += money_each
inheritance_info["beneficiaries"].append({
"name": villager.name,
"money": money_each,
"relation": "community"
})
# Clear dead agent's inventory (already distributed or lost)
dead_agent.inventory.clear()
dead_agent.money = 0
return inheritance_info
def get_agent(self, agent_id: str) -> Optional[Agent]: def get_agent(self, agent_id: str) -> Optional[Agent]:
"""Get an agent by ID.""" """Get an agent by ID (O(1) lookup via index)."""
for agent in self.agents: return self._agent_index.get(agent_id)
if agent.id == agent_id:
return agent
return None
def remove_dead_agents(self) -> list[Agent]: def remove_dead_agents(self) -> list[Agent]:
"""Remove all dead agents from the world. Returns list of removed agents. """Remove all dead agents from the world. Returns list of removed agents.
@ -137,16 +346,21 @@ class World:
# Don't actually remove here - let the engine handle corpse visualization # Don't actually remove here - let the engine handle corpse visualization
return dead_agents return dead_agents
def advance_time(self) -> None: def advance_time(self) -> bool:
"""Advance the simulation time by one step.""" """Advance the simulation time by one step.
Returns True if a new day started (for age/birth processing).
"""
self.current_turn += 1 self.current_turn += 1
self.step_in_day += 1 self.step_in_day += 1
total_steps = self.config.day_steps + self.config.night_steps total_steps = self.config.day_steps + self.config.night_steps
new_day = False
if self.step_in_day > total_steps: if self.step_in_day > total_steps:
self.step_in_day = 1 self.step_in_day = 1
self.current_day += 1 self.current_day += 1
new_day = True
# Determine time of day # Determine time of day
if self.step_in_day <= self.config.day_steps: if self.step_in_day <= self.config.day_steps:
@ -154,6 +368,91 @@ class World:
else: else:
self.time_of_day = TimeOfDay.NIGHT self.time_of_day = TimeOfDay.NIGHT
return new_day
def process_new_day(self) -> dict:
"""Process all new-day events: aging, births, sinks.
Returns a dict with events that happened.
"""
events = {
"aged_agents": [],
"births": [],
"age_deaths": [],
"storage_decay": {},
"taxes_collected": 0,
"random_events": [],
}
# Age all living agents
for agent in self.get_living_agents():
agent.age_one_day()
events["aged_agents"].append(agent.id)
# Check for births (only from living agents after aging)
for agent in self.get_living_agents():
if agent.can_give_birth(self.current_day):
child = self.spawn_child(agent)
if child:
events["births"].append({
"parent_id": agent.id,
"child_id": child.id,
"child_name": child.name,
})
return events
def update_village_storage(self) -> None:
"""Update the village-wide storage tracking."""
# Reset counts
for key in self.village_storage:
self.village_storage[key] = 0
# Count all resources in agent inventories
for agent in self.get_living_agents():
for resource in agent.inventory:
res_type = resource.type.value
if res_type in self.village_storage:
self.village_storage[res_type] += resource.quantity
def get_storage_limit(self, resource_type: str) -> int:
"""Get the storage limit for a resource type."""
from backend.config import get_config
storage_config = get_config().storage
limit_map = {
"meat": storage_config.village_meat_limit,
"berries": storage_config.village_berries_limit,
"water": storage_config.village_water_limit,
"wood": storage_config.village_wood_limit,
"hide": storage_config.village_hide_limit,
"clothes": storage_config.village_clothes_limit,
}
return limit_map.get(resource_type, 999999)
def get_storage_available(self, resource_type: str) -> int:
"""Get how much more of a resource can be stored village-wide."""
self.update_village_storage()
limit = self.get_storage_limit(resource_type)
current = self.village_storage.get(resource_type, 0)
return max(0, limit - current)
def is_storage_full(self, resource_type: str) -> bool:
"""Check if village storage for a resource type is full."""
return self.get_storage_available(resource_type) <= 0
def record_death(self, agent: Agent, reason: str) -> None:
"""Record a death and update statistics."""
self.total_agents_died += 1
if reason == "age":
self.total_deaths_by_age += 1
elif reason == "hunger":
self.total_deaths_by_starvation += 1
elif reason == "thirst":
self.total_deaths_by_thirst += 1
elif reason == "heat":
self.total_deaths_by_cold += 1
def is_night(self) -> bool: def is_night(self) -> bool:
"""Check if it's currently night.""" """Check if it's currently night."""
return self.time_of_day == TimeOfDay.NIGHT return self.time_of_day == TimeOfDay.NIGHT
@ -163,7 +462,23 @@ class World:
return [a for a in self.agents if a.is_alive() and not a.is_corpse()] return [a for a in self.agents if a.is_alive() and not a.is_corpse()]
def get_statistics(self) -> dict: def get_statistics(self) -> dict:
"""Get current world statistics including wealth distribution.""" """Get current world statistics including wealth distribution and demographics.
Uses caching based on performance config to avoid recalculating every turn.
"""
from backend.config import get_config
perf_config = get_config().performance
# Check if we can use cached stats
if (self._cached_stats is not None and
self.current_turn - self._stats_cache_turn < perf_config.stats_update_interval):
# Update just the essential changing values
self._cached_stats["current_turn"] = self.current_turn
self._cached_stats["current_day"] = self.current_day
self._cached_stats["step_in_day"] = self.step_in_day
self._cached_stats["time_of_day"] = self.time_of_day.value
return self._cached_stats
living = self.get_living_agents() living = self.get_living_agents()
total_money = sum(a.money for a in living) total_money = sum(a.money for a in living)
@ -174,6 +489,21 @@ class World:
prof = agent.profession.value prof = agent.profession.value
profession_counts[prof] = profession_counts.get(prof, 0) + 1 profession_counts[prof] = profession_counts.get(prof, 0) + 1
# Age demographics
age_distribution = {"young": 0, "prime": 0, "old": 0}
ages = []
generations = {}
for agent in living:
category = agent.get_age_category()
age_distribution[category] = age_distribution.get(category, 0) + 1
ages.append(agent.age)
gen = agent.generation
generations[gen] = generations.get(gen, 0) + 1
avg_age = sum(ages) / len(ages) if ages else 0
oldest_age = max(ages) if ages else 0
youngest_age = min(ages) if ages else 0
# Calculate wealth inequality metrics # Calculate wealth inequality metrics
if living: if living:
moneys = sorted([a.money for a in living]) moneys = sorted([a.money for a in living])
@ -182,16 +512,21 @@ class World:
richest = moneys[-1] if moneys else 0 richest = moneys[-1] if moneys else 0
poorest = moneys[0] if moneys else 0 poorest = moneys[0] if moneys else 0
# Gini coefficient for inequality (0 = perfect equality, 1 = max inequality) # Gini coefficient - O(n) algorithm instead of O(n²)
# Uses sorted list: Gini = (2 * sum(i * x_i)) / (n * sum(x_i)) - (n + 1) / n
n = len(moneys) n = len(moneys)
if n > 1 and total_money > 0: if n > 1 and total_money > 0:
sum_of_diffs = sum(abs(m1 - m2) for m1 in moneys for m2 in moneys) weighted_sum = sum((i + 1) * m for i, m in enumerate(moneys))
gini = sum_of_diffs / (2 * n * total_money) gini = (2 * weighted_sum) / (n * total_money) - (n + 1) / n
gini = max(0.0, min(1.0, gini)) # Clamp to [0, 1]
else: else:
gini = 0 gini = 0
else: else:
avg_money = median_money = richest = poorest = gini = 0 avg_money = median_money = richest = poorest = gini = 0
# Update village storage
self.update_village_storage()
return { stats = {
"current_turn": self.current_turn, "current_turn": self.current_turn,
"current_day": self.current_day, "current_day": self.current_day,
@ -200,6 +535,7 @@ class World:
"living_agents": len(living), "living_agents": len(living),
"total_agents_spawned": self.total_agents_spawned, "total_agents_spawned": self.total_agents_spawned,
"total_agents_died": self.total_agents_died, "total_agents_died": self.total_agents_died,
"total_births": self.total_births,
"total_money_in_circulation": total_money, "total_money_in_circulation": total_money,
"professions": profession_counts, "professions": profession_counts,
# Wealth inequality metrics # Wealth inequality metrics
@ -208,8 +544,28 @@ class World:
"richest_agent": richest, "richest_agent": richest,
"poorest_agent": poorest, "poorest_agent": poorest,
"gini_coefficient": round(gini, 3), "gini_coefficient": round(gini, 3),
# Age demographics
"age_distribution": age_distribution,
"avg_age": round(avg_age, 1),
"oldest_agent": oldest_age,
"youngest_agent": youngest_age,
"generations": generations,
# Death statistics
"deaths_by_cause": {
"age": self.total_deaths_by_age,
"starvation": self.total_deaths_by_starvation,
"thirst": self.total_deaths_by_thirst,
"cold": self.total_deaths_by_cold,
},
# Village storage
"village_storage": self.village_storage.copy(),
} }
# Cache the computed stats
self._cached_stats = stats
self._stats_cache_turn = self.current_turn
return stats
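The sorted-list Gini formula introduced above can be checked against the pairwise-difference definition it replaces; a quick self-contained verification on made-up wealth values:

```python
# Sanity check: the O(n) sorted-list Gini equals the O(n^2) pairwise form.
moneys = sorted([10, 50, 200, 740])   # illustrative wealth values
n, total = len(moneys), sum(moneys)

# O(n^2) definition: sum of absolute pairwise differences / (2 * n * total)
pairwise = sum(abs(a - b) for a in moneys for b in moneys) / (2 * n * total)

# O(n) form used in get_statistics()
weighted_sum = sum((i + 1) * m for i, m in enumerate(moneys))
fast = (2 * weighted_sum) / (n * total) - (n + 1) / n

assert abs(pairwise - fast) < 1e-9
print(round(fast, 3))  # 0.585 for this sample
```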
def get_state_snapshot(self) -> dict: def get_state_snapshot(self) -> dict:
"""Get a full snapshot of the world state for API.""" """Get a full snapshot of the world state for API."""
return { return {

View File

@ -25,6 +25,12 @@ def _get_agent_stats_config():
return get_config().agent_stats return get_config().agent_stats
def _get_age_config():
"""Get age configuration from global config."""
from backend.config import get_config
return get_config().age
class Profession(Enum): class Profession(Enum):
"""Agent professions - now derived from personality and skills.""" """Agent professions - now derived from personality and skills."""
VILLAGER = "villager" VILLAGER = "villager"
@ -96,14 +102,24 @@ class AgentStats:
# Critical threshold - loaded from config # Critical threshold - loaded from config
CRITICAL_THRESHOLD: float = field(default=0.25) CRITICAL_THRESHOLD: float = field(default=0.25)
def apply_passive_decay(self, has_clothes: bool = False) -> None: def apply_passive_decay(self, has_clothes: bool = False, decay_modifier: float = 1.0) -> None:
"""Apply passive stat decay each turn.""" """Apply passive stat decay each turn.
self.energy = max(0, self.energy - self.ENERGY_DECAY)
self.hunger = max(0, self.hunger - self.HUNGER_DECAY) Args:
self.thirst = max(0, self.thirst - self.THIRST_DECAY) has_clothes: Whether agent has clothes (reduces heat decay)
decay_modifier: Age-based modifier (old agents decay faster)
"""
energy_decay = int(self.ENERGY_DECAY * decay_modifier)
hunger_decay = int(self.HUNGER_DECAY * decay_modifier)
thirst_decay = int(self.THIRST_DECAY * decay_modifier)
self.energy = max(0, self.energy - energy_decay)
self.hunger = max(0, self.hunger - hunger_decay)
self.thirst = max(0, self.thirst - thirst_decay)
# Clothes reduce heat loss by 50% # Clothes reduce heat loss by 50%
heat_decay = self.HEAT_DECAY // 2 if has_clothes else self.HEAT_DECAY heat_decay = int(self.HEAT_DECAY * decay_modifier)
heat_decay = heat_decay // 2 if has_clothes else heat_decay
self.heat = max(0, self.heat - heat_decay) self.heat = max(0, self.heat - heat_decay)
def is_critical(self) -> bool: def is_critical(self) -> bool:
@ -217,6 +233,11 @@ class Agent:
Stats, inventory slots, and starting money are loaded from config.json. Stats, inventory slots, and starting money are loaded from config.json.
Each agent now has unique personality traits and skills that create Each agent now has unique personality traits and skills that create
emergent behaviors and professions. emergent behaviors and professions.
Age affects skills, energy costs, and survival:
- Young (< 25): Learning faster, lower skill effectiveness, less energy cost
- Prime (25-45): Peak performance
- Old (> 45): Higher skill effectiveness (wisdom), but higher energy costs
""" """
id: str = field(default_factory=lambda: str(uuid4())[:8]) id: str = field(default_factory=lambda: str(uuid4())[:8])
name: str = "" name: str = ""
@ -230,6 +251,15 @@ class Agent:
personality: PersonalityTraits = field(default_factory=PersonalityTraits) personality: PersonalityTraits = field(default_factory=PersonalityTraits)
skills: Skills = field(default_factory=Skills) skills: Skills = field(default_factory=Skills)
# Age system - age is in "years" where 1 year = 1 simulation day
age: int = field(default=-1) # -1 signals to use random start age
max_age: int = field(default=-1) # -1 signals to calculate from config
birth_day: int = 0 # Day this agent was born (0 = initial spawn)
last_birth_day: int = -1000 # Last day this agent gave birth (for cooldown)
parent_ids: list[str] = field(default_factory=list) # IDs of parents (for lineage)
children_ids: list[str] = field(default_factory=list) # IDs of children
generation: int = 0 # 0 = initial spawn, 1+ = born in simulation
# Movement and action tracking # Movement and action tracking
home_position: Position = field(default_factory=Position) home_position: Position = field(default_factory=Position)
current_action: AgentAction = field(default_factory=AgentAction) current_action: AgentAction = field(default_factory=AgentAction)
@ -267,6 +297,21 @@ class Agent:
if self.INVENTORY_SLOTS == -1: if self.INVENTORY_SLOTS == -1:
self.INVENTORY_SLOTS = config.inventory_slots self.INVENTORY_SLOTS = config.inventory_slots
# Initialize age system
age_config = _get_age_config()
if self.age == -1:
# Random starting age within configured range
self.age = random.randint(age_config.min_start_age, age_config.max_start_age)
if self.max_age == -1:
# Calculate max age with variance
variance = random.randint(-age_config.max_age_variance, age_config.max_age_variance)
self.max_age = age_config.base_max_age + variance
# Apply age-based max energy adjustment for old agents
if self.get_age_category() == "old":
self.stats.MAX_ENERGY = int(self.stats.MAX_ENERGY * age_config.old_max_energy_multiplier)
self.stats.energy = min(self.stats.energy, self.stats.MAX_ENERGY)
# Update profession based on personality and skills # Update profession based on personality and skills
self._update_profession() self._update_profession()
@ -282,6 +327,111 @@ class Agent:
} }
self.profession = profession_map.get(prof_type, Profession.VILLAGER) self.profession = profession_map.get(prof_type, Profession.VILLAGER)
def get_age_category(self) -> str:
"""Get the agent's age category: 'young', 'prime', or 'old'."""
age_config = _get_age_config()
if self.age < age_config.young_age_threshold:
return "young"
elif self.age <= age_config.old_age_threshold:
return "prime"
else:
return "old"
def get_skill_modifier(self) -> float:
"""Get skill effectiveness modifier based on age.
Young agents are less effective but learn faster.
Old agents are more effective (wisdom) but learn slower.
"""
age_config = _get_age_config()
category = self.get_age_category()
if category == "young":
return age_config.young_skill_multiplier
elif category == "prime":
return age_config.prime_skill_multiplier
else:
return age_config.old_skill_multiplier
def get_learning_modifier(self) -> float:
"""Get learning rate modifier based on age."""
age_config = _get_age_config()
category = self.get_age_category()
if category == "young":
return age_config.young_learning_multiplier
elif category == "prime":
return age_config.prime_learning_multiplier
else:
return age_config.old_learning_multiplier
def get_energy_cost_modifier(self) -> float:
"""Get energy cost modifier based on age.
Young agents use less energy.
Old agents use more energy.
"""
age_config = _get_age_config()
category = self.get_age_category()
if category == "young":
return age_config.young_energy_cost_multiplier
elif category == "prime":
return age_config.prime_energy_cost_multiplier
else:
return age_config.old_energy_cost_multiplier
def get_decay_modifier(self) -> float:
"""Get stat decay modifier based on age.
Old agents decay faster (frailer).
"""
age_config = _get_age_config()
if self.get_age_category() == "old":
return age_config.old_decay_multiplier
return 1.0
def age_one_day(self) -> None:
"""Age the agent by one day (called at day transition)."""
age_config = _get_age_config()
self.age += age_config.age_per_day
# Check if agent just became old - reduce max energy
if self.age == age_config.old_age_threshold + 1:
self.stats.MAX_ENERGY = int(self.stats.MAX_ENERGY * age_config.old_max_energy_multiplier)
self.stats.energy = min(self.stats.energy, self.stats.MAX_ENERGY)
def is_too_old(self) -> bool:
"""Check if agent has exceeded their maximum age."""
return self.age >= self.max_age
def can_give_birth(self, current_day: int) -> bool:
"""Check if agent is eligible to give birth."""
age_config = _get_age_config()
# Age check
if self.age < age_config.min_birth_age or self.age > age_config.max_birth_age:
return False
# Cooldown check
days_since_birth = current_day - self.last_birth_day
if days_since_birth < age_config.birth_cooldown_days:
return False
# Resource check
if self.stats.hunger < age_config.birth_food_requirement:
return False
if self.stats.energy < age_config.birth_energy_requirement:
return False
return True
def record_birth(self, current_day: int, child_id: str) -> None:
"""Record that this agent gave birth."""
self.last_birth_day = current_day
self.children_ids.append(child_id)
# Birth is exhausting - reduce stats
self.stats.energy = max(0, self.stats.energy - 20)
self.stats.hunger = max(0, self.stats.hunger - 30)
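The age and birth logic above reads a sizeable new `age` section of the configuration. The field names below are exactly the ones the code accesses; the values are entirely hypothetical and shown only to make the knobs visible in one place.

```python
# Hypothetical example values for the `age` config section (names match the
# attributes read above; numbers are illustrative, not the shipped defaults).
age_config_example = {
    "min_start_age": 18, "max_start_age": 40,
    "base_max_age": 60, "max_age_variance": 10,
    "age_per_day": 1,
    "young_age_threshold": 25, "old_age_threshold": 45,
    "young_skill_multiplier": 0.8, "prime_skill_multiplier": 1.0, "old_skill_multiplier": 1.1,
    "young_learning_multiplier": 1.5, "prime_learning_multiplier": 1.0, "old_learning_multiplier": 0.5,
    "young_energy_cost_multiplier": 0.9, "prime_energy_cost_multiplier": 1.0, "old_energy_cost_multiplier": 1.3,
    "old_decay_multiplier": 1.2, "old_max_energy_multiplier": 0.8,
    "min_birth_age": 20, "max_birth_age": 45,
    "birth_cooldown_days": 3,
    "birth_food_requirement": 50, "birth_energy_requirement": 40,
    "birth_base_chance": 0.02, "birth_prosperity_multiplier": 3.0,
    "birth_wealth_transfer": 0.2, "child_start_age": 18,
    "inheritance_enabled": True,
}
```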
def record_action(self, action_type: str) -> None: def record_action(self, action_type: str) -> None:
"""Record an action for profession tracking.""" """Record an action for profession tracking."""
if action_type in self.actions_performed: if action_type in self.actions_performed:
@ -308,11 +458,13 @@ class Agent:
def is_alive(self) -> bool: def is_alive(self) -> bool:
"""Check if the agent is still alive.""" """Check if the agent is still alive."""
return ( # Death by needs
self.stats.hunger > 0 and if self.stats.hunger <= 0 or self.stats.thirst <= 0 or self.stats.heat <= 0:
self.stats.thirst > 0 and return False
self.stats.heat > 0 # Death by old age
) if self.is_too_old():
return False
return True
def is_corpse(self) -> bool: def is_corpse(self) -> bool:
"""Check if this agent is a corpse (died but still visible).""" """Check if this agent is a corpse (died but still visible)."""
@ -493,8 +645,9 @@ class Agent:
return expired return expired
def apply_passive_decay(self) -> None: def apply_passive_decay(self) -> None:
"""Apply passive stat decay for this turn.""" """Apply passive stat decay for this turn, modified by age."""
self.stats.apply_passive_decay(has_clothes=self.has_clothes()) decay_modifier = self.get_decay_modifier()
self.stats.apply_passive_decay(has_clothes=self.has_clothes(), decay_modifier=decay_modifier)
def mark_dead(self, turn: int, reason: str) -> None: def mark_dead(self, turn: int, reason: str) -> None:
"""Mark this agent as dead.""" """Mark this agent as dead."""
@ -522,6 +675,18 @@ class Agent:
"last_action_result": self.last_action_result, "last_action_result": self.last_action_result,
"death_turn": self.death_turn, "death_turn": self.death_turn,
"death_reason": self.death_reason, "death_reason": self.death_reason,
# Age system
"age": self.age,
"max_age": self.max_age,
"age_category": self.get_age_category(),
"birth_day": self.birth_day,
"generation": self.generation,
"parent_ids": self.parent_ids.copy(),
"children_count": len(self.children_ids),
# Age modifiers (for UI display)
"skill_modifier": round(self.get_skill_modifier(), 2),
"energy_cost_modifier": round(self.get_energy_cost_modifier(), 2),
"learning_modifier": round(self.get_learning_modifier(), 2),
# New fields for agent diversity # New fields for agent diversity
"personality": self.personality.to_dict(), "personality": self.personality.to_dict(),
"skills": self.skills.to_dict(), "skills": self.skills.to_dict(),

View File

@ -112,11 +112,20 @@ class Skills:
    # Minimum skill level
    MIN_SKILL: float = 0.5

-    def improve(self, skill_name: str, amount: Optional[float] = None) -> None:
-        """Improve a skill through practice."""
+    def improve(self, skill_name: str, amount: Optional[float] = None, learning_modifier: float = 1.0) -> None:
+        """Improve a skill through practice.
+
+        Args:
+            skill_name: Name of the skill to improve
+            amount: Base improvement amount (defaults to IMPROVEMENT_RATE)
+            learning_modifier: Age-based modifier (young learn faster, old learn slower)
+        """
        if amount is None:
            amount = self.IMPROVEMENT_RATE
+        # Apply learning modifier (young agents learn faster)
+        amount = amount * learning_modifier
        if hasattr(self, skill_name):
            current = getattr(self, skill_name)
            new_value = min(self.MAX_SKILL, current + amount)
@ -209,22 +218,35 @@ def generate_random_personality(archetype: Optional[str] = None) -> PersonalityT
    return traits

-def generate_random_skills(personality: PersonalityTraits) -> Skills:
-    """Generate starting skills influenced by personality.
+def generate_random_skills(personality: PersonalityTraits, age: Optional[int] = None) -> Skills:
+    """Generate starting skills influenced by personality and age.

    Agents with strong preferences start with slightly better skills
    in those areas (natural talent).
+    Older agents start with higher skills (life experience).
    """
    # Base skill level with small random variation
    base = 1.0
    variance = 0.15
+    # Age bonus: older agents have more experience
+    age_bonus = 0.0
+    if age is not None:
+        # Young agents (< 25): no bonus
+        # Prime agents (25-45): small bonus
+        # Old agents (> 45): larger bonus (wisdom)
+        if age >= 45:
+            age_bonus = 0.3 + random.uniform(0, 0.2)
+        elif age >= 25:
+            age_bonus = (age - 25) * 0.01 + random.uniform(0, 0.1)
    skills = Skills(
-        hunting=base + random.uniform(-variance, variance) + (personality.hunt_preference - 1.0) * 0.1,
-        gathering=base + random.uniform(-variance, variance) + (personality.gather_preference - 1.0) * 0.1,
-        woodcutting=base + random.uniform(-variance, variance) + (personality.woodcut_preference - 1.0) * 0.1,
-        trading=base + random.uniform(-variance, variance) + (personality.trade_preference - 1.0) * 0.1,
-        crafting=base + random.uniform(-variance, variance),
+        hunting=base + random.uniform(-variance, variance) + (personality.hunt_preference - 1.0) * 0.1 + age_bonus,
+        gathering=base + random.uniform(-variance, variance) + (personality.gather_preference - 1.0) * 0.1 + age_bonus,
+        woodcutting=base + random.uniform(-variance, variance) + (personality.woodcut_preference - 1.0) * 0.1 + age_bonus,
+        trading=base + random.uniform(-variance, variance) + (personality.trade_preference - 1.0) * 0.1 + age_bonus,
+        crafting=base + random.uniform(-variance, variance) + age_bonus,
    )
    # Clamp all skills to valid range
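A short usage sketch of the two age-aware entry points shown above; the ages are illustrative, and the 1.3 and 0.7 modifiers are taken from the `young_learning_multiplier` and `old_learning_multiplier` config keys:

```python
# Hypothetical usage of the helpers shown in this diff.
personality = generate_random_personality()
skills = generate_random_skills(personality, age=52)  # older agent starts with a wisdom bonus

skills.improve("hunting")                              # default gain, modifier 1.0
skills.improve("hunting", learning_modifier=0.7)       # old agent: 70% of the normal gain
skills.improve("gathering", learning_modifier=1.3)     # young agent: learns 30% faster
```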

View File

@ -4,6 +4,7 @@ import os
import uvicorn
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
+from fastapi.responses import RedirectResponse
from fastapi.staticfiles import StaticFiles

from backend.api.routes import router
@ -48,14 +49,8 @@ async def startup_event():
@app.get("/", tags=["root"])
def root():
-    """Root endpoint with API information."""
-    return {
-        "name": "Village Simulation API",
-        "version": "1.0.0",
-        "docs": "/docs",
-        "web_frontend": "/web/",
-        "status": "running",
-    }
+    """Root endpoint - redirect to web frontend."""
+    return RedirectResponse(url="/web/")

@app.get("/health", tags=["health"])
@ -69,12 +64,19 @@ def health_check():
    }

-# ============== Web Frontend Static Files ==============
-# Mount static files for web frontend
-# Access at http://localhost:8000/web/
+# ============== Web Frontend ==============
+@app.get("/web", include_in_schema=False)
+def redirect_to_web_frontend():
+    """Redirect /web to /web/ for static file serving."""
+    return RedirectResponse(url="/web/")
+
+# Mount static files for web frontend (access at http://localhost:8000/web/)
if os.path.exists(WEB_FRONTEND_PATH):
    app.mount("/web", StaticFiles(directory=WEB_FRONTEND_PATH, html=True), name="web_frontend")
+else:
+    print(f"Warning: Web frontend not found at {WEB_FRONTEND_PATH}")

def main():
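A quick way to sanity-check the new routing from Python, assuming the backend is running on localhost:8000 (the 307 status is FastAPI's default for `RedirectResponse`):

```python
import requests

# Root should now redirect to the static web frontend instead of returning JSON.
resp = requests.get("http://localhost:8000/", allow_redirects=False)
print(resp.status_code, resp.headers.get("location"))  # e.g. 307 /web/

# The mounted StaticFiles app serves index.html at /web/.
print(requests.get("http://localhost:8000/web/").status_code)  # 200 if web_frontend/ exists

# /health is unchanged and still returns JSON.
print(requests.get("http://localhost:8000/health").json())
```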

View File

@ -1,51 +1,133 @@
{
+  "performance": {
+    "logging_enabled": false,
+    "detailed_logging": false,
+    "async_logging": true,
+    "log_flush_interval": 50,
+    "max_turn_logs": 100,
+    "stats_update_interval": 10,
+    "state_storage_enabled": true
+  },
+  "ai": {
+    "goap_max_iterations": 30,
+    "goap_max_plan_depth": 2,
+    "reactive_fallback": true,
+    "use_bdi": true
+  },
+  "bdi": {
+    "thinking_interval": 1,
+    "max_consecutive_failures": 2,
+    "priority_switch_threshold": 1.5,
+    "memory_max_events": 50,
+    "memory_decay_rate": 0.1
+  },
+  "redis": {
+    "enabled": false,
+    "host": "localhost",
+    "port": 6379,
+    "db": 0,
+    "password": null,
+    "prefix": "villsim:",
+    "ttl_seconds": 3600
+  },
  "agent_stats": {
    "max_energy": 50,
    "max_hunger": 100,
    "max_thirst": 100,
    "max_heat": 100,
    "start_energy": 50,
-    "start_hunger": 70,
+    "start_hunger": 80,
-    "start_thirst": 75,
+    "start_thirst": 85,
    "start_heat": 100,
    "energy_decay": 1,
    "hunger_decay": 2,
-    "thirst_decay": 3,
+    "thirst_decay": 2,
-    "heat_decay": 3,
+    "heat_decay": 2,
    "critical_threshold": 0.25,
    "low_energy_threshold": 12
  },
+  "age": {
+    "min_start_age": 18,
+    "max_start_age": 28,
+    "young_age_threshold": 25,
+    "prime_age_start": 25,
+    "prime_age_end": 50,
+    "old_age_threshold": 50,
+    "base_max_age": 75,
+    "max_age_variance": 8,
+    "age_per_day": 1,
+    "birth_cooldown_days": 8,
+    "min_birth_age": 20,
+    "max_birth_age": 50,
+    "birth_base_chance": 0.06,
+    "birth_prosperity_multiplier": 2.5,
+    "birth_food_requirement": 40,
+    "birth_energy_requirement": 15,
+    "birth_wealth_transfer": 0.15,
+    "inheritance_enabled": true,
+    "child_start_age": 18,
+    "young_skill_multiplier": 0.85,
+    "young_learning_multiplier": 1.3,
+    "young_energy_cost_multiplier": 0.9,
+    "prime_skill_multiplier": 1.0,
+    "prime_learning_multiplier": 1.0,
+    "prime_energy_cost_multiplier": 1.0,
+    "old_skill_multiplier": 1.1,
+    "old_learning_multiplier": 0.7,
+    "old_energy_cost_multiplier": 1.15,
+    "old_max_energy_multiplier": 0.8,
+    "old_decay_multiplier": 1.1
+  },
+  "storage": {
+    "village_meat_limit": 200,
+    "village_berries_limit": 300,
+    "village_water_limit": 400,
+    "village_wood_limit": 400,
+    "village_hide_limit": 150,
+    "village_clothes_limit": 100,
+    "market_order_limit_per_agent": 5,
+    "market_total_order_limit": 500
+  },
+  "sinks": {
+    "daily_village_decay_rate": 0.01,
+    "daily_tax_rate": 0.005,
+    "random_event_chance": 0.02,
+    "fire_event_resource_loss": 0.05,
+    "theft_event_money_loss": 0.03,
+    "clothes_maintenance_per_day": 1,
+    "fire_wood_cost_per_night": 1
+  },
  "resources": {
-    "meat_decay": 10,
+    "meat_decay": 12,
-    "berries_decay": 6,
+    "berries_decay": 8,
-    "clothes_decay": 20,
+    "clothes_decay": 30,
-    "meat_hunger": 35,
+    "meat_hunger": 45,
-    "meat_energy": 12,
+    "meat_energy": 15,
    "berries_hunger": 10,
-    "berries_thirst": 4,
+    "berries_thirst": 3,
    "water_thirst": 50,
-    "fire_heat": 20
+    "fire_heat": 25
  },
  "actions": {
    "sleep_energy": 55,
    "rest_energy": 12,
-    "hunt_energy": -7,
+    "hunt_energy": -5,
    "gather_energy": -3,
-    "chop_wood_energy": -6,
+    "chop_wood_energy": -5,
    "get_water_energy": -2,
-    "weave_energy": -6,
+    "weave_energy": -5,
-    "build_fire_energy": -4,
+    "build_fire_energy": -3,
    "trade_energy": -1,
-    "hunt_success": 0.70,
+    "hunt_success": 0.85,
-    "chop_wood_success": 0.90,
+    "chop_wood_success": 0.9,
    "hunt_meat_min": 2,
    "hunt_meat_max": 5,
    "hunt_hide_min": 0,
    "hunt_hide_max": 2,
-    "gather_min": 2,
+    "gather_min": 3,
-    "gather_max": 4,
+    "gather_max": 5,
-    "chop_wood_min": 1,
+    "chop_wood_min": 2,
-    "chop_wood_max": 3
+    "chop_wood_max": 4
  },
  "world": {
    "width": 25,
@ -53,8 +135,8 @@
    "initial_agents": 25,
    "day_steps": 10,
    "night_steps": 1,
-    "inventory_slots": 12,
+    "inventory_slots": 15,
-    "starting_money": 80
+    "starting_money": 8000
  },
  "market": {
    "turns_before_discount": 15,
@ -62,10 +144,11 @@
    "base_price_multiplier": 1.3
  },
  "economy": {
-    "energy_to_money_ratio": 1.5,
+    "energy_to_money_ratio": 150,
+    "min_price": 100,
    "wealth_desire": 0.35,
    "buy_efficiency_threshold": 0.75,
-    "min_wealth_target": 50,
+    "min_wealth_target": 5000,
    "max_price_markup": 2.5,
    "min_price_discount": 0.4
  },
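A minimal sketch of reading the expanded configuration; the key layout matches the diff above, but how `backend/config.py` actually consumes these sections is not shown here:

```python
import json

with open("config.json") as fh:
    cfg = json.load(fh)

# New top-level sections introduced by this change.
print(cfg["ai"]["goap_max_iterations"])          # 30
print(cfg["age"]["old_decay_multiplier"])        # 1.1
print(cfg["sinks"]["fire_wood_cost_per_night"])  # 1

# Rebalanced values.
print(cfg["world"]["starting_money"])            # 8000
print(cfg["economy"]["energy_to_money_ratio"])   # 150
```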

config_goap_optimized.json (new file, 130 lines)
View File

@ -0,0 +1,130 @@
{
"ai": {
"goap_max_iterations": 50,
"goap_max_plan_depth": 3,
"reactive_fallback": true
},
"agent_stats": {
"max_energy": 50,
"max_hunger": 100,
"max_thirst": 100,
"max_heat": 100,
"start_energy": 50,
"start_hunger": 80,
"start_thirst": 85,
"start_heat": 100,
"energy_decay": 1,
"hunger_decay": 2,
"thirst_decay": 2,
"heat_decay": 2,
"critical_threshold": 0.25,
"low_energy_threshold": 12
},
"age": {
"min_start_age": 18,
"max_start_age": 28,
"young_age_threshold": 25,
"prime_age_start": 25,
"prime_age_end": 50,
"old_age_threshold": 50,
"base_max_age": 75,
"max_age_variance": 8,
"age_per_day": 1,
"birth_cooldown_days": 8,
"min_birth_age": 20,
"max_birth_age": 50,
"birth_base_chance": 0.06,
"birth_prosperity_multiplier": 2.5,
"birth_food_requirement": 40,
"birth_energy_requirement": 15,
"birth_wealth_transfer": 0.15,
"inheritance_enabled": true,
"child_start_age": 18,
"young_skill_multiplier": 0.85,
"young_learning_multiplier": 1.3,
"young_energy_cost_multiplier": 0.9,
"prime_skill_multiplier": 1.0,
"prime_learning_multiplier": 1.0,
"prime_energy_cost_multiplier": 1.0,
"old_skill_multiplier": 1.1,
"old_learning_multiplier": 0.7,
"old_energy_cost_multiplier": 1.15,
"old_max_energy_multiplier": 0.8,
"old_decay_multiplier": 1.1
},
"storage": {
"village_meat_limit": 200,
"village_berries_limit": 300,
"village_water_limit": 400,
"village_wood_limit": 400,
"village_hide_limit": 150,
"village_clothes_limit": 100,
"market_order_limit_per_agent": 5,
"market_total_order_limit": 500
},
"sinks": {
"daily_village_decay_rate": 0.01,
"daily_tax_rate": 0.005,
"random_event_chance": 0.02,
"fire_event_resource_loss": 0.05,
"theft_event_money_loss": 0.03,
"clothes_maintenance_per_day": 1,
"fire_wood_cost_per_night": 1
},
"resources": {
"meat_decay": 12,
"berries_decay": 8,
"clothes_decay": 30,
"meat_hunger": 45,
"meat_energy": 15,
"berries_hunger": 10,
"berries_thirst": 3,
"water_thirst": 50,
"fire_heat": 25
},
"actions": {
"sleep_energy": 55,
"rest_energy": 12,
"hunt_energy": -5,
"gather_energy": -3,
"chop_wood_energy": -5,
"get_water_energy": -2,
"weave_energy": -5,
"build_fire_energy": -3,
"trade_energy": -1,
"hunt_success": 0.85,
"chop_wood_success": 0.9,
"hunt_meat_min": 2,
"hunt_meat_max": 5,
"hunt_hide_min": 0,
"hunt_hide_max": 2,
"gather_min": 3,
"gather_max": 5,
"chop_wood_min": 2,
"chop_wood_max": 4
},
"world": {
"width": 25,
"height": 25,
"initial_agents": 25,
"day_steps": 10,
"night_steps": 1,
"inventory_slots": 15,
"starting_money": 80
},
"market": {
"turns_before_discount": 15,
"discount_rate": 0.12,
"base_price_multiplier": 1.3
},
"economy": {
"energy_to_money_ratio": 1.5,
"min_price": 1,
"wealth_desire": 0.35,
"buy_efficiency_threshold": 0.75,
"min_wealth_target": 50,
"max_price_markup": 2.5,
"min_price_discount": 0.4
},
"auto_step_interval": 0.15
}
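Note that this profile keeps the pre-rebalance economy numbers (`starting_money: 80`, `energy_to_money_ratio: 1.5`, `min_wealth_target: 50`) that `config.json` moved away from. A throwaway comparison script (hypothetical, not part of the repository) makes the divergences easy to list:

```python
import json

def flatten(d, prefix=""):
    """Flatten nested dicts into dotted keys for easy comparison."""
    out = {}
    for k, v in d.items():
        key = f"{prefix}{k}"
        if isinstance(v, dict):
            out.update(flatten(v, key + "."))
        else:
            out[key] = v
    return out

base = flatten(json.load(open("config.json")))
goap = flatten(json.load(open("config_goap_optimized.json")))

for key in sorted(set(base) | set(goap)):
    if base.get(key) != goap.get(key):
        print(f"{key}: {base.get(key)} -> {goap.get(key)}")
```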

View File

@ -5,10 +5,10 @@ This document outlines the architecture for the Village Simulation based on [Vil
## 1. System Overview

The system consists of two distinct applications communicating via HTTP (REST API):

-1. **Backend (Server)**: Responsible for the entire simulation state, economic logic, AI decision-making, and turn management.
+1. **Backend (Server)**: Responsible for the entire simulation state, economic logic, AI decision-making (GOAP-based), and turn management.
-2. **Frontend (Client)**: A "dumb" terminal using **Pygame** that queries the current state to render it and sends user commands (if any) to the server.
+2. **Frontend (Client)**: A web-based frontend (HTML/JavaScript) that queries the current state to render it and sends user commands to the server.

-This separation allows replacing the Pygame frontend with Web (React/Vue) or Unity in the future without changing the backend logic.
+This separation allows replacing the web frontend with other technologies (React/Vue, Unity, etc.) without changing the backend logic.

---
@ -54,21 +54,24 @@ backend/
---

-## 3. Frontend Architecture (Pygame)
+## 3. Frontend Architecture (Web)

The frontend acts as a **Visualizer**. It does not calculate simulation logic.

### 3.1. Structure

```text
-frontend/
-├── main.py          # Pygame Game Loop
-├── client.py        # Network Client (requests lib)
-├── assets/          # Sprites/Fonts
-└── renderer/        # Drawing Logic
-    ├── map_renderer.py    # Draws the grid/terrain
-    ├── agent_renderer.py  # Draws agents and their status bars
-    └── ui_renderer.py     # Draws text info (Market prices, Day/Night)
+web_frontend/
+├── index.html       # Main HTML page
+├── goap_debug.html  # GOAP debugging view
+├── styles.css       # Styling
+└── src/
+    ├── main.js          # Application entry point
+    ├── api.js           # Network client (fetch API)
+    ├── constants.js     # Configuration constants
+    └── scenes/          # Game scenes (Phaser.js)
+        ├── BootScene.js  # Loading scene
+        └── GameScene.js  # Main game visualization
```

### 3.2. Flow
@ -77,12 +80,11 @@ frontend/
   * Call `GET http://localhost:8000/state`.
   * Receive JSON: `{"turn": 5, "time_of_day": "day", "agents": [...], "market": [...]}`.
2. **Update Step**:
-   * Parse JSON into local simplified objects.
+   * Parse JSON into JavaScript objects.
3. **Draw Step**:
-   * Clear screen.
+   * Update Phaser.js game scene.
   * Render Agents at their coordinates.
   * Render UI overlays (e.g., "Day 1, Step 5", "Total Coins: 500").
-   * `pygame.display.flip()`.

---
@ -97,12 +99,10 @@ Since the simulation involves AI agents acting autonomously, the Frontend is pri
* Frontend updates the screen.

### 4.1. The "God Mode" Problem

-To test the simulation efficiently, the Server will expose a **Simulation Controller**:
+To test the simulation efficiently, the Server exposes a **Simulation Controller**:

-* **Manual Mode**: The server waits for a `POST /next_step` call to advance. The User presses `SPACE` in Pygame -> Pygame sends request -> Server updates -> Pygame fetches new state.
+* **Manual Mode**: The server waits for a `POST /next_step` call to advance. The User clicks the advance button in the web frontend -> Frontend sends request -> Server updates -> Frontend fetches new state.
* **Auto Mode**: Server runs a background thread updating every N seconds. Frontend just polls.

-*Recommended for MVP: Manual Mode (Spacebar to advance turn).*
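The controller endpoints are what both the retired Pygame client (see `client.py` below) and, presumably, the new web frontend drive over HTTP. A minimal manual-mode loop in Python, assuming the `/api/control/*` and `/api/state` routes used by that client are unchanged:

```python
import requests

BASE = "http://localhost:8000"

# Make sure the backend is up, then advance a few turns by hand.
assert requests.get(f"{BASE}/health", timeout=2).status_code == 200

for _ in range(3):
    requests.post(f"{BASE}/api/control/next_step", timeout=5)
    state = requests.get(f"{BASE}/api/state", timeout=5).json()
    print(state["turn"], state["time_of_day"], len(state["agents"]))

# Or hand control to the server's background loop (Auto Mode).
requests.post(f"{BASE}/api/control/mode", json={"mode": "auto"}, timeout=5)
```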
---

## 5. Technology Stack
@ -110,12 +110,13 @@ To test the simulation efficiently, the Server will expose a **Simulation Contro
* **Language**: Python 3.11+
* **Backend Framework**: FastAPI (for speed and auto-generated docs).
* **Data Validation**: Pydantic.
-* **Frontend**: Pygame Community Edition (pygame-ce).
-* **Communication**: HTTP (Requests/Uvicorn).
+* **AI System**: GOAP (Goal-Oriented Action Planning).
+* **Frontend**: HTML/JavaScript with Phaser.js for rendering.
+* **Communication**: HTTP (Fetch API/Uvicorn).

## 6. Future Extensibility (Why this architecture?)

-* **Switch to Web**: Replace `frontend/` folder with a React app. The React app simply calls the same `GET /state` endpoint.
+* **Switch to React/Vue**: Replace `web_frontend/` folder with a React app. The React app simply calls the same `GET /state` endpoint.
* **Switch to Unity**: Unity `UnityWebRequest` calls `GET /state`.
* **Database**: Currently state is in-memory (`core/engine.py`). Easy to swap for SQLite/Postgres later by adding a `repository` layer in Backend.

View File

@ -1,2 +0,0 @@
"""Frontend package for Village Simulation visualization."""

View File

@ -1,180 +0,0 @@
"""HTTP client for communicating with the Village Simulation backend."""
import time
from dataclasses import dataclass
from typing import Optional, Any
import requests
from requests.exceptions import RequestException
@dataclass
class SimulationState:
"""Parsed simulation state from the API."""
turn: int
day: int
step_in_day: int
time_of_day: str
world_width: int
world_height: int
agents: list[dict]
market_orders: list[dict]
market_prices: dict
statistics: dict
mode: str
is_running: bool
recent_logs: list[dict]
@classmethod
def from_api_response(cls, data: dict) -> "SimulationState":
"""Create from API response data."""
return cls(
turn=data.get("turn", 0),
day=data.get("day", 1),
step_in_day=data.get("step_in_day", 0),
time_of_day=data.get("time_of_day", "day"),
world_width=data.get("world_size", {}).get("width", 20),
world_height=data.get("world_size", {}).get("height", 20),
agents=data.get("agents", []),
market_orders=data.get("market", {}).get("orders", []),
market_prices=data.get("market", {}).get("prices", {}),
statistics=data.get("statistics", {}),
mode=data.get("mode", "manual"),
is_running=data.get("is_running", False),
recent_logs=data.get("recent_logs", []),
)
def get_living_agents(self) -> list[dict]:
"""Get only living agents."""
return [a for a in self.agents if a.get("is_alive", False)]
class SimulationClient:
"""HTTP client for the Village Simulation backend."""
def __init__(self, base_url: str = "http://localhost:8000"):
self.base_url = base_url.rstrip("/")
self.api_url = f"{self.base_url}/api"
self.session = requests.Session()
self.last_state: Optional[SimulationState] = None
self.connected = False
self._retry_count = 0
self._max_retries = 3
def _request(
self,
method: str,
endpoint: str,
json: Optional[dict] = None,
timeout: float = 5.0,
) -> Optional[dict]:
"""Make an HTTP request to the API."""
url = f"{self.api_url}{endpoint}"
try:
response = self.session.request(
method=method,
url=url,
json=json,
timeout=timeout,
)
response.raise_for_status()
self.connected = True
self._retry_count = 0
return response.json()
except RequestException as e:
self._retry_count += 1
if self._retry_count >= self._max_retries:
self.connected = False
return None
def check_connection(self) -> bool:
"""Check if the backend is reachable."""
try:
response = self.session.get(
f"{self.base_url}/health",
timeout=2.0,
)
self.connected = response.status_code == 200
return self.connected
except RequestException:
self.connected = False
return False
def get_state(self) -> Optional[SimulationState]:
"""Fetch the current simulation state."""
data = self._request("GET", "/state")
if data:
self.last_state = SimulationState.from_api_response(data)
return self.last_state
return self.last_state # Return cached state if request failed
def advance_turn(self) -> bool:
"""Advance the simulation by one step."""
result = self._request("POST", "/control/next_step")
return result is not None and result.get("success", False)
def set_mode(self, mode: str) -> bool:
"""Set the simulation mode ('manual' or 'auto')."""
result = self._request("POST", "/control/mode", json={"mode": mode})
return result is not None and result.get("success", False)
def initialize(
self,
num_agents: int = 8,
world_width: int = 20,
world_height: int = 20,
) -> bool:
"""Initialize or reset the simulation."""
result = self._request("POST", "/control/initialize", json={
"num_agents": num_agents,
"world_width": world_width,
"world_height": world_height,
})
return result is not None and result.get("success", False)
def get_status(self) -> Optional[dict]:
"""Get simulation status."""
return self._request("GET", "/control/status")
def get_agents(self) -> Optional[list[dict]]:
"""Get all agents."""
result = self._request("GET", "/agents")
if result:
return result.get("agents", [])
return None
def get_market_orders(self) -> Optional[list[dict]]:
"""Get all market orders."""
result = self._request("GET", "/market/orders")
if result:
return result.get("orders", [])
return None
def get_market_prices(self) -> Optional[dict]:
"""Get market prices."""
return self._request("GET", "/market/prices")
def wait_for_connection(self, timeout: float = 30.0) -> bool:
"""Wait for backend connection with timeout."""
start = time.time()
while time.time() - start < timeout:
if self.check_connection():
return True
time.sleep(0.5)
return False
def get_config(self) -> Optional[dict]:
"""Get current simulation configuration."""
return self._request("GET", "/config")
def update_config(self, config_data: dict) -> bool:
"""Update simulation configuration."""
result = self._request("POST", "/config", json=config_data)
return result is not None and result.get("success", False)
def reset_config(self) -> bool:
"""Reset configuration to defaults."""
result = self._request("POST", "/config/reset")
return result is not None and result.get("success", False)

View File

@ -1,324 +0,0 @@
"""Main Pygame application for the Village Simulation frontend."""
import sys
import pygame
from frontend.client import SimulationClient, SimulationState
from frontend.renderer.map_renderer import MapRenderer
from frontend.renderer.agent_renderer import AgentRenderer
from frontend.renderer.ui_renderer import UIRenderer
from frontend.renderer.settings_renderer import SettingsRenderer
from frontend.renderer.stats_renderer import StatsRenderer
# Window configuration
WINDOW_WIDTH = 1200
WINDOW_HEIGHT = 800
WINDOW_TITLE = "Village Economy Simulation"
FPS = 30
# Layout configuration
TOP_PANEL_HEIGHT = 50
RIGHT_PANEL_WIDTH = 200
class VillageSimulationApp:
"""Main application class for the Village Simulation frontend."""
def __init__(self, server_url: str = "http://localhost:8000"):
# Initialize Pygame
pygame.init()
pygame.font.init()
# Create window
self.screen = pygame.display.set_mode((WINDOW_WIDTH, WINDOW_HEIGHT))
pygame.display.set_caption(WINDOW_TITLE)
# Clock for FPS control
self.clock = pygame.time.Clock()
# Fonts
self.font = pygame.font.Font(None, 24)
# Network client
self.client = SimulationClient(server_url)
# Calculate map area
self.map_rect = pygame.Rect(
0,
TOP_PANEL_HEIGHT,
WINDOW_WIDTH - RIGHT_PANEL_WIDTH,
WINDOW_HEIGHT - TOP_PANEL_HEIGHT,
)
# Initialize renderers
self.map_renderer = MapRenderer(self.screen, self.map_rect)
self.agent_renderer = AgentRenderer(self.screen, self.map_renderer, self.font)
self.ui_renderer = UIRenderer(self.screen, self.font)
self.settings_renderer = SettingsRenderer(self.screen)
self.stats_renderer = StatsRenderer(self.screen)
# State
self.state: SimulationState | None = None
self.running = True
self.hovered_agent: dict | None = None
self._last_turn: int = -1 # Track turn changes for stats update
# Polling interval (ms)
self.last_poll_time = 0
self.poll_interval = 100 # Poll every 100ms for smoother updates
# Setup settings callbacks
self._setup_settings_callbacks()
def _setup_settings_callbacks(self) -> None:
"""Set up callbacks for the settings panel."""
# Override the apply and reset callbacks
original_apply = self.settings_renderer._apply_config
original_reset = self.settings_renderer._reset_config
def apply_config():
config = self.settings_renderer.get_config()
if self.client.update_config(config):
# Restart simulation with new config
if self.client.initialize():
self.state = self.client.get_state()
self.settings_renderer.status_message = "Config applied & simulation restarted!"
self.settings_renderer.status_color = (80, 180, 100)
else:
self.settings_renderer.status_message = "Config saved but restart failed"
self.settings_renderer.status_color = (200, 160, 80)
else:
self.settings_renderer.status_message = "Failed to apply config"
self.settings_renderer.status_color = (200, 80, 80)
def reset_config():
if self.client.reset_config():
# Reload config from server
config = self.client.get_config()
if config:
self.settings_renderer.set_config(config)
self.settings_renderer.status_message = "Config reset to defaults"
self.settings_renderer.status_color = (200, 160, 80)
else:
self.settings_renderer.status_message = "Failed to reset config"
self.settings_renderer.status_color = (200, 80, 80)
self.settings_renderer._apply_config = apply_config
self.settings_renderer._reset_config = reset_config
def _load_config(self) -> None:
"""Load configuration from server into settings panel."""
config = self.client.get_config()
if config:
self.settings_renderer.set_config(config)
def handle_events(self) -> None:
"""Handle Pygame events."""
for event in pygame.event.get():
if event.type == pygame.QUIT:
self.running = False
# Let stats panel handle events first if visible
if self.stats_renderer.handle_event(event):
continue
# Let settings panel handle events first if visible
if self.settings_renderer.handle_event(event):
continue
if event.type == pygame.KEYDOWN:
self._handle_keydown(event)
elif event.type == pygame.MOUSEMOTION:
self._handle_mouse_motion(event)
def _handle_keydown(self, event: pygame.event.Event) -> None:
"""Handle keyboard input."""
if event.key == pygame.K_ESCAPE:
if self.stats_renderer.visible:
self.stats_renderer.toggle()
elif self.settings_renderer.visible:
self.settings_renderer.toggle()
else:
self.running = False
elif event.key == pygame.K_SPACE:
# Advance one turn
if self.client.connected and not self.settings_renderer.visible and not self.stats_renderer.visible:
if self.client.advance_turn():
# Immediately fetch new state
self.state = self.client.get_state()
elif event.key == pygame.K_r:
# Reset simulation
if self.client.connected and not self.settings_renderer.visible and not self.stats_renderer.visible:
if self.client.initialize():
self.state = self.client.get_state()
self.stats_renderer.clear_history()
self._last_turn = -1
elif event.key == pygame.K_m:
# Toggle mode
if self.client.connected and self.state and not self.settings_renderer.visible and not self.stats_renderer.visible:
new_mode = "auto" if self.state.mode == "manual" else "manual"
if self.client.set_mode(new_mode):
self.state = self.client.get_state()
elif event.key == pygame.K_g:
# Toggle statistics/graphs panel
if not self.settings_renderer.visible:
self.stats_renderer.toggle()
elif event.key == pygame.K_s:
# Toggle settings panel
if not self.stats_renderer.visible:
if not self.settings_renderer.visible:
self._load_config()
self.settings_renderer.toggle()
def _handle_mouse_motion(self, event: pygame.event.Event) -> None:
"""Handle mouse motion for agent hover detection."""
if not self.state or self.settings_renderer.visible:
self.hovered_agent = None
return
mouse_pos = event.pos
self.hovered_agent = None
# Check if mouse is in map area
if not self.map_rect.collidepoint(mouse_pos):
return
# Check each agent
for agent in self.state.agents:
if not agent.get("is_alive", False):
continue
pos = agent.get("position", {"x": 0, "y": 0})
screen_x, screen_y = self.map_renderer.grid_to_screen(pos["x"], pos["y"])
# Check if mouse is near agent
dx = mouse_pos[0] - screen_x
dy = mouse_pos[1] - screen_y
distance = (dx * dx + dy * dy) ** 0.5
cell_w, cell_h = self.map_renderer.get_cell_size()
agent_radius = min(cell_w, cell_h) / 2
if distance < agent_radius + 5:
self.hovered_agent = agent
break
def update(self) -> None:
"""Update game state by polling the server."""
current_time = pygame.time.get_ticks()
# Check if we need to poll
if current_time - self.last_poll_time >= self.poll_interval:
self.last_poll_time = current_time
if not self.client.connected:
self.client.check_connection()
if self.client.connected:
new_state = self.client.get_state()
if new_state:
# Update map dimensions if changed
if (
new_state.world_width != self.map_renderer.world_width or
new_state.world_height != self.map_renderer.world_height
):
self.map_renderer.update_dimensions(
new_state.world_width,
new_state.world_height,
)
self.state = new_state
# Update stats history when turn changes
if new_state.turn != self._last_turn:
self.stats_renderer.update_history(new_state)
self._last_turn = new_state.turn
def draw(self) -> None:
"""Draw all elements."""
# Clear screen
self.screen.fill((30, 35, 45))
if self.state:
# Draw map
self.map_renderer.draw(self.state)
# Draw agents
self.agent_renderer.draw(self.state)
# Draw UI
self.ui_renderer.draw(self.state)
# Draw agent tooltip if hovering
if self.hovered_agent and not self.settings_renderer.visible:
mouse_pos = pygame.mouse.get_pos()
self.agent_renderer.draw_agent_tooltip(self.hovered_agent, mouse_pos)
# Draw connection status overlay if disconnected
if not self.client.connected:
self.ui_renderer.draw_connection_status(self.client.connected)
# Draw settings panel if visible
self.settings_renderer.draw()
# Draw stats panel if visible
self.stats_renderer.draw(self.state)
# Draw hints at bottom
if not self.settings_renderer.visible and not self.stats_renderer.visible:
hint_font = pygame.font.Font(None, 18)
hint = hint_font.render("S: Settings | G: Statistics & Graphs", True, (100, 100, 120))
self.screen.blit(hint, (5, self.screen.get_height() - 20))
# Update display
pygame.display.flip()
def run(self) -> None:
"""Main game loop."""
print("Starting Village Simulation Frontend...")
print("Connecting to backend at http://localhost:8000...")
# Try to connect initially
if not self.client.check_connection():
print("Backend not available. Will retry in the main loop.")
else:
print("Connected!")
self.state = self.client.get_state()
print("\nControls:")
print(" SPACE - Advance turn")
print(" R - Reset simulation")
print(" M - Toggle auto/manual mode")
print(" S - Open settings")
print(" G - Open statistics & graphs")
print(" ESC - Close panel / Quit")
print()
while self.running:
self.handle_events()
self.update()
self.draw()
self.clock.tick(FPS)
pygame.quit()
def main():
"""Entry point for the frontend application."""
# Get server URL from command line if provided
server_url = "http://localhost:8000"
if len(sys.argv) > 1:
server_url = sys.argv[1]
app = VillageSimulationApp(server_url)
app.run()
if __name__ == "__main__":
main()

View File

@ -1,9 +0,0 @@
"""Renderer components for the Village Simulation frontend."""
from .map_renderer import MapRenderer
from .agent_renderer import AgentRenderer
from .ui_renderer import UIRenderer
from .settings_renderer import SettingsRenderer
__all__ = ["MapRenderer", "AgentRenderer", "UIRenderer", "SettingsRenderer"]

View File

@ -1,430 +0,0 @@
"""Agent renderer for the Village Simulation."""
import math
import pygame
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from frontend.client import SimulationState
from frontend.renderer.map_renderer import MapRenderer
# Profession colors (villager is the default now)
PROFESSION_COLORS = {
"villager": (100, 140, 180), # Blue-gray for generic villager
"hunter": (180, 80, 80), # Red
"gatherer": (80, 160, 80), # Green
"woodcutter": (139, 90, 43), # Brown
"crafter": (160, 120, 200), # Purple
}
# Corpse color
CORPSE_COLOR = (60, 60, 60) # Dark gray
# Status bar colors
BAR_COLORS = {
"energy": (255, 220, 80), # Yellow
"hunger": (220, 140, 80), # Orange
"thirst": (80, 160, 220), # Blue
"heat": (220, 80, 80), # Red
}
# Action icons/symbols
ACTION_SYMBOLS = {
"hunt": "🏹",
"gather": "🍇",
"chop_wood": "🪓",
"get_water": "💧",
"weave": "🧵",
"build_fire": "🔥",
"trade": "💰",
"rest": "💤",
"sleep": "😴",
"consume": "🍖",
"dead": "💀",
}
# Fallback ASCII symbols for systems without emoji support
ACTION_LETTERS = {
"hunt": "H",
"gather": "G",
"chop_wood": "W",
"get_water": "~",
"weave": "C",
"build_fire": "F",
"trade": "$",
"rest": "R",
"sleep": "Z",
"consume": "E",
"dead": "X",
}
class AgentRenderer:
"""Renders agents on the map with movement and action indicators."""
def __init__(
self,
screen: pygame.Surface,
map_renderer: "MapRenderer",
font: pygame.font.Font,
):
self.screen = screen
self.map_renderer = map_renderer
self.font = font
self.small_font = pygame.font.Font(None, 16)
self.action_font = pygame.font.Font(None, 20)
# Animation state
self.animation_tick = 0
def _get_agent_color(self, agent: dict) -> tuple[int, int, int]:
"""Get the color for an agent based on state."""
# Corpses are dark gray
if agent.get("is_corpse", False) or not agent.get("is_alive", True):
return CORPSE_COLOR
profession = agent.get("profession", "villager")
base_color = PROFESSION_COLORS.get(profession, (100, 140, 180))
if not agent.get("can_act", True):
# Slightly dimmed for exhausted agents
return tuple(int(c * 0.7) for c in base_color)
return base_color
def _draw_status_bar(
self,
x: int,
y: int,
width: int,
height: int,
value: int,
max_value: int,
color: tuple[int, int, int],
) -> None:
"""Draw a single status bar."""
# Background
pygame.draw.rect(self.screen, (40, 40, 40), (x, y, width, height))
# Fill
fill_width = int((value / max_value) * width) if max_value > 0 else 0
if fill_width > 0:
pygame.draw.rect(self.screen, color, (x, y, fill_width, height))
# Border
pygame.draw.rect(self.screen, (80, 80, 80), (x, y, width, height), 1)
def _draw_status_bars(self, agent: dict, center_x: int, center_y: int, size: int) -> None:
"""Draw status bars below the agent."""
stats = agent.get("stats", {})
bar_width = size + 10
bar_height = 3
bar_spacing = 4
start_y = center_y + size // 2 + 4
bars = [
("energy", stats.get("energy", 0), stats.get("max_energy", 100)),
("hunger", stats.get("hunger", 0), stats.get("max_hunger", 100)),
("thirst", stats.get("thirst", 0), stats.get("max_thirst", 50)),
("heat", stats.get("heat", 0), stats.get("max_heat", 100)),
]
for i, (stat_name, value, max_value) in enumerate(bars):
bar_y = start_y + i * bar_spacing
self._draw_status_bar(
center_x - bar_width // 2,
bar_y,
bar_width,
bar_height,
value,
max_value,
BAR_COLORS[stat_name],
)
def _draw_action_indicator(
self,
agent: dict,
center_x: int,
center_y: int,
agent_size: int,
) -> None:
"""Draw action indicator above the agent."""
current_action = agent.get("current_action", {})
action_type = current_action.get("action_type", "")
is_moving = current_action.get("is_moving", False)
message = current_action.get("message", "")
if not action_type:
return
# Get action symbol
symbol = ACTION_LETTERS.get(action_type, "?")
# Draw action bubble above agent
bubble_y = center_y - agent_size // 2 - 20
# Animate if moving
if is_moving:
# Bouncing animation
offset = int(3 * math.sin(self.animation_tick * 0.3))
bubble_y += offset
# Draw bubble background
bubble_width = 22
bubble_height = 18
bubble_rect = pygame.Rect(
center_x - bubble_width // 2,
bubble_y - bubble_height // 2,
bubble_width,
bubble_height,
)
# Color based on action success/failure
if "Failed" in message:
bg_color = (120, 60, 60)
border_color = (180, 80, 80)
elif is_moving:
bg_color = (60, 80, 120)
border_color = (100, 140, 200)
else:
bg_color = (50, 70, 50)
border_color = (80, 140, 80)
pygame.draw.rect(self.screen, bg_color, bubble_rect, border_radius=4)
pygame.draw.rect(self.screen, border_color, bubble_rect, 1, border_radius=4)
# Draw action letter
text = self.action_font.render(symbol, True, (255, 255, 255))
text_rect = text.get_rect(center=(center_x, bubble_y))
self.screen.blit(text, text_rect)
# Draw movement trail if moving
if is_moving:
target_pos = current_action.get("target_position")
if target_pos:
target_x, target_y = self.map_renderer.grid_to_screen(
target_pos.get("x", 0),
target_pos.get("y", 0),
)
# Draw dotted line to target
self._draw_dotted_line(
(center_x, center_y),
(target_x, target_y),
(100, 100, 100),
4,
)
def _draw_dotted_line(
self,
start: tuple[int, int],
end: tuple[int, int],
color: tuple[int, int, int],
dot_spacing: int = 5,
) -> None:
"""Draw a dotted line between two points."""
dx = end[0] - start[0]
dy = end[1] - start[1]
distance = max(1, int((dx ** 2 + dy ** 2) ** 0.5))
for i in range(0, distance, dot_spacing * 2):
t = i / distance
x = int(start[0] + dx * t)
y = int(start[1] + dy * t)
pygame.draw.circle(self.screen, color, (x, y), 1)
def _draw_last_action_result(
self,
agent: dict,
center_x: int,
center_y: int,
agent_size: int,
) -> None:
"""Draw the last action result as floating text."""
result = agent.get("last_action_result", "")
if not result:
return
# Truncate long messages
if len(result) > 25:
result = result[:22] + "..."
# Draw text below status bars
text_y = center_y + agent_size // 2 + 22
text = self.small_font.render(result, True, (180, 180, 180))
text_rect = text.get_rect(center=(center_x, text_y))
# Background for readability
bg_rect = text_rect.inflate(4, 2)
pygame.draw.rect(self.screen, (30, 30, 40, 180), bg_rect)
self.screen.blit(text, text_rect)
def draw(self, state: "SimulationState") -> None:
"""Draw all agents (including corpses for one turn)."""
self.animation_tick += 1
cell_w, cell_h = self.map_renderer.get_cell_size()
agent_size = min(cell_w, cell_h) - 8
agent_size = max(10, min(agent_size, 30)) # Clamp size
for agent in state.agents:
is_corpse = agent.get("is_corpse", False)
is_alive = agent.get("is_alive", True)
# Get screen position from agent's current position
pos = agent.get("position", {"x": 0, "y": 0})
screen_x, screen_y = self.map_renderer.grid_to_screen(pos["x"], pos["y"])
if is_corpse:
# Draw corpse with death indicator
self._draw_corpse(agent, screen_x, screen_y, agent_size)
continue
if not is_alive:
continue
# Draw movement trail/line to target first (behind agent)
self._draw_action_indicator(agent, screen_x, screen_y, agent_size)
# Draw agent circle
color = self._get_agent_color(agent)
pygame.draw.circle(self.screen, color, (screen_x, screen_y), agent_size // 2)
# Draw border - animated if moving
current_action = agent.get("current_action", {})
is_moving = current_action.get("is_moving", False)
if is_moving:
# Pulsing border when moving
pulse = int(127 + 127 * math.sin(self.animation_tick * 0.2))
border_color = (pulse, pulse, 255)
elif agent.get("can_act"):
border_color = (255, 255, 255)
else:
border_color = (100, 100, 100)
pygame.draw.circle(self.screen, border_color, (screen_x, screen_y), agent_size // 2, 2)
# Draw money indicator (small coin icon)
money = agent.get("money", 0)
if money > 0:
coin_x = screen_x + agent_size // 2 - 4
coin_y = screen_y - agent_size // 2 - 4
pygame.draw.circle(self.screen, (255, 215, 0), (coin_x, coin_y), 4)
pygame.draw.circle(self.screen, (200, 160, 0), (coin_x, coin_y), 4, 1)
# Draw "V" for villager
text = self.small_font.render("V", True, (255, 255, 255))
text_rect = text.get_rect(center=(screen_x, screen_y))
self.screen.blit(text, text_rect)
# Draw status bars
self._draw_status_bars(agent, screen_x, screen_y, agent_size)
# Draw last action result
self._draw_last_action_result(agent, screen_x, screen_y, agent_size)
def _draw_corpse(
self,
agent: dict,
center_x: int,
center_y: int,
agent_size: int,
) -> None:
"""Draw a corpse with death reason displayed."""
# Draw corpse circle (dark gray)
pygame.draw.circle(self.screen, CORPSE_COLOR, (center_x, center_y), agent_size // 2)
# Draw red X border
pygame.draw.circle(self.screen, (150, 50, 50), (center_x, center_y), agent_size // 2, 2)
# Draw skull symbol
text = self.action_font.render("X", True, (180, 80, 80))
text_rect = text.get_rect(center=(center_x, center_y))
self.screen.blit(text, text_rect)
# Draw death reason above corpse
death_reason = agent.get("death_reason", "unknown")
name = agent.get("name", "Unknown")
# Death indicator bubble
bubble_y = center_y - agent_size // 2 - 20
bubble_text = f"💀 {death_reason}"
text = self.small_font.render(bubble_text, True, (255, 100, 100))
text_rect = text.get_rect(center=(center_x, bubble_y))
# Background for readability
bg_rect = text_rect.inflate(8, 4)
pygame.draw.rect(self.screen, (40, 20, 20), bg_rect, border_radius=3)
pygame.draw.rect(self.screen, (120, 50, 50), bg_rect, 1, border_radius=3)
self.screen.blit(text, text_rect)
# Draw name below
name_y = center_y + agent_size // 2 + 8
name_text = self.small_font.render(name, True, (150, 150, 150))
name_rect = name_text.get_rect(center=(center_x, name_y))
self.screen.blit(name_text, name_rect)
def draw_agent_tooltip(self, agent: dict, mouse_pos: tuple[int, int]) -> None:
"""Draw a tooltip for an agent when hovered."""
# Build tooltip text
lines = [
agent.get("name", "Unknown"),
f"Profession: {agent.get('profession', '?').capitalize()}",
f"Money: {agent.get('money', 0)} coins",
"",
]
# Current action
current_action = agent.get("current_action", {})
action_type = current_action.get("action_type", "")
if action_type:
action_msg = current_action.get("message", action_type)
lines.append(f"Action: {action_msg[:40]}")
if current_action.get("is_moving"):
lines.append(" (moving to location)")
lines.append("")
lines.append("Stats:")
stats = agent.get("stats", {})
lines.append(f" Energy: {stats.get('energy', 0)}/{stats.get('max_energy', 100)}")
lines.append(f" Hunger: {stats.get('hunger', 0)}/{stats.get('max_hunger', 100)}")
lines.append(f" Thirst: {stats.get('thirst', 0)}/{stats.get('max_thirst', 50)}")
lines.append(f" Heat: {stats.get('heat', 0)}/{stats.get('max_heat', 100)}")
inventory = agent.get("inventory", [])
if inventory:
lines.append("")
lines.append("Inventory:")
for item in inventory[:5]:
lines.append(f" {item.get('type', '?')}: {item.get('quantity', 0)}")
# Last action result
last_result = agent.get("last_action_result", "")
if last_result:
lines.append("")
lines.append(f"Last: {last_result[:35]}")
# Calculate tooltip size
line_height = 16
max_width = max(self.small_font.size(line)[0] for line in lines) + 20
height = len(lines) * line_height + 10
# Position tooltip near mouse but not off screen
x = min(mouse_pos[0] + 15, self.screen.get_width() - max_width - 5)
y = min(mouse_pos[1] + 15, self.screen.get_height() - height - 5)
# Draw background
tooltip_rect = pygame.Rect(x, y, max_width, height)
pygame.draw.rect(self.screen, (30, 30, 40), tooltip_rect)
pygame.draw.rect(self.screen, (100, 100, 120), tooltip_rect, 1)
# Draw text
for i, line in enumerate(lines):
text = self.small_font.render(line, True, (220, 220, 220))
self.screen.blit(text, (x + 10, y + 5 + i * line_height))

View File

@ -1,146 +0,0 @@
"""Map renderer for the Village Simulation."""
import pygame
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from frontend.client import SimulationState
# Color palette
class Colors:
# Background colors
DAY_BG = (180, 200, 160) # Soft green for day
NIGHT_BG = (40, 45, 60) # Dark blue for night
GRID_LINE = (120, 140, 110) # Subtle grid lines
GRID_LINE_NIGHT = (60, 65, 80)
# Terrain features (for visual variety)
GRASS_LIGHT = (160, 190, 140)
GRASS_DARK = (140, 170, 120)
WATER_SPOT = (100, 140, 180)
class MapRenderer:
"""Renders the map/terrain background."""
def __init__(
self,
screen: pygame.Surface,
map_rect: pygame.Rect,
world_width: int = 20,
world_height: int = 20,
):
self.screen = screen
self.map_rect = map_rect
self.world_width = world_width
self.world_height = world_height
self._cell_width = map_rect.width / world_width
self._cell_height = map_rect.height / world_height
# Pre-generate some terrain variation
self._terrain_cache = self._generate_terrain()
def _generate_terrain(self) -> list[list[int]]:
"""Generate simple terrain variation (0 = light, 1 = dark, 2 = water)."""
import random
terrain = []
for y in range(self.world_height):
row = []
for x in range(self.world_width):
# Simple pattern: mostly grass with occasional water spots
if random.random() < 0.05:
row.append(2) # Water spot
elif (x + y) % 3 == 0:
row.append(1) # Dark grass
else:
row.append(0) # Light grass
terrain.append(row)
return terrain
def update_dimensions(self, world_width: int, world_height: int) -> None:
"""Update world dimensions and recalculate cell sizes."""
if world_width != self.world_width or world_height != self.world_height:
self.world_width = world_width
self.world_height = world_height
self._cell_width = self.map_rect.width / world_width
self._cell_height = self.map_rect.height / world_height
self._terrain_cache = self._generate_terrain()
def grid_to_screen(self, grid_x: int, grid_y: int) -> tuple[int, int]:
"""Convert grid coordinates to screen coordinates (center of cell)."""
screen_x = self.map_rect.left + (grid_x + 0.5) * self._cell_width
screen_y = self.map_rect.top + (grid_y + 0.5) * self._cell_height
return int(screen_x), int(screen_y)
def get_cell_size(self) -> tuple[int, int]:
"""Get the size of a single cell."""
return int(self._cell_width), int(self._cell_height)
def draw(self, state: "SimulationState") -> None:
"""Draw the map background."""
is_night = state.time_of_day == "night"
# Fill background
bg_color = Colors.NIGHT_BG if is_night else Colors.DAY_BG
pygame.draw.rect(self.screen, bg_color, self.map_rect)
# Draw terrain cells
for y in range(self.world_height):
for x in range(self.world_width):
cell_rect = pygame.Rect(
self.map_rect.left + x * self._cell_width,
self.map_rect.top + y * self._cell_height,
self._cell_width + 1, # +1 to avoid gaps
self._cell_height + 1,
)
terrain_type = self._terrain_cache[y][x]
if is_night:
# Darker colors at night
if terrain_type == 2:
color = (60, 80, 110)
elif terrain_type == 1:
color = (35, 40, 55)
else:
color = (45, 50, 65)
else:
if terrain_type == 2:
color = Colors.WATER_SPOT
elif terrain_type == 1:
color = Colors.GRASS_DARK
else:
color = Colors.GRASS_LIGHT
pygame.draw.rect(self.screen, color, cell_rect)
# Draw grid lines
grid_color = Colors.GRID_LINE_NIGHT if is_night else Colors.GRID_LINE
# Vertical lines
for x in range(self.world_width + 1):
start_x = self.map_rect.left + x * self._cell_width
pygame.draw.line(
self.screen,
grid_color,
(start_x, self.map_rect.top),
(start_x, self.map_rect.bottom),
1,
)
# Horizontal lines
for y in range(self.world_height + 1):
start_y = self.map_rect.top + y * self._cell_height
pygame.draw.line(
self.screen,
grid_color,
(self.map_rect.left, start_y),
(self.map_rect.right, start_y),
1,
)
# Draw border
border_color = (80, 90, 70) if not is_night else (80, 85, 100)
pygame.draw.rect(self.screen, border_color, self.map_rect, 2)

View File

@ -1,448 +0,0 @@
"""Settings UI renderer with sliders for the Village Simulation."""
import pygame
from dataclasses import dataclass
from typing import Optional, Callable, Any
class Colors:
"""Color palette for settings UI."""
BG = (25, 28, 35)
PANEL_BG = (35, 40, 50)
PANEL_BORDER = (70, 80, 95)
TEXT_PRIMARY = (230, 230, 235)
TEXT_SECONDARY = (160, 165, 175)
TEXT_HIGHLIGHT = (100, 180, 255)
SLIDER_BG = (50, 55, 65)
SLIDER_FILL = (80, 140, 200)
SLIDER_HANDLE = (220, 220, 230)
BUTTON_BG = (60, 100, 160)
BUTTON_HOVER = (80, 120, 180)
BUTTON_TEXT = (255, 255, 255)
SUCCESS = (80, 180, 100)
WARNING = (200, 160, 80)
@dataclass
class SliderConfig:
"""Configuration for a slider widget."""
name: str
key: str # Dot-separated path like "agent_stats.max_energy"
min_val: float
max_val: float
step: float = 1.0
is_int: bool = True
description: str = ""
# Define all configurable parameters with sliders
SLIDER_CONFIGS = [
# Agent Stats Section
SliderConfig("Max Energy", "agent_stats.max_energy", 50, 200, 10, True, "Maximum energy capacity"),
SliderConfig("Max Hunger", "agent_stats.max_hunger", 50, 200, 10, True, "Maximum hunger capacity"),
SliderConfig("Max Thirst", "agent_stats.max_thirst", 25, 100, 5, True, "Maximum thirst capacity"),
SliderConfig("Max Heat", "agent_stats.max_heat", 50, 200, 10, True, "Maximum heat capacity"),
SliderConfig("Energy Decay", "agent_stats.energy_decay", 1, 10, 1, True, "Energy lost per turn"),
SliderConfig("Hunger Decay", "agent_stats.hunger_decay", 1, 10, 1, True, "Hunger lost per turn"),
SliderConfig("Thirst Decay", "agent_stats.thirst_decay", 1, 10, 1, True, "Thirst lost per turn"),
SliderConfig("Heat Decay", "agent_stats.heat_decay", 1, 10, 1, True, "Heat lost per turn"),
SliderConfig("Critical %", "agent_stats.critical_threshold", 0.1, 0.5, 0.05, False, "Threshold for survival mode"),
# World Section
SliderConfig("World Width", "world.width", 10, 50, 5, True, "World grid width"),
SliderConfig("World Height", "world.height", 10, 50, 5, True, "World grid height"),
SliderConfig("Initial Agents", "world.initial_agents", 2, 20, 1, True, "Starting agent count"),
SliderConfig("Day Steps", "world.day_steps", 5, 20, 1, True, "Steps per day"),
SliderConfig("Inventory Slots", "world.inventory_slots", 5, 20, 1, True, "Agent inventory size"),
SliderConfig("Starting Money", "world.starting_money", 50, 500, 50, True, "Initial coins per agent"),
# Actions Section
SliderConfig("Hunt Energy Cost", "actions.hunt_energy", -30, -5, 5, True, "Energy spent hunting"),
SliderConfig("Gather Energy Cost", "actions.gather_energy", -20, -1, 1, True, "Energy spent gathering"),
SliderConfig("Hunt Success %", "actions.hunt_success", 0.3, 1.0, 0.1, False, "Hunting success chance"),
SliderConfig("Sleep Restore", "actions.sleep_energy", 30, 100, 10, True, "Energy restored by sleep"),
SliderConfig("Rest Restore", "actions.rest_energy", 5, 30, 5, True, "Energy restored by rest"),
# Resources Section
SliderConfig("Meat Decay", "resources.meat_decay", 2, 20, 1, True, "Turns until meat spoils"),
SliderConfig("Berries Decay", "resources.berries_decay", 10, 50, 5, True, "Turns until berries spoil"),
SliderConfig("Meat Hunger +", "resources.meat_hunger", 10, 60, 5, True, "Hunger restored by meat"),
SliderConfig("Water Thirst +", "resources.water_thirst", 20, 60, 5, True, "Thirst restored by water"),
# Market Section
SliderConfig("Discount Turns", "market.turns_before_discount", 1, 10, 1, True, "Turns before price drop"),
SliderConfig("Discount Rate %", "market.discount_rate", 0.05, 0.30, 0.05, False, "Price reduction per period"),
# Simulation Section
SliderConfig("Auto Step (s)", "auto_step_interval", 0.2, 3.0, 0.2, False, "Seconds between auto steps"),
]
class Slider:
"""A slider widget for adjusting numeric values."""
def __init__(
self,
rect: pygame.Rect,
config: SliderConfig,
font: pygame.font.Font,
small_font: pygame.font.Font,
):
self.rect = rect
self.config = config
self.font = font
self.small_font = small_font
self.value = config.min_val
self.dragging = False
self.hovered = False
def set_value(self, value: float) -> None:
"""Set the slider value."""
self.value = max(self.config.min_val, min(self.config.max_val, value))
if self.config.is_int:
self.value = int(round(self.value))
def get_value(self) -> Any:
"""Get the current value."""
return int(self.value) if self.config.is_int else round(self.value, 2)
def handle_event(self, event: pygame.event.Event) -> bool:
"""Handle input events. Returns True if value changed."""
if event.type == pygame.MOUSEBUTTONDOWN:
if self._slider_area().collidepoint(event.pos):
self.dragging = True
return self._update_from_mouse(event.pos[0])
elif event.type == pygame.MOUSEBUTTONUP:
self.dragging = False
elif event.type == pygame.MOUSEMOTION:
self.hovered = self.rect.collidepoint(event.pos)
if self.dragging:
return self._update_from_mouse(event.pos[0])
return False
def _slider_area(self) -> pygame.Rect:
"""Get the actual slider track area."""
return pygame.Rect(
self.rect.x + 120, # Leave space for label
self.rect.y + 15,
self.rect.width - 180, # Leave space for value display
20,
)
def _update_from_mouse(self, mouse_x: int) -> bool:
"""Update value based on mouse position."""
slider_area = self._slider_area()
# Calculate position as 0-1
rel_x = mouse_x - slider_area.x
ratio = max(0, min(1, rel_x / slider_area.width))
# Calculate value
range_val = self.config.max_val - self.config.min_val
new_value = self.config.min_val + ratio * range_val
# Apply step
if self.config.step > 0:
new_value = round(new_value / self.config.step) * self.config.step
old_value = self.value
self.set_value(new_value)
return abs(old_value - self.value) > 0.001
def draw(self, screen: pygame.Surface) -> None:
"""Draw the slider."""
# Background
if self.hovered:
pygame.draw.rect(screen, (45, 50, 60), self.rect)
# Label
label = self.small_font.render(self.config.name, True, Colors.TEXT_PRIMARY)
screen.blit(label, (self.rect.x + 5, self.rect.y + 5))
# Slider track
slider_area = self._slider_area()
pygame.draw.rect(screen, Colors.SLIDER_BG, slider_area, border_radius=3)
# Slider fill
ratio = (self.value - self.config.min_val) / (self.config.max_val - self.config.min_val)
fill_width = int(ratio * slider_area.width)
fill_rect = pygame.Rect(slider_area.x, slider_area.y, fill_width, slider_area.height)
pygame.draw.rect(screen, Colors.SLIDER_FILL, fill_rect, border_radius=3)
# Handle
handle_x = slider_area.x + fill_width
handle_rect = pygame.Rect(handle_x - 4, slider_area.y - 2, 8, slider_area.height + 4)
pygame.draw.rect(screen, Colors.SLIDER_HANDLE, handle_rect, border_radius=2)
# Value display
value_str = str(self.get_value())
value_text = self.small_font.render(value_str, True, Colors.TEXT_HIGHLIGHT)
value_x = self.rect.right - 50
screen.blit(value_text, (value_x, self.rect.y + 5))
# Description on hover
if self.hovered and self.config.description:
desc = self.small_font.render(self.config.description, True, Colors.TEXT_SECONDARY)
screen.blit(desc, (self.rect.x + 5, self.rect.y + 25))
class Button:
"""A simple button widget."""
def __init__(
self,
rect: pygame.Rect,
text: str,
font: pygame.font.Font,
callback: Optional[Callable] = None,
color: tuple = Colors.BUTTON_BG,
):
self.rect = rect
self.text = text
self.font = font
self.callback = callback
self.color = color
self.hovered = False
def handle_event(self, event: pygame.event.Event) -> bool:
"""Handle input events. Returns True if clicked."""
if event.type == pygame.MOUSEMOTION:
self.hovered = self.rect.collidepoint(event.pos)
elif event.type == pygame.MOUSEBUTTONDOWN:
if self.rect.collidepoint(event.pos):
if self.callback:
self.callback()
return True
return False
def draw(self, screen: pygame.Surface) -> None:
"""Draw the button."""
color = Colors.BUTTON_HOVER if self.hovered else self.color
pygame.draw.rect(screen, color, self.rect, border_radius=5)
pygame.draw.rect(screen, Colors.PANEL_BORDER, self.rect, 1, border_radius=5)
text = self.font.render(self.text, True, Colors.BUTTON_TEXT)
text_rect = text.get_rect(center=self.rect.center)
screen.blit(text, text_rect)
class SettingsRenderer:
"""Renders the settings UI panel with sliders."""
def __init__(self, screen: pygame.Surface):
self.screen = screen
self.font = pygame.font.Font(None, 24)
self.small_font = pygame.font.Font(None, 18)
self.title_font = pygame.font.Font(None, 32)
self.visible = False
self.scroll_offset = 0
self.max_scroll = 0
# Create sliders
self.sliders: list[Slider] = []
self.buttons: list[Button] = []
self.config_data: dict = {}
self._create_widgets()
self.status_message = ""
self.status_color = Colors.TEXT_SECONDARY
def _create_widgets(self) -> None:
"""Create slider widgets."""
panel_width = 400
slider_height = 45
start_y = 80
panel_x = (self.screen.get_width() - panel_width) // 2
for i, config in enumerate(SLIDER_CONFIGS):
rect = pygame.Rect(
panel_x + 10,
start_y + i * slider_height,
panel_width - 20,
slider_height,
)
slider = Slider(rect, config, self.font, self.small_font)
self.sliders.append(slider)
# Calculate max scroll
total_height = len(SLIDER_CONFIGS) * slider_height + 150
visible_height = self.screen.get_height() - 150
self.max_scroll = max(0, total_height - visible_height)
# Create buttons at the bottom
button_y = self.screen.get_height() - 60
button_width = 100
button_height = 35
buttons_data = [
("Apply & Restart", self._apply_config, Colors.SUCCESS),
("Reset Defaults", self._reset_config, Colors.WARNING),
("Close", self.toggle, Colors.PANEL_BORDER),
]
total_button_width = len(buttons_data) * button_width + (len(buttons_data) - 1) * 10
start_x = (self.screen.get_width() - total_button_width) // 2
for i, (text, callback, color) in enumerate(buttons_data):
rect = pygame.Rect(
start_x + i * (button_width + 10),
button_y,
button_width,
button_height,
)
self.buttons.append(Button(rect, text, self.small_font, callback, color))
def toggle(self) -> None:
"""Toggle settings visibility."""
self.visible = not self.visible
if self.visible:
self.scroll_offset = 0
def set_config(self, config_data: dict) -> None:
"""Set slider values from config data."""
self.config_data = config_data
for slider in self.sliders:
value = self._get_nested_value(config_data, slider.config.key)
if value is not None:
slider.set_value(value)
def get_config(self) -> dict:
"""Get current config from slider values."""
result = {}
for slider in self.sliders:
self._set_nested_value(result, slider.config.key, slider.get_value())
return result
def _get_nested_value(self, data: dict, key: str) -> Any:
"""Get a value from nested dict using dot notation."""
parts = key.split(".")
current = data
for part in parts:
if isinstance(current, dict) and part in current:
current = current[part]
else:
return None
return current
def _set_nested_value(self, data: dict, key: str, value: Any) -> None:
"""Set a value in nested dict using dot notation."""
parts = key.split(".")
current = data
for part in parts[:-1]:
if part not in current:
current[part] = {}
current = current[part]
current[parts[-1]] = value
def _apply_config(self) -> None:
"""Apply configuration callback (to be set externally)."""
self.status_message = "Config applied - restart to see changes"
self.status_color = Colors.SUCCESS
def _reset_config(self) -> None:
"""Reset configuration callback (to be set externally)."""
self.status_message = "Config reset to defaults"
self.status_color = Colors.WARNING
def handle_event(self, event: pygame.event.Event) -> bool:
"""Handle input events. Returns True if event was consumed."""
if not self.visible:
return False
# Handle scrolling
if event.type == pygame.MOUSEWHEEL:
self.scroll_offset -= event.y * 30
self.scroll_offset = max(0, min(self.max_scroll, self.scroll_offset))
return True
# Handle sliders
for slider in self.sliders:
# Adjust slider position for scroll
original_y = slider.rect.y
slider.rect.y -= self.scroll_offset
if slider.handle_event(event):
slider.rect.y = original_y
return True
slider.rect.y = original_y
# Handle buttons
for button in self.buttons:
if button.handle_event(event):
return True
# Consume all clicks when settings are visible
if event.type == pygame.MOUSEBUTTONDOWN:
return True
return False
def draw(self) -> None:
"""Draw the settings panel."""
if not self.visible:
return
# Dim background
overlay = pygame.Surface(self.screen.get_size(), pygame.SRCALPHA)
overlay.fill((0, 0, 0, 200))
self.screen.blit(overlay, (0, 0))
# Panel background
panel_width = 420
panel_height = self.screen.get_height() - 40
panel_x = (self.screen.get_width() - panel_width) // 2
panel_rect = pygame.Rect(panel_x, 20, panel_width, panel_height)
pygame.draw.rect(self.screen, Colors.PANEL_BG, panel_rect, border_radius=10)
pygame.draw.rect(self.screen, Colors.PANEL_BORDER, panel_rect, 2, border_radius=10)
# Title
title = self.title_font.render("Simulation Settings", True, Colors.TEXT_PRIMARY)
title_rect = title.get_rect(centerx=self.screen.get_width() // 2, y=35)
self.screen.blit(title, title_rect)
# Create clipping region for scrollable area
clip_rect = pygame.Rect(panel_x, 70, panel_width, panel_height - 130)
# Draw sliders with scroll offset
for slider in self.sliders:
# Adjust position for scroll
adjusted_rect = slider.rect.copy()
adjusted_rect.y -= self.scroll_offset
# Only draw if visible
if clip_rect.colliderect(adjusted_rect):
# Temporarily move slider for drawing
original_y = slider.rect.y
slider.rect.y = adjusted_rect.y
slider.draw(self.screen)
slider.rect.y = original_y
# Draw scroll indicator
if self.max_scroll > 0:
scroll_ratio = self.scroll_offset / self.max_scroll
scroll_height = max(30, int((clip_rect.height / (clip_rect.height + self.max_scroll)) * clip_rect.height))
scroll_y = clip_rect.y + int(scroll_ratio * (clip_rect.height - scroll_height))
scroll_rect = pygame.Rect(panel_rect.right - 8, scroll_y, 4, scroll_height)
pygame.draw.rect(self.screen, Colors.SLIDER_FILL, scroll_rect, border_radius=2)
# Draw buttons
for button in self.buttons:
button.draw(self.screen)
# Status message
if self.status_message:
status = self.small_font.render(self.status_message, True, self.status_color)
status_rect = status.get_rect(centerx=self.screen.get_width() // 2, y=self.screen.get_height() - 90)
self.screen.blit(status, status_rect)
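The panel above maps each slider onto a nested `config.json`-style entry through a dot-notation key. As a quick illustration of that convention, here is a minimal, self-contained sketch of the two nested-dict helpers; the key `actions.hunt_energy` is borrowed from the optimizer configs later in this diff, and the real `SLIDER_CONFIGS` keys are not shown in this excerpt:

```python
from typing import Any

def set_nested(data: dict, key: str, value: Any) -> None:
    """Equivalent of _set_nested_value: split on '.' and create intermediate dicts."""
    parts = key.split(".")
    for part in parts[:-1]:
        data = data.setdefault(part, {})
    data[parts[-1]] = value

def get_nested(data: dict, key: str) -> Any:
    """Equivalent of _get_nested_value: walk the path, returning None for missing keys."""
    for part in key.split("."):
        if not isinstance(data, dict) or part not in data:
            return None
        data = data[part]
    return data

config: dict = {}
set_nested(config, "actions.hunt_energy", -5)
print(config)                                     # {'actions': {'hunt_energy': -5}}
print(get_nested(config, "actions.hunt_energy"))  # -5
print(get_nested(config, "actions.missing"))      # None
```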


@ -1,770 +0,0 @@
"""Real-time statistics and charts renderer for the Village Simulation.
Uses matplotlib to render charts to pygame surfaces for a seamless visualization experience.
"""
import io
from dataclasses import dataclass, field
from collections import deque
from typing import TYPE_CHECKING, Optional
import pygame
import matplotlib
matplotlib.use('Agg') # Use non-interactive backend for pygame integration
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
from matplotlib.figure import Figure
import numpy as np
if TYPE_CHECKING:
from frontend.client import SimulationState
# Color scheme - dark cyberpunk inspired
class ChartColors:
"""Color palette for charts - dark theme with neon accents."""
BG = '#1a1d26'
PANEL = '#252a38'
GRID = '#2f3545'
TEXT = '#e0e0e8'
TEXT_DIM = '#7a7e8c'
# Neon accents for data series
CYAN = '#00d4ff'
MAGENTA = '#ff0099'
LIME = '#39ff14'
ORANGE = '#ff6600'
PURPLE = '#9d4edd'
YELLOW = '#ffcc00'
TEAL = '#00ffa3'
PINK = '#ff1493'
# Series colors for different resources/categories
SERIES = [CYAN, MAGENTA, LIME, ORANGE, PURPLE, YELLOW, TEAL, PINK]
class UIColors:
"""Color palette for pygame UI elements."""
BG = (26, 29, 38)
PANEL_BG = (37, 42, 56)
PANEL_BORDER = (70, 80, 100)
TEXT_PRIMARY = (224, 224, 232)
TEXT_SECONDARY = (122, 126, 140)
TEXT_HIGHLIGHT = (0, 212, 255)
TAB_ACTIVE = (0, 212, 255)
TAB_INACTIVE = (55, 60, 75)
TAB_HOVER = (75, 85, 110)
@dataclass
class HistoryData:
"""Stores historical simulation data for charting."""
max_history: int = 200
# Time series data
turns: deque = field(default_factory=lambda: deque(maxlen=200))
population: deque = field(default_factory=lambda: deque(maxlen=200))
deaths_cumulative: deque = field(default_factory=lambda: deque(maxlen=200))
# Money/Wealth data
total_money: deque = field(default_factory=lambda: deque(maxlen=200))
avg_wealth: deque = field(default_factory=lambda: deque(maxlen=200))
gini_coefficient: deque = field(default_factory=lambda: deque(maxlen=200))
# Price history per resource
prices: dict = field(default_factory=dict) # resource -> deque of prices
# Trade statistics
trade_volume: deque = field(default_factory=lambda: deque(maxlen=200))
# Profession counts over time
professions: dict = field(default_factory=dict) # profession -> deque of counts
def clear(self) -> None:
"""Clear all history data."""
self.turns.clear()
self.population.clear()
self.deaths_cumulative.clear()
self.total_money.clear()
self.avg_wealth.clear()
self.gini_coefficient.clear()
self.prices.clear()
self.trade_volume.clear()
self.professions.clear()
def update(self, state: "SimulationState") -> None:
"""Update history with new state data."""
turn = state.turn
# Avoid duplicate entries for the same turn
if self.turns and self.turns[-1] == turn:
return
self.turns.append(turn)
# Population
living = len([a for a in state.agents if a.get("is_alive", False)])
self.population.append(living)
self.deaths_cumulative.append(state.statistics.get("total_agents_died", 0))
# Wealth data
stats = state.statistics
self.total_money.append(stats.get("total_money_in_circulation", 0))
self.avg_wealth.append(stats.get("avg_money", 0))
self.gini_coefficient.append(stats.get("gini_coefficient", 0))
# Price history from market
for resource, data in state.market_prices.items():
if resource not in self.prices:
self.prices[resource] = deque(maxlen=self.max_history)
# Track lowest price (current market rate)
lowest = data.get("lowest_price")
avg = data.get("avg_sale_price")
# Use lowest price if available, else avg sale price
price = lowest if lowest is not None else avg
self.prices[resource].append(price)
# Trade volume (from recent trades in market orders)
trades = len(state.market_orders) # Active orders as proxy
self.trade_volume.append(trades)
# Profession distribution
professions = stats.get("professions", {})
for prof, count in professions.items():
if prof not in self.professions:
self.professions[prof] = deque(maxlen=self.max_history)
self.professions[prof].append(count)
# Pad missing professions with 0
for prof in self.professions:
if prof not in professions:
self.professions[prof].append(0)
class ChartRenderer:
"""Renders matplotlib charts to pygame surfaces."""
def __init__(self, width: int, height: int):
self.width = width
self.height = height
self.dpi = 100
# Configure matplotlib style
plt.style.use('dark_background')
plt.rcParams.update({
'figure.facecolor': ChartColors.BG,
'axes.facecolor': ChartColors.PANEL,
'axes.edgecolor': ChartColors.GRID,
'axes.labelcolor': ChartColors.TEXT,
'text.color': ChartColors.TEXT,
'xtick.color': ChartColors.TEXT_DIM,
'ytick.color': ChartColors.TEXT_DIM,
'grid.color': ChartColors.GRID,
'grid.alpha': 0.3,
'legend.facecolor': ChartColors.PANEL,
'legend.edgecolor': ChartColors.GRID,
'font.size': 9,
'axes.titlesize': 11,
'axes.titleweight': 'bold',
})
def _fig_to_surface(self, fig: Figure) -> pygame.Surface:
"""Convert a matplotlib figure to a pygame surface."""
buf = io.BytesIO()
fig.savefig(buf, format='png', dpi=self.dpi,
facecolor=ChartColors.BG, edgecolor='none',
bbox_inches='tight', pad_inches=0.1)
buf.seek(0)
surface = pygame.image.load(buf, 'png')
buf.close()
plt.close(fig)
return surface
def render_price_history(self, history: HistoryData, width: int, height: int) -> pygame.Surface:
"""Render price history chart for all resources."""
fig, ax = plt.subplots(figsize=(width/self.dpi, height/self.dpi), dpi=self.dpi)
turns = list(history.turns) if history.turns else [0]
has_data = False
for i, (resource, prices) in enumerate(history.prices.items()):
if prices and any(p is not None for p in prices):
color = ChartColors.SERIES[i % len(ChartColors.SERIES)]
# Filter out None values
valid_prices = [p if p is not None else 0 for p in prices]
# Align with turns
min_len = min(len(turns), len(valid_prices))
ax.plot(list(turns)[-min_len:], valid_prices[-min_len:],
color=color, linewidth=1.5, label=resource.capitalize(), alpha=0.9)
has_data = True
ax.set_title('Market Prices', color=ChartColors.CYAN)
ax.set_xlabel('Turn')
ax.set_ylabel('Price (coins)')
ax.grid(True, alpha=0.2)
if has_data:
ax.legend(loc='upper left', fontsize=8, framealpha=0.8)
ax.set_ylim(bottom=0)
ax.yaxis.set_major_locator(mticker.MaxNLocator(integer=True))
fig.tight_layout()
return self._fig_to_surface(fig)
def render_population(self, history: HistoryData, width: int, height: int) -> pygame.Surface:
"""Render population over time chart."""
fig, ax = plt.subplots(figsize=(width/self.dpi, height/self.dpi), dpi=self.dpi)
turns = list(history.turns) if history.turns else [0]
population = list(history.population) if history.population else [0]
deaths = list(history.deaths_cumulative) if history.deaths_cumulative else [0]
min_len = min(len(turns), len(population))
# Population line
ax.fill_between(turns[-min_len:], population[-min_len:],
alpha=0.3, color=ChartColors.CYAN)
ax.plot(turns[-min_len:], population[-min_len:],
color=ChartColors.CYAN, linewidth=2, label='Living')
# Deaths line
if deaths:
ax.plot(turns[-min_len:], deaths[-min_len:],
color=ChartColors.MAGENTA, linewidth=1.5, linestyle='--',
label='Total Deaths', alpha=0.8)
ax.set_title('Population Over Time', color=ChartColors.LIME)
ax.set_xlabel('Turn')
ax.set_ylabel('Count')
ax.grid(True, alpha=0.2)
ax.legend(loc='upper right', fontsize=8)
ax.set_ylim(bottom=0)
ax.yaxis.set_major_locator(mticker.MaxNLocator(integer=True))
fig.tight_layout()
return self._fig_to_surface(fig)
def render_wealth_distribution(self, state: "SimulationState", width: int, height: int) -> pygame.Surface:
"""Render current wealth distribution as a bar chart."""
fig, ax = plt.subplots(figsize=(width/self.dpi, height/self.dpi), dpi=self.dpi)
# Get agent wealth data
agents = [a for a in state.agents if a.get("is_alive", False)]
if not agents:
ax.text(0.5, 0.5, 'No living agents', ha='center', va='center',
color=ChartColors.TEXT_DIM, fontsize=12)
ax.set_title('Wealth Distribution', color=ChartColors.ORANGE)
fig.tight_layout()
return self._fig_to_surface(fig)
# Sort by wealth
agents_sorted = sorted(agents, key=lambda a: a.get("money", 0), reverse=True)
names = [a.get("name", "?")[:8] for a in agents_sorted]
wealth = [a.get("money", 0) for a in agents_sorted]
# Create gradient colors based on wealth ranking
colors = []
for i in range(len(agents_sorted)):
ratio = i / max(1, len(agents_sorted) - 1)
# Gradient from cyan (rich) to magenta (poor)
r = int(0 + ratio * 255)
g = int(212 - ratio * 212)
b = int(255 - ratio * 102)
colors.append(f'#{r:02x}{g:02x}{b:02x}')
bars = ax.barh(range(len(agents_sorted)), wealth, color=colors, alpha=0.85)
ax.set_yticks(range(len(agents_sorted)))
ax.set_yticklabels(names, fontsize=7)
ax.invert_yaxis() # Rich at top
# Add value labels
for bar, val in zip(bars, wealth):
ax.text(bar.get_width() + 1, bar.get_y() + bar.get_height()/2,
f'{val}', va='center', fontsize=7, color=ChartColors.TEXT_DIM)
ax.set_title('Wealth Distribution', color=ChartColors.ORANGE)
ax.set_xlabel('Coins')
ax.grid(True, alpha=0.2, axis='x')
fig.tight_layout()
return self._fig_to_surface(fig)
def render_wealth_over_time(self, history: HistoryData, width: int, height: int) -> pygame.Surface:
"""Render wealth metrics over time (total money, avg, gini)."""
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(width/self.dpi, height/self.dpi),
dpi=self.dpi, height_ratios=[2, 1])
turns = list(history.turns) if history.turns else [0]
total = list(history.total_money) if history.total_money else [0]
avg = list(history.avg_wealth) if history.avg_wealth else [0]
gini = list(history.gini_coefficient) if history.gini_coefficient else [0]
min_len = min(len(turns), len(total), len(avg))
# Total and average wealth
ax1.plot(turns[-min_len:], total[-min_len:],
color=ChartColors.CYAN, linewidth=2, label='Total Money')
ax1.fill_between(turns[-min_len:], total[-min_len:],
alpha=0.2, color=ChartColors.CYAN)
ax1_twin = ax1.twinx()
ax1_twin.plot(turns[-min_len:], avg[-min_len:],
color=ChartColors.LIME, linewidth=1.5, linestyle='--', label='Avg Wealth')
ax1_twin.set_ylabel('Avg Wealth', color=ChartColors.LIME)
ax1_twin.tick_params(axis='y', labelcolor=ChartColors.LIME)
ax1.set_title('Money in Circulation', color=ChartColors.YELLOW)
ax1.set_ylabel('Total Money', color=ChartColors.CYAN)
ax1.tick_params(axis='y', labelcolor=ChartColors.CYAN)
ax1.grid(True, alpha=0.2)
ax1.set_ylim(bottom=0)
# Gini coefficient (inequality)
min_len_gini = min(len(turns), len(gini))
ax2.fill_between(turns[-min_len_gini:], gini[-min_len_gini:],
alpha=0.4, color=ChartColors.MAGENTA)
ax2.plot(turns[-min_len_gini:], gini[-min_len_gini:],
color=ChartColors.MAGENTA, linewidth=1.5)
ax2.set_xlabel('Turn')
ax2.set_ylabel('Gini')
ax2.set_title('Inequality Index', color=ChartColors.MAGENTA, fontsize=9)
ax2.set_ylim(0, 1)
ax2.grid(True, alpha=0.2)
# Add reference lines for gini
ax2.axhline(y=0.4, color=ChartColors.YELLOW, linestyle=':', alpha=0.5, linewidth=1)
ax2.text(turns[-1] if turns else 0, 0.42, 'Moderate', fontsize=7,
color=ChartColors.YELLOW, alpha=0.7)
fig.tight_layout()
return self._fig_to_surface(fig)
def render_professions(self, state: "SimulationState", history: HistoryData,
width: int, height: int) -> pygame.Surface:
"""Render profession distribution as pie chart and area chart."""
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(width/self.dpi, height/self.dpi), dpi=self.dpi)
# Current profession pie chart
professions = state.statistics.get("professions", {})
if professions:
labels = list(professions.keys())
sizes = list(professions.values())
colors = [ChartColors.SERIES[i % len(ChartColors.SERIES)] for i in range(len(labels))]
wedges, texts, autotexts = ax1.pie(
sizes, labels=labels, colors=colors, autopct='%1.0f%%',
startangle=90, pctdistance=0.75,
textprops={'fontsize': 8, 'color': ChartColors.TEXT}
)
for autotext in autotexts:
autotext.set_color(ChartColors.BG)
autotext.set_fontweight('bold')
ax1.set_title('Current Distribution', color=ChartColors.PURPLE, fontsize=10)
else:
ax1.text(0.5, 0.5, 'No data', ha='center', va='center', color=ChartColors.TEXT_DIM)
ax1.set_title('Current Distribution', color=ChartColors.PURPLE)
# Profession history as stacked area
turns = list(history.turns) if history.turns else [0]
if history.professions and turns:
profs_list = list(history.professions.keys())
data = []
for prof in profs_list:
prof_data = list(history.professions[prof])
# Pad to match turns length
while len(prof_data) < len(turns):
prof_data.insert(0, 0)
data.append(prof_data[-len(turns):])
colors = [ChartColors.SERIES[i % len(ChartColors.SERIES)] for i in range(len(profs_list))]
ax2.stackplot(turns, *data, labels=profs_list, colors=colors, alpha=0.8)
ax2.legend(loc='upper left', fontsize=7, framealpha=0.8)
ax2.set_xlabel('Turn')
ax2.set_ylabel('Count')
ax2.set_title('Over Time', color=ChartColors.PURPLE, fontsize=10)
ax2.grid(True, alpha=0.2)
fig.tight_layout()
return self._fig_to_surface(fig)
def render_market_activity(self, state: "SimulationState", history: HistoryData,
width: int, height: int) -> pygame.Surface:
"""Render market activity - orders by resource, supply/demand."""
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(width/self.dpi, height/self.dpi), dpi=self.dpi)
# Current market orders by resource type
prices = state.market_prices
resources = []
quantities = []
colors = []
for i, (resource, data) in enumerate(prices.items()):
qty = data.get("total_available", 0)
if qty > 0:
resources.append(resource.capitalize())
quantities.append(qty)
colors.append(ChartColors.SERIES[i % len(ChartColors.SERIES)])
if resources:
bars = ax1.bar(resources, quantities, color=colors, alpha=0.85)
ax1.set_ylabel('Available')
for bar, val in zip(bars, quantities):
ax1.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.3,
str(val), ha='center', fontsize=8, color=ChartColors.TEXT)
else:
ax1.text(0.5, 0.5, 'No orders', ha='center', va='center', color=ChartColors.TEXT_DIM)
ax1.set_title('Market Supply', color=ChartColors.TEAL, fontsize=10)
ax1.tick_params(axis='x', rotation=45, labelsize=7)
ax1.grid(True, alpha=0.2, axis='y')
# Supply/Demand scores
resources_sd = []
supply_scores = []
demand_scores = []
for resource, data in prices.items():
resources_sd.append(resource[:6])
supply_scores.append(data.get("supply_score", 0.5))
demand_scores.append(data.get("demand_score", 0.5))
if resources_sd:
x = np.arange(len(resources_sd))
width_bar = 0.35
ax2.bar(x - width_bar/2, supply_scores, width_bar, label='Supply',
color=ChartColors.CYAN, alpha=0.8)
ax2.bar(x + width_bar/2, demand_scores, width_bar, label='Demand',
color=ChartColors.MAGENTA, alpha=0.8)
ax2.set_xticks(x)
ax2.set_xticklabels(resources_sd, fontsize=7, rotation=45)
ax2.set_ylabel('Score')
ax2.legend(fontsize=7)
ax2.set_ylim(0, 1.2)
ax2.set_title('Supply/Demand', color=ChartColors.TEAL, fontsize=10)
ax2.grid(True, alpha=0.2, axis='y')
fig.tight_layout()
return self._fig_to_surface(fig)
def render_agent_stats(self, state: "SimulationState", width: int, height: int) -> pygame.Surface:
"""Render aggregate agent statistics - energy, hunger, thirst distributions."""
fig, axes = plt.subplots(2, 2, figsize=(width/self.dpi, height/self.dpi), dpi=self.dpi)
agents = [a for a in state.agents if a.get("is_alive", False)]
if not agents:
for ax in axes.flat:
ax.text(0.5, 0.5, 'No agents', ha='center', va='center', color=ChartColors.TEXT_DIM)
fig.suptitle('Agent Statistics', color=ChartColors.CYAN)
fig.tight_layout()
return self._fig_to_surface(fig)
# Extract stats
energies = [a.get("stats", {}).get("energy", 0) for a in agents]
hungers = [a.get("stats", {}).get("hunger", 0) for a in agents]
thirsts = [a.get("stats", {}).get("thirst", 0) for a in agents]
heats = [a.get("stats", {}).get("heat", 0) for a in agents]
max_energy = agents[0].get("stats", {}).get("max_energy", 100)
max_hunger = agents[0].get("stats", {}).get("max_hunger", 100)
max_thirst = agents[0].get("stats", {}).get("max_thirst", 100)
max_heat = agents[0].get("stats", {}).get("max_heat", 100)
stats_data = [
(energies, max_energy, 'Energy', ChartColors.LIME),
(hungers, max_hunger, 'Hunger', ChartColors.ORANGE),
(thirsts, max_thirst, 'Thirst', ChartColors.CYAN),
(heats, max_heat, 'Heat', ChartColors.MAGENTA),
]
for ax, (values, max_val, name, color) in zip(axes.flat, stats_data):
# Histogram
bins = np.linspace(0, max_val, 11)
ax.hist(values, bins=bins, color=color, alpha=0.7, edgecolor=ChartColors.PANEL)
# Mean line
mean_val = np.mean(values)
ax.axvline(x=mean_val, color=ChartColors.TEXT, linestyle='--',
linewidth=1.5, label=f'Avg: {mean_val:.0f}')
# Critical threshold
critical = max_val * 0.25
ax.axvline(x=critical, color=ChartColors.MAGENTA, linestyle=':',
linewidth=1, alpha=0.7)
ax.set_title(name, color=color, fontsize=9)
ax.set_xlim(0, max_val)
ax.legend(fontsize=7, loc='upper right')
ax.grid(True, alpha=0.2)
fig.suptitle('Agent Statistics Distribution', color=ChartColors.CYAN, fontsize=11)
fig.tight_layout()
return self._fig_to_surface(fig)
class StatsRenderer:
"""Main statistics panel with tabs and charts."""
TABS = [
("Prices", "price_history"),
("Wealth", "wealth"),
("Population", "population"),
("Professions", "professions"),
("Market", "market"),
("Agent Stats", "agent_stats"),
]
def __init__(self, screen: pygame.Surface):
self.screen = screen
self.visible = False
self.font = pygame.font.Font(None, 24)
self.small_font = pygame.font.Font(None, 18)
self.title_font = pygame.font.Font(None, 32)
self.current_tab = 0
self.tab_hovered = -1
# History data
self.history = HistoryData()
# Chart renderer
self.chart_renderer: Optional[ChartRenderer] = None
# Cached chart surfaces
self._chart_cache: dict[str, pygame.Surface] = {}
self._cache_turn: int = -1
# Layout
self._calculate_layout()
def _calculate_layout(self) -> None:
"""Calculate panel layout based on screen size."""
screen_w, screen_h = self.screen.get_size()
# Panel takes most of the screen with some margin
margin = 30
self.panel_rect = pygame.Rect(
margin, margin,
screen_w - margin * 2,
screen_h - margin * 2
)
# Tab bar
self.tab_height = 40
self.tab_rect = pygame.Rect(
self.panel_rect.x,
self.panel_rect.y,
self.panel_rect.width,
self.tab_height
)
# Chart area
self.chart_rect = pygame.Rect(
self.panel_rect.x + 10,
self.panel_rect.y + self.tab_height + 10,
self.panel_rect.width - 20,
self.panel_rect.height - self.tab_height - 20
)
# Initialize chart renderer with chart area size
self.chart_renderer = ChartRenderer(
self.chart_rect.width,
self.chart_rect.height
)
# Calculate tab widths
self.tab_width = self.panel_rect.width // len(self.TABS)
def toggle(self) -> None:
"""Toggle visibility of the stats panel."""
self.visible = not self.visible
if self.visible:
self._invalidate_cache()
def update_history(self, state: "SimulationState") -> None:
"""Update history data with new state."""
if state:
self.history.update(state)
def clear_history(self) -> None:
"""Clear all history data (e.g., on simulation reset)."""
self.history.clear()
self._invalidate_cache()
def _invalidate_cache(self) -> None:
"""Invalidate chart cache to force re-render."""
self._chart_cache.clear()
self._cache_turn = -1
def handle_event(self, event: pygame.event.Event) -> bool:
"""Handle input events. Returns True if event was consumed."""
if not self.visible:
return False
if event.type == pygame.MOUSEMOTION:
self._handle_mouse_motion(event.pos)
return True
elif event.type == pygame.MOUSEBUTTONDOWN:
if self._handle_click(event.pos):
return True
# Consume clicks when visible
return True
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_ESCAPE:
self.toggle()
return True
elif event.key == pygame.K_LEFT:
self.current_tab = (self.current_tab - 1) % len(self.TABS)
self._invalidate_cache()
return True
elif event.key == pygame.K_RIGHT:
self.current_tab = (self.current_tab + 1) % len(self.TABS)
self._invalidate_cache()
return True
return False
def _handle_mouse_motion(self, pos: tuple[int, int]) -> None:
"""Handle mouse motion for tab hover effects."""
self.tab_hovered = -1
if self.tab_rect.collidepoint(pos):
rel_x = pos[0] - self.tab_rect.x
tab_idx = rel_x // self.tab_width
if 0 <= tab_idx < len(self.TABS):
self.tab_hovered = tab_idx
def _handle_click(self, pos: tuple[int, int]) -> bool:
"""Handle mouse click. Returns True if click was on a tab."""
if self.tab_rect.collidepoint(pos):
rel_x = pos[0] - self.tab_rect.x
tab_idx = rel_x // self.tab_width
if 0 <= tab_idx < len(self.TABS) and tab_idx != self.current_tab:
self.current_tab = tab_idx
self._invalidate_cache()
return True
return False
def _render_chart(self, state: "SimulationState") -> pygame.Surface:
"""Render the current tab's chart."""
tab_name, tab_key = self.TABS[self.current_tab]
# Check cache
current_turn = state.turn if state else 0
if tab_key in self._chart_cache and self._cache_turn == current_turn:
return self._chart_cache[tab_key]
# Render chart based on current tab
width = self.chart_rect.width
height = self.chart_rect.height
if tab_key == "price_history":
surface = self.chart_renderer.render_price_history(self.history, width, height)
elif tab_key == "wealth":
# Split into two charts
half_height = height // 2
dist_surface = self.chart_renderer.render_wealth_distribution(state, width, half_height)
time_surface = self.chart_renderer.render_wealth_over_time(self.history, width, half_height)
surface = pygame.Surface((width, height))
surface.fill(UIColors.BG)
surface.blit(dist_surface, (0, 0))
surface.blit(time_surface, (0, half_height))
elif tab_key == "population":
surface = self.chart_renderer.render_population(self.history, width, height)
elif tab_key == "professions":
surface = self.chart_renderer.render_professions(state, self.history, width, height)
elif tab_key == "market":
surface = self.chart_renderer.render_market_activity(state, self.history, width, height)
elif tab_key == "agent_stats":
surface = self.chart_renderer.render_agent_stats(state, width, height)
else:
# Fallback empty surface
surface = pygame.Surface((width, height))
surface.fill(UIColors.BG)
# Cache the result
self._chart_cache[tab_key] = surface
self._cache_turn = current_turn
return surface
def draw(self, state: "SimulationState") -> None:
"""Draw the statistics panel."""
if not self.visible:
return
# Dim background
overlay = pygame.Surface(self.screen.get_size(), pygame.SRCALPHA)
overlay.fill((0, 0, 0, 220))
self.screen.blit(overlay, (0, 0))
# Panel background
pygame.draw.rect(self.screen, UIColors.PANEL_BG, self.panel_rect, border_radius=12)
pygame.draw.rect(self.screen, UIColors.PANEL_BORDER, self.panel_rect, 2, border_radius=12)
# Draw tabs
self._draw_tabs()
# Draw chart
if state:
chart_surface = self._render_chart(state)
self.screen.blit(chart_surface, self.chart_rect.topleft)
# Draw close hint
hint = self.small_font.render("Press G or ESC to close | ← → to switch tabs",
True, UIColors.TEXT_SECONDARY)
hint_rect = hint.get_rect(centerx=self.panel_rect.centerx,
y=self.panel_rect.bottom - 25)
self.screen.blit(hint, hint_rect)
def _draw_tabs(self) -> None:
"""Draw the tab bar."""
for i, (tab_name, _) in enumerate(self.TABS):
tab_x = self.tab_rect.x + i * self.tab_width
tab_rect = pygame.Rect(tab_x, self.tab_rect.y, self.tab_width, self.tab_height)
# Tab background
if i == self.current_tab:
color = UIColors.TAB_ACTIVE
elif i == self.tab_hovered:
color = UIColors.TAB_HOVER
else:
color = UIColors.TAB_INACTIVE
# Draw tab with rounded top corners
tab_surface = pygame.Surface((self.tab_width, self.tab_height), pygame.SRCALPHA)
pygame.draw.rect(tab_surface, color, (0, 0, self.tab_width, self.tab_height),
border_top_left_radius=8, border_top_right_radius=8)
if i == self.current_tab:
# Active tab - solid color
tab_surface.set_alpha(255)
else:
tab_surface.set_alpha(180)
self.screen.blit(tab_surface, (tab_x, self.tab_rect.y))
# Tab text
text_color = UIColors.BG if i == self.current_tab else UIColors.TEXT_PRIMARY
text = self.small_font.render(tab_name, True, text_color)
text_rect = text.get_rect(center=tab_rect.center)
self.screen.blit(text, text_rect)
# Tab border
if i != self.current_tab:
pygame.draw.line(self.screen, UIColors.PANEL_BORDER,
(tab_x + self.tab_width - 1, self.tab_rect.y + 5),
(tab_x + self.tab_width - 1, self.tab_rect.y + self.tab_height - 5))
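The Wealth tab above charts a `gini_coefficient` that arrives pre-computed in `state.statistics`; the backend's implementation is not part of this diff. For reference, a minimal sketch of the standard Gini formula over agent coin balances (the `wealth` list is made up for illustration):

```python
def gini(values: list[float]) -> float:
    """Gini coefficient: 0 = perfectly equal, 1 = maximally unequal."""
    total = sum(values)
    if not values or total == 0:
        return 0.0
    ranked = sorted(values)
    n = len(ranked)
    # Closed-form expression using the rank-weighted sum of the sorted values.
    weighted = sum((i + 1) * v for i, v in enumerate(ranked))
    return (2 * weighted) / (n * total) - (n + 1) / n

wealth = [120, 40, 35, 5]     # hypothetical coin balances of living agents
print(f"{gini(wealth):.2f}")  # 0.44
```

Values around 0.4 correspond to the "Moderate" reference line drawn in the inequality sub-chart.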


@ -1,239 +0,0 @@
"""UI renderer for the Village Simulation."""
import pygame
from typing import TYPE_CHECKING, Optional
if TYPE_CHECKING:
from frontend.client import SimulationState
class Colors:
# UI colors
PANEL_BG = (35, 40, 50)
PANEL_BORDER = (70, 80, 95)
TEXT_PRIMARY = (230, 230, 235)
TEXT_SECONDARY = (160, 165, 175)
TEXT_HIGHLIGHT = (100, 180, 255)
TEXT_WARNING = (255, 180, 80)
TEXT_DANGER = (255, 100, 100)
# Day/Night indicator
DAY_COLOR = (255, 220, 100)
NIGHT_COLOR = (100, 120, 180)
class UIRenderer:
"""Renders UI elements (HUD, panels, text info)."""
def __init__(self, screen: pygame.Surface, font: pygame.font.Font):
self.screen = screen
self.font = font
self.small_font = pygame.font.Font(None, 20)
self.title_font = pygame.font.Font(None, 28)
# Panel dimensions
self.top_panel_height = 50
self.right_panel_width = 200
def _draw_panel(self, rect: pygame.Rect, title: Optional[str] = None) -> None:
"""Draw a panel background."""
pygame.draw.rect(self.screen, Colors.PANEL_BG, rect)
pygame.draw.rect(self.screen, Colors.PANEL_BORDER, rect, 1)
if title:
title_text = self.small_font.render(title, True, Colors.TEXT_SECONDARY)
self.screen.blit(title_text, (rect.x + 8, rect.y + 4))
def draw_top_bar(self, state: "SimulationState") -> None:
"""Draw the top information bar."""
rect = pygame.Rect(0, 0, self.screen.get_width(), self.top_panel_height)
pygame.draw.rect(self.screen, Colors.PANEL_BG, rect)
pygame.draw.line(
self.screen,
Colors.PANEL_BORDER,
(0, self.top_panel_height),
(self.screen.get_width(), self.top_panel_height),
)
# Day/Night and Turn info
is_night = state.time_of_day == "night"
time_color = Colors.NIGHT_COLOR if is_night else Colors.DAY_COLOR
time_text = "NIGHT" if is_night else "DAY"
# Draw time indicator circle
pygame.draw.circle(self.screen, time_color, (25, 25), 12)
pygame.draw.circle(self.screen, Colors.PANEL_BORDER, (25, 25), 12, 1)
# Time/day text
info_text = f"{time_text} | Day {state.day}, Step {state.step_in_day} | Turn {state.turn}"
text = self.font.render(info_text, True, Colors.TEXT_PRIMARY)
self.screen.blit(text, (50, 15))
# Mode indicator
mode_color = Colors.TEXT_HIGHLIGHT if state.mode == "auto" else Colors.TEXT_SECONDARY
mode_text = f"Mode: {state.mode.upper()}"
text = self.small_font.render(mode_text, True, mode_color)
self.screen.blit(text, (self.screen.get_width() - 120, 8))
# Running indicator
if state.is_running:
status_text = "RUNNING"
status_color = (100, 200, 100)
else:
status_text = "STOPPED"
status_color = Colors.TEXT_DANGER
text = self.small_font.render(status_text, True, status_color)
self.screen.blit(text, (self.screen.get_width() - 120, 28))
def draw_right_panel(self, state: "SimulationState") -> None:
"""Draw the right information panel."""
panel_x = self.screen.get_width() - self.right_panel_width
rect = pygame.Rect(
panel_x,
self.top_panel_height,
self.right_panel_width,
self.screen.get_height() - self.top_panel_height,
)
pygame.draw.rect(self.screen, Colors.PANEL_BG, rect)
pygame.draw.line(
self.screen,
Colors.PANEL_BORDER,
(panel_x, self.top_panel_height),
(panel_x, self.screen.get_height()),
)
y = self.top_panel_height + 10
# Statistics section
y = self._draw_statistics_section(state, panel_x + 10, y)
# Market section
y = self._draw_market_section(state, panel_x + 10, y + 20)
# Controls help section
self._draw_controls_help(panel_x + 10, self.screen.get_height() - 100)
def _draw_statistics_section(self, state: "SimulationState", x: int, y: int) -> int:
"""Draw the statistics section."""
# Title
title = self.title_font.render("Statistics", True, Colors.TEXT_PRIMARY)
self.screen.blit(title, (x, y))
y += 30
stats = state.statistics
living = len(state.get_living_agents())
# Population
pop_color = Colors.TEXT_PRIMARY if living > 2 else Colors.TEXT_DANGER
text = self.small_font.render(f"Population: {living}", True, pop_color)
self.screen.blit(text, (x, y))
y += 18
# Deaths
deaths = stats.get("total_agents_died", 0)
if deaths > 0:
text = self.small_font.render(f"Deaths: {deaths}", True, Colors.TEXT_WARNING)
self.screen.blit(text, (x, y))
y += 18
# Total money
total_money = stats.get("total_money_in_circulation", 0)
text = self.small_font.render(f"Total Coins: {total_money}", True, Colors.TEXT_SECONDARY)
self.screen.blit(text, (x, y))
y += 18
# Professions
professions = stats.get("professions", {})
if professions:
y += 5
text = self.small_font.render("Professions:", True, Colors.TEXT_SECONDARY)
self.screen.blit(text, (x, y))
y += 16
for prof, count in professions.items():
text = self.small_font.render(f" {prof}: {count}", True, Colors.TEXT_SECONDARY)
self.screen.blit(text, (x, y))
y += 14
return y
def _draw_market_section(self, state: "SimulationState", x: int, y: int) -> int:
"""Draw the market section."""
# Title
title = self.title_font.render("Market", True, Colors.TEXT_PRIMARY)
self.screen.blit(title, (x, y))
y += 30
# Order count
order_count = len(state.market_orders)
text = self.small_font.render(f"Active Orders: {order_count}", True, Colors.TEXT_SECONDARY)
self.screen.blit(text, (x, y))
y += 20
# Price summary for each resource with available stock
prices = state.market_prices
for resource, data in prices.items():
if data.get("total_available", 0) > 0:
price = data.get("lowest_price", "?")
qty = data.get("total_available", 0)
text = self.small_font.render(
f"{resource}: {qty}x @ {price}c",
True,
Colors.TEXT_SECONDARY,
)
self.screen.blit(text, (x, y))
y += 16
return y
def _draw_controls_help(self, x: int, y: int) -> None:
"""Draw controls help at bottom of panel."""
pygame.draw.line(
self.screen,
Colors.PANEL_BORDER,
(x - 5, y - 10),
(self.screen.get_width() - 5, y - 10),
)
title = self.small_font.render("Controls", True, Colors.TEXT_PRIMARY)
self.screen.blit(title, (x, y))
y += 20
controls = [
"SPACE - Next Turn",
"R - Reset Simulation",
"M - Toggle Mode",
"S - Settings",
"ESC - Quit",
]
for control in controls:
text = self.small_font.render(control, True, Colors.TEXT_SECONDARY)
self.screen.blit(text, (x, y))
y += 16
def draw_connection_status(self, connected: bool) -> None:
"""Draw connection status overlay when disconnected."""
if connected:
return
# Semi-transparent overlay
overlay = pygame.Surface(self.screen.get_size(), pygame.SRCALPHA)
overlay.fill((0, 0, 0, 180))
self.screen.blit(overlay, (0, 0))
# Connection message
text = self.title_font.render("Connecting to server...", True, Colors.TEXT_WARNING)
text_rect = text.get_rect(center=(self.screen.get_width() // 2, self.screen.get_height() // 2))
self.screen.blit(text, text_rect)
hint = self.small_font.render("Make sure the backend is running on localhost:8000", True, Colors.TEXT_SECONDARY)
hint_rect = hint.get_rect(center=(self.screen.get_width() // 2, self.screen.get_height() // 2 + 30))
self.screen.blit(hint, hint_rect)
def draw(self, state: "SimulationState") -> None:
"""Draw all UI elements."""
self.draw_top_bar(state)
self.draw_right_panel(state)


@ -5,8 +5,7 @@ fastapi>=0.104.0
 uvicorn[standard]>=0.24.0
 pydantic>=2.5.0
-# Frontend
-pygame-ce>=2.4.0
+# HTTP client (for web frontend communication)
 requests>=2.31.0
 # Tools (balance sheet export/import)

tools/optimize_goap.py (new file, 496 lines)

@ -0,0 +1,496 @@
#!/usr/bin/env python3
"""
GOAP Economy Optimizer for Village Simulation
This script optimizes the simulation parameters specifically for the GOAP AI system.
The goal is to achieve:
- Balanced action diversity (hunting, gathering, trading)
- Active economy with trading
- Good survival rates
- Meat production through hunting
Key insight: GOAP uses action COSTS to choose actions. Lower cost = preferred.
We need to tune:
1. Action energy costs (config.json)
2. GOAP action cost functions (goap/actions.py)
3. Goal priorities (goap/goals.py)
Usage:
python tools/optimize_goap.py [--iterations 15] [--steps 300]
python tools/optimize_goap.py --analyze # Analyze current GOAP behavior
"""
import argparse
import json
import random
import sys
from collections import defaultdict
from datetime import datetime
from pathlib import Path
# Add parent directory for imports
sys.path.insert(0, str(Path(__file__).parent.parent))
from backend.config import get_config, reload_config
from backend.core.engine import GameEngine
from backend.domain.action import reset_action_config_cache
from backend.domain.resources import reset_resource_cache
def analyze_goap_behavior(num_steps: int = 100, num_agents: int = 10):
"""Analyze current GOAP behavior in detail."""
print("\n" + "=" * 70)
print("🔍 GOAP BEHAVIOR ANALYSIS")
print("=" * 70)
# Reset engine
GameEngine._instance = None
engine = GameEngine()
engine.initialize(num_agents=num_agents)
# Track statistics
action_counts = defaultdict(int)
goal_counts = defaultdict(int)
reactive_count = 0
planned_count = 0
# Resource tracking
resources_produced = defaultdict(int)
resources_consumed = defaultdict(int)
# Run simulation
for step in range(num_steps):
if not engine.is_running:
print(f" Simulation ended at step {step}")
break
log = engine.next_step()
for action_data in log.agent_actions:
decision = action_data.get("decision", {})
result = action_data.get("result", {})
action_type = decision.get("action", "unknown")
action_counts[action_type] += 1
# Track goal/reactive
goal_name = decision.get("goal_name", "")
reason = decision.get("reason", "")
if goal_name:
goal_counts[goal_name] += 1
planned_count += 1
elif "Reactive" in reason:
goal_counts["(reactive)"] += 1
reactive_count += 1
# Track resources
if result and result.get("success"):
for res in result.get("resources_gained", []):
resources_produced[res.get("type", "unknown")] += res.get("quantity", 0)
for res in result.get("resources_consumed", []):
resources_consumed[res.get("type", "unknown")] += res.get("quantity", 0)
# Print results
total_actions = sum(action_counts.values())
print(f"\n📊 Action Distribution ({num_steps} turns, {num_agents} agents)")
print("-" * 50)
for action, count in sorted(action_counts.items(), key=lambda x: -x[1]):
pct = count * 100 / total_actions if total_actions > 0 else 0
bar = "" * int(pct / 2)
print(f" {action:12} {count:4} ({pct:5.1f}%) {bar}")
print(f"\n🎯 Goal Distribution")
print("-" * 50)
total_goals = sum(goal_counts.values())
for goal, count in sorted(goal_counts.items(), key=lambda x: -x[1])[:15]:
pct = count * 100 / total_goals if total_goals > 0 else 0
print(f" {goal:20} {count:4} ({pct:5.1f}%)")
print(f"\n Planned actions: {planned_count} ({planned_count*100/total_actions:.1f}%)")
print(f" Reactive actions: {reactive_count} ({reactive_count*100/total_actions:.1f}%)")
print(f"\n📦 Resources Produced")
print("-" * 50)
for res, qty in sorted(resources_produced.items(), key=lambda x: -x[1]):
print(f" {res:12} {qty:4}")
print(f"\n🔥 Resources Consumed")
print("-" * 50)
for res, qty in sorted(resources_consumed.items(), key=lambda x: -x[1]):
print(f" {res:12} {qty:4}")
# Diagnose issues
print(f"\n⚠️ ISSUES DETECTED:")
print("-" * 50)
hunt_pct = action_counts.get("hunt", 0) * 100 / total_actions if total_actions > 0 else 0
gather_pct = action_counts.get("gather", 0) * 100 / total_actions if total_actions > 0 else 0
if hunt_pct < 5:
print(" ❌ Almost no hunting! Hunt action cost too high or meat not valued enough.")
print(" → Reduce hunt energy cost or increase meat benefits")
if resources_produced.get("meat", 0) == 0:
print(" ❌ No meat produced! Agents never hunt successfully.")
trade_pct = action_counts.get("trade", 0) * 100 / total_actions if total_actions > 0 else 0
if trade_pct < 5:
print(" ❌ Low trading activity. Market goals not prioritized.")
if reactive_count > planned_count:
print(" ⚠️ More reactive than planned actions. Goals may be too easily satisfied.")
return {
"action_counts": dict(action_counts),
"goal_counts": dict(goal_counts),
"resources_produced": dict(resources_produced),
"resources_consumed": dict(resources_consumed),
}
def test_config(config_overrides: dict, num_steps: int = 200, num_agents: int = 10, verbose: bool = True):
"""Test a configuration and return metrics."""
# Save original config
config_path = Path("config.json")
with open(config_path) as f:
original_config = json.load(f)
# Apply overrides
test_config = json.loads(json.dumps(original_config))
for section, values in config_overrides.items():
if section in test_config:
test_config[section].update(values)
else:
test_config[section] = values
# Save temp config
temp_path = Path("config_temp.json")
with open(temp_path, 'w') as f:
json.dump(test_config, f, indent=2)
# Reload config
reload_config(str(temp_path))
reset_action_config_cache()
reset_resource_cache()
# Run simulation
GameEngine._instance = None
engine = GameEngine()
engine.initialize(num_agents=num_agents)
action_counts = defaultdict(int)
resources_produced = defaultdict(int)
deaths = 0
trades_completed = 0
for step in range(num_steps):
if not engine.is_running:
break
log = engine.next_step()
deaths += len(log.deaths)
for action_data in log.agent_actions:
decision = action_data.get("decision", {})
result = action_data.get("result", {})
action_type = decision.get("action", "unknown")
action_counts[action_type] += 1
if result and result.get("success"):
for res in result.get("resources_gained", []):
resources_produced[res.get("type", "unknown")] += res.get("quantity", 0)
if action_type == "trade" and "Bought" in result.get("message", ""):
trades_completed += 1
final_pop = len(engine.world.get_living_agents())
# Cleanup
engine.logger.close()
temp_path.unlink(missing_ok=True)
# Restore original config
reload_config(str(config_path))
reset_action_config_cache()
reset_resource_cache()
# Calculate score
total_actions = sum(action_counts.values())
hunt_ratio = action_counts.get("hunt", 0) / total_actions if total_actions > 0 else 0
gather_ratio = action_counts.get("gather", 0) / total_actions if total_actions > 0 else 0
trade_ratio = action_counts.get("trade", 0) / total_actions if total_actions > 0 else 0
survival_rate = final_pop / num_agents
# Score components
# 1. Hunt ratio: want 10-25%
hunt_score = min(25, hunt_ratio * 100) if hunt_ratio > 0.05 else 0
# 2. Trade activity: want 5-15%
trade_score = min(20, trade_ratio * 100 * 2)
# 3. Resource diversity
has_meat = resources_produced.get("meat", 0) > 0
has_berries = resources_produced.get("berries", 0) > 0
has_wood = resources_produced.get("wood", 0) > 0
has_water = resources_produced.get("water", 0) > 0
diversity_score = (int(has_meat) + int(has_berries) + int(has_wood) + int(has_water)) * 5
# 4. Survival
survival_score = survival_rate * 30
# 5. Meat production bonus
meat_score = min(15, resources_produced.get("meat", 0) / 5)
total_score = hunt_score + trade_score + diversity_score + survival_score + meat_score
if verbose:
print(f"\n Score: {total_score:.1f}/100")
print(f" ├─ Hunt: {hunt_ratio*100:.1f}% ({hunt_score:.1f} pts)")
print(f" ├─ Trade: {trade_ratio*100:.1f}% ({trade_score:.1f} pts)")
print(f" ├─ Diversity: {diversity_score:.1f} pts")
print(f" ├─ Survival: {survival_rate*100:.0f}% ({survival_score:.1f} pts)")
print(f" └─ Meat produced: {resources_produced.get('meat', 0)} ({meat_score:.1f} pts)")
print(f" Actions: hunt={action_counts.get('hunt',0)}, gather={action_counts.get('gather',0)}, trade={action_counts.get('trade',0)}")
return {
"score": total_score,
"action_counts": dict(action_counts),
"resources": dict(resources_produced),
"survival_rate": survival_rate,
"deaths": deaths,
}
def optimize_for_goap(iterations: int = 15, steps: int = 300):
"""Run optimization focused on GOAP-specific parameters."""
print("\n" + "=" * 70)
print("🧬 GOAP ECONOMY OPTIMIZER")
print("=" * 70)
print(f" Iterations: {iterations}")
print(f" Steps per test: {steps}")
print("=" * 70)
# Key parameters to optimize for GOAP
# Focus on making hunting more attractive
configs_to_test = [
# Baseline
{
"name": "Baseline (current)",
"config": {}
},
# Cheaper hunting
{
"name": "Cheaper Hunt (-5 energy)",
"config": {
"actions": {
"hunt_energy": -5,
"hunt_success": 0.8,
}
}
},
# More valuable meat
{
"name": "Valuable Meat (+45 hunger)",
"config": {
"resources": {
"meat_hunger": 45,
"meat_energy": 15,
},
"actions": {
"hunt_energy": -6,
"hunt_success": 0.8,
}
}
},
# Make berries less attractive
{
"name": "Nerfed Berries",
"config": {
"resources": {
"meat_hunger": 45,
"meat_energy": 15,
"berries_hunger": 8,
"berries_thirst": 2,
},
"actions": {
"hunt_energy": -5,
"gather_energy": -4,
"hunt_success": 0.85,
"hunt_meat_min": 2,
"hunt_meat_max": 4,
}
}
},
# Higher hunt output
{
"name": "High Hunt Output",
"config": {
"resources": {
"meat_hunger": 40,
"meat_energy": 12,
},
"actions": {
"hunt_energy": -6,
"hunt_success": 0.85,
"hunt_meat_min": 3,
"hunt_meat_max": 6,
"hunt_hide_min": 1,
"hunt_hide_max": 2,
}
}
},
# Balanced economy
{
"name": "Balanced Economy",
"config": {
"resources": {
"meat_hunger": 40,
"meat_energy": 15,
"berries_hunger": 8,
},
"actions": {
"hunt_energy": -5,
"gather_energy": -4,
"hunt_success": 0.8,
"hunt_meat_min": 2,
"hunt_meat_max": 5,
},
"economy": {
"buy_efficiency_threshold": 0.9,
"min_wealth_target": 40,
}
}
},
# Pro-hunting config
{
"name": "Pro-Hunting",
"config": {
"agent_stats": {
"hunger_decay": 3, # Higher hunger decay = need more food
},
"resources": {
"meat_hunger": 50, # Meat is very filling
"meat_energy": 15,
"berries_hunger": 6, # Berries less filling
},
"actions": {
"hunt_energy": -4, # Very cheap to hunt
"gather_energy": -4,
"hunt_success": 0.85,
"hunt_meat_min": 3,
"hunt_meat_max": 5,
}
}
},
# Full rebalance
{
"name": "Full Rebalance",
"config": {
"agent_stats": {
"start_hunger": 70,
"hunger_decay": 3,
"thirst_decay": 3,
},
"resources": {
"meat_hunger": 50,
"meat_energy": 15,
"berries_hunger": 8,
"berries_thirst": 3,
"water_thirst": 45,
},
"actions": {
"hunt_energy": -5,
"gather_energy": -4,
"chop_wood_energy": -5,
"get_water_energy": -3,
"hunt_success": 0.8,
"hunt_meat_min": 2,
"hunt_meat_max": 5,
"hunt_hide_min": 0,
"hunt_hide_max": 1,
"gather_min": 2,
"gather_max": 3,
}
}
},
]
best_config = None
best_score = 0
best_name = ""
for cfg in configs_to_test:
print(f"\n🧪 Testing: {cfg['name']}")
print("-" * 50)
result = test_config(cfg["config"], steps, verbose=True)
if result["score"] > best_score:
best_score = result["score"]
best_config = cfg["config"]
best_name = cfg["name"]
print(f" ⭐ New best!")
print("\n" + "=" * 70)
print("🏆 OPTIMIZATION COMPLETE")
print("=" * 70)
print(f"\n Best Config: {best_name}")
print(f" Best Score: {best_score:.1f}/100")
if best_config:
print("\n 📝 Configuration to apply:")
print("-" * 50)
print(json.dumps(best_config, indent=2))
print("\n Saving optimized configuration for review...")
# Save as optimized config
output_path = Path("config_goap_optimized.json")
with open("config.json") as f:
full_config = json.load(f)
for section, values in best_config.items():
if section in full_config:
full_config[section].update(values)
else:
full_config[section] = values
with open(output_path, 'w') as f:
json.dump(full_config, f, indent=2)
print(f"\n ✅ Saved to: {output_path}")
print(" To apply: cp config_goap_optimized.json config.json")
return best_config
def main():
parser = argparse.ArgumentParser(description="Optimize GOAP economy parameters")
parser.add_argument("--analyze", "-a", action="store_true", help="Analyze current behavior")
parser.add_argument("--iterations", "-i", type=int, default=15, help="Optimization iterations")
parser.add_argument("--steps", "-s", type=int, default=200, help="Steps per simulation")
parser.add_argument("--apply", action="store_true", help="Auto-apply best config")
args = parser.parse_args()
if args.analyze:
analyze_goap_behavior(args.steps)
else:
best = optimize_for_goap(args.iterations, args.steps)
if args.apply and best:
# Apply the config
import shutil
shutil.copy("config_goap_optimized.json", "config.json")
print("\n ✅ Configuration applied!")
if __name__ == "__main__":
main()
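The optimizer's premise, stated in the module docstring above, is that the GOAP planner prefers cheaper actions, so nudging energy costs shifts the action mix toward hunting or trading. Below is a minimal sketch of that cost-driven selection rule; the names `Action` and `choose_action` are illustrative and do not mirror the actual `backend/core/goap` API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    name: str
    cost: Callable[[dict], float]      # lower cost = preferred by the planner
    is_valid: Callable[[dict], bool]   # preconditions against the world state

def choose_action(actions: list[Action], state: dict) -> Optional[Action]:
    """Pick the cheapest valid action, as a single cost-driven GOAP step would."""
    valid = [a for a in actions if a.is_valid(state)]
    return min(valid, key=lambda a: a.cost(state), default=None)

state = {"energy": 60, "hunger": 40}
actions = [
    Action("gather", cost=lambda s: 4.0, is_valid=lambda s: True),
    Action("hunt",   cost=lambda s: 3.5, is_valid=lambda s: s["energy"] > 20),
]
# With hunt priced below gather, the planner picks hunting while energy allows it.
print(choose_action(actions, state).name)  # hunt
```

This is why the configs tested above mostly lower `hunt_energy` or raise meat's hunger value: both make hunting the cheaper route to the same "stay fed" goal.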


@ -0,0 +1,820 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>GOAP Debug Visualizer - VillSim</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Sans:wght@400;500;600&family=IBM+Plex+Mono:wght@400;500&display=swap" rel="stylesheet">
<script src="https://cdn.jsdelivr.net/npm/chart.js@4.4.1/dist/chart.umd.min.js"></script>
<style>
:root {
--bg-primary: #0d1117;
--bg-secondary: #161b22;
--bg-tertiary: #21262d;
--border-color: #30363d;
--text-primary: #e6edf3;
--text-secondary: #8b949e;
--text-muted: #6e7681;
--accent-blue: #58a6ff;
--accent-green: #3fb950;
--accent-orange: #d29922;
--accent-red: #f85149;
--accent-purple: #a371f7;
--accent-cyan: #39c5cf;
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'IBM Plex Sans', -apple-system, BlinkMacSystemFont, sans-serif;
background: var(--bg-primary);
color: var(--text-primary);
min-height: 100vh;
}
.header {
background: var(--bg-secondary);
border-bottom: 1px solid var(--border-color);
padding: 16px 24px;
display: flex;
align-items: center;
justify-content: space-between;
}
.header h1 {
font-size: 20px;
font-weight: 600;
color: var(--accent-cyan);
}
.header-controls {
display: flex;
gap: 12px;
align-items: center;
}
.btn {
padding: 8px 16px;
border: 1px solid var(--border-color);
border-radius: 6px;
background: var(--bg-tertiary);
color: var(--text-primary);
font-family: inherit;
font-size: 14px;
cursor: pointer;
transition: all 0.15s ease;
}
.btn:hover {
background: var(--border-color);
border-color: var(--text-muted);
}
.btn-primary {
background: var(--accent-blue);
border-color: var(--accent-blue);
color: #fff;
}
.btn-primary:hover {
background: #4c9aff;
}
.status-badge {
padding: 4px 12px;
border-radius: 20px;
font-size: 12px;
font-weight: 500;
}
.status-connected {
background: rgba(63, 185, 80, 0.2);
color: var(--accent-green);
}
.status-disconnected {
background: rgba(248, 81, 73, 0.2);
color: var(--accent-red);
}
.main-content {
display: grid;
grid-template-columns: 280px 1fr 400px;
height: calc(100vh - 65px);
}
.panel {
background: var(--bg-secondary);
border-right: 1px solid var(--border-color);
overflow-y: auto;
}
.panel:last-child {
border-right: none;
border-left: 1px solid var(--border-color);
}
.panel-header {
padding: 16px;
border-bottom: 1px solid var(--border-color);
background: var(--bg-tertiary);
position: sticky;
top: 0;
z-index: 10;
}
.panel-header h2 {
font-size: 14px;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.5px;
color: var(--text-secondary);
}
.agent-list {
padding: 8px;
}
.agent-item {
padding: 12px;
border-radius: 8px;
cursor: pointer;
margin-bottom: 4px;
transition: background 0.15s ease;
}
.agent-item:hover {
background: var(--bg-tertiary);
}
.agent-item.selected {
background: rgba(88, 166, 255, 0.15);
border: 1px solid var(--accent-blue);
}
.agent-item .agent-name {
font-weight: 500;
margin-bottom: 4px;
}
.agent-item .agent-action {
font-size: 12px;
color: var(--text-secondary);
font-family: 'IBM Plex Mono', monospace;
}
.agent-item .agent-goal {
font-size: 11px;
color: var(--accent-cyan);
margin-top: 4px;
}
.center-panel {
display: flex;
flex-direction: column;
background: var(--bg-primary);
}
.plan-view {
padding: 24px;
flex: 1;
overflow-y: auto;
}
.plan-header {
display: flex;
align-items: center;
gap: 16px;
margin-bottom: 24px;
}
.plan-header h2 {
font-size: 24px;
font-weight: 600;
}
.plan-goal-badge {
padding: 6px 16px;
background: rgba(163, 113, 247, 0.2);
color: var(--accent-purple);
border-radius: 20px;
font-size: 14px;
font-weight: 500;
}
.world-state-grid {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 12px;
margin-bottom: 32px;
}
.state-card {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
border-radius: 8px;
padding: 16px;
}
.state-card .label {
font-size: 11px;
color: var(--text-muted);
text-transform: uppercase;
letter-spacing: 0.5px;
margin-bottom: 8px;
}
.state-card .value {
font-size: 24px;
font-weight: 600;
font-family: 'IBM Plex Mono', monospace;
}
.state-card .bar {
height: 4px;
background: var(--bg-tertiary);
border-radius: 2px;
margin-top: 8px;
overflow: hidden;
}
.state-card .bar-fill {
height: 100%;
border-radius: 2px;
transition: width 0.3s ease;
}
.bar-thirst .bar-fill { background: var(--accent-blue); }
.bar-hunger .bar-fill { background: var(--accent-orange); }
.bar-heat .bar-fill { background: var(--accent-red); }
.bar-energy .bar-fill { background: var(--accent-green); }
.plan-visualization {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
border-radius: 12px;
padding: 24px;
margin-bottom: 24px;
}
.plan-visualization h3 {
font-size: 14px;
color: var(--text-secondary);
margin-bottom: 16px;
}
.plan-steps {
display: flex;
align-items: center;
gap: 8px;
flex-wrap: wrap;
}
.plan-step {
display: flex;
align-items: center;
gap: 8px;
}
.step-node {
padding: 12px 20px;
background: var(--bg-tertiary);
border: 2px solid var(--border-color);
border-radius: 8px;
font-family: 'IBM Plex Mono', monospace;
font-size: 14px;
font-weight: 500;
}
.step-node.current {
border-color: var(--accent-green);
background: rgba(63, 185, 80, 0.15);
color: var(--accent-green);
}
.step-arrow {
color: var(--text-muted);
font-size: 20px;
}
.goal-result {
padding: 12px 20px;
background: rgba(163, 113, 247, 0.15);
border: 2px solid var(--accent-purple);
border-radius: 8px;
color: var(--accent-purple);
font-weight: 500;
}
.no-plan {
text-align: center;
padding: 40px;
color: var(--text-muted);
}
.goals-chart-container {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
border-radius: 12px;
padding: 24px;
}
.goals-chart-container h3 {
font-size: 14px;
color: var(--text-secondary);
margin-bottom: 16px;
}
.chart-wrapper {
height: 300px;
}
.detail-section {
padding: 16px;
border-bottom: 1px solid var(--border-color);
}
.detail-section h3 {
font-size: 12px;
color: var(--text-muted);
text-transform: uppercase;
letter-spacing: 0.5px;
margin-bottom: 12px;
}
.detail-grid {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 8px;
}
.detail-item {
display: flex;
justify-content: space-between;
padding: 6px 0;
font-size: 13px;
}
.detail-item .label {
color: var(--text-secondary);
}
.detail-item .value {
font-family: 'IBM Plex Mono', monospace;
color: var(--text-primary);
}
.action-list {
max-height: 300px;
overflow-y: auto;
}
.action-item {
display: flex;
align-items: center;
padding: 8px 12px;
border-radius: 6px;
margin-bottom: 4px;
font-size: 13px;
}
.action-item.valid {
background: var(--bg-tertiary);
}
.action-item.invalid {
background: transparent;
opacity: 0.5;
}
.action-item.in-plan {
background: rgba(63, 185, 80, 0.15);
border: 1px solid var(--accent-green);
}
.action-item .action-name {
flex: 1;
font-family: 'IBM Plex Mono', monospace;
}
.action-item .action-cost {
font-size: 11px;
color: var(--text-muted);
margin-left: 8px;
}
.action-item .action-order {
width: 20px;
height: 20px;
background: var(--accent-green);
border-radius: 50%;
display: flex;
align-items: center;
justify-content: center;
font-size: 11px;
font-weight: 600;
color: #000;
margin-right: 8px;
}
.inventory-grid {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 8px;
}
.inv-item {
display: flex;
align-items: center;
gap: 8px;
padding: 8px;
background: var(--bg-tertiary);
border-radius: 6px;
font-size: 13px;
}
.inv-item .icon {
font-size: 16px;
}
.inv-item .count {
margin-left: auto;
font-family: 'IBM Plex Mono', monospace;
font-weight: 500;
}
.loading {
display: flex;
align-items: center;
justify-content: center;
height: 100%;
color: var(--text-muted);
}
.urgency-indicator {
display: inline-block;
width: 8px;
height: 8px;
border-radius: 50%;
margin-left: 8px;
}
.urgency-none { background: var(--accent-green); }
.urgency-low { background: var(--accent-orange); }
.urgency-high { background: var(--accent-red); }
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.updating {
animation: pulse 1s infinite;
}
</style>
</head>
<body>
<header class="header">
<h1>🧠 GOAP Debug Visualizer</h1>
<div class="header-controls">
<span id="turn-display">Turn 0</span>
<span id="status-badge" class="status-badge status-disconnected">Disconnected</span>
<button class="btn" onclick="refreshData()">↻ Refresh</button>
<button class="btn btn-primary" id="auto-refresh-btn" onclick="toggleAutoRefresh()">▶ Auto</button>
</div>
</header>
<main class="main-content">
<!-- Left Panel: Agent List -->
<div class="panel">
<div class="panel-header">
<h2>Agents</h2>
</div>
<div id="agent-list" class="agent-list">
<div class="loading">Loading...</div>
</div>
</div>
<!-- Center Panel: Plan Visualization -->
<div class="center-panel">
<div class="plan-view" id="plan-view">
<div class="loading">Select an agent to view GOAP details</div>
</div>
</div>
<!-- Right Panel: Details -->
<div class="panel">
<div class="panel-header">
<h2>Details</h2>
</div>
<div id="details-panel">
<div class="loading">Select an agent</div>
</div>
</div>
</main>
<script>
const API_BASE = 'http://localhost:8000/api';
let selectedAgentId = null;
let allAgentsData = [];
let autoRefreshInterval = null;
let goalsChart = null;
// Initialize
document.addEventListener('DOMContentLoaded', () => {
refreshData();
});
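// Fetch the latest GOAP debug snapshot and update the header, agent list, and selected agent.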
async function refreshData() {
try {
const response = await fetch(`${API_BASE}/goap/debug`);
if (!response.ok) throw new Error('API error');
const data = await response.json();
allAgentsData = data.agents;
document.getElementById('turn-display').textContent = `Turn ${data.current_turn}`;
document.getElementById('status-badge').className = 'status-badge status-connected';
document.getElementById('status-badge').textContent = data.is_night ? '🌙 Night' : '☀️ Connected';
renderAgentList();
if (selectedAgentId) {
const agent = allAgentsData.find(a => a.agent_id === selectedAgentId);
if (agent) {
renderAgentDetails(agent);
}
}
} catch (error) {
console.error('Failed to fetch data:', error);
document.getElementById('status-badge').className = 'status-badge status-disconnected';
document.getElementById('status-badge').textContent = 'Disconnected';
}
}
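// Rebuild the agent list in the left panel, highlighting the selected agent.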
function renderAgentList() {
const container = document.getElementById('agent-list');
if (allAgentsData.length === 0) {
container.innerHTML = '<div class="loading">No agents found</div>';
return;
}
container.innerHTML = allAgentsData.map(agent => `
<div class="agent-item ${agent.agent_id === selectedAgentId ? 'selected' : ''}"
onclick="selectAgent('${agent.agent_id}')">
<div class="agent-name">${agent.agent_name}</div>
<div class="agent-action">${agent.selected_action || 'No action'}</div>
<div class="agent-goal">${agent.current_plan ? '🎯 ' + agent.current_plan.goal_name : '(reactive)'}</div>
</div>
`).join('');
}
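// Remember the clicked agent and render its plan and detail panels.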
function selectAgent(agentId) {
selectedAgentId = agentId;
renderAgentList();
const agent = allAgentsData.find(a => a.agent_id === agentId);
if (agent) {
renderAgentDetails(agent);
}
}
function renderAgentDetails(agent) {
renderPlanView(agent);
renderDetailsPanel(agent);
}
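// Map an urgency value to an indicator class: <= 0 none, <= 1 low, otherwise high.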
function getUrgencyClass(urgency) {
if (urgency <= 0) return 'urgency-none';
if (urgency <= 1) return 'urgency-low';
return 'urgency-high';
}
function renderPlanView(agent) {
const container = document.getElementById('plan-view');
const ws = agent.world_state;
const plan = agent.current_plan;
container.innerHTML = `
<div class="plan-header">
<h2>${agent.agent_name}</h2>
${plan ? `<span class="plan-goal-badge">🎯 ${plan.goal_name}</span>` : ''}
</div>
<div class="world-state-grid">
<div class="state-card bar-thirst">
<div class="label">Thirst</div>
<div class="value">${Math.round(ws.vitals.thirst * 100)}%
<span class="urgency-indicator ${getUrgencyClass(ws.urgencies.thirst)}"></span>
</div>
<div class="bar"><div class="bar-fill" style="width: ${ws.vitals.thirst * 100}%"></div></div>
</div>
<div class="state-card bar-hunger">
<div class="label">Hunger</div>
<div class="value">${Math.round(ws.vitals.hunger * 100)}%
<span class="urgency-indicator ${getUrgencyClass(ws.urgencies.hunger)}"></span>
</div>
<div class="bar"><div class="bar-fill" style="width: ${ws.vitals.hunger * 100}%"></div></div>
</div>
<div class="state-card bar-heat">
<div class="label">Heat</div>
<div class="value">${Math.round(ws.vitals.heat * 100)}%
<span class="urgency-indicator ${getUrgencyClass(ws.urgencies.heat)}"></span>
</div>
<div class="bar"><div class="bar-fill" style="width: ${ws.vitals.heat * 100}%"></div></div>
</div>
<div class="state-card bar-energy">
<div class="label">Energy</div>
<div class="value">${Math.round(ws.vitals.energy * 100)}%
<span class="urgency-indicator ${getUrgencyClass(ws.urgencies.energy)}"></span>
</div>
<div class="bar"><div class="bar-fill" style="width: ${ws.vitals.energy * 100}%"></div></div>
</div>
</div>
<div class="plan-visualization">
<h3>Current Plan</h3>
${plan && plan.actions.length > 0 ? `
<div class="plan-steps">
${plan.actions.map((action, i) => `
<div class="plan-step">
<div class="step-node ${i === 0 ? 'current' : ''}">${action}</div>
${i < plan.actions.length - 1 ? '<span class="step-arrow"></span>' : ''}
</div>
`).join('')}
<span class="step-arrow"></span>
<div class="goal-result">✓ ${plan.goal_name}</div>
</div>
<div style="margin-top: 12px; font-size: 13px; color: var(--text-muted);">
Total Cost: ${plan.total_cost.toFixed(1)} | Steps: ${plan.plan_length}
</div>
` : `
<div class="no-plan">
<p style="font-size: 16px; margin-bottom: 8px;">No plan - using reactive selection</p>
<p>Selected: <strong>${agent.selected_action || 'None'}</strong></p>
</div>
`}
</div>
<div class="goals-chart-container">
<h3>Goal Priorities</h3>
<div class="chart-wrapper">
<canvas id="goals-chart"></canvas>
</div>
</div>
`;
renderGoalsChart(agent);
}
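// Horizontal Chart.js bar chart of the agent's top 10 goal priorities.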
function renderGoalsChart(agent) {
const ctx = document.getElementById('goals-chart');
if (!ctx) return;
// Sort goals by priority
const sortedGoals = [...agent.goals].sort((a, b) => b.priority - a.priority);
const topGoals = sortedGoals.slice(0, 10);
if (goalsChart) {
goalsChart.destroy();
}
goalsChart = new Chart(ctx, {
type: 'bar',
data: {
labels: topGoals.map(g => g.name),
datasets: [{
label: 'Priority',
data: topGoals.map(g => g.priority),
backgroundColor: topGoals.map(g => {
if (g.is_selected) return 'rgba(163, 113, 247, 0.8)';
if (g.is_satisfied) return 'rgba(63, 185, 80, 0.5)';
if (g.priority > 0) return 'rgba(88, 166, 255, 0.7)';
return 'rgba(110, 118, 129, 0.3)';
}),
borderColor: topGoals.map(g => {
if (g.is_selected) return '#a371f7';
if (g.is_satisfied) return '#3fb950';
if (g.priority > 0) return '#58a6ff';
return '#6e7681';
}),
borderWidth: 2,
}]
},
options: {
indexAxis: 'y',
responsive: true,
maintainAspectRatio: false,
plugins: {
legend: { display: false },
},
scales: {
x: {
beginAtZero: true,
grid: { color: '#30363d' },
ticks: { color: '#8b949e' },
},
y: {
grid: { display: false },
ticks: {
color: '#e6edf3',
font: { family: 'IBM Plex Mono', size: 11 }
},
}
}
}
});
}
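// Right-hand details panel: inventory, economy, market access, and the full action list.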
function renderDetailsPanel(agent) {
const container = document.getElementById('details-panel');
const ws = agent.world_state;
const validActions = agent.actions.filter(a => a.is_valid);
const inPlanActions = agent.actions.filter(a => a.is_in_plan).sort((a, b) => a.plan_order - b.plan_order);
container.innerHTML = `
<div class="detail-section">
<h3>Inventory</h3>
<div class="inventory-grid">
<div class="inv-item"><span class="icon">💧</span> Water <span class="count">${ws.inventory.water}</span></div>
<div class="inv-item"><span class="icon">🍖</span> Meat <span class="count">${ws.inventory.meat}</span></div>
<div class="inv-item"><span class="icon">🫐</span> Berries <span class="count">${ws.inventory.berries}</span></div>
<div class="inv-item"><span class="icon">🪵</span> Wood <span class="count">${ws.inventory.wood}</span></div>
<div class="inv-item"><span class="icon">🥩</span> Hide <span class="count">${ws.inventory.hide}</span></div>
<div class="inv-item"><span class="icon">📦</span> Space <span class="count">${ws.inventory.space}</span></div>
</div>
</div>
<div class="detail-section">
<h3>Economy</h3>
<div class="detail-grid">
<div class="detail-item">
<span class="label">Money</span>
<span class="value" style="color: var(--accent-orange)">${ws.economy.money}c</span>
</div>
<div class="detail-item">
<span class="label">Wealthy</span>
<span class="value">${ws.economy.is_wealthy ? '✓' : '✗'}</span>
</div>
</div>
</div>
<div class="detail-section">
<h3>Market Access</h3>
<div class="detail-grid">
<div class="detail-item">
<span class="label">Buy Water</span>
<span class="value">${ws.market.can_buy_water ? '✓' : '✗'}</span>
</div>
<div class="detail-item">
<span class="label">Buy Food</span>
<span class="value">${ws.market.can_buy_food ? '✓' : '✗'}</span>
</div>
<div class="detail-item">
<span class="label">Buy Wood</span>
<span class="value">${ws.market.can_buy_wood ? '✓' : '✗'}</span>
</div>
</div>
</div>
<div class="detail-section">
<h3>Actions (${validActions.length} valid)</h3>
<div class="action-list">
${agent.actions.map(action => `
<div class="action-item ${action.is_valid ? 'valid' : 'invalid'} ${action.is_in_plan ? 'in-plan' : ''}">
${action.is_in_plan ? `<span class="action-order">${action.plan_order + 1}</span>` : ''}
<span class="action-name">${action.name}</span>
<span class="action-cost">${action.cost >= 0 ? action.cost.toFixed(1) : '∞'}</span>
</div>
`).join('')}
</div>
</div>
`;
}
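// Start or stop polling refreshData() every 500 ms.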
function toggleAutoRefresh() {
const btn = document.getElementById('auto-refresh-btn');
if (autoRefreshInterval) {
clearInterval(autoRefreshInterval);
autoRefreshInterval = null;
btn.textContent = '▶ Auto';
btn.classList.remove('btn-primary');
} else {
autoRefreshInterval = setInterval(refreshData, 500);
btn.textContent = '⏸ Stop';
btn.classList.add('btn-primary');
}
}
</script>
</body>
</html>

View File

@ -135,6 +135,7 @@
<button class="tab-btn" data-tab="resources">Resources</button> <button class="tab-btn" data-tab="resources">Resources</button>
<button class="tab-btn" data-tab="market">Market</button> <button class="tab-btn" data-tab="market">Market</button>
<button class="tab-btn" data-tab="agents">Agents</button> <button class="tab-btn" data-tab="agents">Agents</button>
<button class="tab-btn" data-tab="goap">🧠 GOAP</button>
</div>
</div>
<div class="stats-header-right">
@ -240,8 +241,54 @@
</div>
</div>
</div>
<!-- GOAP Tab -->
<div id="tab-goap" class="tab-panel">
<div class="goap-container">
<div class="goap-header">
<h3>Goal-Oriented Action Planning</h3>
<p class="goap-subtitle">Real-time visualization of agent decision-making</p>
</div>
<div class="goap-grid">
<div class="goap-panel goap-agents-panel">
<h4>Agents</h4>
<div id="goap-agent-list" class="goap-agent-list">
<p class="loading-text">Loading agents...</p>
</div>
</div>
<div class="goap-panel goap-plan-panel">
<h4>Current Plan</h4>
<div id="goap-plan-view" class="goap-plan-view">
<p class="no-selection-text">Select an agent to view their GOAP plan</p>
</div>
</div>
<div class="goap-panel goap-goals-panel">
<h4>Goal Priorities</h4>
<div class="chart-wrapper">
<canvas id="chart-goap-goals"></canvas>
</div>
</div>
<div class="goap-panel goap-actions-panel">
<h4>Available Actions</h4>
<div id="goap-actions-list" class="goap-actions-list">
<p class="no-selection-text">Select an agent</p>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="stats-footer">
<div class="controls">
<button id="btn-initialize-stats" class="btn btn-secondary" title="Reset Simulation">
<span class="btn-icon"></span> Reset
</button>
<button id="btn-step-stats" class="btn btn-primary" title="Advance one turn">
<span class="btn-icon"></span> Step
</button>
<button id="btn-auto-stats" class="btn btn-toggle" title="Toggle auto mode">
<span class="btn-icon"></span> Auto
</button>
</div>
<div class="stats-summary-bar"> <div class="stats-summary-bar">
<div class="summary-item"> <div class="summary-item">
<span class="summary-label">Turn</span> <span class="summary-label">Turn</span>
@ -268,6 +315,11 @@
<span class="summary-value" id="stats-gini">0.00</span> <span class="summary-value" id="stats-gini">0.00</span>
</div> </div>
</div> </div>
<div class="speed-control">
<label for="speed-slider-stats">Speed</label>
<input type="range" id="speed-slider-stats" min="50" max="1000" value="150" step="50">
<span id="speed-display-stats">150ms</span>
</div>
</div>
</div>
</div>

View File

@ -124,6 +124,21 @@ class SimulationAPI {
async getLogs(limit = 10) {
return await this.request(`/api/logs?limit=${limit}`);
}
// GOAP: Get debug info for all agents
async getGOAPDebug() {
return await this.request('/api/goap/debug');
}
// GOAP: Get debug info for specific agent
async getAgentGOAPDebug(agentId) {
return await this.request(`/api/goap/debug/${agentId}`);
}
// Generic GET helper (for compatibility)
async get(endpoint) {
return await this.request(`/api${endpoint}`);
}
}
// Export singleton instance

View File

@ -127,7 +127,22 @@ export default class GameScene extends Phaser.Scene {
statsGold: document.getElementById('stats-gold'),
statsAvgWealth: document.getElementById('stats-avg-wealth'),
statsGini: document.getElementById('stats-gini'),
// GOAP elements
goapAgentList: document.getElementById('goap-agent-list'),
goapPlanView: document.getElementById('goap-plan-view'),
goapActionsList: document.getElementById('goap-actions-list'),
chartGoapGoals: document.getElementById('chart-goap-goals'),
// Stats screen controls (duplicated for stats page)
btnStepStats: document.getElementById('btn-step-stats'),
btnAutoStats: document.getElementById('btn-auto-stats'),
btnInitializeStats: document.getElementById('btn-initialize-stats'),
speedSliderStats: document.getElementById('speed-slider-stats'),
speedDisplayStats: document.getElementById('speed-display-stats'),
};
// GOAP state
this.goapData = null;
this.selectedGoapAgentId = null;
}
cleanup() {
@ -163,6 +178,21 @@ export default class GameScene extends Phaser.Scene {
btnCloseStats.removeEventListener('click', this.boundHandlers.closeStats);
}
// Stats screen controls cleanup
const { btnStepStats, btnAutoStats, btnInitializeStats, speedSliderStats } = this.domCache;
if (btnStepStats && this.boundHandlers.step) {
btnStepStats.removeEventListener('click', this.boundHandlers.step);
}
if (btnAutoStats && this.boundHandlers.auto) {
btnAutoStats.removeEventListener('click', this.boundHandlers.auto);
}
if (btnInitializeStats && this.boundHandlers.init) {
btnInitializeStats.removeEventListener('click', this.boundHandlers.init);
}
if (speedSliderStats && this.boundHandlers.speedStats) {
speedSliderStats.removeEventListener('input', this.boundHandlers.speedStats);
}
// Destroy charts
Object.values(this.charts).forEach(chart => chart?.destroy());
this.charts = {};
@ -277,19 +307,34 @@ export default class GameScene extends Phaser.Scene {
setupUIControls() {
const { btnStep, btnAuto, btnInitialize, btnStats, btnCloseStats, speedSlider, speedDisplay, tabButtons } = this.domCache;
const { btnStepStats, btnAutoStats, btnInitializeStats, speedSliderStats, speedDisplayStats } = this.domCache;
// Create bound handlers for later cleanup
this.boundHandlers.step = () => this.handleStep();
this.boundHandlers.auto = () => this.toggleAutoMode();
this.boundHandlers.init = () => this.handleInitialize();
// Speed handler that syncs both sliders
this.boundHandlers.speed = (e) => {
this.autoSpeed = parseInt(e.target.value);
if (speedDisplay) speedDisplay.textContent = `${this.autoSpeed}ms`;
if (speedDisplayStats) speedDisplayStats.textContent = `${this.autoSpeed}ms`;
if (speedSliderStats) speedSliderStats.value = this.autoSpeed;
if (this.isAutoMode) this.restartAutoMode();
};
this.boundHandlers.speedStats = (e) => {
this.autoSpeed = parseInt(e.target.value);
if (speedDisplay) speedDisplay.textContent = `${this.autoSpeed}ms`;
if (speedDisplayStats) speedDisplayStats.textContent = `${this.autoSpeed}ms`;
if (speedSlider) speedSlider.value = this.autoSpeed;
if (this.isAutoMode) this.restartAutoMode();
};
this.boundHandlers.openStats = () => this.showStatsScreen();
this.boundHandlers.closeStats = () => this.hideStatsScreen();
// Main controls
if (btnStep) btnStep.addEventListener('click', this.boundHandlers.step);
if (btnAuto) btnAuto.addEventListener('click', this.boundHandlers.auto);
if (btnInitialize) btnInitialize.addEventListener('click', this.boundHandlers.init);
@ -297,6 +342,12 @@ export default class GameScene extends Phaser.Scene {
if (btnStats) btnStats.addEventListener('click', this.boundHandlers.openStats);
if (btnCloseStats) btnCloseStats.addEventListener('click', this.boundHandlers.closeStats);
// Stats screen controls (same handlers)
if (btnStepStats) btnStepStats.addEventListener('click', this.boundHandlers.step);
if (btnAutoStats) btnAutoStats.addEventListener('click', this.boundHandlers.auto);
if (btnInitializeStats) btnInitializeStats.addEventListener('click', this.boundHandlers.init);
if (speedSliderStats) speedSliderStats.addEventListener('input', this.boundHandlers.speedStats);
// Tab switching
tabButtons?.forEach(btn => {
btn.addEventListener('click', (e) => this.switchTab(e.target.dataset.tab));
@ -371,15 +422,19 @@ export default class GameScene extends Phaser.Scene {
toggleAutoMode() {
this.isAutoMode = !this.isAutoMode;
const { btnAuto, btnStep } = this.domCache;
const { btnAuto, btnStep, btnAutoStats, btnStepStats } = this.domCache;
if (this.isAutoMode) {
btnAuto?.classList.add('active');
btnAutoStats?.classList.add('active');
btnStep?.setAttribute('disabled', 'true');
btnStepStats?.setAttribute('disabled', 'true');
this.startAutoMode();
} else {
btnAuto?.classList.remove('active');
btnAutoStats?.classList.remove('active');
btnStep?.removeAttribute('disabled');
btnStepStats?.removeAttribute('disabled');
this.stopAutoMode();
}
}
@ -741,6 +796,15 @@ export default class GameScene extends Phaser.Scene {
<span class="action-label">Current Action</span> <span class="action-label">Current Action</span>
<div>${actionData.icon} ${action.message || actionData.verb}</div> <div>${actionData.icon} ${action.message || actionData.verb}</div>
</div> </div>
<div class="agent-goap-info" id="agent-goap-section" data-agent-id="${agentData.id}">
<h5 class="subsection-title" style="display: flex; align-items: center; gap: 6px;">
🧠 GOAP Plan
<button class="btn-mini" onclick="window.villsimGame.scene.scenes[1].loadAgentGOAP('${agentData.id}')" style="font-size: 0.6rem; padding: 2px 6px;"></button>
</h5>
<div id="agent-goap-content" style="font-size: 0.75rem; color: var(--text-muted);">
Click to load GOAP info
</div>
</div>
<h5 class="subsection-title">Personal Log</h5> <h5 class="subsection-title">Personal Log</h5>
<div class="agent-log"> <div class="agent-log">
${renderActionLog()} ${renderActionLog()}
@ -1023,6 +1087,7 @@ export default class GameScene extends Phaser.Scene {
case 'resources': this.renderResourceCharts(); break;
case 'market': this.renderMarketCharts(); break;
case 'agents': this.renderAgentStatsCharts(); break;
case 'goap': this.fetchAndRenderGOAP(); break;
}
}
@ -1627,6 +1692,291 @@ export default class GameScene extends Phaser.Scene {
};
}
// =================================
// GOAP Visualization Methods
// =================================
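// Fetch GOAP debug info for a single agent and render it into the agent popup panel.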
async loadAgentGOAP(agentId) {
const contentEl = document.getElementById('agent-goap-content');
if (!contentEl) return;
contentEl.innerHTML = '<span style="color: var(--text-muted);">Loading...</span>';
try {
const data = await api.getAgentGOAPDebug(agentId);
const plan = data.current_plan;
if (plan && plan.actions.length > 0) {
contentEl.innerHTML = `
<div style="margin-bottom: 4px;">
<strong style="color: var(--accent-sapphire);">Goal:</strong> ${plan.goal_name}
</div>
<div style="font-family: var(--font-mono); font-size: 0.7rem;">
${plan.actions.map((a, i) =>
`<span style="${i === 0 ? 'color: var(--accent-emerald);' : ''}">${a}</span>`
).join(' → ')}
</div>
<div style="margin-top: 4px; color: var(--text-muted); font-size: 0.65rem;">
Cost: ${plan.total_cost.toFixed(1)} | Steps: ${plan.plan_length}
</div>
`;
} else {
contentEl.innerHTML = `
<div style="color: var(--text-muted);">
No plan (reactive mode)<br>
<span style="color: var(--text-primary);">${data.selected_action || 'No action'}</span>
</div>
`;
}
} catch (error) {
console.error('Failed to load GOAP info:', error);
contentEl.innerHTML = '<span style="color: var(--accent-ruby);">Failed to load</span>';
}
}
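// Fetch the GOAP debug payload for all agents and refresh the GOAP tab.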
async fetchAndRenderGOAP() {
try {
const response = await api.get('/goap/debug');
this.goapData = response;
this.renderGOAPAgentList();
// If we have a selected agent, render their details
if (this.selectedGoapAgentId) {
const agent = this.goapData.agents.find(a => a.agent_id === this.selectedGoapAgentId);
if (agent) {
this.renderGOAPAgentDetails(agent);
}
}
} catch (error) {
console.error('Failed to fetch GOAP data:', error);
const { goapAgentList } = this.domCache;
if (goapAgentList) {
goapAgentList.innerHTML = '<p class="loading-text">Failed to load GOAP data. Make sure the server is running.</p>';
}
}
}
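// Populate the agents panel and attach click handlers for selection.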
renderGOAPAgentList() {
const { goapAgentList } = this.domCache;
if (!goapAgentList || !this.goapData) return;
if (this.goapData.agents.length === 0) {
goapAgentList.innerHTML = '<p class="loading-text">No agents found</p>';
return;
}
goapAgentList.innerHTML = this.goapData.agents.map(agent => `
<div class="goap-agent-item ${agent.agent_id === this.selectedGoapAgentId ? 'selected' : ''}"
data-agent-id="${agent.agent_id}">
<div class="agent-name">${agent.agent_name}</div>
<div class="agent-action">${agent.selected_action || 'No action'}</div>
<div class="agent-goal">${agent.current_plan ? '🎯 ' + agent.current_plan.goal_name : '(reactive)'}</div>
</div>
`).join('');
// Add click handlers
goapAgentList.querySelectorAll('.goap-agent-item').forEach(item => {
item.addEventListener('click', () => {
this.selectGoapAgent(item.dataset.agentId);
});
});
}
selectGoapAgent(agentId) {
this.selectedGoapAgentId = agentId;
// Update selection styling
const { goapAgentList } = this.domCache;
if (goapAgentList) {
goapAgentList.querySelectorAll('.goap-agent-item').forEach(item => {
item.classList.toggle('selected', item.dataset.agentId === agentId);
});
}
// Render details
if (this.goapData) {
const agent = this.goapData.agents.find(a => a.agent_id === agentId);
if (agent) {
this.renderGOAPAgentDetails(agent);
}
}
}
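// Re-render the plan view, action list, and goals chart for the selected agent.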
renderGOAPAgentDetails(agent) {
this.renderGOAPPlanView(agent);
this.renderGOAPActionsList(agent);
this.renderGOAPGoalsChart(agent);
}
getUrgencyClass(urgency) {
if (urgency <= 0) return 'none';
if (urgency <= 1) return 'low';
return 'high';
}
renderGOAPPlanView(agent) {
const { goapPlanView } = this.domCache;
if (!goapPlanView) return;
const ws = agent.world_state;
const plan = agent.current_plan;
goapPlanView.innerHTML = `
<div class="goap-world-state">
<div class="goap-stat-card thirst">
<div class="label">Thirst</div>
<div class="value">${Math.round(ws.vitals.thirst * 100)}%<span class="goap-urgency ${this.getUrgencyClass(ws.urgencies.thirst)}"></span></div>
<div class="bar"><div class="bar-fill" style="width: ${ws.vitals.thirst * 100}%"></div></div>
</div>
<div class="goap-stat-card hunger">
<div class="label">Hunger</div>
<div class="value">${Math.round(ws.vitals.hunger * 100)}%<span class="goap-urgency ${this.getUrgencyClass(ws.urgencies.hunger)}"></span></div>
<div class="bar"><div class="bar-fill" style="width: ${ws.vitals.hunger * 100}%"></div></div>
</div>
<div class="goap-stat-card heat">
<div class="label">Heat</div>
<div class="value">${Math.round(ws.vitals.heat * 100)}%<span class="goap-urgency ${this.getUrgencyClass(ws.urgencies.heat)}"></span></div>
<div class="bar"><div class="bar-fill" style="width: ${ws.vitals.heat * 100}%"></div></div>
</div>
<div class="goap-stat-card energy">
<div class="label">Energy</div>
<div class="value">${Math.round(ws.vitals.energy * 100)}%<span class="goap-urgency ${this.getUrgencyClass(ws.urgencies.energy)}"></span></div>
<div class="bar"><div class="bar-fill" style="width: ${ws.vitals.energy * 100}%"></div></div>
</div>
</div>
<div class="goap-plan-steps">
<h5>Current Plan</h5>
${plan && plan.actions.length > 0 ? `
<div class="goap-plan-flow">
${plan.actions.map((action, i) => `
<span class="goap-step-node ${i === 0 ? 'current' : ''}">${action}</span>
${i < plan.actions.length - 1 ? '<span class="goap-step-arrow">→</span>' : ''}
`).join('')}
<span class="goap-step-arrow"></span>
<span class="goap-goal-result"> ${plan.goal_name}</span>
</div>
<div style="margin-top: 8px; font-size: 0.75rem; color: var(--text-muted);">
Total Cost: ${plan.total_cost.toFixed(1)} | Steps: ${plan.plan_length}
</div>
` : `
<div style="color: var(--text-muted); font-size: 0.85rem;">
No plan - using reactive selection<br>
Selected: <strong>${agent.selected_action || 'None'}</strong>
</div>
`}
</div>
<div style="margin-top: 12px;">
<h5 style="font-size: 0.7rem; color: var(--text-muted); margin-bottom: 8px;">INVENTORY</h5>
<div class="goap-inventory">
<div class="goap-inv-item">💧<span class="count">${ws.inventory.water}</span></div>
<div class="goap-inv-item">🍖<span class="count">${ws.inventory.meat}</span></div>
<div class="goap-inv-item">🫐<span class="count">${ws.inventory.berries}</span></div>
<div class="goap-inv-item">🪵<span class="count">${ws.inventory.wood}</span></div>
<div class="goap-inv-item">🥩<span class="count">${ws.inventory.hide}</span></div>
<div class="goap-inv-item">📦<span class="count">${ws.inventory.space}</span></div>
</div>
</div>
<div style="margin-top: 12px; display: flex; gap: 16px; font-size: 0.8rem;">
<span style="color: var(--accent-gold);">💰 ${ws.economy.money}c</span>
<span style="color: var(--text-muted);">Wealthy: ${ws.economy.is_wealthy ? '✓' : '✗'}</span>
</div>
`;
}
renderGOAPActionsList(agent) {
const { goapActionsList } = this.domCache;
if (!goapActionsList) return;
// Sort: plan actions first, then valid, then invalid
const sortedActions = [...agent.actions].sort((a, b) => {
if (a.is_in_plan && !b.is_in_plan) return -1;
if (!a.is_in_plan && b.is_in_plan) return 1;
if (a.is_in_plan && b.is_in_plan) return a.plan_order - b.plan_order;
if (a.is_valid && !b.is_valid) return -1;
if (!a.is_valid && b.is_valid) return 1;
return (a.cost || 999) - (b.cost || 999);
});
goapActionsList.innerHTML = sortedActions.map(action => `
<div class="goap-action-item ${action.is_valid ? 'valid' : 'invalid'} ${action.is_in_plan ? 'in-plan' : ''}">
${action.is_in_plan ? `<span class="action-order">${action.plan_order + 1}</span>` : ''}
<span class="action-name">${action.name}</span>
<span class="action-cost">${action.cost >= 0 ? action.cost.toFixed(1) : '∞'}</span>
</div>
`).join('');
}
renderGOAPGoalsChart(agent) {
const { chartGoapGoals } = this.domCache;
if (!chartGoapGoals) return;
// Sort goals by priority and take top 10
const sortedGoals = [...agent.goals]
.sort((a, b) => b.priority - a.priority)
.slice(0, 10);
// Destroy existing chart
if (this.charts.goapGoals) {
this.charts.goapGoals.destroy();
}
this.charts.goapGoals = new Chart(chartGoapGoals, {
type: 'bar',
data: {
labels: sortedGoals.map(g => g.name),
datasets: [{
label: 'Priority',
data: sortedGoals.map(g => g.priority),
backgroundColor: sortedGoals.map(g => {
if (g.is_selected) return 'rgba(139, 111, 192, 0.8)';
if (g.is_satisfied) return 'rgba(74, 156, 109, 0.5)';
if (g.priority > 0) return 'rgba(90, 140, 200, 0.7)';
return 'rgba(107, 101, 96, 0.3)';
}),
borderColor: sortedGoals.map(g => {
if (g.is_selected) return '#8b6fc0';
if (g.is_satisfied) return '#4a9c6d';
if (g.priority > 0) return '#5a8cc8';
return '#6b6560';
}),
borderWidth: 2,
}]
},
options: {
indexAxis: 'y',
responsive: true,
maintainAspectRatio: false,
animation: false,
plugins: {
legend: { display: false },
title: {
display: true,
text: 'Goal Priorities',
color: '#e8e4dc',
font: { family: "'Crimson Pro', serif", size: 14 },
},
},
scales: {
x: {
beginAtZero: true,
grid: { color: 'rgba(58, 67, 89, 0.3)' },
ticks: { color: '#6b6560' },
},
y: {
grid: { display: false },
ticks: {
color: '#e8e4dc',
font: { family: "'JetBrains Mono', monospace", size: 10 }
},
}
}
}
});
}
update(time, delta) {
// Minimal update loop - no heavy operations here
}

View File

@ -613,7 +613,8 @@ body {
font-weight: 500;
}
#speed-slider {
#speed-slider,
#speed-slider-stats {
width: 120px;
height: 4px;
-webkit-appearance: none;
@ -623,7 +624,8 @@ body {
cursor: pointer;
}
#speed-slider::-webkit-slider-thumb {
#speed-slider::-webkit-slider-thumb,
#speed-slider-stats::-webkit-slider-thumb {
-webkit-appearance: none;
appearance: none;
width: 14px;
@ -633,7 +635,8 @@ body {
cursor: pointer;
}
#speed-display {
#speed-display,
#speed-display-stats {
font-family: var(--font-mono);
min-width: 50px;
}
@ -939,16 +942,21 @@ body {
/* Stats Footer */
.stats-footer {
display: flex;
align-items: center;
justify-content: space-between;
padding: var(--space-sm) var(--space-lg);
background: var(--bg-primary);
border-top: 1px solid var(--border-color);
flex-shrink: 0;
height: 56px;
}
.stats-summary-bar {
display: flex;
justify-content: center;
gap: var(--space-xl);
flex: 1;
}
.summary-item {
@ -1049,5 +1057,375 @@ body {
flex-wrap: wrap;
gap: var(--space-md);
}
.stats-footer {
flex-wrap: wrap;
height: auto;
gap: var(--space-sm);
padding: var(--space-sm);
}
.stats-footer .controls {
order: 1;
width: auto;
}
.stats-footer .stats-summary-bar {
order: 2;
width: 100%;
}
.stats-footer .speed-control {
order: 3;
width: auto;
}
}
/* =================================
GOAP Visualization Styles
================================= */
.goap-container {
padding: var(--space-lg);
height: 100%;
display: flex;
flex-direction: column;
}
.goap-header {
margin-bottom: var(--space-lg);
}
.goap-header h3 {
font-size: 1.5rem;
color: var(--accent-sapphire);
margin-bottom: var(--space-xs);
}
.goap-subtitle {
font-size: 0.85rem;
color: var(--text-muted);
}
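/* Two-row, three-column layout: agent list (left, spans both rows), plan and actions (center), goals chart (right, spans both rows) */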
.goap-grid {
display: grid;
grid-template-columns: 250px 1fr 300px;
grid-template-rows: 1fr 1fr;
gap: var(--space-md);
flex: 1;
min-height: 0;
}
.goap-panel {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
border-radius: var(--radius-md);
padding: var(--space-md);
display: flex;
flex-direction: column;
overflow: hidden;
}
.goap-panel h4 {
font-size: 0.8rem;
text-transform: uppercase;
letter-spacing: 0.5px;
color: var(--text-muted);
margin-bottom: var(--space-md);
padding-bottom: var(--space-sm);
border-bottom: 1px solid var(--border-color);
}
.goap-agents-panel {
grid-row: span 2;
}
.goap-plan-panel {
grid-column: 2;
}
.goap-goals-panel {
grid-column: 3;
grid-row: span 2;
}
.goap-actions-panel {
grid-column: 2;
}
.goap-agent-list {
flex: 1;
overflow-y: auto;
}
.goap-agent-item {
padding: var(--space-sm) var(--space-md);
border-radius: var(--radius-sm);
margin-bottom: var(--space-xs);
cursor: pointer;
transition: background 0.15s ease;
}
.goap-agent-item:hover {
background: var(--bg-hover);
}
.goap-agent-item.selected {
background: rgba(90, 140, 200, 0.2);
border-left: 3px solid var(--accent-sapphire);
}
.goap-agent-item .agent-name {
font-weight: 600;
margin-bottom: 2px;
}
.goap-agent-item .agent-action {
font-size: 0.75rem;
font-family: var(--font-mono);
color: var(--text-secondary);
}
.goap-agent-item .agent-goal {
font-size: 0.7rem;
color: var(--accent-sapphire);
margin-top: 2px;
}
.goap-plan-view {
flex: 1;
overflow-y: auto;
}
.goap-world-state {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: var(--space-sm);
margin-bottom: var(--space-md);
}
.goap-stat-card {
background: var(--bg-elevated);
border-radius: var(--radius-sm);
padding: var(--space-sm);
text-align: center;
}
.goap-stat-card .label {
font-size: 0.65rem;
color: var(--text-muted);
text-transform: uppercase;
}
.goap-stat-card .value {
font-size: 1.1rem;
font-family: var(--font-mono);
font-weight: 600;
}
.goap-stat-card .bar {
height: 3px;
background: var(--bg-deep);
border-radius: 2px;
margin-top: 4px;
overflow: hidden;
}
.goap-stat-card .bar-fill {
height: 100%;
border-radius: 2px;
transition: width 0.3s ease;
}
.goap-stat-card.thirst .bar-fill { background: var(--stat-thirst); }
.goap-stat-card.hunger .bar-fill { background: var(--stat-hunger); }
.goap-stat-card.heat .bar-fill { background: var(--stat-heat); }
.goap-stat-card.energy .bar-fill { background: var(--stat-energy); }
.goap-plan-steps {
background: var(--bg-elevated);
border-radius: var(--radius-sm);
padding: var(--space-md);
margin-bottom: var(--space-md);
}
.goap-plan-steps h5 {
font-size: 0.75rem;
color: var(--text-muted);
margin-bottom: var(--space-sm);
}
.goap-plan-flow {
display: flex;
align-items: center;
flex-wrap: wrap;
gap: var(--space-sm);
}
.goap-step-node {
padding: var(--space-sm) var(--space-md);
background: var(--bg-secondary);
border: 2px solid var(--border-color);
border-radius: var(--radius-sm);
font-family: var(--font-mono);
font-size: 0.85rem;
}
.goap-step-node.current {
border-color: var(--accent-emerald);
background: rgba(74, 156, 109, 0.15);
color: var(--accent-emerald);
}
.goap-step-arrow {
color: var(--text-muted);
font-size: 1.2rem;
}
.goap-goal-result {
padding: var(--space-sm) var(--space-md);
background: rgba(139, 111, 192, 0.15);
border: 2px solid #8b6fc0;
border-radius: var(--radius-sm);
color: #8b6fc0;
font-weight: 600;
font-size: 0.85rem;
}
.goap-inventory {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: var(--space-xs);
}
.goap-inv-item {
display: flex;
align-items: center;
gap: var(--space-xs);
padding: var(--space-xs) var(--space-sm);
background: var(--bg-elevated);
border-radius: var(--radius-sm);
font-size: 0.75rem;
}
.goap-inv-item .count {
margin-left: auto;
font-family: var(--font-mono);
font-weight: 500;
}
.goap-actions-list {
flex: 1;
overflow-y: auto;
}
.goap-action-item {
display: flex;
align-items: center;
padding: var(--space-xs) var(--space-sm);
border-radius: var(--radius-sm);
margin-bottom: 2px;
font-size: 0.8rem;
}
.goap-action-item.valid {
background: var(--bg-elevated);
}
.goap-action-item.invalid {
opacity: 0.4;
}
.goap-action-item.in-plan {
background: rgba(74, 156, 109, 0.15);
border-left: 3px solid var(--accent-emerald);
}
.goap-action-item .action-name {
flex: 1;
font-family: var(--font-mono);
}
.goap-action-item .action-cost {
font-size: 0.7rem;
color: var(--text-muted);
}
.goap-action-item .action-order {
width: 18px;
height: 18px;
background: var(--accent-emerald);
border-radius: 50%;
display: flex;
align-items: center;
justify-content: center;
font-size: 0.65rem;
font-weight: 700;
color: var(--bg-deep);
margin-right: var(--space-sm);
}
.no-selection-text, .loading-text {
color: var(--text-muted);
font-size: 0.85rem;
text-align: center;
padding: var(--space-lg);
}
.btn-mini {
background: var(--bg-elevated);
border: 1px solid var(--border-color);
border-radius: var(--radius-sm);
color: var(--text-secondary);
cursor: pointer;
font-family: inherit;
transition: all 0.15s ease;
}
.btn-mini:hover {
background: var(--bg-hover);
color: var(--text-primary);
}
.agent-goap-info {
margin-top: var(--space-sm);
padding: var(--space-sm);
background: var(--bg-deep);
border-radius: var(--radius-sm);
border: 1px solid var(--border-color);
}
.goap-urgency {
display: inline-block;
width: 6px;
height: 6px;
border-radius: 50%;
margin-left: 4px;
}
.goap-urgency.none { background: var(--accent-emerald); }
.goap-urgency.low { background: var(--accent-gold); }
.goap-urgency.high { background: var(--accent-ruby); }
@media (max-width: 1400px) {
.goap-grid {
grid-template-columns: 200px 1fr 250px;
}
}
@media (max-width: 1000px) {
.goap-grid {
grid-template-columns: 1fr;
grid-template-rows: auto;
}
.goap-agents-panel,
.goap-goals-panel {
grid-row: auto;
}
.goap-panel {
max-height: 300px;
}
}