ArgentOS Business — This feature is part of ArgentOS Business. The architecture is documented here for all users, but full functionality requires a Business license.
Overview
Heartbeat Contracts are the structured interface between the operator and the heartbeat runner. The agent maintains a HEARTBEAT.md file in its workspace that defines what it should check during each heartbeat cycle. The contract parser converts this markdown into machine-readable tasks that the heartbeat runner processes, the verification sidecar validates, and the accountability system scores.
The HEARTBEAT.md file has two sections:
Freeform Context
Everything outside the `## Tasks` section is passed as prompt context to the heartbeat runner. Use this for priorities, reminders, situational notes, or any information the agent should consider during the heartbeat cycle.
Structured Tasks
```markdown
# Heartbeat

Focus on email monitoring during business hours.
Richard has a client demo on Thursday — prioritize any support tickets.

## Tasks

- [ ] check_email | Check for new important emails | required | verify: email_count
- [ ] review_tasks | Review and update task priorities | required | verify: task_list_updated
- [x] weather_brief | Prepare morning weather brief | optional | verify: weather_sent
- [ ] memory_cleanup | Run memory deduplication | optional | verify: dedup_count | max_attempts: 5

## Notes

Anything after the Tasks section is also captured as context.
```
Each task in the `## Tasks` section follows this pipe-delimited format:

```
- [x] task_id | Description | required/optional | verify: hint | max_attempts: N
```
| Field | Required | Description |
|---|---|---|
| Checkbox `[ ]` or `[x]` | Yes | Whether the agent pre-marked it as done |
| `task_id` | Yes | Unique slug identifier (auto-slugified from text) |
| Description | Yes | Human-readable action description |
| `required` / `optional` | No | Whether completion is mandatory (default: `required`) |
| `verify: hint` | No | Hint for the verification sidecar (default: `task_completed`) |
| `max_attempts: N` | No | Maximum retry attempts (default: 3) |
Parsing Rules
- Task lines must start with a list marker (`-`, `*`, or `+`) followed by a checkbox
- Checkbox state: `[ ]` = unchecked, `[x]` or `[X]` = checked
- Fields are split by pipe (`|`) characters
- The `task_id` is auto-generated by lowercasing the text and replacing spaces with underscores
- If `required` or `optional` is not specified, the task defaults to `required`
- The `verify:` prefix is optional on the verification hint field
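The rules above can be sketched as a parser for a single task line. This is an illustrative sketch, not the actual ArgentOS parser: the `ParsedTask` shape and the `parseTaskLine` name are assumptions based on the grammar described in this section.

```typescript
interface ParsedTask {
  checked: boolean;
  taskId: string;
  description: string;
  required: boolean;
  verifyHint: string;
  maxAttempts: number;
}

// Hypothetical sketch of the pipe-delimited task-line grammar described above.
function parseTaskLine(line: string): ParsedTask | null {
  // A list marker (-, * or +) followed by a checkbox.
  const match = line.match(/^\s*[-*+]\s*\[([ xX])\]\s*(.+)$/);
  if (!match) return null;

  const checked = match[1].toLowerCase() === "x";
  const fields = match[2].split("|").map((f) => f.trim());

  const description = fields[1] ?? fields[0];
  // task_id: explicit slug if given, otherwise slugified from the description.
  const taskId = (fields.length > 1 ? fields[0] : description)
    .toLowerCase()
    .replace(/\s+/g, "_");

  let required = true;                // default: required
  let verifyHint = "task_completed";  // default verification hint
  let maxAttempts = 3;                // default retry budget

  for (const field of fields.slice(2)) {
    if (field === "optional") required = false;
    else if (field === "required") required = true;
    else if (field.startsWith("verify:")) verifyHint = field.slice(7).trim();
    else if (field.startsWith("max_attempts:")) maxAttempts = parseInt(field.slice(13).trim(), 10);
    else verifyHint = field; // the verify: prefix is optional on the hint field
  }

  return { checked, taskId, description, required, verifyHint, maxAttempts };
}
```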
Progress Tracking
Per-task progress is tracked across heartbeat cycles:
Cycle Initialization
When a new heartbeat cycle starts, `initCycleProgress()` carries state from the previous cycle:
| Previous Status | New Cycle Behavior |
|---|---|
| Agent pre-marked `[x]` | Starts as `verified` (0 attempts) |
| Previously failed with max attempts reached | Stays `failed` (no retry) |
| Previously failed with retries remaining | Resets to `pending` (attempt count preserved) |
| `verified` or `skipped` | Resets to `pending` |
| New task (no previous) | Starts as `pending` |
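The carry-over rules in the table can be sketched as follows. The `TaskProgress` shape and the function signature are assumptions for illustration; the docs name `initCycleProgress()` but do not specify its interface.

```typescript
type TaskStatus = "pending" | "verified" | "failed" | "skipped";

interface TaskProgress {
  status: TaskStatus;
  attempts: number;
  maxAttempts: number;
}

// Illustrative carry-over logic matching the cycle-initialization table.
function initCycleProgress(
  previous: TaskProgress | undefined,
  preMarkedDone: boolean,
  maxAttempts: number
): TaskProgress {
  // Agent pre-marked [x]: starts as verified with 0 attempts.
  if (preMarkedDone) return { status: "verified", attempts: 0, maxAttempts };

  // New task with no previous state: starts as pending.
  if (!previous) return { status: "pending", attempts: 0, maxAttempts };

  // Failed with max attempts reached: stays failed, no retry.
  if (previous.status === "failed" && previous.attempts >= previous.maxAttempts) {
    return { ...previous };
  }

  // Failed with retries remaining: back to pending, attempt count preserved.
  if (previous.status === "failed") {
    return { status: "pending", attempts: previous.attempts, maxAttempts };
  }

  // verified or skipped last cycle: resets to pending.
  return { status: "pending", attempts: 0, maxAttempts };
}
```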
Progress Persistence
Progress is persisted to `~/argent/memory/heartbeat-progress.json`. The file is created automatically if it does not exist.
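A minimal sketch of the load path, assuming Node.js and treating the stored JSON as an opaque map (the actual on-disk shape is not documented here; `loadProgress` and `saveProgress` are hypothetical names):

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

const PROGRESS_PATH = path.join(os.homedir(), "argent/memory/heartbeat-progress.json");

// Load persisted progress, creating an empty file on first use.
function loadProgress(file: string = PROGRESS_PATH): Record<string, unknown> {
  if (!fs.existsSync(file)) {
    fs.mkdirSync(path.dirname(file), { recursive: true });
    fs.writeFileSync(file, "{}");
  }
  return JSON.parse(fs.readFileSync(file, "utf8"));
}

// Write progress back after each cycle.
function saveProgress(progress: Record<string, unknown>, file: string = PROGRESS_PATH): void {
  fs.mkdirSync(path.dirname(file), { recursive: true });
  fs.writeFileSync(file, JSON.stringify(progress, null, 2));
}
```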
Verification Process
After the agent processes each task, the verification sidecar (`heartbeat-verifier.ts`) evaluates the outcome:
| Verdict | Description |
|---|---|
| `verified` | Task completed, verification confirms |
| `not_verified` | Task claimed complete, verification found otherwise |
| `unclear` | Verification inconclusive |
A special `groundTruthContradiction` flag is set when the agent explicitly claimed a result that verification proves false — the harshest penalty in the accountability system.
Integration with Accountability Scoring
Verification verdicts feed directly into the Accountability System:
| Verdict | Points |
|---|---|
| Verified required task | +10 |
| Verified optional task | +5 |
| Not verified | -15 |
| Ground truth contradiction | -30 (stacks with -15) |
| Unclear | -2 |
The contract system is the primary driver of the daily accountability score. Well-written contracts with clear verification hints produce reliable scores.
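The points table maps to a small scoring function. This is a sketch of the documented point values only; the `VerificationResult` shape and `scoreResult` name are assumptions, not the actual accountability-system API.

```typescript
type Verdict = "verified" | "not_verified" | "unclear";

interface VerificationResult {
  verdict: Verdict;
  required: boolean;
  groundTruthContradiction: boolean;
}

// Illustrative scoring matching the points table above.
function scoreResult(r: VerificationResult): number {
  switch (r.verdict) {
    case "verified":
      return r.required ? 10 : 5;
    case "not_verified":
      // A ground-truth contradiction (-30) stacks with the base -15 penalty.
      return r.groundTruthContradiction ? -15 - 30 : -15;
    case "unclear":
      return -2;
  }
}
```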
Best Practices
Writing Good Tasks
- Use clear verification hints: `verify: email_count > 0` is better than `verify: done`
- Separate required from optional: only mark tasks `required` if they must complete every cycle
- Set appropriate `max_attempts`: complex tasks may need 5+ attempts; simple checks need only 2
- Use descriptive task IDs: `check_email` is better than `task_1`
Freeform Context
- Include priorities: “Focus on email monitoring during business hours”
- Note temporal context: “Richard has a client demo on Thursday”
- Set expectations: “Skip weather brief on weekends”
Task Granularity
- Too coarse: “Do everything” — impossible to verify
- Too fine: 20 tasks per cycle — overwhelming
- Right size: 3-7 tasks covering key responsibilities with specific verification criteria