- New lib/storage.mjs: async S3 backup on every queue/log save
- Versioned S3 bucket (claw-apply-data) keeps every revision
- Auto-restore from S3 if local file is missing or corrupt
- saveQueue/saveLog now validate data type before writing
(prevents the exact bug that corrupted the queue)
- IAM role attached to EC2 instance for credential-free S3 access
- Config: storage.type = "local" (default) or "s3"
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
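The type validation in saveQueue/saveLog could look like the sketch below. The function and path names come from the commit; the exact guard is an assumption:

```javascript
import { writeFileSync } from "node:fs";

// Hypothetical sketch: validate before persisting, so a bad call
// (e.g. saveQueue(undefined)) fails loudly instead of corrupting the file.
export function saveQueue(queue, path = "queue.json") {
  // The queue must be an array of job objects; anything else is a caller bug.
  if (!Array.isArray(queue)) {
    throw new TypeError(`saveQueue expected an array, got ${typeof queue}`);
  }
  writeFileSync(path, JSON.stringify(queue, null, 2));
}
```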
Iterate over the full queue array instead of getJobsByStatus() results,
and pass it to saveQueue(). The previous code passed no argument, which
would corrupt or silently fail the save.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
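A sketch of the fix's shape (helper names are assumptions; only saveQueue appears in the commit):

```javascript
// The status loop mutates job objects in the full queue array; that same
// array must then be passed to saveQueue(queue) -- the bug was calling
// saveQueue() with no argument after updating.
function applyUpdates(queue, updates) {
  for (const job of queue) {
    if (updates.has(job.id)) job.status = updates.get(job.id);
  }
  return queue; // caller passes this full array to saveQueue(queue)
}
```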
When a batch completes but scores aren't written back (collection
error), jobs get stuck with filter_batch_id set and never re-submitted.
Now, if no filter_state.json exists (no batch in flight) but jobs carry
batch markers without scores, the markers are cleared so those jobs are
re-submitted on the next run.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
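The recovery check could be sketched as follows (field names filter_batch_id, filter_score, and filter_state.json are from the commits; the function name is an assumption):

```javascript
import { existsSync } from "node:fs";

// Hypothetical sketch: if no batch is in flight (no filter_state.json) but
// jobs still carry a batch marker without a score, clear the marker so the
// next run re-submits them.
export function clearStaleBatchMarkers(queue, statePath = "filter_state.json") {
  if (existsSync(statePath)) return 0; // a batch is still in flight; don't touch
  let cleared = 0;
  for (const job of queue) {
    if (job.filter_batch_id && job.filter_score == null) {
      delete job.filter_batch_id;
      cleared++;
    }
  }
  return cleared;
}
```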
- Removed Telegram notification on batch submit (only notify on collect
when results are ready)
- After collecting, immediately submit remaining unscored jobs in the
same run instead of waiting for next cron cycle
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Search: show per-track breakdown (found/added per track name)
Filter: show top 5 scoring jobs with score, title, company and cost
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
logStream.end() callback wasn't firing reliably, leaving processes hanging.
process.exit() is synchronous and forces exit regardless of open handles.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
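The shutdown pattern described above, sketched with an injectable exit so it can be exercised without killing the process (names are assumptions):

```javascript
// Best-effort flush of the tee'd log stream, then a forced exit so the open
// stream handle cannot keep the event loop alive and block the next cron run.
function shutdown(logStream, { exit = process.exit } = {}) {
  logStream.end(); // flush is best-effort; its callback may never fire
  exit(0);         // process.exit is synchronous and ignores open handles
}
```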
- All entry points with log tee now call logStream.end() + process.exit()
(log stream kept event loop alive, blocking next cron run)
- easy_apply: detect "no longer accepting applications" and similar closed
listing text before reporting as unsupported
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
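The closed-listing check could be a pattern match run before the "unsupported" classification. Only the first phrase appears in the commit; the others are assumed examples of "similar closed listing text":

```javascript
// Hypothetical sketch: match known closed-listing phrases before reporting
// the page as an unsupported application flow.
const CLOSED_PATTERNS = [
  /no longer accepting applications/i, // phrase from the commit
  /this job is closed/i,               // assumed additional phrasing
  /position has been filled/i,         // assumed additional phrasing
];

export function isClosedListing(pageText) {
  return CLOSED_PATTERNS.some((re) => re.test(pageText));
}
```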
- Remove unused APPLY_PRIORITY array (replaced by score-based sort)
- Fix run timeout: it only broke the inner loop; now it breaks the outer platform loop too
- Remove dead lastProgress variable in easy_apply step loop
- Add stdout/stderr log tee to job_searcher, job_filter, telegram_poller
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
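The timeout fix maps naturally onto a labeled break, which exits both loops at once. A sketch under assumed names (the clock is injected so the behavior is testable):

```javascript
// Sketch: the deadline check previously only broke the inner job loop;
// a labeled break also leaves the outer platform loop.
function runAll(platforms, deadlineMs, now = Date.now) {
  const started = now();
  const done = [];
  outer: for (const platform of platforms) {
    for (const job of platform.jobs) {
      if (now() - started > deadlineMs) break outer; // exit BOTH loops
      done.push(job);
    }
  }
  return done;
}
```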
- addJobs: allows same job on multiple tracks (dedup key = track::id)
- Cross-track copies get composite id (job.id_track) to avoid batch collisions
- dedupeAfterFilter(): after collect, keeps highest-scored copy per URL, marks rest as 'duplicate'
- Called automatically at end of collect phase
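The dedupe step could be sketched like this (function and field names from the commit; tie-breaking and the missing-score fallback are assumptions):

```javascript
// Hypothetical sketch of dedupeAfterFilter: per URL, keep the
// highest-scored copy and mark every other copy as 'duplicate'.
export function dedupeAfterFilter(queue) {
  const best = new Map();
  for (const job of queue) {
    const cur = best.get(job.url);
    if (!cur || (job.filter_score ?? -1) > (cur.filter_score ?? -1)) {
      best.set(job.url, job);
    }
  }
  for (const job of queue) {
    if (best.get(job.url) !== job) job.status = "duplicate";
  }
  return queue;
}
```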
- submitBatch → submitBatches: groups jobs by track, submits one batch each
- filter_state.json now stores batches[] array instead of single batch_id
- Collect waits for all batches to finish before processing
- Each track gets its own cached system prompt = better caching + cleaner scoring
- Idempotent collect: skips already-scored jobs
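The per-track grouping behind submitBatches could be sketched as below; the actual batch submission call is elided, since one batch per track is what lets each track share a cached system prompt:

```javascript
// Sketch (helper name assumed): group jobs by track so each track gets its
// own batch -- and therefore its own cached system prompt.
export function groupByTrack(jobs) {
  const byTrack = new Map();
  for (const job of jobs) {
    if (!byTrack.has(job.track)) byTrack.set(job.track, []);
    byTrack.get(job.track).push(job);
  }
  return byTrack; // submit one batch per entry; record every id in batches[]
}
```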
- Submit phase now excludes jobs with filter_batch_id set (already in a batch)
- After submitting, stamps each job with filter_batch_id = batchId
- Filtered jobs already excluded by status='filtered'
- Prevents duplicate submissions when batch errors cause state to clear without scores
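The submit-phase selection could be a single filter (field names from the commits; excluding already-scored jobs here is an assumption, since the commits put that check in the collect phase):

```javascript
// Sketch: a job is eligible for submission only if it is not already in a
// batch, not filtered out, and (assumed) not yet scored.
export function jobsToSubmit(queue) {
  return queue.filter(
    (j) => !j.filter_batch_id && j.status !== "filtered" && j.filter_score == null
  );
}
```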
- updateJobStatus was called 4,652 times, causing ~4,652 file reads/writes
- Now loads queue once, applies all updates in memory, saves once
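The load-once/save-once shape could look like this, with I/O injected as functions (names other than saveQueue are assumptions):

```javascript
// Sketch of the batching fix: instead of load -> update -> save per call
// (one read and one write each of the ~4,652 times), load the queue once,
// apply every update in memory, and save once.
export function updateJobStatuses(loadQueue, saveQueue, updates) {
  const queue = loadQueue();                         // one read
  const byId = new Map(queue.map((j) => [j.id, j]));
  for (const [id, status] of updates) {
    const job = byId.get(id);
    if (job) job.status = status;
  }
  saveQueue(queue);                                  // one write
  return queue;
}
```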
- Model was using OpenClaw alias (sonnet-4-6) not native Anthropic ID
- Only claude-3-haiku-20240307 is available on this API key; update settings.example.json
- search_runs.json: append-only history of every searcher run
(started_at, finished, added, seen, platforms, lookback_days)
- search_progress_last.json: snapshot of final progress state after
each completed run — answers 'what keywords/tracks were searched?'
- filter_runs.json: append-only history of every filter batch
(batch_id, submitted/collected timestamps, model, passed/filtered/errors)
Fixes the 'did the 90-day run complete?' ambiguity going forward
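An append-only history write could be as simple as the sketch below (file and field names from the commit; the helper name is an assumption):

```javascript
import { readFileSync, writeFileSync, existsSync } from "node:fs";

// Hypothetical sketch: append one record per run to a JSON array on disk,
// never rewriting earlier entries' contents.
export function appendRun(path, record) {
  const runs = existsSync(path) ? JSON.parse(readFileSync(path, "utf8")) : [];
  runs.push(record);
  writeFileSync(path, JSON.stringify(runs, null, 2));
  return runs.length;
}
```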
- Batch API = 50% cost savings vs synchronous calls
- Prompt caching on system prompt (profile + criteria shared across all jobs)
- One request per job with custom_id = job ID for result matching
- Two-phase state machine: submit → poll/collect (hourly cron safe)
- filter_state.json tracks pending batch ID between runs
- Model configurable via settings.filter.model (default: claude-sonnet-4-6)
- Telegram notifications on submit + collect
- Errors pass through — never block applications due to filter failure
- --stats flag for queue overview
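The two-phase submit → poll/collect machine could be sketched as below. The real Anthropic Batches API calls are stubbed out as injected functions; only the filter_state.json handoff between cron runs (the part the commit describes) is shown, and all names other than filter_state.json and batch_id are assumptions:

```javascript
import { existsSync, readFileSync, writeFileSync, unlinkSync } from "node:fs";

// Hypothetical sketch of an hourly-cron-safe state machine:
//   run N:   no state file -> submit a batch, record its id, stop
//   run N+1: state file present -> try to collect; if the batch is still
//            running (collect returns null), leave state for the next run
export async function filterTick(statePath, { submit, collect }) {
  if (!existsSync(statePath)) {
    const batchId = await submit();                       // phase 1: submit
    writeFileSync(statePath, JSON.stringify({ batch_id: batchId }));
    return { phase: "submitted", batchId };
  }
  const { batch_id } = JSON.parse(readFileSync(statePath, "utf8"));
  const results = await collect(batch_id);                // phase 2: poll/collect
  if (results == null) return { phase: "pending", batchId: batch_id };
  unlinkSync(statePath);                                  // clear state for next cycle
  return { phase: "collected", batchId: batch_id, results };
}
```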
- lib/filter.mjs: batch scoring engine (10 jobs/call, Claude Haiku)
- job_filter.mjs: standalone CLI with --dry-run and --stats flags
- Threshold configurable globally + per-search in search_config.json (filter_min_score, default 5)
- Job profiles (gtm/ae) passed as context via settings.filter.job_profiles
- Filtered jobs get status='filtered' with filter_score + filter_reason
- Filter errors pass jobs through (never block applications)
- status.mjs: added 'AI filtered' line to report
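The threshold resolution and pass-through marking could look like this sketch. The key filter_min_score, the default of 5, and the filter_score/filter_reason/status='filtered' fields are from the commits; where the global default lives and the helper names are assumptions:

```javascript
// Hypothetical sketch: per-search threshold overrides the global one,
// falling back to the documented default of 5.
export function minScoreFor(searchCfg, settings) {
  return searchCfg.filter_min_score ?? settings?.filter?.min_score ?? 5;
}

// Record the score; below-threshold jobs are marked 'filtered' but stay in
// the queue (they are excluded from applications, not deleted).
export function applyScore(job, score, reason, threshold) {
  job.filter_score = score;
  if (score < threshold) {
    job.status = "filtered";
    job.filter_reason = reason;
  }
  return job;
}
```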