Claude Code Issue Tracker: A Case Study in Bot-Managed Bug Neglect (March 2026 analysis)


Disclosure: This analysis was produced by Claude (Opus 4.6) in a Claude Code session. I asked the questions; Claude did the research, data gathering, and writing.

Part 1 of 2 — see also: OpenCode vs Claude Code: Issue Tracker Comparison

Claude Code's Issue Tracker at Scale: Automation, Community Debugging, and the Visibility Gap

This analysis is based on publicly visible GitHub activity — issue comments, workflow files, and labels. It cannot account for internal issue tracking, private support channels, or fixes implemented without public discussion.

TL;DR

Claude Code has ~6,000 open issues (3,554 labeled as bugs) and receives ~2,000–2,500 new issues per week. An estimated 49–71% of all closures are bot-driven (see methodology note). A deep dive into session rename/resume bugs reveals 12+ related issues spanning 2+ months, 70+ community comments, community-provided root cause analysis — and no visible staff engagement on those specific issues. A reported fix-then-regression went unacknowledged. A related consolidation issue (#27242, 45 thumbs-up) received one brief staff response but no follow-up.


The Numbers (as of March 14, 2026)

Metric                               Count
Repository age                       ~13 months
GitHub stars                         77,748
Total issues ever filed              32,769
Currently open                       ~5,978
Open bugs (labeled "bug")            3,554
New issues per week (4-week avg)     ~2,003
Closed with duplicate label          10,069
Closed with autoclose label          5,502
Closed with stale label              3,147
Closed with invalid label            918
Bot-driven closures (est.)           49–71% of all closures*

* See methodology note below for how this range was calculated.

The question is not whether automation should exist at this scale, but why so much of it is visibly directed at classification and closure while so little is visibly directed at escalation and response.


The Structural Problem: The Lock-Close-Duplicate Cycle

Before examining a specific bug, it's worth understanding a structural issue in the triage system that affects all bugs.

The repo's public workflows (.github/workflows/) reveal a set of interacting bots:

Workflow                      Model              Behavior
claude-dedupe-issues.yml      Claude Sonnet 4.5  Processes every new issue for duplicate detection; posts a "Found N possible duplicate issues" comment
auto-close-duplicates.yml                        Daily sweep: auto-closes issues 3 days after the duplicate comment, unless the author thumbs-downed it
claude-issue-triage.yml       Claude Opus 4.6    Labels every new issue (bug, needs-repro, needs-info, invalid, etc.); does not comment
sweep.yml                                        Runs twice daily: marks issues stale after 14 days of inactivity, then closes them 14 days later
lock-closed-issues.yml                           Locks closed issues after 7 days; no further comments allowed
issue-lifecycle-comment.yml                      Posts timeout warnings when lifecycle labels are applied

These bots interact to create a self-reinforcing cycle for unfixed bugs:

  1. User files a bug report with reproduction steps
  2. Duplicate bot finds similar (also unfixed) issues, posts a duplicate comment
  3. Auto-close bot closes the issue 3 days later
  4. Lock bot locks it 7 days after closure — no further comments allowed
  5. The bug persists. The user can't comment on the locked issue
  6. User files a new issue
  7. Duplicate bot matches the new report to the issue closed in step 3 (now locked)
  8. Go to step 3

This cycle fragments community engagement across many issues, making it harder for any single report to accumulate enough visibility to trigger human review. Issues with 10+ thumbs-up are exempt from stale/autoclose, and authors can thumbs-down the duplicate comment to prevent auto-closure — but these escape hatches are not prominently documented.
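
For concreteness, here is a minimal sketch of what the auto-close step's logic amounts to, reconstructed from the observed behavior rather than from the repo's actual script; the function name and the Octokit wiring are illustrative assumptions.

// Sketch of the auto-close-duplicates behavior (illustrative, not the actual
// workflow source). Assumes an authenticated Octokit client.
import { Octokit } from "@octokit/rest";

const THREE_DAYS_MS = 3 * 24 * 60 * 60 * 1000;

async function sweepDuplicates(octokit: Octokit, owner: string, repo: string) {
  const issues = await octokit.paginate(octokit.rest.issues.listForRepo, {
    owner, repo, state: "open", labels: "duplicate",
  });
  for (const issue of issues) {
    const comments = await octokit.rest.issues.listComments({
      owner, repo, issue_number: issue.number,
    });
    // Find the dedupe bot's "Found N possible duplicate issues" comment.
    const dupeComment = comments.data.find(
      (c) => c.body?.includes("possible duplicate issues") ?? false,
    );
    if (!dupeComment) continue;
    if (Date.now() - Date.parse(dupeComment.created_at) < THREE_DAYS_MS) continue;
    // Escape hatch: the author's thumbs-down on that comment blocks closure.
    const reactions = await octokit.rest.reactions.listForIssueComment({
      owner, repo, comment_id: dupeComment.id,
    });
    const authorObjected = reactions.data.some(
      (r) => r.content === "-1" && r.user?.login === issue.user?.login,
    );
    if (!authorObjected) {
      await octokit.rest.issues.update({
        owner, repo, issue_number: issue.number, state: "closed",
      });
    }
  }
}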

The lifecycle system (scripts/issue-lifecycle.ts)

Label         Timeout                                      Closures to date
duplicate     3 days                                       10,069
autoclose     14 days                                      5,502
stale         14 days (28 total, after 14 days inactive)   3,147
invalid       3 days                                       918
needs-repro   7 days                                       11
needs-info    7 days                                       15
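
The timeout table maps naturally onto a small config. A sketch of the shape this plausibly takes inside scripts/issue-lifecycle.ts; the variable and field names here are guesses, not the script's actual contents:

// Illustrative shape for the lifecycle timeouts above; not the actual
// contents of scripts/issue-lifecycle.ts, whose names may differ.
const DAY_MS = 24 * 60 * 60 * 1000;

const LIFECYCLE_TIMEOUTS: Record<string, number> = {
  duplicate: 3 * DAY_MS,
  autoclose: 14 * DAY_MS,
  stale: 14 * DAY_MS, // applied after 14 days of inactivity, so 28 days total
  invalid: 3 * DAY_MS,
  "needs-repro": 7 * DAY_MS,
  "needs-info": 7 * DAY_MS,
};

function shouldClose(label: string, labeledAt: Date, now = new Date()): boolean {
  const timeout = LIFECYCLE_TIMEOUTS[label];
  return timeout !== undefined && now.getTime() - labeledAt.getTime() >= timeout;
}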

Case Study: Session Rename/Resume Bugs

The Bug

/rename stores the custom session title as a custom-title JSON line in the session JSONL file. The /resume picker uses a fast loader that only reads the last 16–64KB of the file. After a few conversation turns, the custom-title line gets pushed outside that tail window and silently disappears from the picker.
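
The failure mode is easy to reproduce in miniature: if a reader only inspects the last N bytes of an append-only file, any line written more than N bytes ago is invisible. A self-contained sketch (the 64KB constant mirrors the analysis below; the exact JSON key for the title line is an assumption):

// Demonstrates why a fixed-size tail read loses the custom-title line once
// enough turns are appended. Illustrative; not Claude Code's actual code.
import { openSync, readSync, fstatSync, closeSync } from "node:fs";

const TAIL_WINDOW = 64 * 1024; // 64KB, per the v2.1.72 analysis below

function titleVisibleInTail(sessionPath: string): boolean {
  const fd = openSync(sessionPath, "r");
  try {
    const size = fstatSync(fd).size;
    const start = Math.max(0, size - TAIL_WINDOW);
    const buf = Buffer.alloc(size - start);
    readSync(fd, buf, 0, buf.length, start);
    // If the custom-title line was appended more than 64KB of content ago,
    // it is simply not in this buffer. (The key name is a guess.)
    return buf.toString("utf8").includes('"type":"custom-title"');
  } finally {
    closeSync(fd);
  }
}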

This is part of a broader family of sessions-index.json bugs — at least 20 open issues track variants of the index falling out of sync with actual session files on disk.

Root Cause (found by the community)

Community member @Astro-Han traced the exact code path in the compiled binary (v2.1.72):

// uH8: reads head (first 64KB) + tail (last 64KB) of the JSONL
let K = await R.read(q, 0, S3_, 0);           // head — S3_ = 65536
let O = Math.max(0, T - S3_);                  // tail offset
// ...
// woK: searches for customTitle in TAIL only
j = rg(K, "customTitle");   // if custom-title line is >64KB from end → gone

Another user @rfaile313 found a 16KB window (BbR = 16384) in v2.1.39. Either way, 3–5 conversation rounds push the title out of range.

Real-world verification: one user's session had the custom-title at byte offset 19,950,786 — but after 3 more rounds it was 253KB from the end, far beyond the 64KB read window.
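
The arithmetic from that report, spelled out:

// Numbers from the user report above: after 3 more rounds, the title line
// sat ~253KB from the end of the file, but the reader sees only the last 64KB.
const windowBytes = 64 * 1024;      // 65,536-byte tail read (S3_)
const distanceFromEnd = 253 * 1024; // ~259,072 bytes
console.log(distanceFromEnd > windowBytes); // true → title outside the window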

The Issue Cluster

At least 12 distinct issues were filed about this family of bugs. None received comments from known Anthropic staff:

Issue    Title                                             State           Staff Replies
#25090   Renamed session name disappears after 2nd exit    OPEN            0 of 20 comments
#23610   /rename overwritten after /resume                 CLOSED          0 of 8
#26249   /rename not indexed, can't resume by name         CLOSED (dup)    0
#26123   /resume broken since v2.1.31 (3 root causes)      CLOSED          0 of 17
#24065   /rename title not persisted                       CLOSED (stale)  0
#25729   /resume shows only ~5–10 sessions                 OPEN            0 of 4
#19707   --resume fails after /rename from different dir   CLOSED (dup)    0
#16973   Conversation name lost after /resume              CLOSED (dup)    0
#18311   --resume shows "No conversations found"           OPEN            0 of 9
#22107   Session resume logic losing context               OPEN            0 of 16
#25905   /rename makes sessions unresumable (macOS)        CLOSED (dup)    0 of 3 (all bot)
#26134   /rename doesn't persist (Windows)                 OPEN            0 of 3

A related consolidation issue (#27242) covering broader session history inaccessibility did receive one response from an Anthropic engineer (@stevenpetryk, who identifies as "Working on Claude Code" at Anthropic) on Feb 25: "Thanks for reporting all this, appreciate the feedback. I'm going to raise these to the team." No public follow-up has appeared as of March 14, 2026.

Timeline

Date          Event
Jan 8, 2026   First reports (#16973)
Feb 6         Detailed root cause with 16KB tail-read analysis (#23610)
Feb 11        Main bug report (#25090); user reports 5+ failed renames/day
Feb 16        Consolidated report with 3 root causes and a one-line fix (community-provided sed patch)
Feb 23        User @ThatDragonOverThere reports v2.1.53 fixed the bug; custom names appeared in the picker
Feb 23        Same user reports v2.1.55 (released ~3 hours later) regressed it; names gone again
Feb 25        @stevenpetryk (Anthropic) responds on related issue #27242: "I'm going to raise these to the team"
Mar 2         User confirms still broken in v2.1.62; notes behavior is now intermittent
Mar 11        Complete 64KB-window root cause posted with binary analysis (#25090)
Mar 14        Still broken in latest version; no follow-up on #27242

Community-Proposed Fixes (none publicly acknowledged on the rename-specific issues)

  1. Re-append custom-title at every save point, ensuring it stays within the tail window (sketched after this list)
  2. Add customTitle field to sessions-index.json — the index schema currently has no field for custom names
  3. Write a sidecar .meta.json per session — eliminates dependency on JSONL byte offsets entirely
  4. Stop hook workaround — community-built scripts that re-append the title line on session exit
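
A minimal sketch of fix 1, assuming a hypothetical per-turn save function and the same custom-title line shape as above; all names here are illustrative, not taken from the issues:

// Fix 1 sketch: re-append the custom-title line on every save so the newest
// copy always lands inside the tail window. Hypothetical helper; not a patch
// from the issue threads.
import { appendFileSync } from "node:fs";

function saveTurn(sessionPath: string, turnJsonl: string, customTitle?: string) {
  appendFileSync(sessionPath, turnJsonl + "\n");
  if (customTitle) {
    // Duplicate title lines are cheap; a tail reader only needs the latest.
    appendFileSync(
      sessionPath,
      JSON.stringify({ type: "custom-title", title: customTitle }) + "\n",
    );
  }
}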

Broader Patterns

Rapid releases without visible regression tracking

Claude Code releases at a rapid pace, sometimes multiple versions per day. According to user reports, the session rename fix shipped in v2.1.53 and regressed in v2.1.55, released the same day. The changelog for v2.1.55 listed only "Fixed BashTool failing on Windows with EINVAL error"; the session regression was not mentioned. Whether this was an unintended side effect or a deliberate revert is unclear from public information.

Community debugging receives little visible acknowledgment

Across the session rename/resume cluster, users have provided:

  • Exact byte offsets in the compiled binary
  • Function names in the minified source (yw8, woK, nKT, x$T, R1T)
  • One-line fix proposals with multiple implementation options
  • Working workaround scripts and hooks
  • Regression testing timelines across 10+ versions

On the 12 rename-specific issues, none of this received visible staff acknowledgment. The one staff response in the broader cluster (#27242) acknowledged the reports but did not engage with the technical analysis.

AI processes every issue — for classification, not visible follow-up

Anthropic's own AI models process every issue filed:

  • Claude Sonnet 4.5 processes every new issue for duplicate detection
  • Claude Opus 4.6 processes every new issue for triage labeling

The issue tracker is not unmonitored — it's actively processed by AI. But based on the public workflow definitions, the AI's role is to classify and close, not to fix or escalate. The triage bot's instructions say: "Don't post any comments or messages to the issue. Your only actions are adding or removing labels." The dedupe bot identifies potential duplicates, after which issues are auto-closed unless the reporter objects. Neither bot's public instructions include flagging critical bugs for human attention — though internal escalation processes may exist that aren't visible in the public repo.

The scale problem is real — but it's the wrong frame for this bug cluster

2,000+ issues per week explains aggressive intake automation. It does not explain why 12 issues with community-provided root causes and proposed fixes received zero visible engineering follow-up over 2+ months. That gap is not a volume problem — it's a prioritization gap at a company with the resources to close it.


Conclusion

Anthropic is one of the best-funded AI companies in the world, and its public issue workflow shows substantial visible investment in automated classification and closure. The session rename/resume bug cluster shows the other side of that investment: 12+ issues, 70+ community comments, binary-level root cause analysis, proposed fixes, a reported fix-then-regression — and across all of those specific issues, zero visible staff engagement. The one staff response in the broader cluster was a brief acknowledgment without follow-up.

The lock-close-duplicate cycle compounds this: it fragments community engagement across many issues, prevents consolidation on locked threads, and makes it structurally difficult for well-documented bugs to accumulate the visibility needed for human review.

What this bug cluster makes visible is a gap not between what's technically possible and what exists, but between the visible investment in intake automation and the visible investment in acting on what the community has already diagnosed. The system is optimized to process reports, not to visibly respond to them.


Follow-up: OpenCode vs Claude Code: Issue Tracker Comparison — comparing automation strategies, issue volumes, and visible engagement across both projects.

Analysis performed on March 14, 2026 using data from the anthropics/claude-code GitHub repository, including its public .github/workflows/ and scripts/.


Methodology Note

The "49–71% bot-driven closures" range is calculated as follows:

  • Lower bound (49%): Counts only issues closed with duplicate (10,069) or stale (3,147) labels. These are the most clearly bot-driven closures. 13,216 / 26,792 total closures = 49.3%.

  • Upper bound (71%): Adds autoclose (5,502) and invalid (918) labels, then subtracts ~604 issues that carry multiple lifecycle labels (to avoid double-counting). This may slightly overcount because some invalid-labeled issues are closed by the reporter themselves after seeing the bot's label.
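
The bound arithmetic, reproduced for checkability:

// Reproduces the 49–71% range from the label counts above.
const totalClosed = 26_792;
const duplicate = 10_069, stale = 3_147, autoclose = 5_502, invalid = 918;
const multiLabel = 604; // issues carrying more than one lifecycle label

const lower = (duplicate + stale) / totalClosed;                                    // ≈ 0.493
const upper = (duplicate + stale + autoclose + invalid - multiLabel) / totalClosed; // ≈ 0.710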

A definitive number would require checking the closed event actor on all 26,792 closed issues, which exceeds GitHub API rate limits for a single session. The true figure may be closer to the upper bound, since the autoclose label is applied and closed entirely by bots (verified by spot-checking).
