
2026 small-team shared remote Mac: GitHub merge queue, runner label routing, locks & timeouts


Published April 11, 2026

Meshmac Team

On a shared remote Mac, two different queues collide in people’s heads: GitHub’s merge queue (pre-merge, trunk-shaped) and the self-hosted runner backlog (machine-shaped). This article separates the concerns, gives a decision matrix, and ends with a parameter checklist—queue depth, labels, concurrency locks, flock/timeout, and preemption—so small teams can tune CI without guessing.

Two queues, one machine

The merge queue answers: “Which pull request is allowed to land next, and did it still pass after rebasing onto the latest default branch?” It reduces merge skew and broken trunk at the Git layer. Runner labels answer a different question: “Which physical or rented remote Mac may execute this job?” Labels never create fairness across repos by themselves—they only filter eligible runners.
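
As a concrete sketch of that filtering, a job declares the runners it may use via runs-on (the label names here are illustrative, following the tier naming used later in this article):

```yaml
# Hypothetical PR job: eligible only on self-hosted Apple Silicon Macs
# with the pinned Xcode toolchain and the PR capacity tier.
# Labels filter eligible runners; they do not schedule fairly across repos.
jobs:
  build:
    runs-on: [self-hosted, macOS, arm64, xcode-16-2, ci-pr]
    steps:
      - uses: actions/checkout@v4
      - run: xcodebuild -scheme App -destination 'generic/platform=iOS' build
```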

When both mechanisms are active, your observable latency is often dominated by the runner pool: Xcode compiles, archives, simulators, and signing are single-host resources. Merge queue depth can look healthy while jobs still pile up behind a narrow label slice. That is why collaboration guides for multi-node Mac meshes and Codespaces vs direct node access still matter—you are designing for humans and CI on the same capacity story.

If you already documented label tiers, extend that doc with merge-queue settings so on-call engineers see one picture: trunk policy plus hardware policy. The runner routing and queue matrix remains the baseline for tag naming and split pools.

Decision matrix: merge queue vs label routing

Use the table to pick a primary control plane. Mature teams usually combine merge queue (trunk) with label routing (builders)—the “both” row is the default recommendation for shared Mac CI once you ship weekly or faster.

| Approach | Best when | Main risk | Mitigation on shared Mac |
| --- | --- | --- | --- |
| Merge queue first | High merge rate, strict trunk integrity, many contributors on one default branch. | Extra CI cycles per queue entry; perceived “double wait” if runners are saturated. | Cap concurrent queue builds, align required checks, and route queue workflows to ci-merge labels backed by dedicated hosts. |
| Label routing first | Low merge frequency, many toolchain variants, or heterogeneous Xcode pins. | Trunk can still break from logical conflicts the merge queue would catch. | Adopt the merge queue for the default branch; keep labels for Xcode/SDK isolation. |
| Both (recommended at scale) | Shared remote Mac pool serving PR and trunk checks; releases need predictable slots. | Policy sprawl: teams set incompatible timeouts or duplicate concurrency keys. | Use naming conventions for concurrency groups, document preemption, and mirror thresholds in the parameter checklist below. |
| Host flock without queue discipline | A single heavy mutex (signing) on one machine. | Hides GitHub-side parallelism; a stuck lock blocks unrelated workflows. | Pair flock with wall-clock timeouts and alerts; prefer separate nodes before unbounded critical sections. See the flock build queue FAQ. |
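
The first row’s mitigation can be sketched in workflow YAML. This hypothetical merge-queue workflow routes queue builds to dedicated ci-merge hosts and holds each queue entry to one uncancelled build (workflow and scheme names are assumptions, not defaults):

```yaml
# Hypothetical required-check workflow for merge-queue entries.
name: merge-queue-checks
on:
  merge_group:            # fired once per merge-queue entry

concurrency:
  # One build per queue entry; never cancel a re-check mid-flight.
  group: merge-queue-${{ github.ref }}
  cancel-in-progress: false

jobs:
  verify:
    runs-on: [self-hosted, macOS, arm64, xcode-16-2, ci-merge]
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v4
      - run: xcodebuild test -scheme App -destination 'platform=iOS Simulator,name=iPhone 16'
```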

Parameter checklist

Treat these numbers as baselines—adjust with your measured queue time, disk headroom, and signing constraints. The goal is operability: every parameter should be owned, documented, and visible in dashboards.

| Parameter | Starting baseline | Notes |
| --- | --- | --- |
| Merge queue depth / parallelism | 1–2 concurrent builds per pool for heavy Xcode; up to 3 only with spare CPU and disk metrics | Deep parallel queue entries multiply full rebuilds; correlate with an actions:read audit for surprise fan-out. |
| Runner pool queue depth alert | Warn beyond ~15–25 pending jobs per label set; page beyond ~40 | Labels do not create fairness; alert on wait time (created → assigned) per tier. |
| Runner labels (examples) | self-hosted, macOS, arm64, xcode-16-2, ci-pr / ci-merge / ci-release | Split ci-merge from noisy PR labels so merge-queue jobs do not starve behind optional checks. |
| Workflow concurrency | group: ${{ github.workflow }}-${{ github.ref }} with cancel-in-progress: true for PRs; false for release archives | Add a shared org/repo + trunk group for mutex-style sections if multiple workflows touch the same resource. |
| flock critical section | One signing/notarization lane per keychain; lock file under /var/tmp or a documented CI volume | Always pair with timeout (below); log the lock holder id. See the queue lock FAQ for patterns. |
| Job timeout-minutes | PR compile: 30–60; archive/export: 90–120; merge-queue re-check: same as PR unless the diff is tiny | Shorter timeouts reclaim stuck workers; tune from p95 duration, not the best case. |
| Shell timeout (GNU/BSD) | Wrap network calls: 120–300 s; wrap upload steps: 300–600 s | Prevents a hung curl from blocking the runner until the job-level timeout fires. |
| Preemption rules | PR < optional nightly < merge queue < release; never preempt signing mutex holders | Encode priority via labels and concurrency; optionally gate long jobs behind manual labels. Pair with load balancing and failover when preemption becomes political. |
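
The workflow concurrency baseline above can be written out as follows (the release group name is an illustrative assumption, not a GitHub default):

```yaml
# PR workflows: a newer push supersedes an older build on scarce Mac runners.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

# Release archive workflow: never cancel a half-finished archive.
# concurrency:
#   group: release-archive-${{ github.ref }}
#   cancel-in-progress: false
```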

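The flock and shell-timeout rows compose into one pattern. A minimal sketch, assuming util-linux flock is installed (macOS does not ship flock(1); Homebrew provides it) and the lock path and time budgets shown are illustrative, not MeshMac defaults:

```shell
#!/bin/sh
# Hypothetical guarded signing lane: one flock per keychain, bounded by timeout.
LOCK=/var/tmp/ci-signing.lock      # documented lock file for this lane

# flock -w 120: give up after 2 min of waiting instead of blocking forever.
# timeout 300: kill a hung signing step well before job timeout-minutes fires.
flock -w 120 "$LOCK" timeout 300 sh -c '
  echo "lock holder: ${GITHUB_RUN_ID:-local}"   # log the holder id for debugging
  # codesign / xcrun notarytool steps would run here
  echo "signing lane released"
'
status=$?
if [ "$status" -eq 124 ]; then
  echo "signing step hit the wall-clock timeout; runner reclaimed" >&2
fi
```

Exit code 124 is how GNU timeout reports an expired budget, which makes the two failure modes (lock starvation vs hung signing step) distinguishable in logs.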
Composing policies on a pooled Mac

Start from observable signals: median and p95 queue wait, runner busy minutes, and time to merge. If merge-queue waits are short but jobs spend a long time in the Queued state in Actions, you have a runner topology problem: invest in extra nodes or split labels before tuning GitHub-side concurrency again.

If merge-queue waits are long while runners sit idle, your required checks probably target the wrong labels, your branch filters omit merge_group events, or org-level concurrency caps starve only the trunk workflows. Fix routing first; locks second.
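
The missing-event case is the most common routing bug: a required check that only triggers on pull_request never starts for merge-queue entries, so the queue stalls. Assuming your required checks live in one workflow, the minimal fix is adding the trigger:

```yaml
# Ensure required checks also fire for merge-queue entries.
on:
  pull_request:
    branches: [main]
  merge_group:        # without this, queue entries wait on a check that never runs
```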

Document one page per pool: label diagram, merge queue name, concurrency keys, flock paths, timeout table, and on-call escalation. That single page is what makes rented Mac team plans usable across time zones—new hires should not reverse-engineer CI from workflow YAML alone.

FAQ

Should merge-queue jobs use the same labels as pull-request jobs?
They can share a toolchain label (for example xcode-16-2) but should not share capacity labels with noisy optional workflows. Give merge queue and release paths their own ci-merge / ci-release suffix so routing stays explicit.
What is the first knob when two repos fight one Mac?
Split runners by repo family or SKU, then tighten concurrency. Only after routing is honest should you change merge queue parallelism—otherwise you push queue latency back into human review cycles without fixing hardware contention.

Grow the pool before the queue depth hurts trunk

Browse the homepage, blog index, and public plans checkout with no login; size extra nodes or a higher tier when merge-queue latency or mutex contention stops matching your parameter table. The help center covers SSH/VNC access patterns for pooled builders.

When labels are honest but waits remain high, add capacity: duplicate runners across additional MeshMac nodes or upgrade your team package so ci-merge and ci-release lanes keep their own CPUs and disk—not shared leftovers from experimental workflows.
