ReBox

A tool I built for on-site equipment tracking at live events. It runs on mobile, works fully offline, and requires no account to use.

iOS 16+ · Offline-first · Private test DB

The problem

Repacking equipment on-site is hard. People are working fast, in poor conditions, off an Excel file or a paper list — trying to find one item under pressure is not a workflow. Things end up in the wrong cases, items go missing, damage doesn't get logged. Most of the time nobody finds out until the next event.

That was the problem I wanted to solve. Over the past months I built a tool called ReBox.


What it does

  1. Import the packing list

    CSV, XLSX, or a text-readable PDF. Per-container files and master lists are both detected automatically. No list? Build one from scratch on site by scanning items into containers.

  2. Scan items with the phone

    Point the camera at a barcode. The app shows which container the item belongs to. The operator taps to confirm.

  3. Flag damage and missing items

    Mark an item missing, or log a damage report with a photo. Both are logged in the moment, not at the end of the job.

  4. Deploy to on-site locations

    Assign items to user-defined positions — Main Stage, FOH, Court 1. The hub answers "where is this?" without opening a case.

  5. Generate shipping documents

    Packing list, proof of delivery, pro-forma invoice, temporary export declaration. All generated on the device from the live manifest.
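
One piece of step 1 lends itself to a sketch: deciding whether an uploaded file is a master list or a single container's list. A minimal TypeScript sketch, assuming a parsed row shape with an optional container column — the real parser and its column names are not shown in this write-up:

```typescript
// Hypothetical row shape after CSV/XLSX parsing; field names are illustrative.
interface ImportRow {
  name: string;
  container?: string; // present in master lists, absent in per-container files
}

type ImportKind = "master" | "per-container";

// A file whose rows name more than one container is treated as a master
// list; otherwise it is one container's packing list.
function detectImportKind(rows: ImportRow[]): ImportKind {
  const containers = new Set(
    rows
      .map(r => r.container?.trim())
      .filter((c): c is string => !!c)
  );
  return containers.size > 1 ? "master" : "per-container";
}
```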


What an operator actually sees

Six frames from the iOS app:

  1. Hub — session overview
  2. Scan — camera sees a barcode
  3. Box overview — per-box progress
  4. Damage entry with photo
  5. On-site — where things are deployed
  6. Documents — generated on device

The dashboard

Once things were being logged on site, managers back at the office still couldn't see what was happening. So a web dashboard got added. Same data, different seat.


Phone and dashboard, same picture

Operators on site and managers on a laptop see the same state within roughly half a second. When there's no connection, the phone keeps working.

[Diagram: iPhone (operator) → Firestore + Cloud Functions → web dashboard (manager). A scan becomes a live update in ≈ 500 ms typical end-to-end latency.]
If the phone is offline, writes queue locally and flush on reconnect. Every interaction updates the phone UI immediately; the network is the async tail.

More than a repack tool

Because every item gets tracked through the whole job, there's a byproduct: a detailed production log covering inventory state, on-site location, and service context. That's not what I built this for — but if anyone ever wants it later, the data is already there.

Anything more that becomes possible if someone picks it up is for later. The tool works today as a repack app, and that's enough.


Where it stands

It's on a private test database right now. I'm offering it with no conditions attached — use it as is, move it to your own infrastructure, or take it apart entirely. No cost, no project, no ask.

I just think the people doing this job deserve a better tool than a printed spreadsheet. Happy to walk anyone through it whenever it makes sense.

Stack

iOS app: SwiftUI, iOS 16+, Firebase iOS SDK (Auth, Firestore, App Check planned), CoreXLSX, ZIPFoundation. SPM, no CocoaPods.
Web dashboard: React 19, TypeScript, Vite, Tailwind 4, Firebase JS SDK, React Router 7.
Backend: Cloud Functions for Firebase, TypeScript, Node 22. No owned servers.
Data: Firestore (single region). Persistent local cache on both clients.
Auth: Firebase Auth. Custom claims distinguish operator vs manager.
Shipping: 17track API via a Cloud Function webhook.

Data model

Everything is org-scoped. State indexes are Cloud-Function-maintained denormalizations so summary pages read ~200 docs instead of fanning out across thousands of items.

organisations/{orgID}
├─ events/{eventID}                              EventDoc (managed from web)
│  └─ areas/{areaID}                             AreaSessionDoc (one per area per event)
│     ├─ items/{itemID}                          PackingItem (one row of a packing list)
│     └─ comments/{commentID}                    per-area notes
├─ missingItems/{itemID}                         CF-maintained index
├─ damagedItems/{itemID}                         CF-maintained index
├─ deployedItems/{itemID}                        CF-maintained index
├─ operators/{uid}                               OperatorDoc
├─ kits/{id}                                     reusable packing-list templates
└─ areas/{id}                                    org-level area definitions

PackingItem carries three layers of state: inventory (repacked / missing / damaged), location (which on-site position), and service context (which area/zone). Together they form the production log.
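
The three layers can be written down as types. A TypeScript sketch, assuming field names based on the description above rather than the real schema:

```typescript
// Sketch of the three state layers a PackingItem carries.
// Field names are assumptions, not the app's actual schema.
type InventoryState = "packed" | "repacked" | "missing" | "damaged";

interface PackingItem {
  id: string;
  name: string;
  containerId: string;       // which case the item belongs in
  inventory: InventoryState; // layer 1: repacked / missing / damaged
  deployedTo?: string;       // layer 2: on-site position, e.g. "Main Stage"
  areaId: string;            // layer 3: service context (area/zone)
}

// The production log is the union of the three layers, per item.
function describeItem(item: PackingItem): string {
  const where = item.deployedTo ?? `container ${item.containerId}`;
  return `${item.name}: ${item.inventory}, at ${where}, area ${item.areaId}`;
}
```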


Offline-first on iOS

Every user action is local-first. UI never waits on a network round-trip.

User taps confirm repacked
→ SessionStore @Published mutation (sync, main queue; UI re-renders)
→ schedulePersist (150 ms debounce; background queue; JSON to disk)
→ SyncManager.writeItemUpdate (async)
→ Firestore updateData (fire and forget)
→ delta listener with short-circuit (hasPendingWrites = skip echo)
→ server ack applies delta only; updateAreaStats CF fires in parallel
Everything left of the split runs regardless of connectivity. Right side is the async tail: when the phone is offline, it queues in the Firestore cache and flushes on reconnect.

Cloud Functions

All stats and state indexes are CF-maintained. Clients read the denormalized indexes; they never fan out over all sessions.

updateAreaStats: Firestore trigger (item writes). Keeps area-level counts current. Maintains missingItems / damagedItems / deployedItems atomically in the same batch.
updateEventStats: Firestore trigger (area writes). Sums area stats into the parent event. Skips deletes and imports.
onEventCommentWrite: Firestore trigger. Denormalizes comment / unresolved counts onto the event doc.
onTaskWrite: Firestore trigger. Denormalizes open / done task counts onto the event doc.
onAreaDeleted: Firestore trigger. Cascades cleanup when an area is removed.
deleteEventRecursive: Callable (manager-only). Deletes an event and all subcollections.
rebuildStateIndexes: Callable (manager-only). One-time backfill for recovery. No UI button.
inviteOperator / inviteManager: Callable. Creates an Auth account and operator doc, sends a password-reset email.
trackingWebhook: HTTP. 17track pushes AWB status updates here. Validated by shared secret.
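
What an updateAreaStats-style aggregation computes can be sketched independently of the trigger plumbing. A TypeScript sketch with assumed shapes (the real function runs inside a Firestore trigger and writes the result back in the same batch):

```typescript
// Illustrative aggregation: derive the denormalized counts that summary
// pages read, instead of fanning out over item docs. Shapes are assumptions.
interface ItemFlags {
  id: string;
  repacked: boolean;
  missing: boolean;
  damaged: boolean;
}

interface AreaStats {
  total: number;
  repacked: number;
  missing: number;
  damaged: number;
}

function computeAreaStats(items: ItemFlags[]): AreaStats {
  return items.reduce<AreaStats>(
    (s, it) => ({
      total: s.total + 1,
      repacked: s.repacked + (it.repacked ? 1 : 0),
      missing: s.missing + (it.missing ? 1 : 0),
      damaged: s.damaged + (it.damaged ? 1 : 0),
    }),
    { total: 0, repacked: 0, missing: 0, damaged: 0 }
  );
}
```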

Web dashboard

One persistent events listener mounted above the router (EventsContext). Per-event area listeners attach when a specific event is opened and detach when it closes. Summary pages — Missing, Damage, Site Overview — read the CF-maintained state indexes directly, never the item docs.

Design tokens live in a single tokens.css; light and dark themes are toggled via data-theme on <html>. A pre-paint inline script reads localStorage to prevent theme flash.
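
The pre-paint decision is small enough to sketch. A TypeScript sketch of the choice that inline script makes; the fallback to the OS preference is an assumption (the write-up only mentions localStorage), and in the real page this runs before first paint rather than as a named function:

```typescript
// Sketch of the theme decision made before first paint: a stored
// preference wins; otherwise fall back to the OS preference (assumed).
type Theme = "light" | "dark";

function resolveTheme(stored: string | null, prefersDark: boolean): Theme {
  if (stored === "light" || stored === "dark") return stored;
  return prefersDark ? "dark" : "light"; // unknown or missing value: use OS hint
}
```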

Firestore's persistent cache keeps resume tokens alive between sessions. If the browser reconnects within 30 minutes, a listener resumes from a delta rather than re-reading everything — that's the main lever on read cost.


Platform split — who owns what

Create / delete events, invite users, assign operators: Web only
Upload packing lists: Web primarily; iOS also builds lists from scratch on site
Operator scans, damage flags, missing flags, deployment: iOS only
Shipping-document generation: Both platforms have the same generators
Area-stats aggregation: Cloud Function; both platforms observe
State index collections: CF-maintained; both read, neither writes
AWB tracking: Web only (callable + webhook)

Scale & cost today


Testing

Hosting: pure-logic unit tests with vitest (parsers, status derivation). Fast, no browser.

Cloud Functions: Firebase emulator harness. npm run test:emu builds the CFs, boots the emulator on a fake project, runs vitest against real Firestore triggers, and tears down — ~10 s end-to-end. Zero impact on prod.


What's next