Runiverse was an MMORPG focused on crafting unique items, with thousands of players in a large, unified open world distributed across multiple servers.
When your game world is split into many server instances, inventory stops being a “simple table” problem and becomes a distributed systems problem. The same player can cross boundaries, receive rewards, consume resources, and complete crafting loops while multiple backend processes are still interested in their state.
This article explores the architecture we used to keep inventory synchronized globally, with server-authoritative logic, near real-time propagation, and strong protection against duplicate processing.
The Challenge
The technical hurdle wasn’t just persisting item amounts, but guaranteeing consistency when a user could be “present” in more than one server context simultaneously. During server handovers or transition zones, players often exist in a gray area where two backend processes might try to modify their state.
We faced a scenario where a player could earn an item in Server A while Server B still held an active session for that same user. Without a robust synchronization layer, this led to “phantom” items, double consumption of resources, or rewards that never materialized. We needed a system that scaled to millions of unique crafted items without the overhead of massive state transfers every time a player moved ten meters.
Key Criteria
To solve this, we established a set of non-negotiable constraints. First, any change in inventory—earning or losing an item—had to be reflected across all servers in near real-time. This meant inventory logic had to be strictly server-authoritative: while clients needed to be notified of changes instantly for UI responsiveness, they should have zero power to alter the state.
Additionally, the system had to handle idempotency. In a distributed environment, network retries and duplicated events are common; we had to ensure that an event of “gained 1 sword” was never counted twice. Finally, the communication protocol had to stay lightweight even as the number of unique item descriptors reached the millions, ensuring that data growth never turned into player-facing lag.
The Solution
Our approach follows a centralized authority model with a distributed notification layer. Before diving into the specifics of caching and persistence, the following diagram illustrates the high-level architecture:
1) Cache the full inventory state and only listen for updates
Inventories can be heavy, especially in crafting-heavy economies. So we use a layered cache model.
Both the client and each server instance hold a local cache of the relevant player’s inventory. After the initial hydration when a player logs in, we stop sending full snapshots. Instead, we propagate only “deltas” (item amount changes). This reduces serialization costs and allows the UI to stay responsive by applying trusted updates immediately upon receipt.
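The delta model above can be sketched as a small cache class. This is an illustrative sketch, not our production code: the type and method names (`ItemDelta`, `hydrate`, `applyDelta`) are assumptions for the example.

```typescript
// Minimal sketch of a delta-based inventory cache.
// After the initial snapshot at login, only deltas mutate the cache.

type ItemDelta = { itemId: string; amountChange: number };

class InventoryCache {
  private items = new Map<string, number>();

  // Called once at login with the full snapshot.
  hydrate(snapshot: Record<string, number>): void {
    this.items = new Map(Object.entries(snapshot));
  }

  // Called for every subsequent server-pushed delta.
  applyDelta(delta: ItemDelta): void {
    const next = (this.items.get(delta.itemId) ?? 0) + delta.amountChange;
    if (next <= 0) this.items.delete(delta.itemId);
    else this.items.set(delta.itemId, next);
  }

  amountOf(itemId: string): number {
    return this.items.get(itemId) ?? 0;
  }
}
```

Because only `ItemDelta` objects cross the wire after hydration, event payloads stay constant-size regardless of how large the inventory grows.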
2) Use MongoDB as authoritative storage (with field-level updates)
We chose MongoDB for its flexibility and horizontal sharding capabilities. The key here was granularity: we only update a single field in the inventory document per transaction using atomic operations. This prevents race conditions where two servers might overwrite each other’s changes.
For subtraction flows, we also enforce guarded queries that validate required minimum amounts before decrementing. If the precondition is not met, the write is rejected, protecting against underflow and race-condition side effects.
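The guarded-decrement semantics look roughly like this. In MongoDB this would be a single atomic `updateOne` whose filter includes a `$gte` precondition on the item field and whose update is a `$inc`; the sketch below simulates the same check-and-decrement in memory, with assumed names (`InventoryDoc`, `guardedDecrement`).

```typescript
// In production this is one atomic MongoDB update, conceptually:
//   filter: { userId, ["items." + itemId]: { $gte: amount } }
//   update: { $inc:  { ["items." + itemId]: -amount } }
// Here the same precondition is simulated in memory.

type InventoryDoc = { userId: string; items: Record<string, number> };

function guardedDecrement(doc: InventoryDoc, itemId: string, amount: number): boolean {
  const current = doc.items[itemId] ?? 0;
  if (current < amount) return false; // precondition not met: reject the write
  doc.items[itemId] = current - amount;
  return true;
}
```

The point is that the precondition and the decrement are a single operation at the database: two servers racing to spend the same resource cannot both succeed.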
For multi-scope changes (account-level + character-level), we split changes and apply them with rollback logic if one side fails. For trades, we execute transactional logic and rollback on partial failure. This moves consistency guarantees into the backend layer where they belong.
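A minimal sketch of the split-and-rollback pattern for multi-scope changes, assuming each scope exposes the same bounded-apply primitive (all names here are illustrative):

```typescript
// Two scopes (account-level and character-level), each a simple balance map.
type Scope = Map<string, number>;

// Apply a signed delta, rejecting any change that would go negative.
function applyChange(scope: Scope, itemId: string, delta: number): boolean {
  const next = (scope.get(itemId) ?? 0) + delta;
  if (next < 0) return false;
  scope.set(itemId, next);
  return true;
}

// Apply to both scopes; if the second write fails, compensate the first.
function applyMultiScope(
  account: Scope, character: Scope,
  itemId: string, accountDelta: number, characterDelta: number,
): boolean {
  if (!applyChange(account, itemId, accountDelta)) return false;
  if (!applyChange(character, itemId, characterDelta)) {
    applyChange(account, itemId, -accountDelta); // rollback the first write
    return false;
  }
  return true;
}
```

Trades follow the same shape with two players instead of two scopes: apply both legs, and compensate on partial failure so neither side ends up half-executed.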
On session lifecycle, we apply cache hygiene (rehydration/reset behavior and cache invalidation on leave) to avoid stale state carrying over between contexts.
3) Separate inventories from item descriptors
One of our most impactful decisions was decoupling the inventory (the count of items a player has) from the item descriptor (what the item actually is). A transaction doesn’t need to know the stats, lore, or 3D model of a “Flaming Sword”; it only needs its ID and the quantity. This allows players to hold millions of unique crafted items without bloating the synchronization events. Descriptors are loaded on-demand and cached by the client, meaning internal logic remains lean and fast.
This simplified data model highlights why inventory transactions stay lightweight even with large descriptor volumes.
The decoupling ensures that hot transaction paths remain lean, as most operations only require item IDs and quantities rather than full metadata payloads. By moving heavy descriptor data out of the critical synchronization loop, the system avoids loading massive metadata blobs during high-frequency updates, while allowing clients to batch-request and cache the necessary item details only when needed for UI rendering.
On the client side, descriptor fetches are batched and cached in state (e.g., Redux), reducing repeated calls and improving rendering stability for large inventories.
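The batching-plus-cache behavior can be sketched as follows; a plain `Map` stands in for the Redux store, and `fetchDescriptors` is an assumed backend call, not a real API of ours:

```typescript
// Client-side descriptor cache: one batched request per set of missing IDs,
// then everything is served from the local cache.

type Descriptor = { id: string; name: string };

class DescriptorCache {
  private cache = new Map<string, Descriptor>();
  constructor(private fetchDescriptors: (ids: string[]) => Promise<Descriptor[]>) {}

  async getMany(ids: string[]): Promise<Descriptor[]> {
    const missing = ids.filter((id) => !this.cache.has(id));
    if (missing.length > 0) {
      // Single batched request for everything not already cached.
      for (const d of await this.fetchDescriptors(missing)) this.cache.set(d.id, d);
    }
    return ids.map((id) => this.cache.get(id)!);
  }
}
```

Rendering a 500-slot inventory then costs at most one descriptor request, and re-renders cost zero.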
4) Use Redis to emit synchronization events
MongoDB is the source of truth; Redis is the propagation fabric.
We publish inventory-change events through Redis with a per-user channel pattern. This is important: servers do not subscribe to all inventory traffic globally.
Rather than every server listening to every update, we scoped channels by player ID. Each server subscribes only to updates for the users currently present in its zone, using reference counting on joins and leaves to manage the subscribe/unsubscribe lifecycle automatically.
To solve the idempotency problem, every transaction carries a unique change_id. If a server receives an update for an ID it has already processed, it simply discards it, preventing the dreaded “double gold” bug.
The result is a practical at-least-once propagation model with idempotent consumption at the inventory layer.
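The idempotent-consumption side can be sketched as a dedupe gate keyed on the `change_id`. An unbounded `Set` stands in here for whatever bounded dedupe store a production system would use; the names are illustrative:

```typescript
// At-least-once delivery + idempotent consumption: an event is applied the
// first time its change_id is seen, and silently discarded on any redelivery.

type ChangeEvent = { changeId: string; itemId: string; amountChange: number };

class IdempotentConsumer {
  private seen = new Set<string>();
  constructor(private apply: (e: ChangeEvent) => void) {}

  // Returns true if the event was applied, false if it was a duplicate.
  handle(event: ChangeEvent): boolean {
    if (this.seen.has(event.changeId)) return false; // duplicate: discard
    this.seen.add(event.changeId);
    this.apply(event);
    return true;
  }
}
```

This is what turns network retries and duplicated events from a “double gold” bug into a no-op.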
5) Server-authoritative flow from transaction to client notification
The complete path is:
- A gameplay action triggers an inventory mutation on the server.
- The server validates and writes an atomic update in MongoDB.
- The mutation emits a change event with a unique ID.
- Redis propagates that event to other interested server instances.
- Those servers apply the delta to their local caches.
- Clients receive real-time notifications from their active server session.
The sequence below shows the synchronization path from one item change to cross-server convergence.
At no point does the client decide authoritative inventory values. The client renders state and reacts to trusted server events.
Results
This architecture proved robust at real MMO scale, combining consistency, performance, and operational efficiency in a single inventory pipeline. In production, the ecosystem handled 74,774 characters with peaks of 8,000 concurrent players, while sustaining a highly dynamic item economy.
At the data layer, we reached 2,043,630 active item descriptors by the end state, with more than 99% generated through crafting, and also processed 2,182,348 descriptor removals through burn mechanics (unique items consumed to recover resources). Those volumes validated that field-level Mongo updates plus event-driven synchronization could absorb massive item churn without degrading responsiveness.
From a gameplay and platform perspective, the result was a globally coherent inventory experience during server transitions, duplicate-safe item accounting through idempotent events, and near real-time client feedback through cached deltas—without giving mutation authority to the client.
Do you have a similar problem or project that requires a solution?

