
8 posts tagged with "Architecture"

Game architecture and design patterns


Layering Visual & API Events: A Best Practices Guide for Scalable Projects

TinyGiants
GES Creator & Unity Games & Tools Developer

As projects grow, one of the most common questions I hear is: "Should I use the visual tools or the scripting API?" The answer is both — but knowing where each approach shines is what separates a clean, scalable event architecture from one that collapses under its own weight.

After seeing how teams of all sizes use GES, I want to share a layered approach that keeps your project maintainable whether you have 50 events or 500.

The Two Worlds of Game Objects

Every Unity project has two fundamentally different categories of objects:

Scene-Resident Objects — things that exist in the Hierarchy at edit time. UI canvases, level geometry, cameras, persistent managers, environment triggers. You can see them, select them, drag references to them.

Runtime-Spawned Objects — prefab instances created through Instantiate() at runtime. Enemies, projectiles, loot drops, pooled VFX, dynamically generated UI elements. They don't exist until the game is running.

This distinction is the foundation of how you should layer your event usage.

Layer 1: Visual Configuration for Scene-Resident Objects

Use the Editor, Behavior Window, and Flow Graph for anything that lives in the scene.

Why? Because scene-resident objects have stable Inspector references. A health bar sitting in your UI Canvas, a door trigger in your level, a background music controller — these objects are right there in the Hierarchy. You can:

  • Open the Game Event Editor, find the event, and click the Behavior button
  • Configure Action Conditions visually (e.g., only trigger the damage flash when health < 30%)
  • Set Schedule timing (0.2s delay before the hit sound, screen shake repeating 3x at 0.1s intervals)
  • Wire up UnityEvent actions by dragging the target object straight from the Hierarchy
  • Use the Flow Graph to orchestrate complex sequences (fade screen → load scene → reposition player → fade in)

This is where designers, artists, and audio engineers thrive. They can tweak game feel — adjust a delay from 0.2s to 0.35s, add a condition that skips the effect on low-HP enemies, reorder a chain sequence — without touching a single line of code and without waiting for a recompile.

Ideal For

  • UI responses — button clicks, panel transitions, HUD updates
  • Level scripting — door opens, traps activate, cutscene triggers
  • Audio events — play/stop/crossfade based on game state
  • Camera behaviors — shake, zoom, follow target switches
  • Environment reactions — lighting changes, particle effects, weather transitions
  • Tutorial sequences — step-by-step chains with conditions

Example

Your OnPlayerDeath event needs to: dim the screen, show a "You Died" panel, play a sound, and disable player input. All four responses are wired to UI and scene objects that already exist in the Hierarchy. This is a textbook case for the Behavior Window — four actions, one event, zero code. A designer can later add a 0.5s delay before the panel appears, or add a condition that skips the sound if the player is underwater, without filing a single code change request.

Layer 2: Scripting API for Runtime-Spawned Instances

Use AddListener(), RemoveListener(), and Raise() for prefab instances that are created at runtime.

Why? Because when you instantiate a prefab, there is no Inspector to drag references into. That enemy you just spawned from an object pool needs to listen for OnPauseGame to freeze its AI. That projectile needs to raise OnEnemyHit when it collides with a target. These bindings must happen in code, at the moment the object comes to life.

using UnityEngine;
using UnityEngine.AI;

public class Enemy : MonoBehaviour
{
    [GameEventDropdown] public GameEvent onPauseGame;
    [GameEventDropdown] public GameEvent onResumeGame;

    [SerializeField] NavMeshAgent agent; // movement agent frozen/unfrozen below

    void OnEnable()
    {
        onPauseGame.AddListener(Freeze);
        onResumeGame.AddListener(Unfreeze);
    }

    void OnDisable()
    {
        onPauseGame.RemoveListener(Freeze);
        onResumeGame.RemoveListener(Unfreeze);
    }

    void Freeze() => agent.isStopped = true;
    void Unfreeze() => agent.isStopped = false;
}

Notice something important: the event assets themselves are still assigned via the [GameEventDropdown] attribute on the prefab. The event reference is visual — it's a drag-and-drop field on the prefab asset. Only the listener registration is code, because the instance doesn't exist at edit time.

Ideal For

  • Spawned enemies/NPCs reacting to global events (pause, slow-motion, area effects)
  • Projectiles and VFX raising events on collision or lifetime expiry
  • Pooled objects that need to subscribe/unsubscribe as they activate/deactivate
  • Dynamically generated UI elements (inventory slots, chat messages, leaderboard rows)
  • Any system where the listener count is unknown at edit time
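As a sketch of the projectile case above: the event reference still lives on the prefab via [GameEventDropdown], and only the Raise() call is code. The OnEnemyHit event name, the "Enemy" tag check, and the pooling-style deactivation are illustrative assumptions, not prescribed GES usage.

```csharp
using UnityEngine;

public class Projectile : MonoBehaviour
{
    // Event asset assigned on the prefab via drag-and-drop
    [GameEventDropdown] public GameEvent onEnemyHit;

    void OnTriggerEnter(Collider other)
    {
        // "Enemy" tag is an assumption for this sketch
        if (other.CompareTag("Enemy"))
        {
            onEnemyHit.Raise();          // notify listeners at the moment of impact
            gameObject.SetActive(false); // return to pool rather than Destroy()
        }
    }
}
```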

Example

You're building a tower defense game. Towers are placed at runtime. Each tower needs to listen for OnWaveStarted to begin targeting and OnWaveEnded to enter idle state. Since towers are instantiated dynamically, each one registers its own listeners in OnEnable() and cleans up in OnDisable(). Meanwhile, the wave manager that raises OnWaveStarted might be a scene-resident singleton with its timing configured entirely through the Behavior Window.
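A minimal sketch of such a tower, following the same OnEnable/OnDisable pattern as the Enemy example; the BeginTargeting and EnterIdle methods are placeholders:

```csharp
using UnityEngine;

public class Tower : MonoBehaviour
{
    [GameEventDropdown] public GameEvent onWaveStarted;
    [GameEventDropdown] public GameEvent onWaveEnded;

    void OnEnable()
    {
        onWaveStarted.AddListener(BeginTargeting);
        onWaveEnded.AddListener(EnterIdle);
    }

    void OnDisable()
    {
        onWaveStarted.RemoveListener(BeginTargeting);
        onWaveEnded.RemoveListener(EnterIdle);
    }

    void BeginTargeting() { /* acquire targets, start firing */ }
    void EnterIdle()      { /* stop firing, play idle animation */ }
}
```

Because the tower may be sold, destroyed, or pooled mid-wave, the symmetric unsubscribe in OnDisable() is what keeps the wave events from invoking dead instances.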

Layer 3: The Hybrid — Where the Magic Happens

The real power of GES emerges when you combine both layers intentionally:

Programmers define the event architecture and write the raise/listen code for runtime systems. They decide what events exist, what data they carry, and when they fire.

Designers and artists configure what happens in response using the Behavior Window, Flow Graph, and Condition Trees. They control the game feel, the timing, the conditions, and the visual/audio polish.

Here's a concrete hybrid workflow:

[Programmer writes]
- EnemyHealth.cs: raises OnEnemyDamaged(int damage) when hit
- EnemyHealth.cs: raises OnEnemyDeath(GameObject enemy) when HP <= 0
- WaveManager.cs: dynamically adds/removes listeners as enemies spawn/despawn

[Designer configures in Behavior Window]
- OnEnemyDamaged -> flash the damage number UI, shake the camera (condition: damage > 20)
- OnEnemyDeath -> play death VFX, add score to counter, check wave completion

[Designer configures in Flow Graph]
- OnLastEnemyDeath -> triggers OnWaveComplete
- OnWaveComplete -> chains: show reward panel -> wait 3s -> spawn next wave
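The programmer's half of that contract might look like the sketch below. The concrete typed event classes (GameEventInt, GameEventGameObject) are assumptions standing in for whatever generated types carry int and GameObject payloads in your setup; the HP field and method names are illustrative.

```csharp
using UnityEngine;

public class EnemyHealth : MonoBehaviour
{
    // Typed event assets assigned on the prefab; class names are assumed here
    [GameEventDropdown] public GameEventInt onEnemyDamaged;        // carries damage dealt
    [GameEventDropdown] public GameEventGameObject onEnemyDeath;   // carries the dying enemy

    [SerializeField] int hp = 100;

    public void TakeDamage(int damage)
    {
        hp -= damage;
        onEnemyDamaged.Raise(damage); // designers wire responses in the Behavior Window

        if (hp <= 0)
            onEnemyDeath.Raise(gameObject);
    }
}
```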

The programmer never has to adjust a screen shake duration. The designer never has to write a listener registration. Each person works in their domain of expertise, and the event assets are the shared contract between them.

Practical Guidelines for Scaling

As your project grows, keep these principles in mind:

1. Let Object Lifetime Guide Your Choice

If it's in the scene at edit time → visual. If it's instantiated at runtime → API. This single rule resolves 90% of decisions.

2. Keep Event References Visual, Even in Code

Always use [GameEventDropdown] on your MonoBehaviour fields instead of hardcoding event lookups. This gives you type-safe, searchable dropdowns on prefabs and lets you swap events without code changes.
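In practice that means declaring event fields like this; the commented-out "avoid" line uses Resources.Load purely to illustrate the kind of hardcoded lookup the guideline warns against:

```csharp
using UnityEngine;

public class DoorController : MonoBehaviour
{
    // Type-safe, searchable dropdown on the prefab; swap events without code changes
    [GameEventDropdown] public GameEvent onDoorOpened;

    // Avoid: hardcoded lookups couple your code to asset names and folder paths
    // GameEvent onDoorOpened = Resources.Load<GameEvent>("Events/OnDoorOpened");
}
```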

3. Use the Behavior Window for Response Tuning, Code for Response Logic

If the response is "play this sound after 0.3 seconds when health is below 50%," that's configuration — put it in the Behavior Window. If the response is "calculate damage reduction based on armor type, elemental resistance, and buff stacks," that's logic — write it in code.
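As a rough sketch of that split, the armor/resistance formula below is invented for illustration; only the shape matters. The calculation lives in plain, testable code, and the result can then be raised through an event for the visually configured responses to consume.

```csharp
using UnityEngine;

// Logic belongs in code: deterministic, testable, version-controlled.
public static class DamageModel
{
    // Parameter names and the formula itself are illustrative assumptions.
    public static int FinalDamage(int baseDamage, float armorReduction,
                                  float elementalResist, int buffStacks)
    {
        float mitigated = baseDamage * (1f - armorReduction) * (1f - elementalResist);
        mitigated *= 1f + 0.05f * buffStacks; // each buff stack adds 5% damage
        return Mathf.Max(1, Mathf.RoundToInt(mitigated));
    }
}
```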

4. Clean Up Runtime Listeners Religiously

OnEnable() → subscribe. OnDisable() → unsubscribe. No exceptions. This is the single most important habit for preventing memory leaks and ghost listeners in projects with object pooling or frequent scene loads.

5. Use Priority Listeners When Execution Order Matters

When multiple systems respond to the same event, don't rely on registration order. Use AddPriorityListener() with explicit priority values. Save data at priority 1000, update game state at 100, refresh UI at 0, play audio at -100. This makes your execution order self-documenting.
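Sketched with the priority values from above; check the exact AddPriorityListener signature against the GES documentation, and note that the listener methods here are placeholders:

```csharp
using UnityEngine;

public class DeathSequence : MonoBehaviour
{
    [GameEventDropdown] public GameEvent onPlayerDeath;

    void OnEnable()
    {
        // Higher values run first, making the execution order self-documenting
        onPlayerDeath.AddPriorityListener(SaveGame,       1000); // persist first
        onPlayerDeath.AddPriorityListener(UpdateState,     100);
        onPlayerDeath.AddPriorityListener(RefreshUI,         0);
        onPlayerDeath.AddPriorityListener(PlayDeathAudio, -100); // audio last
    }

    void SaveGame()       { /* write save data before anything else reacts */ }
    void UpdateState()    { /* transition game state */ }
    void RefreshUI()      { /* update HUD */ }
    void PlayDeathAudio() { /* play sound last */ }
}
```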

6. Use the Flow Graph to Make Invisible Relationships Visible

When an event triggers other events (via Triggers or Chains), always model it in the Flow Graph. Six months from now, no one will remember that OnDoorOpened triggers OnLightActivated, OnMusicChanged, and OnTutorialStep3. The Flow Graph makes these relationships discoverable at a glance.

7. Organize by Domain, Not by Type

Structure your event databases around game domains (Combat, UI, Audio, Progression) rather than technical categories (Void Events, Int Events). When your combat designer needs to tune something, they should find everything combat-related in one place.

The Decision at a Glance

Layer              | Approach                              | Who                       | When
Scene objects      | Visual (Editor, Behavior, Flow Graph) | Designers, Artists, Audio | Objects exist in Hierarchy at edit time
Runtime instances  | Scripting API (AddListener, Raise)    | Programmers               | Prefabs instantiated during gameplay
Hybrid             | Events as shared contracts            | Everyone                  | Programmers raise, designers respond

The Takeaway

The goal is not to pick one approach over the other. The goal is to let each team member work with the tools that match their expertise, while the event system acts as the clean boundary between disciplines.

Build the architecture in code. Polish the experience in the editor. Ship the game together.

Cross-Scene Events: The Persistence Problem Nobody Talks About


Your AudioManager plays background music. It subscribes to OnLevelStart to change tracks when the player enters a new area. You put the AudioManager on a DontDestroyOnLoad object so it persists across scene loads. Everything works during development because you're always testing in the same scene.

Then someone loads Level 2 from Level 1 for the first time. The music stops changing. The AudioManager is still alive — DontDestroyOnLoad did its job — but the event subscription didn't survive the transition. Or worse: the OLD subscription is still there, pointing at the destroyed Level 1 version of the event raiser, and the next time something tries to invoke it you get a MissingReferenceException in the middle of gameplay.

This is the persistence problem, and every Unity project with more than one scene hits it eventually.

Event System Pitfalls: Memory Leaks, Data Pollution, and Recursive Traps That Ship in Production


You've been testing your game for 5 minutes at a time. It runs great. Then QA files a report: "Memory usage grows steadily over a 30-minute play session. Frame rate degrades from 60 to 40 after loading 6 scenes." You profile it. There are 847 listeners registered to an event that should have 12. Each scene load added new subscriptions but never removed the old ones. The objects were destroyed, but their delegate references live on, pinning dead MonoBehaviours in memory where the garbage collector can't touch them.

Or this one: "Health values are wrong on the second Play Mode session. First run works fine." You hit Play, test combat, stop. Hit Play again. The player starts with 73 HP instead of 100. ScriptableObject state from the last session bled through because nobody reset it.

Or the classic: the game hangs for 3 seconds, then Unity crashes. Event A's listener raised Event B. Event B's listener raised Event A. Stack overflow. Except sometimes it doesn't crash — it just hangs, eating CPU in an infinite loop that produces no visible error.

These aren't hypothetical. These are bugs I've seen ship in production games. And they all have the same root cause: event system patterns that look correct in isolation but fail at scale.

Parallel vs Sequential: The Two Execution Patterns Every Event System Needs (And Most Don't Have)


Player dies. Death sound and death particles should start at the same instant — no reason to wait for one before starting the other. But the screen fade absolutely MUST finish before the respawn point loads. And the respawn MUST finish before the player teleports. And the teleport MUST finish before the screen fades back in.

That's parallel AND sequential execution in the same flow, triggered by a single event. And here's the uncomfortable truth: most event systems in Unity give you exactly one pattern. Fire an event, all listeners respond, done. Whether those responses should happen simultaneously or in strict sequence? Your problem.

So you solve it. With coroutines. And callbacks. And booleans named _hasFadeFinished. And before you know it, you've built an ad-hoc state machine scattered across six files that nobody — including future-you — can follow.

200 Events and Counting: Why Event Organization Breaks Down and How to Fix It


You start a new Unity project. You create ten events. OnPlayerDeath, OnScoreChanged, OnLevelComplete. You name them sensibly, drop them in a folder, and move on. Life is good. You can hold the entire event structure in your head.

Fast forward six months. You've got 200 events. The Project window is a wall of ScriptableObject files. You need OnPlayerHealthDepleted — or was it OnPlayerHPLow? Or OnPlayerHealthZero? You scroll through the list, squinting at names that all start with OnPlayer. After three minutes you give up and create a new one because you're not even sure if the event you want already exists.

This is where every event-driven Unity project lands eventually. And it's not because the event pattern is wrong — it's because nobody built the tooling for managing events at scale. Unity gives you the Animation window, Shader Graph, Timeline, the Input System debugger. Events get... the Project window.

Zero Reflection, Zero GC: What 'High Performance' Actually Means for a Unity Event System


Every single event system plugin on the Unity Asset Store says "high performance" somewhere in its description. It's right there between "easy to use" and "fully documented." But here's the thing — 1ms and 0.001ms are both fast in human terms, yet one is a thousand times slower than the other. When a plugin says "high performance," what does that actually mean? Compared to what? Measured how?

I used to not care about this. Most of us don't. You wire up some events, the game runs fine on your dev machine, you ship it. But then I started working on a mobile project with hundreds of entities each listening to multiple events, and suddenly "high performance" wasn't a marketing checkbox anymore — it was the difference between 60 FPS and a slideshow.

This post is about what "high performance" should actually mean for an event system, why most implementations fall short, and how GES achieves near-zero overhead through Expression Tree compilation. With real numbers, not hand-waving.

Unity's Generic Serialization Wall: Type-Safe Events Without the Boilerplate Tax


You build GameEvent<T>. Clean, type-safe, elegant. You create a GameEvent<float> field for health updates and slap [SerializeField] on it. You switch to the Inspector. The field isn't there. It's just... gone. Unity is staring at you with a blank panel like you asked it to divide by zero.

It's Unity's oldest architectural headache. The serialization system doesn't understand generics. It never has. And every developer who's ever tried to build a type-safe, data-driven event system has walked face-first into this wall.

This isn't a minor inconvenience. It's the kind of limitation that poisons your entire architecture. You either give up type safety, drown in boilerplate, or accept that your beautiful generic design will never touch the Inspector. For years, the community answer has been "just write the concrete classes by hand." But here's the thing — if the boilerplate is 100% predictable, why is a human writing it?

Goodbye Invisible Spaghetti: Why Your Unity Event System Is Killing Your Project


You renamed a method. Just one method — OnPlayerDied became OnPlayerDefeated because your game designer asked you to soften the language. You hit Play. Nothing happens. No compile error. No warning. Ten scene objects that were wired up through the Inspector with UnityEvents just... stopped working. Silently. And you won't find out until QA reports it three days later, or worse, your players do.

If this sounds familiar, congratulations — you've met invisible spaghetti code. It's the kind of technical debt that doesn't show up in your IDE, doesn't trigger compiler warnings, and doesn't appear in any dependency graph. It just sits there, waiting to break at the worst possible moment.

This isn't a skill issue. It's an architectural one. And it's way more common than most Unity developers want to admit.