
Layering Visual & API Events: A Best Practices Guide for Scalable Projects

TinyGiants
GES Creator & Unity Games & Tools Developer

As projects grow, one of the most common questions I hear is: "Should I use the visual tools or the scripting API?" The answer is both — but knowing where each approach shines is what separates a clean, scalable event architecture from one that collapses under its own weight.

After seeing how teams of all sizes use GES, I want to share a layered approach that keeps your project maintainable whether you have 50 events or 500.

The Two Worlds of Game Objects

Every Unity project has two fundamentally different categories of objects:

Scene-Resident Objects — things that exist in the Hierarchy at edit time. UI canvases, level geometry, cameras, persistent managers, environment triggers. You can see them, select them, drag references to them.

Runtime-Spawned Objects — prefab instances created through Instantiate() at runtime. Enemies, projectiles, loot drops, pooled VFX, dynamically generated UI elements. They don't exist until the game is running.

This distinction is the foundation of how you should layer your event usage.

Layer 1: Visual Configuration for Scene-Resident Objects

Use the Editor, Behavior Window, and Flow Graph for anything that lives in the scene.

Why? Because scene-resident objects have stable Inspector references. A health bar sitting in your UI Canvas, a door trigger in your level, a background music controller — these objects are right there in the Hierarchy. You can:

  • Open the Game Event Editor, find the event, and click the Behavior button
  • Configure Action Conditions visually (e.g., only trigger the damage flash when health < 30%)
  • Set Schedule timing (0.2s delay before the hit sound, screen shake repeating 3x at 0.1s intervals)
  • Wire up UnityEvent actions by dragging the target object straight from the Hierarchy
  • Use the Flow Graph to orchestrate complex sequences (fade screen → load scene → reposition player → fade in)

This is where designers, artists, and audio engineers thrive. They can tweak game feel — adjust a delay from 0.2s to 0.35s, add a condition that skips the effect on low-HP enemies, reorder a chain sequence — without touching a single line of code and without waiting for a recompile.

Ideal For

  • UI responses — button clicks, panel transitions, HUD updates
  • Level scripting — door opens, traps activate, cutscene triggers
  • Audio events — play/stop/crossfade based on game state
  • Camera behaviors — shake, zoom, follow target switches
  • Environment reactions — lighting changes, particle effects, weather transitions
  • Tutorial sequences — step-by-step chains with conditions

Example

Your OnPlayerDeath event needs to: dim the screen, show a "You Died" panel, play a sound, and disable player input. All four responses are wired to UI and scene objects that already exist in the Hierarchy. This is a textbook case for the Behavior Window — four actions, one event, zero code. A designer can later add a 0.5s delay before the panel appears, or add a condition that skips the sound if the player is underwater, without filing a single code change request.
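
The only code in this scenario lives on the raising side. A minimal sketch, assuming a hypothetical PlayerHealth component detects the death and fires the event (all four responses above remain visual configuration):

using UnityEngine;

public class PlayerHealth : MonoBehaviour
{
    // The event asset is assigned in the Inspector; everything that happens
    // in response is configured in the Behavior Window, not in this script.
    [GameEventDropdown] public GameEvent onPlayerDeath;

    int hp = 100;

    public void TakeDamage(int amount)
    {
        hp -= amount;
        if (hp <= 0)
            onPlayerDeath.Raise();   // this script's responsibility ends here
    }
}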

Layer 2: Scripting API for Runtime-Spawned Instances

Use AddListener(), RemoveListener(), and Raise() for prefab instances that are created at runtime.

Why? Because when you instantiate a prefab, there is no Inspector to drag references into. That enemy you just spawned from an object pool needs to listen for OnPauseGame to freeze its AI. That projectile needs to raise OnEnemyHit when it collides with a target. These bindings must happen in code, at the moment the object comes to life.

using UnityEngine;
using UnityEngine.AI;

public class Enemy : MonoBehaviour
{
    // Event assets are assigned on the prefab via the dropdown, so no scene
    // references are needed.
    [GameEventDropdown] public GameEvent onPauseGame;
    [GameEventDropdown] public GameEvent onResumeGame;

    // The NavMeshAgent driving this enemy's movement.
    [SerializeField] NavMeshAgent agent;

    void OnEnable()
    {
        // Subscribe the moment the instance comes to life...
        onPauseGame.AddListener(Freeze);
        onResumeGame.AddListener(Unfreeze);
    }

    void OnDisable()
    {
        // ...and always unsubscribe when it deactivates (pooling-safe).
        onPauseGame.RemoveListener(Freeze);
        onResumeGame.RemoveListener(Unfreeze);
    }

    void Freeze() => agent.isStopped = true;
    void Unfreeze() => agent.isStopped = false;
}

Notice something important: the event assets themselves are still assigned via the [GameEventDropdown] attribute on the prefab. The event reference is visual — it's a drag-and-drop field on the prefab asset. Only the listener registration is code, because the instance doesn't exist at edit time.
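
The raising side of this layer is equally small. Here's a sketch for the projectile case mentioned above; the tag check and the Destroy call are illustrative assumptions:

using UnityEngine;

public class Projectile : MonoBehaviour
{
    // As above, the event asset is assigned on the prefab via the dropdown.
    [GameEventDropdown] public GameEvent onEnemyHit;

    void OnCollisionEnter(Collision collision)
    {
        if (collision.gameObject.CompareTag("Enemy"))
        {
            onEnemyHit.Raise();   // responses (VFX, score, audio) stay visually configured
            Destroy(gameObject);
        }
    }
}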

Ideal For

  • Spawned enemies/NPCs reacting to global events (pause, slow-motion, area effects)
  • Projectiles and VFX raising events on collision or lifetime expiry
  • Pooled objects that need to subscribe/unsubscribe as they activate/deactivate
  • Dynamically generated UI elements (inventory slots, chat messages, leaderboard rows)
  • Any system where the listener count is unknown at edit time

Example

You're building a tower defense game. Towers are placed at runtime. Each tower needs to listen for OnWaveStarted to begin targeting and OnWaveEnded to enter idle state. Since towers are instantiated dynamically, each one registers its own listeners in OnEnable() and cleans up in OnDisable(). Meanwhile, the wave manager that raises OnWaveStarted might be a scene-resident singleton with its timing configured entirely through the Behavior Window.

Layer 3: The Hybrid — Where the Magic Happens

The real power of GES emerges when you combine both layers intentionally:

Programmers define the event architecture and write the raise/listen code for runtime systems. They decide what events exist, what data they carry, and when they fire.

Designers and artists configure what happens in response using the Behavior Window, Flow Graph, and Condition Trees. They control the game feel, the timing, the conditions, and the visual/audio polish.

Here's a concrete hybrid workflow:

[Programmer writes]
- EnemyHealth.cs: raises OnEnemyDamaged(int damage) when hit
- EnemyHealth.cs: raises OnEnemyDeath(GameObject enemy) when HP <= 0
- WaveManager.cs: dynamically adds/removes listeners as enemies spawn/despawn

[Designer configures in Behavior Window]
- OnEnemyDamaged -> flash the damage number UI, shake the camera (condition: damage > 20)
- OnEnemyDeath -> play death VFX, add score to counter, check wave completion

[Designer configures in Flow Graph]
- OnLastEnemyDeath -> triggers OnWaveComplete
- OnWaveComplete -> chains: show reward panel -> wait 3s -> spawn next wave

The programmer never has to adjust a screen shake duration. The designer never has to write a listener registration. Each person works in their domain of expertise, and the event assets are the shared contract between them.
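
In code, the programmer's half of that contract can be as small as the sketch below. Parameterless Raise() is used for brevity; the typed payload variants from the workflow above follow the same shape, with exact class names depending on your GES version:

using UnityEngine;

public class EnemyHealth : MonoBehaviour
{
    [GameEventDropdown] public GameEvent onEnemyDamaged;
    [GameEventDropdown] public GameEvent onEnemyDeath;

    [SerializeField] int hp = 100;

    public void TakeDamage(int amount)
    {
        hp -= amount;
        onEnemyDamaged.Raise();      // designers attach the damage flash and shake here
        if (hp <= 0)
            onEnemyDeath.Raise();    // designers attach death VFX and scoring here
    }
}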

Practical Guidelines for Scaling

As your project grows, keep these principles in mind:

1. Let Object Lifetime Guide Your Choice

If it's in the scene at edit time → visual. If it's instantiated at runtime → API. This single rule resolves 90% of decisions.

2. Keep Event References Visual, Even in Code

Always use [GameEventDropdown] on your MonoBehaviour fields instead of hardcoding event lookups. This gives you type-safe, searchable dropdowns on prefabs and lets you swap events without code changes.

3. Use the Behavior Window for Response Tuning, Code for Response Logic

If the response is "play this sound after 0.3 seconds when health is below 50%," that's configuration — put it in the Behavior Window. If the response is "calculate damage reduction based on armor type, elemental resistance, and buff stacks," that's logic — write it in code.
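
To make the split concrete, here's a hypothetical damage-reduction helper for the logic side (the formula itself is invented for the example):

using UnityEngine;

public static class DamageRules
{
    // Pure logic: deterministic and unit-testable, with no timing or polish concerns.
    // Armor and resistance are fractions in [0, 1]; buff stacks shave off flat damage.
    public static int Reduce(int rawDamage, float armor, float resistance, int buffStacks)
    {
        float multiplier = (1f - Mathf.Clamp01(armor)) * (1f - Mathf.Clamp01(resistance));
        return Mathf.Max(0, Mathf.RoundToInt(rawDamage * multiplier) - buffStacks);
    }
}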

4. Clean Up Runtime Listeners Religiously

OnEnable() → subscribe. OnDisable() → unsubscribe. No exceptions. This is the single most important habit for preventing memory leaks and ghost listeners in projects with object pooling or frequent scene loads.

5. Use Priority Listeners When Execution Order Matters

When multiple systems respond to the same event, don't rely on registration order. Use AddPriorityListener() with explicit priority values. Save data at priority 1000, update game state at 100, refresh UI at 0, play audio at -100. This makes your execution order self-documenting.
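
A sketch of the save-data listener, assuming AddPriorityListener takes the callback followed by a priority value; verify the exact signature and the matching removal call against your GES version:

using UnityEngine;

public class SaveSystem : MonoBehaviour
{
    [GameEventDropdown] public GameEvent onCheckpointReached;

    void OnEnable()
    {
        // Priority 1000 so saving runs before game state (100), UI (0), and audio (-100).
        onCheckpointReached.AddPriorityListener(SaveGame, 1000);
    }

    void OnDisable()
    {
        // Assumed to also remove priority listeners; check your GES version.
        onCheckpointReached.RemoveListener(SaveGame);
    }

    void SaveGame()
    {
        // Persist progress here.
    }
}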

6. Use the Flow Graph to Make Invisible Relationships Visible

When an event triggers other events (via Triggers or Chains), always model it in the Flow Graph. Six months from now, no one will remember that OnDoorOpened triggers OnLightActivated, OnMusicChanged, and OnTutorialStep3. The Flow Graph makes these relationships discoverable at a glance.

7. Organize by Domain, Not by Type

Structure your event databases around game domains (Combat, UI, Audio, Progression) rather than technical categories (Void Events, Int Events). When your combat designer needs to tune something, they should find everything combat-related in one place.

The Decision at a Glance

Layer             | Approach                              | Who                       | When
Scene objects     | Visual (Editor, Behavior, Flow Graph) | Designers, Artists, Audio | Objects exist in Hierarchy at edit time
Runtime instances | Scripting API (AddListener, Raise)    | Programmers               | Prefabs instantiated during gameplay
Hybrid            | Events as shared contracts            | Everyone                  | Programmers raise, designers respond

The Takeaway

The goal is not to pick one approach over the other. The goal is to let each team member work with the tools that match their expertise, while the event system acts as the clean boundary between disciplines.

Build the architecture in code. Polish the experience in the editor. Ship the game together.