Authoring in-game cinematics with Unreal Engine 5’s Sequencer: Level Sequences, CineCamera, rails, cranes, fade transitions, and Master Sequence architecture.
This project is a cinematics demo built in Unreal Engine 5, developed following this Udemy course. It covers Unreal’s Sequencer tool — the non-linear animation editor used to author cutscenes, cinematics, and scripted in-game sequences — from individual shot construction through to Master Sequence organization and in-game triggering. Sequencer is one of Unreal’s most powerful production tools, used across games, virtual production, and architectural visualization, and it connects directly to the cinematic camera and rendering work covered in the Night Scene project.
You can watch the cinematics here: YouTube
Sequencer: Non-Linear Animation for Games
Sequencer is Unreal’s timeline-based animation editor — conceptually similar to a non-linear video editor, but operating on 3D scene data rather than video clips. Each Level Sequence is an asset that contains a timeline with tracks for camera animation, actor transforms, properties, events, audio, and more. Tracks are animated via keyframes: values set at specific time points, with interpolation curves defining how the value changes between them.
The key distinction between Sequencer and Blueprint-based scripted sequences is authorial intent. Blueprint sequences are driven by logic — they respond to conditions, branch on state, and execute procedurally. Sequencer sequences are driven by time — they play from start to end, with every element’s behavior defined by the timeline. This makes Sequencer the right tool for authored cinematics where every frame is intentional, and Blueprint the right tool for scripted sequences where behavior needs to respond to gameplay state.
CineCamera: Cinematic Camera Settings
The CineCameraActor is Unreal’s physically based camera, designed for cinematics rather than gameplay. Unlike the standard CameraActor, it exposes film-industry camera parameters:
Focal length controls the field of view in terms familiar to cinematographers — 24mm for wide establishing shots, 85mm for portrait-style close-ups. Focal length also affects perspective distortion: shorter focal lengths produce the exaggerated perspective typical of wide-angle environment shots; longer focal lengths compress depth, making subjects appear closer to their backgrounds.
Focus distance and depth of field produce the shallow focus characteristic of cinematic work. The CineCameraActor simulates a real camera’s focus system — a specified focal plane is sharp, and objects in front of and behind it blur with a configurable aperture (f-stop). Lower f-stops produce shallower depth of field; higher f-stops produce deeper focus. The bokeh shape of out-of-focus highlights — the circular blur shapes that identify out-of-focus light sources — is configurable as well.
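The relationship between f-stop and depth of field can be sketched with the standard thin-lens approximations (hyperfocal distance and near/far focus limits). This is textbook optics, not Unreal's internal depth-of-field model, and the 0.029 mm circle of confusion is a commonly cited full-frame value, assumed here for illustration:

```python
import math

def hyperfocal_mm(focal_mm: float, f_stop: float, coc_mm: float = 0.029) -> float:
    """Hyperfocal distance (mm) via the thin-lens approximation.
    coc_mm is the circle of confusion; 0.029 mm is a common full-frame value."""
    return focal_mm ** 2 / (f_stop * coc_mm) + focal_mm

def dof_limits_mm(focal_mm: float, f_stop: float, subject_mm: float,
                  coc_mm: float = 0.029) -> tuple[float, float]:
    """Near/far limits of acceptable focus around a subject distance (mm)."""
    h = hyperfocal_mm(focal_mm, f_stop, coc_mm)
    near = h * subject_mm / (h + (subject_mm - focal_mm))
    far = (h * subject_mm / (h - (subject_mm - focal_mm))
           if h > subject_mm - focal_mm else math.inf)
    return near, far

# Wider aperture (lower f-stop) -> shallower depth of field at the same distance.
shallow = dof_limits_mm(50, 1.4, 3000)   # 50mm lens at f/1.4, subject at 3 m
deep = dof_limits_mm(50, 8.0, 3000)      # same lens and subject at f/8
```

Running the two cases shows the f/1.4 shot keeping only a few hundred millimetres around the subject in focus, while f/8 holds well over a metre — the "shallower at lower f-stops" behavior described above.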
Sensor size maps to a real-world camera format — Super 35, Full Frame, APS-C — and affects how focal length translates to field of view. This physical basis makes it straightforward to match the look of a real camera reference.
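The focal-length-to-field-of-view mapping is simple pinhole-camera geometry. A minimal sketch, assuming the 24.89 mm filmback width that Unreal's CineCamera uses by default (its "Digital Film" / Super 35-style preset):

```python
import math

def horizontal_fov_deg(focal_mm: float, sensor_width_mm: float) -> float:
    """Horizontal field of view from focal length and sensor width
    (pinhole-camera geometry: fov = 2 * atan(w / 2f))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

SUPER_35_WIDTH = 24.89  # mm -- assumed default CineCamera filmback width

wide = horizontal_fov_deg(24, SUPER_35_WIDTH)   # ~55 degrees: establishing shot
tele = horizontal_fov_deg(85, SUPER_35_WIDTH)   # ~17 degrees: close-up
```

The same focal length on a larger sensor yields a wider field of view, which is why matching a real camera reference requires matching both focal length and filmback.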
Rails and Cranes: Physical Camera Movement
Physical camera movement in cinematography uses mechanical rigs to produce smooth, controlled paths. Unreal’s Camera Rail actor simulates a dolly track — the camera moves along a spline path at a controlled speed, producing smooth lateral and forward tracking shots. Camera Crane actors simulate a boom arm — the camera pivots around a mounting point at a configurable arm length, producing the sweeping overhead-to-eye-level moves characteristic of crane shots.
Both rigs are animated in Sequencer via keyframes on their position and orientation tracks. A dolly move from one end of a rail to the other is authored by setting the camera’s position along the rail at the start and end keyframes; the interpolation curve controls the acceleration profile of the move — a linear interpolation produces constant-speed movement, an ease-in/ease-out curve produces the smooth deceleration of a real dolly.
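The acceleration profile described above can be sketched numerically. This is an illustrative model of a two-keyframe dolly move, not Sequencer's actual curve evaluator; the smoothstep polynomial stands in for an ease-in/ease-out curve:

```python
def smoothstep(t: float) -> float:
    """Cubic ease-in/ease-out (3t^2 - 2t^3): zero velocity at both ends,
    mimicking a real dolly's acceleration and deceleration."""
    return t * t * (3.0 - 2.0 * t)

def dolly_position(t: float, start_m: float, end_m: float,
                   eased: bool = True) -> float:
    """Distance along the rail at normalized time t in [0, 1].
    eased=False gives the constant-speed (linear) profile."""
    a = smoothstep(t) if eased else t
    return start_m + (end_m - start_m) * a
```

A quarter of the way into the move, the eased camera has covered noticeably less track than the linear one — that slow start is what reads as a physical dolly rather than a mechanical slide.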
The value of these physical rigs over hand-animated camera moves is consistency and readability — a rail shot reads clearly as a tracking shot because the camera’s constrained movement is legible to the viewer as intentional and physical.
Keyframes and Interpolation
Every animated value in Sequencer is controlled by keyframes and interpolation curves. Unreal offers four primary interpolation types for curves between keyframes:
Linear interpolation produces constant-rate change between keyframes — the value moves at a fixed speed from one keyframe to the next. This is rarely used for camera or character animation because it produces mechanical, robotic movement with no sense of weight or momentum.
Cubic/Auto interpolation computes smooth tangents at each keyframe automatically, producing organic ease-in and ease-out transitions. This is the default for most animation work and produces natural-feeling motion without manual tangent adjustment.
Constant holds the value at the first keyframe until the second keyframe is reached, then jumps immediately. Used for discrete state changes — a light switching on, a visibility toggle — rather than smooth animation.
Custom tangents allow the animator to manually adjust the curve shape at each keyframe, providing full control over the velocity profile of the animated value. This is used for precise timing work where the automatic tangents don’t produce the desired feel.
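The four interpolation types can be sketched with a cubic Hermite segment, the standard basis for keyframe curves. This is a conceptual model, not Sequencer's curve code — but zero tangents approximate the auto/cubic ease, and explicit tangent values model custom tangents:

```python
def hermite(t: float, p0: float, p1: float, m0: float, m1: float) -> float:
    """Cubic Hermite segment between keyframe values p0 and p1 with
    tangents m0 and m1. Zero tangents give an ease-in/ease-out curve;
    manually chosen tangents model 'custom' tangent editing."""
    t2, t3 = t * t, t * t * t
    return ((2*t3 - 3*t2 + 1) * p0 + (t3 - 2*t2 + t) * m0
            + (-2*t3 + 3*t2) * p1 + (t3 - t2) * m1)

def linear(t: float, p0: float, p1: float) -> float:
    """Constant-rate change between keyframes."""
    return p0 + (p1 - p0) * t

def constant(t: float, p0: float, p1: float) -> float:
    """Hold p0 until the next keyframe, then jump -- a stepped curve."""
    return p1 if t >= 1.0 else p0
```

Comparing the curves at an early sample point shows the character of each: the Hermite ease starts slowly, linear moves at a fixed rate, and constant doesn't move at all until the next key.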
Fade Tracks and Transitions
Sequencer’s Fade Track controls the scene’s global fade — a full-screen overlay that blends between the scene and a solid color (typically black or white). Fade tracks are used for scene transitions: fading to black at the end of a sequence, holding black briefly, then fading in to the next sequence. This is the same fade mechanism used in the VR teleportation implementation, applied here at the cinematic level rather than the gameplay level.
Fade tracks are also used for the opening of a sequence — fading in from black at the start of a cutscene after a loading screen, or fading from gameplay into a cinematic. Getting the fade timing right is a subtle but important craft element: a fade that’s too fast reads as harsh; a fade that’s too slow reads as sluggish. 12–24 frames (0.5–1.0 seconds at 24fps) is a typical range for cinematic fades.
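The frame/seconds bookkeeping above reduces to a simple ramp. A minimal sketch of a per-frame fade alpha (0.0 = scene fully visible, 1.0 = fully faded to the fade color), assuming a linear ramp rather than Sequencer's actual fade-track curve:

```python
def fade_alpha(frame: int, fade_frames: int, fade_out: bool = True) -> float:
    """Fade-track alpha at a given frame of the fade.
    At 24 fps, fade_frames=12 is a 0.5 s fade; 24 frames is 1.0 s."""
    t = min(max(frame / fade_frames, 0.0), 1.0)
    return t if fade_out else 1.0 - t
```

A fade-out followed by a held black frame and a fade-in is then three segments on the same track: ramp up, hold at 1.0, ramp down.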
Master Sequence Architecture
A Master Sequence is a Level Sequence that contains other Level Sequences as subsequences — a hierarchical composition structure that allows a full cinematic (an intro, multiple scenes, an outro) to be organized as independent shots that play in sequence. Each shot is its own Level Sequence asset, with its own camera and animation data; the Master Sequence defines the order and timing of those shots.
This architecture mirrors the organizational structure of professional video production: a master timeline containing clips, where each clip is an independently editable asset. Editing one shot doesn’t affect any other; the master timeline can be reordered or retimed without modifying the shot assets themselves.
For game cinematics specifically, the Master Sequence approach is important because individual shots can be reused — the same shot asset can appear in multiple master sequences without duplication. It also makes collaboration straightforward: different animators can work on different shots simultaneously, with the master sequence as the integration point.
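The data structure behind this is essentially a timeline of offsets. A hypothetical sketch (shot names and frame ranges are invented for illustration) of how a master timeline resolves a global frame to the shot that owns it:

```python
# Hypothetical shot list: (name, start_frame, duration_frames) on the
# master timeline. Each shot asset stays independently editable; only
# these offsets live in the master sequence.
SHOTS = [
    ("shot_intro",  0,   120),
    ("shot_reveal", 120, 240),
    ("shot_outro",  360, 96),
]

def resolve(master_frame: int):
    """Map a master-timeline frame to (shot_name, local_frame), or None
    if the frame falls in a gap or past the last shot."""
    for name, start, duration in SHOTS:
        if start <= master_frame < start + duration:
            return name, master_frame - start
    return None
```

Reordering or retiming the cinematic only changes the offset table; the shot assets — and any other master sequence that reuses them — are untouched, which is the collaboration benefit described above.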
Sequence Triggering in Gameplay
A Level Sequence authored in Sequencer doesn’t play automatically — it needs to be triggered from gameplay. The standard approach uses a Level Sequence Player obtained via ULevelSequencePlayer::CreateLevelSequencePlayer (C++) or the Blueprint equivalent, which provides Play, Pause, Stop, and seek functions.
For in-game cutscenes, triggering typically happens via a collision volume (the player enters a trigger box and the cutscene plays), a gameplay event (completing an objective triggers a victory cutscene), or a level load event (the sequence plays automatically when the level begins). During playback, player input can be disabled, the player camera can be overridden by the cinematic camera, and HUD visibility can be toggled — all part of the transition from gameplay state to cinematic state.
The return from cinematic to gameplay is the inverse: input re-enabled, camera returned to the gameplay camera, HUD visible. A brief blend from the cinematic camera’s final position back to the gameplay camera’s current position prevents a jarring snap on the return.
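The gameplay-to-cinematic handoff described in the last two paragraphs can be sketched as a small state holder. This is a hedged, engine-free model — the real implementation would call ULevelSequencePlayer and the player controller's input/camera APIs rather than toggling flags:

```python
class CinematicController:
    """Illustrative sketch of the gameplay <-> cinematic state handoff;
    names and fields here are hypothetical, not Unreal API."""

    def __init__(self):
        self.input_enabled = True
        self.hud_visible = True
        self.active_camera = "gameplay"

    def enter_cinematic(self) -> None:
        # Transition into cinematic state: lock input, hide HUD,
        # hand the view to the cinematic camera.
        self.input_enabled = False
        self.hud_visible = False
        self.active_camera = "cine_camera"

    def exit_cinematic(self, blend_seconds: float = 0.5) -> float:
        # Inverse transition; a short camera blend avoids a hard snap
        # back to the gameplay camera.
        self.input_enabled = True
        self.hud_visible = True
        self.active_camera = "gameplay"
        return blend_seconds
```

In-engine, `enter_cinematic` would typically run from the trigger volume's overlap event, and `exit_cinematic` from the sequence player's on-finished callback.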
Reflection
Sequencer is the Unreal tool most directly connected to the VFX pipeline background from DreamWorks and Sony. The non-linear timeline, the keyframe-and-curve animation model, the shot-based organizational structure, and the camera parameter vocabulary (focal length, f-stop, sensor size) are all concepts shared between Sequencer and the NLE and compositing tools used in offline VFX production.
The critical difference is the real-time rendering constraint — Sequencer sequences render at the game’s frame rate during gameplay, not at unlimited quality like offline renders. The Movie Render Queue (covered in the Night Scene project) bridges this gap for pre-rendered cinematics, producing frame-accurate high-quality renders from Sequencer sequences. The choice between in-engine real-time playback and Movie Render Queue output depends on whether the cinematic needs to be interactive (triggered by gameplay, subject to game state) or is purely pre-rendered content.