The foundational VR interaction primitives: grabbing, shooting, throwing, pulling, two-handed weapons, and full-body character representation on Oculus Quest 2.
This project is a VR mechanics toolkit built in Blueprints with Unreal Engine 5.2 for Oculus Quest 2, developed as part of this Udemy course — the same course as the VR Prototypes project. Where that project explored complete interaction paradigms (archery, climbing, kayaking), this one focuses on the foundational mechanics that underlie virtually every VR game: the grab, the throw, the shoot, the pull, the inventory slot, and the full-body avatar. These are the primitives that get combined and extended to build more complex VR experiences.
You can watch the mechanics in action here: YouTube
Starting from the Default VR Template
Unreal ships with a VR template that provides a basic starting point — a VR pawn with teleport locomotion and a simple grab mechanic. The template is functional but minimal by design: it covers the absolute baseline and leaves everything else to the developer. This project starts from that baseline and extends it systematically, which is the right way to learn VR development — understand what the engine provides, then understand what you need to build yourself.
The shooting mechanism in the template is a simple projectile spawned from the controller. The improvement here adds proper recoil handling, muzzle flash, hit feedback, and a more physically grounded feel — the template’s version fires a ball that floats across the room; the improved version behaves like an actual handheld weapon.
Two-Handed Weapon Grabbing
Single-handed grabbing — the object attaches to the grip hand — is handled by the template. Two-handed grabbing requires significantly more complexity: the weapon is attached to one hand (the dominant grip) but the second hand also grips it, influencing the weapon’s orientation. The weapon’s forward direction is computed from the vector between the two grip points rather than from either hand independently, so moving the support hand rotates the weapon naturally around the dominant grip.
The implementation tracks both grip states simultaneously. When only the primary hand grips, the weapon behaves as a standard grab. When the secondary hand grips as well, the weapon rotation switches to the two-point orientation model. Releasing the secondary grip smoothly returns to single-hand orientation without snapping.
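The two-point orientation model is simple to state mathematically: the weapon's forward direction is the normalized vector from the primary grip to the support hand. The project implements this in Blueprints; the sketch below is an illustrative Python translation (names and the fallback hand-forward vector are hypothetical, not taken from the project):

```python
import math

def normalize(v):
    """Return the unit vector of v (a 3-tuple)."""
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def weapon_forward(primary_grip, support_grip=None, hand_forward=(0.0, 1.0, 0.0)):
    """Two-point orientation model: when both hands grip, aim along the
    vector from the primary grip point to the support hand; with only one
    hand, fall back to the grip hand's own forward direction."""
    if support_grip is None:
        return normalize(hand_forward)
    delta = tuple(s - p for s, p in zip(support_grip, primary_grip))
    return normalize(delta)

# Support hand ahead of and above the grip hand: the weapon tilts up.
fwd = weapon_forward((0, 0, 0), (0, 50, 20))
```

Because the direction depends only on the two grip points, moving the support hand rotates the weapon around the dominant grip exactly as described above; smoothing the transition back to `hand_forward` on release (e.g. a short interpolation) is what prevents the snap.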
Two-handed grabbing is essential for rifles, bows, and any weapon that requires two hands to aim convincingly. The alignment between the two grip points and the weapon’s visual orientation is the key tuning challenge — misalignment produces a visually broken result where hands appear to float off the weapon surface.
Grabbable Object Pulling
Object pulling allows the player to attract objects toward their hand without physical contact — the VR equivalent of a Force Pull. The implementation uses a line trace or sphere cast from the controller in the facing direction to identify pullable objects within range, then applies a velocity toward the player’s hand each frame while the pull input is held.
The feel of the pull depends critically on the velocity curve: constant velocity produces a robotic, uniform motion; an accelerating curve (slow start, fast finish) produces the satisfying snap of something being yanked across a room. The pulled object remains a simulated physics body during the pull — it deflects around obstacles rather than passing through them, which makes the mechanic feel grounded in the physical space.
Range gating — only objects within a maximum distance can be pulled — prevents the mechanic from being exploited across large distances and keeps the player engaged with the space immediately around them.
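The per-frame pull logic — direction toward the hand, an accelerating speed curve, and a range gate — can be sketched as follows. This is a hedged illustration of the approach, not the project's Blueprint graph; the speed constants and the exponent shaping the curve are invented for the example:

```python
import math

def pull_velocity(obj_pos, hand_pos, max_range=600.0, base_speed=200.0, accel_exp=2.0):
    """Per-frame velocity applied to a pulled object.
    Speed ramps up as the object closes the distance (slow start, fast
    finish), and objects beyond max_range are not pulled at all."""
    delta = tuple(h - o for h, o in zip(hand_pos, obj_pos))
    dist = math.sqrt(sum(c * c for c in delta))
    if dist > max_range or dist < 1e-3:
        return (0.0, 0.0, 0.0)
    # progress runs from 0 at max range to 1 at the hand;
    # raising (1 + progress) to a power gives the accelerating "yank".
    progress = 1.0 - dist / max_range
    speed = base_speed * (1.0 + progress) ** accel_exp
    return tuple(c / dist * speed for c in delta)
```

Applying this as a velocity (rather than teleporting the object along the ray) is what keeps the object a simulated physics body that deflects around obstacles during the pull.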
Improved Object Throwing
The template’s throw mechanism simply releases the physics constraint on the grabbed object and lets the current velocity carry it. This produces correct but underwhelming throws because controller velocity at the moment of release is often lower than the player’s perceived swing velocity — the hand is decelerating at the end of the throw motion.
The improvement tracks controller velocity over a small window of recent frames (a rolling buffer of the last N controller positions) and uses the peak velocity within that window as the throw velocity, rather than the instantaneous velocity at release. This matches what the player feels they did — the peak of the swing — and produces throws that feel much more responsive to the player’s physical intent.
Throw velocity is also scaled by a multiplier to compensate for the scale difference between physical arm movement and the virtual world, since physical throws always feel weaker in VR than they should.
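The peak-of-window approach translates directly into a small tracker: buffer recent positions each frame, and on release compute frame-to-frame velocities and take the fastest one, scaled by the multiplier. A minimal Python sketch of that idea (the window size, frame rate, and multiplier values here are illustrative, not the project's tuned numbers):

```python
import math
from collections import deque

class ThrowTracker:
    """Rolling window of controller positions; on release, the *peak*
    frame-to-frame velocity in the window (times a tunable multiplier)
    becomes the throw velocity."""

    def __init__(self, window=10, dt=1 / 72, multiplier=1.5):
        self.positions = deque(maxlen=window)
        self.dt = dt                    # Quest 2 renders at 72 Hz
        self.multiplier = multiplier    # compensates for weak-feeling VR throws

    def tick(self, controller_pos):
        """Call once per frame with the controller's world position."""
        self.positions.append(controller_pos)

    def release_velocity(self):
        """Peak velocity over the buffered window, scaled by the multiplier."""
        peak, peak_speed = (0.0, 0.0, 0.0), 0.0
        pts = list(self.positions)
        for a, b in zip(pts, pts[1:]):
            v = tuple((bb - aa) / self.dt for aa, bb in zip(a, b))
            speed = math.sqrt(sum(c * c for c in v))
            if speed > peak_speed:
                peak, peak_speed = v, speed
        return tuple(c * self.multiplier for c in peak)
```

Using the peak rather than the final sample is exactly what hides the hand's deceleration at the end of the swing.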
Basic Object Container / Inventory
The VR inventory is a physical space attached to the player’s body — typically represented as holsters on the belt or back — where objects can be placed and retrieved. The player holds an object near a holster zone, releases it, and it snaps into the slot. Grabbing from the slot retrieves the stored object.
The slot is a volume with an attachment point: when a grabbed object enters the volume and the player releases, the object is attached to the slot’s transform rather than dropped. Retrieving reverses the process — the object is detached from the slot and reattaches to the grabbing hand.
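The slot's store/retrieve logic is a small state machine: a volume check on release, a volume check on grab. A hedged Python sketch of that logic (the class, radius, and positions are illustrative — the project implements this with Blueprint overlap volumes and attachment nodes):

```python
import math

class HolsterSlot:
    """Minimal holster state: an object released inside the slot's volume
    snaps into it; grabbing inside the volume retrieves the stored object."""

    def __init__(self, position, radius=15.0):
        self.position = position
        self.radius = radius
        self.stored = None

    def _inside(self, pos):
        return math.dist(pos, self.position) <= self.radius

    def try_store(self, item, release_pos):
        """Called when the player releases a grabbed item."""
        if self.stored is None and self._inside(release_pos):
            self.stored = item
            return True       # item attaches to the slot's transform
        return False          # item drops as a normal physics body

    def try_retrieve(self, grab_pos):
        """Called when the player grabs near the slot."""
        if self.stored is not None and self._inside(grab_pos):
            item, self.stored = self.stored, None
            return item       # item reattaches to the grabbing hand
        return None
```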
This physical inventory model is a fundamental VR UX pattern precisely because it maps to real-world intuition: you reach for your holster and your gun is there. It requires no menu, no UI navigation, no button sequence beyond the physical reach-and-grab. The cost is that the player must remember where things are on their body — which is typically fine, since humans are very good at remembering the locations of things they regularly carry.
Basic Full Body VR Character
The default VR template has no body — the player sees floating hands but no torso, legs, or feet. For many games this is acceptable and even preferable (a body can occlude the player’s view). For others — particularly social VR and physical games — body presence significantly improves immersion.
A full-body VR character presents a unique challenge: the body must be driven by only three tracked points — headset (head position/rotation) and two controllers (hand position/rotation). There is no tracking data for the spine, shoulders, hips, knees, or feet. The body must be inferred from three inputs.
The implementation uses IK to solve the body from these three points. The head position drives the spine and hip height. The hand positions drive the arm chains via IK, with shoulder positions estimated from the head and hand positions. The feet are handled procedurally — they follow a simplified step system based on the player’s movement velocity, similar in concept to the procedural walk cycle project.
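The shoulder-estimation step is the kind of inference involved throughout: derive an untracked joint from the tracked points plus fixed body proportions. The sketch below shows one plausible version — drop from the headset by a neck offset, then offset sideways along the head's yaw — with invented proportions; it is an illustration of the idea, not the project's actual Blueprint math:

```python
import math

def estimate_shoulders(head_pos, head_yaw_deg, shoulder_width=38.0, neck_drop=18.0):
    """Estimate shoulder joint positions from the headset alone: move down
    from the head by a fixed neck offset, then offset left/right along the
    head's horizontal right vector. Lengths in centimetres; the proportions
    here are illustrative defaults, not measured values."""
    yaw = math.radians(head_yaw_deg)
    # Right vector of the head in the horizontal plane (yaw rotated +90 deg).
    rx, ry = math.cos(yaw + math.pi / 2), math.sin(yaw + math.pi / 2)
    base = (head_pos[0], head_pos[1], head_pos[2] - neck_drop)
    half = shoulder_width / 2
    left = (base[0] - rx * half, base[1] - ry * half, base[2])
    right = (base[0] + rx * half, base[1] + ry * half, base[2])
    return left, right
```

With shoulders estimated, each arm becomes a standard two-bone IK chain from shoulder to tracked hand, which is what the engine's IK nodes solve.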
The result is an approximate body that reads correctly from the player’s perspective (they see their own hands and torso in roughly the right positions) and from the perspective of other players in a multiplayer context. It won’t withstand close scrutiny but it doesn’t need to — it exists to establish presence, not to be anatomically precise.
Reflection
What makes this project valuable as a learning resource is that it covers VR’s interaction primitives at the level where most VR development work actually happens. The grabbing, throwing, and pulling mechanics appear in some form in nearly every VR game. The two-handed weapon system is directly reusable in any shooter. The physical inventory pattern is a VR UX standard. The full-body character problem is faced by every social or physical VR experience.
Working through these mechanics in isolation — one at a time, with direct feedback from a physical controller — builds a hands-on understanding of VR input that’s difficult to get any other way. The feedback loop between physical hand movement and virtual response is immediate and unambiguous: either it feels right, or it obviously doesn’t.