In this post, I want to show my progress with DirectX12 by presenting the foundation of my current architecture.
Continue reading “DirectX12 – Multithread Architecture: A First Approach”
🎮 Welcome to my portfolio. I'm Nicolás Bertoa, a C++ developer specialized in Unreal Engine with 15+ years of professional experience at DreamWorks and Sony. Here I share technical demos, personal projects, and experiments. Check out my Demo Reels or explore my latest work below 👇
Proper gamma correction is probably the easiest, most inexpensive, and most widely applicable technique for improving image quality in real-time applications.
In this post, we will describe what gamma and gamma correction are, why they matter, the most common problems, and how to solve them.
In this post, we will describe how to initialize DirectX 12.
This time, I want to do the same as in my previous post, but with spheres instead of boxes.
Continue reading “Instancing vs Geometry Shader vs Vertex Shader – Round 2”
This time, I want to compare three techniques for drawing the same geometry at different locations: instancing, the geometry shader, and the vertex shader.
Continue reading “Instancing vs Geometry Shader vs Vertex Shader – Round 1”
After a long time, I finally began with Physically Based Rendering. I had been using Blinn-Phong and was comfortable with its results, but since all the major rendering engines were switching to physically based models, I wanted to try it.
Continue reading “Physically Based Rendering – First Approach”
Deferred shading is a screen-space shading technique. It is called deferred because no shading is actually performed in the first pass of the vertex and pixel shaders; instead, shading is deferred to a second pass.
The first step is to render all scene geometry into a g-buffer, which typically consists of several render target textures. The individual channels of these textures are used to store geometry surface information per pixel, such as surface normal or material properties.
This is a demo of 3 particle systems that I implemented using DirectX 11 for rendering and a compute shader for the particle physics.
Each demo was recorded with Fraps on a Phenom II X4 965, 4 GB of RAM, a Radeon HD 7850, and Windows 8, and ran at 60 FPS (I capped the frame rate at 60).
This demo contains 1,000,000 particles distributed across 5 different areas.
As the demo progresses, you can see how the particles in each area begin to organize, using steering behaviors and physics computed inside a compute shader.
640,000 particles form a sphere, moving slowly toward its center.
There are 640,000 particles organized in 100 layers of 6,400 particles each. The particles move at random speeds and directions within the plane of their layer.
In this post, we are going to talk about a technique for terrain rendering using interlocking tiles.
Continue reading “Simplified Terrain using Interlocking Tiles”
In this post, we are going to explain a technique called displacement mapping (or height mapping).