
    Introduction

    Imagine for a moment that you need to create a precise digital copy of an old building for restoration, analyze the terrain of a mine with millimeter accuracy, or meticulously track the progress of a construction project. In all these scenarios, a fundamental requirement is a digital model that faithfully reflects reality. Thanks to technologies like LiDAR and photogrammetry, we can capture millions of points in space, which, when combined, form a “cloud” capable of describing objects and environments with astonishing detail.


Traditionally, creating and managing these digital copies has involved complex and tedious manual work, and our ambition is to automate their generation and visualization. In Typhoon, our industrial visualization application built on Evergine, we have long relied on hardware-based rasterization for efficient point cloud visualization. However, as point clouds grew larger and more complex, that method started to show its limitations: details got lost in very dense clouds, performance had room for improvement, and customizing how the data was visualized was somewhat clunky.


That’s when we asked ourselves: can we do better? The answer led us to investigate a different approach, inspired by the research of Markus Schütz: software rasterization using compute shaders. This approach gives us significantly more control over how points are drawn, allowing us to improve both speed and visual quality, and it opens the door to further enhancements for point cloud visualization.

    In this article, we will explain why we decided to change our rendering system, what we needed the new system to achieve, and how we implemented it. We’ll also show you how its performance and visual quality compare to the previous version, and what exciting improvements we can add in the future, thanks to this new technology.

    Why a new renderer?

    At Typhoon, we’ve been working with point clouds for a while now – those massive datasets of points that let us digitally reconstruct the real world. Our old visualization system, while functional, was starting to fall a bit behind. Imagine trying to look at a highly detailed photo, but the lens isn’t quite good enough: you lose sharpness, and some details get blurred. Something similar was happening with our larger and more complex point clouds.


    So, we set out to find a better “lens,” and we stumbled upon a fascinating idea inspired by Markus Schütz’s work. The result? A rendering system that allows us to view point clouds much more clearly and efficiently.


    And why the change? Primarily for these reasons:

    • Increased Speed, Better Efficiency: We wanted visualization to be smoother, especially with enormous point clouds. The new system is significantly faster, enabling the visualization of very large point clouds even on mid-range hardware or with limited resources.
    • Total Control: We now have much more freedom to decide how those points are drawn. It’s like having the controls of a professional camera instead of an automatic one. This allows us to fine-tune the visualization for a more faithful representation.
    • Thinking Ahead: We want to keep improving Typhoon, and this new system opens up many possibilities for adding exciting features in the future. It provides a much more solid foundation for continued growth.
    • Goodbye Aliasing: Have you ever noticed the jagged, shimmering look of a point cloud, where overlapping points flicker and compete for the same pixels as you move? That’s aliasing. Our new system significantly reduces this problem, making point clouds look much smoother and more realistic. No more pixelated appearance!

    Visual comparison showing the reduction of aliasing with the new technique (right) versus a previous method (left).


    In short, we changed our rendering system so Typhoon can better handle the point clouds of the future, offering a faster, more controlled, and, most importantly, a visually impressive experience. We think you’ll notice the difference!

    Method description

    Pipeline

    Conceptual diagram of the new renderer’s rendering pipeline in Typhoon.

    Our rendering pipeline is divided into several key stages, each playing a crucial role in the rasterization process. A fundamental characteristic of our system, which largely defines its operation and differentiates it from the reference implementation, is progressive rendering. Instead of rendering the entire point cloud in a single frame, which can be very costly, we divide the process across multiple frames. This is essential to ensure smooth performance even on devices with limited resources, as the processing load is distributed over time.


    State reset

    This stage is activated only when camera movement is detected. Its primary function is to prepare the ground for the new rendering frame, ensuring that the results from the previous frame do not interfere with the current one. This involves resetting all information buffers used during the rasterization process, such as color and depth buffers, so they can be filled with data corresponding to the camera’s new perspective. Without this step, visual artifacts from the previous frame could persist.
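    To make this concrete, here is a minimal sketch of what such a reset pass can look like. Typhoon’s implementation lives in Evergine compute shaders; the CUDA-style snippets in this and the following sections are simplified illustrations with invented names (minDepth, accum) rather than our production code. Depth is stored as a raw float bit pattern so that later passes can update it with integer atomics.

```cuda
#include <cuda_runtime.h>

// Illustrative reset pass: one thread per pixel clears the per-pixel state so
// that nothing rendered from the previous camera pose leaks into the new frame.
__global__ void resetBuffers(unsigned int* minDepth, // closest depth per pixel (float bits)
                             uint4* accum,           // RGB sums + sample count per pixel
                             int pixelCount)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= pixelCount) return;

    minDepth[i] = 0x7F7FFFFFu;            // bit pattern of FLT_MAX: "no point seen yet"
    accum[i]    = make_uint4(0, 0, 0, 0); // color sums and sample count back to zero
}
```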


    Reprojection

    This stage runs only in the first frame after the camera moves. Its goal is to accelerate the initial visualization of the point cloud. To do this, we reuse information from points that were visible in the previous frame. By projecting these points with the camera’s new position, we get a quick estimate of their location on the current screen. This technique is very useful when camera movements are small, which is common, and allows progressive rendering to start with a solid base, showing the user a first version of the point cloud much faster.
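    Below is a sketch of how such a reprojection pass might look, continuing the illustrative buffers from the reset sketch above. We assume, purely for illustration, that the previous frame stored the world-space position and color of each pixel’s winning point; each thread re-projects that point with the new camera matrix and seeds the fresh buffers.

```cuda
// Shared helper: project a world-space point with a row-major 4x4 view-projection
// matrix; returns pixel coordinates and writes out the view-space depth.
__device__ float2 projectToScreen(float3 p, const float* viewProj,
                                  int width, int height, float* outDepth)
{
    float4 c;
    c.x = viewProj[0]*p.x  + viewProj[1]*p.y  + viewProj[2]*p.z  + viewProj[3];
    c.y = viewProj[4]*p.x  + viewProj[5]*p.y  + viewProj[6]*p.z  + viewProj[7];
    c.z = viewProj[8]*p.x  + viewProj[9]*p.y  + viewProj[10]*p.z + viewProj[11];
    c.w = viewProj[12]*p.x + viewProj[13]*p.y + viewProj[14]*p.z + viewProj[15];
    *outDepth = c.w;
    return make_float2((c.x / c.w * 0.5f + 0.5f) * width,
                       (c.y / c.w * 0.5f + 0.5f) * height);
}

// Seed the new frame with last frame's visible points under the new camera.
__global__ void reproject(const float4* prevWorldPos, // xyz = position, w = 1 if valid
                          const uchar4* prevColor,
                          const float* newViewProj,
                          unsigned int* minDepth, uint4* accum,
                          int width, int height)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= width * height) return;

    float4 p = prevWorldPos[i];
    if (p.w == 0.0f) return;               // this pixel had no point last frame

    float depth;
    float2 px = projectToScreen(make_float3(p.x, p.y, p.z),
                                newViewProj, width, height, &depth);
    if (depth <= 0.0f || px.x < 0 || px.y < 0 || px.x >= width || px.y >= height)
        return;

    int pixel = (int)px.y * width + (int)px.x;
    // For positive floats the raw bit pattern preserves ordering, so an
    // integer atomicMin keeps the closest depth per pixel.
    atomicMin(&minDepth[pixel], __float_as_uint(depth));
    atomicAdd(&accum[pixel].x, (unsigned int)prevColor[i].x);
    atomicAdd(&accum[pixel].y, (unsigned int)prevColor[i].y);
    atomicAdd(&accum[pixel].z, (unsigned int)prevColor[i].z);
    atomicAdd(&accum[pixel].w, 1u);
}
```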


    Projection and color accumulation

    After the reprojection phase, we proceed to the projection of the remaining points onto the screen. For each point, we calculate its 2D position based on the camera’s perspective and position relative to the cloud. Once projected, we need to determine how these points are visualized, especially when multiple points project onto the same screen pixel.

    Our system uses a color accumulation technique based on depth blending. For each pixel, we keep a record of the depth of the closest point projected so far. When a new point lands on the same pixel, its color is accumulated if its depth is similar to that of the closest point, specifically if it is less than 1.01 times the minimum recorded depth. This tight threshold ensures that only points at nearly the same depth contribute to the final pixel color, which smooths the image and dramatically reduces aliasing. If the projected point is closer than the recorded minimum for that pixel, indicating a closer surface that should occlude what was previously drawn, the accumulation is reset to correctly reflect this new information.
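    The sketch below shows the heart of this idea for one batch of points, reusing the projectToScreen helper and buffers from the previous sketches. For clarity we split the work into two passes, a depth pass that records the per-pixel minimum and a color pass that accumulates within the 1.01x threshold; the single-pass variant with an in-place reset described above needs more delicate atomic logic, so take this as an illustration of the technique rather than a drop-in implementation.

```cuda
struct Point { float3 position; uchar4 color; };

// Pass 1: every point atomically records the nearest depth seen at its pixel.
__global__ void depthPass(const Point* pts, int count, const float* viewProj,
                          unsigned int* minDepth, int width, int height)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    float depth;
    float2 px = projectToScreen(pts[i].position, viewProj, width, height, &depth);
    if (depth <= 0.0f || px.x < 0 || px.y < 0 || px.x >= width || px.y >= height)
        return;

    atomicMin(&minDepth[(int)px.y * width + (int)px.x], __float_as_uint(depth));
}

// Pass 2: only points within 1.01x of the per-pixel minimum contribute color.
__global__ void colorPass(const Point* pts, int count, const float* viewProj,
                          const unsigned int* minDepth, uint4* accum,
                          int width, int height)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;

    float depth;
    float2 px = projectToScreen(pts[i].position, viewProj, width, height, &depth);
    if (depth <= 0.0f || px.x < 0 || px.y < 0 || px.x >= width || px.y >= height)
        return;

    int pixel = (int)px.y * width + (int)px.x;
    if (depth <= 1.01f * __uint_as_float(minDepth[pixel])) {
        atomicAdd(&accum[pixel].x, (unsigned int)pts[i].color.x);
        atomicAdd(&accum[pixel].y, (unsigned int)pts[i].color.y);
        atomicAdd(&accum[pixel].z, (unsigned int)pts[i].color.z);
        atomicAdd(&accum[pixel].w, 1u);   // sample count for the final average
    }
}
```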

    It’s important to note that this projection and color accumulation process is performed progressively over different frames. In each frame, only a portion of the point cloud is projected, and its color contribution is accumulated in a buffer. This process continues until the entire point cloud has been projected. The result is a high-quality representation where points blend, smoothing the image, significantly reducing the aliasing problem we mentioned earlier while maintaining optimal performance on devices with different characteristics.
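    Tying the pieces together, the per-frame scheduling can be sketched as below. The FrameState struct, batch size, and launch configuration are invented for the example, and resolveToTexture is the presentation pass sketched in the next section.

```cuda
#include <algorithm>

// Hypothetical bag of state shared by the sketches in this article.
struct FrameState {
    Point* points; int totalPoints, pointsPerFrame, nextPoint;
    float* viewProj;                       // current view-projection matrix
    unsigned int* minDepth; uint4* accum; uchar4* image;
    float4* prevWorldPos; uchar4* prevColor;
    int width, height, pixelCount, pixelBlocks;
    bool cameraMoved;
};

// One frame of progressive rendering: rasterize one slice of the cloud,
// then present whatever has accumulated so far.
void renderFrame(FrameState& s)
{
    if (s.cameraMoved) {
        resetBuffers<<<s.pixelBlocks, 256>>>(s.minDepth, s.accum, s.pixelCount);
        reproject<<<s.pixelBlocks, 256>>>(s.prevWorldPos, s.prevColor, s.viewProj,
                                          s.minDepth, s.accum, s.width, s.height);
        s.nextPoint = 0;                   // restart the progressive sweep
    }

    int batch = std::min(s.pointsPerFrame, s.totalPoints - s.nextPoint);
    if (batch > 0) {
        int blocks = (batch + 255) / 256;
        depthPass<<<blocks, 256>>>(s.points + s.nextPoint, batch, s.viewProj,
                                   s.minDepth, s.width, s.height);
        colorPass<<<blocks, 256>>>(s.points + s.nextPoint, batch, s.viewProj,
                                   s.minDepth, s.accum, s.width, s.height);
        s.nextPoint += batch;
    }

    resolveToTexture<<<s.pixelBlocks, 256>>>(s.accum, s.image, s.width, s.height);
}
```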


    Pass to texture and image improvements

    In each frame, our accumulation buffer is converted into a texture. This way, we manage to generate an image with the latest results of the progressive rendering process in real-time, preventing the user from having to wait for the point cloud to be fully projected to get a visualization. This provides a much smoother and more interactive user experience, as changes in the camera or configuration are reflected almost instantly on the screen, even if the final image is still converging to its maximum quality.
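    The resolve step itself can be as simple as dividing each pixel’s accumulated color by its sample count. A sketch, writing to a plain output buffer instead of a real texture object for brevity:

```cuda
// Turn the accumulation buffer into a displayable RGBA8 image. Pixels that no
// point has reached yet keep a background color.
__global__ void resolveToTexture(const uint4* accum, uchar4* image,
                                 int width, int height)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= width * height) return;

    uint4 a = accum[i];
    if (a.w == 0) { image[i] = make_uchar4(0, 0, 0, 255); return; }

    image[i] = make_uchar4((unsigned char)(a.x / a.w),   // average red
                           (unsigned char)(a.y / a.w),   // average green
                           (unsigned char)(a.z / a.w),   // average blue
                           255);
}
```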

    Once the accumulation buffer has been converted into a texture in a given frame, we apply certain improvements to the image. One of the advantages of our software rasterization system is the flexibility it offers for customizing visualization. For example, we can allow the user to modify the size of each point on the screen. This was already possible with the previous version and was very useful for highlighting certain details of the point cloud or improving the perception of density. Furthermore, in this phase, we could implement other post-processing techniques to further enhance visual quality. An example of this is Eye-Dome Lighting, a technique that highlights the fine details and surface structure of the point cloud, improving the perception of depth and shape.
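    As an illustration of the kind of post-process this stage enables, here is a rough sketch of Eye-Dome Lighting; keep in mind that EDL is an idea we are considering, not a shipped feature, and this snippet assumes a decoded per-pixel linear depth buffer is available. Each pixel is darkened according to how far behind its screen neighbors it sits, which visually carves out silhouettes and surface relief.

```cuda
// Illustrative Eye-Dome Lighting pass over the resolved image.
__global__ void eyeDomeLighting(const float* depth, const uchar4* inImage,
                                uchar4* outImage, int width, int height,
                                float strength)   // e.g. ~1.0, tuned by eye
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = y * width + x;
    float d = depth[i];
    if (d >= 3.4e38f) { outImage[i] = inImage[i]; return; }   // empty pixel

    float logD = __logf(fmaxf(d, 1e-6f));
    float response = 0.0f;
    const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
    for (int k = 0; k < 4; ++k) {          // compare against 4 direct neighbors
        int nx = min(max(x + dx[k], 0), width - 1);
        int ny = min(max(y + dy[k], 0), height - 1);
        float logN = __logf(fmaxf(depth[ny * width + nx], 1e-6f));
        response += fmaxf(0.0f, logD - logN);
    }

    float shade = __expf(-strength * response);   // 1 = flat, <1 = occluded edge
    uchar4 c = inImage[i];
    outImage[i] = make_uchar4((unsigned char)(c.x * shade),
                              (unsigned char)(c.y * shade),
                              (unsigned char)(c.z * shade), 255);
}
```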

    Comparison with the previous renderer

    After diving into the details of our new rendering pipeline, it’s time for the moment of truth: did we achieve the improvement we were looking for? As the saying goes, a picture is worth a thousand words. Here you can see a direct comparison between how a dense point cloud was visualized with our old system and how it looks now with the new one:


    Visual comparison of a section of a dense industrial point cloud.

    In this comparison, based on an industrial point cloud, you can see how the structure goes from a noisy visualization with artifact lines to a notably smoother and more defined image where details are easier to appreciate. This is especially noticeable in the lettering on one of the beams and on the hook.

    Another comparative view of the previous cloud, showing the difference in fine detail representation.

    This comparison focuses on the ground surface. It shows how the new renderer eliminates aliasing, offering clear lines and precise details that were previously blurred and lost, such as the floor markings.


    A third comparison highlighting the reduction of aliasing and the improvement in image smoothness in an architectural setting.

    In this example, we focus on an indoor architectural scene, more cluttered than the previous ones. You can appreciate better definition in a multitude of elements, such as the metal doors on the sides, the cathedral’s columns, the chairs, and the paintings.


    This fourth example shows a complex industrial environment. With the old system, pipes and structures look jagged and noisy, making it difficult to clearly perceive their geometry in the distance. The new renderer presents smoother surfaces, the pipes are distinguished with greater clarity and definition, and the scene as a whole feels more coherent and easier to interpret, especially in areas with overlapping geometry.


    The difference in terms of clarity and aliasing reduction is evident in these visual comparisons. The new color accumulation technique achieves the smoothness we were looking for, eliminating much of the visual noise and achieving more defined contours.


    But the improvement doesn’t stop there. One of the advantages we mentioned earlier is the ability to adjust the on-screen size of the points, a feature that already existed but now benefits from the base quality of the new renderer. This is quite useful: smaller points preserve detail when you get very close to a dense area, while larger points give a more “compact” or “solid” view of the cloud, which helps convey its overall shape and fills in sparser areas. See how the perception of the same cloud (already rendered with the new system) changes as this parameter varies.


    The Future of Visualization in Typhoon

    Having this new, more flexible, and powerful technological foundation of software rasterization is like having better tools in the workshop: it allows us to think about building more interesting things and continue improving the user experience in Typhoon. We are already exploring several ideas for the future:

    1. Even More Visual Quality: We want the image to be even sharper and represent the scene with the highest possible fidelity. To achieve this, we are working on several fronts:
      • Adaptive Point Size: The size at which each point is drawn won’t be fixed, but will adjust automatically (e.g., based on distance or area density). This ensures the cloud looks well-defined whether you are close up or far away.
      • Cleaner Edges (Opacity Check / Visibility Check): Have you noticed how points from the background sometimes seem to “sneak through” or clutter the edges of closer objects? We are investigating a technique that checks the depth of neighboring points on the screen: if a point is much farther away than those right next to it, we avoid drawing it. This dramatically cleans up contours and ensures that foreground objects correctly hide what is behind them (a sketch of this idea appears after this list).
      • Highlighting Details with Eye-Dome Lighting (EDL): We still have this technique in mind, which helps to highlight the general shape and small details of the point cloud surface, improving the perception of volume.
    2. Exploring New Frontiers (Hello, Gaussian Splats!): The 3D world is moving fast, and techniques like Gaussian Splatting are proving to be very powerful. Therefore, we are actively working on integrating the visualization of Gaussian Splats alongside our point clouds within Typhoon. The goal is clear: you should be able to load, view, and work with both types of data in the same scene. Imagine being able to combine the best of both techniques!
    3. Handling Giant Data (Virtual Geometry): Progressive rendering already helps with large clouds, but we want to prepare for handling truly massive amounts of data. The idea is to evolve towards a virtual geometry system, inspired by cutting-edge research such as that presented in the article Virtualized Point Cloud Rendering. This approach is based on intelligently loading only the parts of the cloud you need to see (LOD and streaming), so you can open projects of almost unlimited size without your computer getting overwhelmed.
    4. Constant Optimization: There’s always room to improve performance. We will continue “tightening the screws” on the code to ensure everything runs as fast and efficiently as possible.
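    To make the visibility check from point 1 more tangible, here is one possible screen-space formulation; this is our own illustration of the idea with arbitrary thresholds, not a final design. After the depth pass, a filter discards pixels that sit well behind most of their immediate neighbors, treating them as background bleeding through a foreground surface.

```cuda
// Illustrative visibility check: mark pixels whose depth is far behind most
// of their 8 neighbors so the resolve pass can skip them.
__global__ void visibilityCheck(const float* depth, unsigned char* keepMask,
                                int width, int height,
                                float ratio)      // e.g. 1.2 = "clearly in front"
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = y * width + x;
    float d = depth[i];
    int closerNeighbors = 0;
    for (int oy = -1; oy <= 1; ++oy)
        for (int ox = -1; ox <= 1; ++ox) {
            if (ox == 0 && oy == 0) continue;
            int nx = min(max(x + ox, 0), width - 1);
            int ny = min(max(y + oy, 0), height - 1);
            if (depth[ny * width + nx] * ratio < d) ++closerNeighbors;
        }

    // Keep the pixel unless most of its neighborhood is clearly in front of it
    // (the "5 of 8" cutoff is an arbitrary choice for this sketch).
    keepMask[i] = (closerNeighbors < 5) ? 1 : 0;
}
```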

    Conclusions

    In summary, the journey from our previous hardware-based rendering system to software rasterization using compute shaders has been a necessary and very rewarding evolution for Typhoon. As we saw in the introduction, the growing demands of working with increasingly large and complex point clouds pushed us to seek an alternative that would overcome existing limitations in terms of detail, performance, and customization. We found the answer in this new approach, to which we added our distinctive touch with progressive rendering.


    The results fully validate the effort. We have achieved the extra speed and efficiency we were looking for, making the visualization of massive point clouds a much smoother and more responsive experience, even on machines with modest hardware. Image quality has taken a significant leap: we have largely said goodbye to annoying aliasing, so point clouds now appear smoother, better defined, and notably more realistic, revealing details that were previously lost.


    But perhaps one of the most significant achievements is the total control this new system gives us over each stage of the rendering process. This flexibility not only translates into current quality and performance but, as we explored in the previous section, opens up a huge range of possibilities for continued innovation.


    Ultimately, this new renderer is more than a simple update: it is a foundational piece that makes Typhoon faster, more capable, and, above all, better prepared to face present and future challenges in large-scale digital twin visualization.