José Antonio Collado
Researcher
Imagine for a moment that you need to create a precise digital copy of an old building for restoration, analyze the terrain of a mine with millimeter accuracy, or meticulously track the progress of a construction project. In all these scenarios, a fundamental requirement is a digital model that faithfully reflects reality. Thanks to technologies like LiDAR and photogrammetry, we can capture millions of points in space, which, when combined, form a “cloud” capable of describing objects and environments with astonishing detail.
Traditionally, creating and managing these digital copies often involves complex and tedious manual tasks. Our ambition is to automate the generation and visualization of these digital copies. At Typhoon, our industrial visualization application built on Evergine, we have long used a hardware-based rasterization rendering system for efficient point cloud visualization. However, as point clouds became larger and more complex, our previous method started to show its limitations. It became difficult to see details in very dense clouds, performance had room for improvement, and customizing how this data was visualized was somewhat clunky.
That’s when we asked ourselves: can we do better? The answer led us to investigate a different approach, inspired by the research of Markus Schütz: software rasterization using compute shaders. This approach gives us significantly more control over how points are drawn, allowing us to improve both speed and visual quality, in addition to opening the door to further enhancements for point cloud visualization.
In this article, we will explain why we decided to change our rendering system, what we needed the new system to achieve, and how we implemented it. We’ll also show you how its performance and visual quality compare to the previous version, and what exciting improvements we can add in the future, thanks to this new technology.
At Typhoon, we’ve been working with point clouds for a while now – those massive datasets of points that let us digitally reconstruct the real world. Our old visualization system, while functional, was starting to fall a bit behind. Imagine trying to look at a highly detailed photo, but the lens isn’t quite good enough: you lose sharpness, and some details get blurred. Something similar was happening with our larger and more complex point clouds.
So, we set out to find a better “lens,” and we stumbled upon a fascinating idea inspired by Markus Schütz’s work. The result? A rendering system that allows us to view point clouds much more clearly and efficiently.
And why the change? Primarily for these reasons:

- Speed: software rasterization with compute shaders lets us draw massive clouds more efficiently than our previous hardware-based pipeline.
- Visual quality: accumulating colors per pixel smooths the image and drastically reduces aliasing in dense clouds.
- Control: owning every stage of the pipeline makes it far easier to customize how the data is visualized and to add new techniques.
Visual comparison showing the reduction of aliasing with the new technique (right) versus a previous method (left).
In short, we changed our rendering system so Typhoon can better handle the point clouds of the future, offering a faster, more controlled, and, most importantly, a visually impressive experience. We think you’ll notice the difference!
Conceptual diagram of the new renderer’s rendering pipeline in Typhoon.
Our rendering pipeline is divided into several key stages, each playing a crucial role in the rasterization process. A fundamental characteristic of our system, which largely defines its operation and differentiates it from the reference implementation, is progressive rendering. Instead of rendering the entire point cloud in a single frame, which can be very costly, we divide the process across multiple frames. This is essential to ensure smooth performance even on devices with limited resources, as the processing load is distributed over time.
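The idea of progressive rendering can be sketched as follows. This is an illustrative Python sketch, not Typhoon's actual code: the names `progressive_frames`, `rasterize_batch`, and `POINTS_PER_FRAME` are assumptions made for the example.

```python
# Hypothetical sketch of progressive rendering: instead of rasterizing the
# whole cloud in one frame, a fixed-size batch of points is processed each
# frame until the entire cloud has been drawn.

POINTS_PER_FRAME = 2  # in practice tuned to the device's capabilities

def progressive_frames(points, rasterize_batch):
    """Process one batch per frame; yield the running total after each frame."""
    done = 0
    while done < len(points):
        batch = points[done:done + POINTS_PER_FRAME]
        rasterize_batch(batch)  # project + accumulate this slice of the cloud
        done += len(batch)
        yield done
```

With five points and a batch size of two, the cloud converges over three frames, and each point is rasterized exactly once; the per-frame cost stays bounded regardless of cloud size.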
This stage is activated only when camera movement is detected. Its primary function is to prepare the ground for the new rendering frame, ensuring that the results from the previous frame do not interfere with the current one. This involves resetting all information buffers used during the rasterization process, such as color and depth buffers, so they can be filled with data corresponding to the camera’s new perspective. Without this step, visual artifacts from the previous frame could persist.
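Conceptually, this reset stage amounts to something like the sketch below. The buffer names and flat row-major layout are assumptions for illustration only.

```python
import math

# Illustrative reset of the rasterization buffers when the camera moves:
# depth cleared to "no surface yet", color sums and sample counts zeroed.

def reset_buffers(width, height):
    """Return fresh per-pixel buffers for a new camera pose."""
    n = width * height
    return {
        "depth": [math.inf] * n,                          # nothing recorded yet
        "color_sum": [[0.0, 0.0, 0.0] for _ in range(n)], # accumulated RGB
        "count": [0] * n,                                 # samples per pixel
    }
```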
This stage runs only in the first frame after the camera moves. Its goal is to accelerate the initial visualization of the point cloud. To do this, we reuse information from points that were visible in the previous frame. By projecting these points with the camera’s new position, we get a quick estimate of their location on the current screen. This technique is very useful when camera movements are small, which is common, and allows progressive rendering to start with a solid base, showing the user a first version of the point cloud much faster.
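The reprojection step can be illustrated with a toy pinhole camera. The projection math below (`project_point`, a camera looking down +z with a single `focal` parameter) is a simplification assumed for the sketch, not the production transform.

```python
# Illustrative reprojection: points visible in the previous frame are
# re-projected with the new camera pose to seed the current frame quickly.

def project_point(p, cam_pos, focal, width, height):
    """Project a 3D point (camera at cam_pos looking down +z) to pixel coords."""
    x, y, z = (p[0] - cam_pos[0], p[1] - cam_pos[1], p[2] - cam_pos[2])
    if z <= 0:
        return None  # behind the camera
    px = int(width / 2 + focal * x / z)
    py = int(height / 2 + focal * y / z)
    if 0 <= px < width and 0 <= py < height:
        return px, py, z
    return None

def reproject(visible_points, cam_pos, focal, width, height):
    """Seed the new frame with last frame's visible points, closest per pixel."""
    seeds = {}
    for p in visible_points:
        hit = project_point(p, cam_pos, focal, width, height)
        if hit:
            px, py, z = hit
            if (px, py) not in seeds or z < seeds[(px, py)][0]:
                seeds[(px, py)] = (z, p)
    return seeds
```

Because consecutive camera poses are usually close, most of these seeds land near their correct pixels, giving progressive rendering a solid starting image almost for free.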
After the reprojection phase, we proceed to the projection of the remaining points onto the screen. For each point, we calculate its 2D position based on the camera’s perspective and position relative to the cloud. Once projected, we need to determine how these points are visualized, especially when multiple points project onto the same screen pixel.
Our system uses a color accumulation technique via depth-based blending. For each pixel, we maintain a record of the depth of the closest point projected so far. When a new point is projected onto the same pixel, its color is accumulated if its depth is similar to that of the closest point, specifically if it is less than 1.01 times the minimum recorded depth. This very close threshold ensures that only points truly very close in depth contribute to the final pixel color, which smooths the image and dramatically reduces aliasing. If the projected point has a depth less than the minimum recorded for that pixel, indicating a closer object that should occlude what was previously drawn, the accumulation is reset to correctly reflect this new information.
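The per-pixel logic described above can be sketched like this. It is a minimal Python model of the behavior (the `PixelAccumulator` class is illustrative; the real work happens in a compute shader), but the 1.01 depth threshold and the reset-on-closer-point rule follow the description.

```python
# Sketch of depth-based color accumulation for a single pixel: a point's
# color is blended only if its depth is under 1.01x the minimum depth seen
# so far; a strictly closer point resets the accumulation.

DEPTH_THRESHOLD = 1.01

class PixelAccumulator:
    def __init__(self):
        self.min_depth = float("inf")
        self.color_sum = [0.0, 0.0, 0.0]
        self.count = 0

    def add(self, depth, color):
        if depth < self.min_depth:
            # a closer surface: discard previous samples, start over
            self.min_depth = depth
            self.color_sum = list(color)
            self.count = 1
        elif depth < self.min_depth * DEPTH_THRESHOLD:
            # close enough in depth: blend into the running sum
            self.color_sum = [a + b for a, b in zip(self.color_sum, color)]
            self.count += 1
        # otherwise the point is occluded and contributes nothing

    def resolve(self):
        """Average the accumulated samples into the final pixel color."""
        if self.count == 0:
            return (0.0, 0.0, 0.0)
        return tuple(c / self.count for c in self.color_sum)
```

Averaging several near-equal-depth samples per pixel is what smooths the image: instead of one arbitrary "winner" point per pixel (a major source of aliasing), the pixel shows a blend of everything on the visible surface.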
It’s important to note that this projection and color accumulation process is performed progressively over different frames. In each frame, only a portion of the point cloud is projected, and its color contribution is accumulated in a buffer. This process continues until the entire point cloud has been projected. The result is a high-quality representation where points blend, smoothing the image, significantly reducing the aliasing problem we mentioned earlier while maintaining optimal performance on devices with different characteristics.
In each frame, our accumulation buffer is converted into a texture. This way, we manage to generate an image with the latest results of the progressive rendering process in real-time, preventing the user from having to wait for the point cloud to be fully projected to get a visualization. This provides a much smoother and more interactive user experience, as changes in the camera or configuration are reflected almost instantly on the screen, even if the final image is still converging to its maximum quality.
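A per-frame resolve pass of this kind might look like the sketch below, assuming the accumulation buffer stores a color sum and a sample count per pixel (the buffer layout and function name are hypothetical).

```python
# Illustrative resolve pass: every frame, the running accumulation buffers
# are averaged into a displayable image, so the user immediately sees the
# current partial result of the progressive rendering.

def resolve_to_texture(color_sum, counts, background=(0.0, 0.0, 0.0)):
    """Average accumulated colors per pixel; empty pixels get the background."""
    texture = []
    for rgb, n in zip(color_sum, counts):
        if n == 0:
            texture.append(background)
        else:
            texture.append(tuple(c / n for c in rgb))
    return texture
```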
Once the accumulation buffer has been converted into a texture in a given frame, we apply certain improvements to the image. One of the advantages of our software rasterization system is the flexibility it offers for customizing visualization. For example, we can allow the user to modify the size of each point on the screen. This was already possible with the previous version and was very useful for highlighting certain details of the point cloud or improving the perception of density. Furthermore, in this phase, we could implement other post-processing techniques to further enhance visual quality. An example of this is Eye-Dome Lighting, a technique that highlights the fine details and surface structure of the point cloud, improving the perception of depth and shape.
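To give an idea of what such a post-processing step involves, here is a heavily simplified variant of the Eye-Dome Lighting idea: each pixel is darkened according to how much deeper it is than its four neighbors, which emphasizes silhouettes and surface relief. The strength constant and function names are assumptions for the sketch, not a production shader.

```python
import math

# Simplified Eye-Dome-Lighting-style shading: compare a pixel's log-depth
# with its 4-neighborhood and darken pixels that sit behind their neighbors.

EDL_STRENGTH = 4.0

def edl_shade(depth, x, y, width, height):
    """Return a shading factor in (0, 1] for pixel (x, y).

    depth is a row-major list of per-pixel depths (math.inf = empty pixel)."""
    d = depth[y * width + x]
    if math.isinf(d):
        return 1.0
    obscurance = 0.0
    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
        if 0 <= nx < width and 0 <= ny < height:
            nd = depth[ny * width + nx]
            if not math.isinf(nd):
                # accumulate only where the neighbor is closer to the camera
                obscurance += max(0.0, math.log(d) - math.log(nd))
    return math.exp(-EDL_STRENGTH * obscurance)
```

On a flat surface the factor stays at 1.0 (no darkening); at a depth discontinuity the farther side is shaded down, outlining the edge.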
After diving into the details of our new rendering pipeline, it’s time for the moment of truth: have we achieved the improvement we were looking for? As the saying goes, a picture is worth a thousand words. Here you can see a direct comparison between how a dense point cloud was visualized with our old system and how it looks now with the new one:
Visual comparison of a section of a dense industrial point cloud.
In this comparison, based on an industrial point cloud, you can see how the structure goes from a visualization with noise and artifact lines to a notably smoother and more defined image, where details are better appreciated. This is especially visible in the lettering on one of the beams and on the hook.
Another comparative view of the previous cloud, showing the difference in fine detail representation.
This comparison focuses on the ground surface. It shows how the new renderer eliminates aliasing, offering clear lines and precise details that were previously blurred and lost, such as the floor markings.
A third comparison highlighting the reduction of aliasing and the improvement in image smoothness in an architectural setting.
In this example, we focus on a more cluttered indoor architectural scenario than the previous ones. You can appreciate better definition in a multitude of elements, such as the metallic doors on the sides, cathedral columns, chairs, paintings, etc.
This fourth example shows a complex industrial environment. With the old system, pipes and structures look jagged and noisy, making it difficult to clearly perceive their geometry in the distance. The new renderer presents an image with smoother surfaces, the pipes stand out with greater clarity and definition, and the scene as a whole feels more coherent and easier to interpret, especially in areas with overlapping geometry.
The difference in terms of clarity and aliasing reduction is evident in these visual comparisons. The new color accumulation technique achieves the smoothness we were looking for, eliminating much of the visual noise and achieving more defined contours.
But the improvement doesn’t stop there. One of the advantages we mentioned earlier is the possibility of adjusting the size of the points on the screen, a functionality that, although it already existed, now benefits from the base quality of the new renderer. This is quite useful: you can use smaller points to preserve detail when you get very close to a dense area, or increase the size for a more “compact”, solid view of the cloud, which helps convey the overall shape and fills in sparser areas. See how the perception of the same cloud (already rendered with the new system) changes by varying this parameter:
Having this new, more flexible, and powerful technological foundation of software rasterization is like having better tools in the workshop: it allows us to think about building more interesting things and continue improving the user experience in Typhoon. We are already exploring several ideas for the future:
In summary, the journey from our previous hardware-based rendering system to software rasterization using compute shaders has been a necessary and very rewarding evolution for Typhoon. As we saw in the introduction, the growing demands of working with increasingly large and complex point clouds pushed us to seek an alternative that would overcome existing limitations in terms of detail, performance, and customization. We found the answer in this new approach, to which we added our distinctive touch with progressive rendering.
The results fully validate the effort. We have achieved the extra speed and efficiency we were looking for, making the visualization of massive point clouds a much smoother and more agile experience, even on machines with more modest hardware. The image quality has taken a significant qualitative leap: we have largely said goodbye to annoying aliasing, so point clouds now appear smoother, better defined, and notably more realistic, revealing details that were previously lost.
But perhaps one of the most significant achievements is the total control this new system gives us over each stage of the rendering process. This flexibility not only translates into current quality and performance but, as we explored in the previous section, opens up a huge range of possibilities for continued innovation.
Ultimately, this new renderer is more than a simple update: it is a fundamental piece that enhances Typhoon, making it faster, more capable, and, above all, better prepared to face present and future challenges in the field of large-scale digital twin visualization.