NVIDIA RTX


    Is this factoring in usage of the dedicated AI Tensor Cores? AI denoising and up-sampling (DLSS) have been advertised by Nvidia as the critical factors that make real-time ray tracing possible today rather than in ten years. I'm pretty sure Enscape already uses some sort of denoising filter, but it's not AI accelerated, correct? Training on images is the time-consuming part, but it sounds like some offline render engines have had success just using Nvidia's algorithm out of the box, without training it on their own images.

    No, this is just considering the hardware-accelerated ray tracing part. AI denoising and up-sampling are something we haven't looked into for Enscape. Since these features are proprietary Nvidia tech, in contrast to the DXR/Vulkan ray tracing standard (which other GPU vendors will likely implement in the future), we probably won't focus on them any time soon, as we've got our own denoising tech in place.
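    Conventional (non-AI) real-time denoisers are typically edge-aware filters: they smooth noise in flat regions while keeping geometric and shading edges intact. Purely as an illustration of that idea (not Enscape's actual implementation, which isn't public; the function name and parameters below are made up), here is a minimal bilateral filter sketch:

    ```python
    import numpy as np

    def bilateral_denoise(img, radius=2, sigma_s=1.5, sigma_r=0.2):
        """Edge-aware smoothing: each pixel becomes a weighted average of its
        neighborhood, weighted by both spatial distance (sigma_s) and
        intensity difference (sigma_r), so strong edges are preserved."""
        h, w = img.shape
        out = np.zeros_like(img)
        for y in range(h):
            for x in range(w):
                # Clamp the neighborhood window to the image bounds.
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                patch = img[y0:y1, x0:x1]
                yy, xx = np.mgrid[y0:y1, x0:x1]
                # Gaussian falloff over spatial distance...
                spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
                # ...and over intensity difference (this is what preserves edges).
                range_w = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
                weights = spatial * range_w
                out[y, x] = np.sum(weights * patch) / np.sum(weights)
        return out
    ```

    An AI denoiser replaces these hand-tuned weights with a network trained on noisy/clean image pairs, which is where the training-data cost discussed above comes in.
    
    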


    Focusing on implementing DXR/ray tracing makes sense. In terms of AI denoising being proprietary to Nvidia, though, is that true for all of their deep learning / machine learning tech? I suppose it's similar to how CUDA is proprietary vs. OpenCL. The last time I checked, though, Nvidia had close to 90% market share in the GPU market (poor AMD), so it seems a shame to let the Tensor Cores in the RTX cards lie dormant, especially when you consider they make up close to a quarter of the chip and represent hundreds of TFLOPS of compute power at lower floating-point precisions. I have no idea what sort of performance jump that represents, but Nvidia makes it sound huge. Hopefully they'll continue to make it easier for developers to implement DLSS and denoising, since training on thousands of images only seems practical for big-name companies with lots of resources.
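    The "hundreds of TFLOPS at lower precision" figure can be sanity-checked with standard ops-per-clock accounting against the published reference specs of one Turing card (the RTX 2080 Ti); nothing here is Enscape-specific:

    ```python
    # Nvidia reference-board specs for the RTX 2080 Ti (Turing).
    tensor_cores = 544           # Tensor Cores on the full board
    fma_per_core_per_clock = 64  # FP16 fused multiply-adds per Tensor Core per clock
    ops_per_fma = 2              # one multiply + one add count as two ops
    boost_clock_ghz = 1.545      # reference boost clock

    # Peak FP16 tensor throughput = cores * FMAs/clock * 2 ops/FMA * clock rate.
    tflops_fp16_tensor = (tensor_cores * fma_per_core_per_clock
                          * ops_per_fma * boost_clock_ghz) / 1000
    print(f"{tflops_fp16_tensor:.1f} TFLOPS")  # ≈ 107.6 TFLOPS
    ```

    So a single card is on the order of 100+ TFLOPS of FP16 tensor throughput, versus roughly 13 TFLOPS of general FP32 shader compute on the same chip, which is why Nvidia's marketing numbers for the Tensor Cores sound so large.
    
    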