Posts by TowerPower

    You should already have my upvotes from an earlier thread, but just in case, here's me voting again :thumbsup:


    Hopefully this topic can find its way to the top of your priority list in the near future... it's been languishing for a while and really is important.

    Interesting, and it makes the buzzing noise only when you move the mouse or keyboard to navigate through the scene, going silent as soon as the camera stops? (Switching to VR mode will actually cause the buzzing to be continuous.) My old 1080 Ti didn't make any noise other than the fans kicking in, so this certainly puts a damper on things. Sounds like it could be a problem with all the new cards though - tsk tsk, Nvidia.


    Coil Whine: https://forums.evga.com/RTX-2080-coil-whine-m2863038-p3.aspx

    It's not related to the speakers unfortunately - definitely the graphics card making the noise. Enscape is obviously taxing the cards to their limit. I've yet to experiment with manually capping the frame rate to see if that helps. I realize it's likely not an issue with the functioning of the card itself - just an annoyance, but a major one at that!


    Does no one else here own a 2080ti?

    Has anyone else experienced this issue with their new 2080ti? Whenever you move the mouse in the Enscape view (i.e. look around), the GPU makes a very audible buzzing noise. This only happens while the camera is moving - the instant you stop, the sound stops. I'm almost certain it's coil whine, which is a known issue that some graphics cards have, but the manufacturer sent a replacement card, and it makes exactly the same noise! (We even tried multiple towers to be sure it wasn't just the power supply.) It's crazy to think that all 2080tis might have this issue - can anyone else concur? Mine is a stock Nvidia card, not one of the 3rd party vendor versions from EVGA etc., so perhaps those don't have the same problem. I'd love to find out, though, before trying to swap it out for another card.

    No, this is just considering the hardware accelerated ray tracing part. AI denoising or up-sampling is something we haven't looked into for Enscape. But since these features are proprietary Nvidia tech in contrast to the DXR/Vulkan RTX standard (which will likely be implemented by other gpu vendors in the future) we probably won't focus on these any time soon, as we've got our own denoising tech in place.

    Focusing on implementing DXR/raytracing makes sense. In terms of AI denoising being proprietary to Nvidia though, is that true for all of their deep learning / machine learning tech? I suppose it's similar to how CUDA is proprietary vs. OpenGL. The last time I checked, though, Nvidia had close to 90% market share in the GPU market (poor AMD), so it seems a shame to let the tensor cores in the RTX cards lie dormant, especially when you consider they make up close to a quarter of the chip and represent hundreds of TFLOPS of compute power at lower floating-point precisions. I have no idea what sort of performance jump that represents, but Nvidia makes it sound huge. Hopefully they'll continue to make it easier for developers to implement DLSS and denoising, since training on thousands of images only seems practical for big-name companies with lots of resources.
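    For a rough sense of where "hundreds of TFLOPS" comes from, here's a back-of-envelope calculation. The figures (544 tensor cores, 64 FP16 FMAs per core per clock, a ~1.5 GHz boost clock) are approximate numbers commonly cited for the 2080 Ti, not official specs, and the variable names are my own:

```python
# Back-of-envelope FP16 tensor-core throughput estimate (approximate figures).
tensor_cores = 544            # tensor cores commonly cited for the RTX 2080 Ti
fma_per_core_per_clock = 64   # FP16 fused multiply-adds per core per clock
ops_per_fma = 2               # each FMA counts as a multiply plus an add
boost_clock_hz = 1.545e9      # ~1545 MHz boost clock

tflops = tensor_cores * fma_per_core_per_clock * ops_per_fma * boost_clock_hz / 1e12
print(round(tflops, 1))  # roughly 107.6 TFLOPS of FP16 tensor throughput
```

So it's closer to ~100 TFLOPS at FP16 than literal "hundreds", but still an order of magnitude above the card's regular FP32 shader throughput.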

    As stated above, RTX is a mere tool that doesn't make "beautiful reflections" magically by itself. What it does, thanks to its hardware acceleration, is speed up the calculations so much that we can spend more time computing more complex geometry for reflections, or use better lighting and texturing for reflections. Which then makes those reflections look more realistic.

    Better-looking reflections are something that'd be entirely possible right now without RTX technology; however, it'd slow things down so much and eat so much memory that it'd be unusable for most users, or crash on any slightly more complex scene. BTW, you might like to try our latest Preview versions - we've already enhanced the reflection quality there, which now includes fully textured reflections on Ultra quality (entirely without RTX).


    And please be aware: what you're currently seeing in RTX on/off videos is usually handpicked marketing material (or from the Battlefield V game), which you really can't compare with the scenarios Enscape is dealing with.

    Is this factoring in usage of the dedicated ai tensor cores? AI denoising and up-sampling (DLSS) have been advertised by Nvidia as the critical factors allowing ray tracing to be possible today rather than in 10 years. I'm pretty sure Enscape already uses some sort of denoising filter, but it's not AI accelerated, correct? Image training is the time consuming part, but it sounds like some offline render engines have had success just using Nvidia's algorithm out of the box without training it on their own images.

    This article may provide some answers: https://gpuopen.com/deferred-path-tracing-enscape/


    It conveys just how complex Enscape already is, and the advanced rendering methods they've been using to achieve real time path tracing in gpus before RTX even existed. I consider myself relatively well versed in rendering lingo, and I didn't understand a good chunk of that article...


    One can infer a few things though - the fact that Enscape already utilizes path tracing in some form would appear to set it up well to take advantage of the dedicated ray tracing cores in RTX, which are said to provide a 6-10x speedup over the last generation cards in ray tracing tasks. I'm sure it's a lot more complicated than that though.


    Here's another interesting article that explains a little how someone else managed to implement full path tracing in their game: https://wccftech.com/q2vkpt-first-entirely-raytraced-game/

    "the limiting factor of path tracing is not primarily raytracing or geometric complexity... they mainly depend on the number of (indirect) light scattering computations and the number of light sources... the number of light scattering events does not depend on scene complexity... It is therefore thinkable that the techniques we use could well scale up to more recent games."
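    The claim in that quote - that cost tracks light-scattering events, not scene complexity - can be sanity-checked with a toy cost model. This is purely my own illustration (the function and all numbers are made up), assuming each ray cast through a BVH acceleration structure costs roughly log2 of the triangle count:

```python
import math

def frame_cost(pixels, samples_per_pixel, bounces, triangles):
    """Toy operation count for one path-traced frame.

    Assumes a BVH acceleration structure, so a single ray cast
    costs ~log2(triangles) node tests; total work is dominated by
    the number of light-scattering events, not scene size.
    """
    rays_per_path = bounces + 1  # camera ray plus one ray per bounce
    ray_casts = pixels * samples_per_pixel * rays_per_path
    return ray_casts * math.log2(triangles)

# Doubling the geometry barely moves the cost...
small = frame_cost(1920 * 1080, 1, 4, 1_000_000)
big = frame_cost(1920 * 1080, 1, 4, 2_000_000)
print(big / small)   # ~1.05

# ...while doubling the bounce count scales it nearly linearly.
deep = frame_cost(1920 * 1080, 1, 8, 1_000_000)
print(deep / small)  # 1.8
```

Which is exactly why the Q2VKPT devs think the technique could scale to heavier games: more triangles only grow the logarithmic term.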


    They're using some cheats as well though:


    "However, while elegant and very powerful, naive path tracing is very costly and takes a long time to produce stable images. This project uses a smart adaptive filter that re-uses as much information as possible across many frames and pixels in order to produce robust and stable images."
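    The simplest version of that re-use idea is temporal accumulation: blend each new noisy frame into a running history buffer. The sketch below is my own minimal illustration of that general technique, not Q2VKPT's actual filter (theirs also reprojects history through camera motion and rejects stale samples):

```python
import numpy as np

def temporal_accumulate(history, noisy_frame, alpha=0.1):
    """Exponential moving average of frames.

    Low alpha keeps more history (smoother but prone to ghosting);
    high alpha favours the new frame (noisier but more responsive).
    """
    if history is None:
        return noisy_frame.copy()
    return (1.0 - alpha) * history + alpha * noisy_frame

# Simulate a static 4x4 patch whose true value is 0.5,
# sampled each frame with heavy Monte Carlo noise.
rng = np.random.default_rng(0)
history = None
for _ in range(200):
    noisy = 0.5 + rng.normal(0.0, 0.2, size=(4, 4))
    history = temporal_accumulate(history, noisy)
print(float(history.mean()))  # converges near 0.5, far less noisy than one frame
```

The catch, as the quote hints, is motion: the moment the camera or an object moves, the history no longer lines up with the new frame, which is where the "smart adaptive" part of their filter comes in.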

    One thing to remember though is that while the visual quality of Lumion improved drastically with Version 8, bringing it on par with (or in some cases surpassing) Enscape, it cheats a lot to get there.


    Enscape still renders in realtime, while Lumion has increasingly become like a traditional offline renderer (albeit a very fast one), taking multiple seconds to render a single image. With Enscape, what you see is what you get. With Lumion, what you see is a rough preview with poor GI and simplified geometry - not exactly realtime (which is quite ironic given that Lumion was the original realtime visualization tool). To be fair, images still take around the same time to render in Lumion as they always have; it's just that rather than increase speed (which might have allowed them to support things like VR, which they've so far eschewed), they chose to increase the rendering quality instead (in an effort to reach the same visual quality as V-Ray etc.). After all, their biggest knock has always been the cartoonish visuals.


    You can't have everything in life, after all, so developers have to choose how to balance speed and quality. Increase one, and you reduce the other. While Lumion was getting prettier, traditional render engines like V-Ray have gotten much faster thanks to powerful graphics cards and things like AI denoising. The way things are heading, they'll converge at some point.


    This is why the new RTX cards are so exciting - they promise to close the gap even further, and perhaps with all the recent advances in AI, even bridge it entirely. (https://www.unrealengine.com/e…-in-real-time-ray-tracing )

    That's odd, it shouldn't be showing the skybox, and you should be picking up screen space reflections for objects within your field of view with simplified geometry for objects behind the camera.

    You can download and install the previous version again to test it. There's a link to previous versions on the download page.


    I would hope that light leaking is getting better, not worse...


    Edit: Removed direct download link. (CM)