Posts by TowerPower

    Interesting, and it makes the buzzing noise only when you use the mouse or keyboard to navigate through the scene, going silent as soon as the camera stops? (Switching to VR mode will actually cause the buzzing to be continuous.) My old 1080 Ti didn't make any noise other than the fans kicking in, so this certainly puts a damper on things. Sounds like it could be a problem with all the new cards though - tsk tsk, Nvidia.


    Coil Whine: https://forums.evga.com/RTX-2080-coil-whine-m2863038-p3.aspx

    It's not related to the speakers unfortunately - definitely the graphics card making the noise. Enscape is obviously taxing the cards to their limit. I've yet to experiment with manually capping the frame rate to see if that helps. I realize it's likely not an issue with the functioning of the card itself - just an annoyance, but a major one at that!


    Does no one else here own a 2080ti?

    Has anyone else experienced this issue with their new 2080 Ti? Whenever you move the mouse in the Enscape view (i.e. look around), the GPU makes a very audible buzzing noise. This only happens when the camera is moving - the instant you stop, the sound stops. I'm almost certain it's coil whine, which is a known issue that some graphics cards have, but the manufacturer sent a replacement card, and it makes exactly the same noise! (We even tried multiple towers to be sure it wasn't just the power supply.) It's crazy to think that all 2080 Tis might have this issue - can anyone else confirm? Mine is a stock Nvidia card, not one of the 3rd-party vendor versions from EVGA etc., so perhaps those don't have the same problem. I'd love to find out though before trying to swap it out for another card.

    No, this is just considering the hardware-accelerated ray tracing part. AI denoising or up-sampling is something we haven't looked into for Enscape. But since these features are proprietary Nvidia tech, in contrast to the DXR/Vulkan RTX standard (which will likely be implemented by other GPU vendors in the future), we probably won't focus on them any time soon, as we've got our own denoising tech in place.

    Focusing on implementing DXR/ray tracing makes sense. In terms of AI denoising being proprietary to Nvidia though, is that true for all of their deep learning / machine learning tech? I suppose it's similar to how CUDA is proprietary vs. OpenCL. The last time I checked, Nvidia had close to 90% market share in the GPU market (poor AMD), so it seems a shame to let the tensor cores in the RTX cards lie dormant, especially when you consider they make up close to a quarter of the chip and represent over 100 TFLOPS of compute power at lower precision. I have no idea what sort of performance jump that represents, but Nvidia makes it sound huge. Hopefully they'll continue to make it easier for developers to implement DLSS and denoising, since training on thousands of images only seems practical for big-name companies with lots of resources.
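
    For a rough sense of the jump, here's a back-of-envelope comparison using Nvidia's published 2080 Ti Founders Edition numbers (treat the figures as approximate; the 128 FLOPs/clock comes from each tensor core performing one 4x4x4 matrix fused multiply-add per clock):

```python
# Back-of-envelope throughput for the RTX 2080 Ti Founders Edition
# (published specs; real-world clocks vary, so treat as approximate).
boost_clock_hz = 1.635e9    # ~1635 MHz boost
cuda_cores = 4352
tensor_cores = 544

# FP32: each CUDA core retires one fused multiply-add (2 FLOPs) per clock.
fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12

# FP16 tensor: each tensor core does a 4x4x4 matrix FMA per clock,
# i.e. 64 multiply-adds = 128 FLOPs.
fp16_tensor_tflops = tensor_cores * 128 * boost_clock_hz / 1e12

print(f"FP32 shader throughput: {fp32_tflops:.1f} TFLOPS")        # ~14.2
print(f"FP16 tensor throughput: {fp16_tensor_tflops:.1f} TFLOPS")  # ~113.9
print(f"Tensor/shader ratio:   ~{fp16_tensor_tflops / fp32_tflops:.0f}x")
```

    So on paper the tensor cores offer roughly 8x the raw FLOPS of the regular shader cores, albeit only at FP16 and only for matrix math.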

    As stated above, RTX is a mere tool which doesn't make "beautiful reflections" magically by itself. What it does, thanks to its hardware acceleration, is speed up the calculations so much that we can spend more time computing more complex geometry for reflections, or use better lighting and texturing for reflections - which then makes those reflections look more realistic.

    Better-looking reflections are something that would be entirely possible right now without RTX technology; however, it would slow things down so much and eat so much memory that it would be unusable for most users, or for example crash for any slightly more complex scene. BTW, you might like to try our latest Preview versions - we've already enhanced the reflection quality there, which now includes fully textured reflections on Ultra quality (entirely without RTX).


    And please be aware: what you're currently seeing in RTX on/off videos is usually handpicked marketing material (or from the Battlefield V game), which you really can't compare with the scenarios Enscape is dealing with.

    Is this factoring in usage of the dedicated AI tensor cores? AI denoising and up-sampling (DLSS) have been advertised by Nvidia as the critical factors allowing ray tracing to be possible today rather than in 10 years. I'm pretty sure Enscape already uses some sort of denoising filter, but it's not AI-accelerated, correct? Image training is the time-consuming part, but it sounds like some offline render engines have had success just using Nvidia's algorithm out of the box without training it on their own images.
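
    For reference, a non-AI denoiser of the sort I mean is basically an edge-aware blur. Here's a toy sketch (illustrative only - I have no idea what filter Enscape actually uses, and a real one runs as a GPU shader, not Python):

```python
import numpy as np

def bilateral_denoise(img, radius=2, sigma_space=2.0, sigma_color=0.1):
    """Toy edge-aware blur over a grayscale image (H, W) in [0, 1].

    Averages neighbours, down-weighting pixels whose values differ a lot
    from the centre pixel, so noise is smoothed while edges survive.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            total, weight_sum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    # Spatial weight: nearby pixels count more.
                    w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_space ** 2))
                    # Range weight: similar-valued pixels count more.
                    diff = img[ny, nx] - img[y, x]
                    w_c = np.exp(-(diff * diff) / (2 * sigma_color ** 2))
                    total += img[ny, nx] * w_s * w_c
                    weight_sum += w_s * w_c
            out[y, x] = total / weight_sum
    return out
```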

    This article may provide some answers: https://gpuopen.com/deferred-path-tracing-enscape/


    It conveys just how complex Enscape already is, and the advanced rendering methods they've been using to achieve real-time path tracing on GPUs before RTX even existed. I consider myself relatively well versed in rendering lingo, and I didn't understand a good chunk of that article...


    One can infer a few things though - the fact that Enscape already utilizes path tracing in some form would appear to set it up well to take advantage of the dedicated ray-tracing cores in RTX, which are said to provide a 6-10x speedup over the last-generation cards in ray-tracing tasks. I'm sure it's a lot more complicated than that though.


    Here's another interesting article that explains a little about how someone else managed to implement full path tracing in their game: https://wccftech.com/q2vkpt-first-entirely-raytraced-game/

    "the limiting factor of path tracing is not primarily raytracing or geometric complexity... they mainly depend on the number of (indirect) light scattering computations and the number of light sources... the number of light scattering events does not depend on scene complexity... It is therefore thinkable that the techniques we use could well scale up to more recent games."

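    That scaling argument is easy to see if you count the ray casts a simple path tracer fires per pixel - a hypothetical sketch (my own simplification, not the Q2VKPT code): the per-pixel cost is set by bounce and light-sample counts, while scene size only enters through the roughly O(log n) BVH lookup inside each individual cast.

```python
def rays_per_pixel(bounces, light_samples):
    """Ray casts per pixel for a simple path tracer: each bounce costs
    one scatter ray plus `light_samples` shadow rays toward the lights.
    Note that the scene's triangle count never appears here - with a BVH,
    each individual ray cast is only ~O(log n) in scene size.
    """
    return bounces * (1 + light_samples)

# Work scales with lighting complexity, not geometric complexity:
for bounces in (1, 2, 3):
    for lights in (1, 4):
        print(f"{bounces} bounces, {lights} light samples -> "
              f"{rays_per_pixel(bounces, lights)} rays/pixel")
```
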

    They're using some cheats as well though:


    "However, while elegant and very powerful, naive path tracing is very costly and takes a long time to produce stable images. This project uses a smart adaptive filter that re-uses as much information as possible across many frames and pixels in order to produce robust and stable images."

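    The frame-reuse part is, at its core, temporal accumulation. A minimal sketch of the idea (just an exponential moving average - Q2VKPT's actual filter adds reprojection with motion vectors, variance estimation, and a spatial blur on top of this):

```python
import numpy as np

def temporal_accumulate(history, current, alpha=0.1):
    """Blend the new noisy frame into an accumulated history buffer.

    Low alpha = smoother image but more ghosting under camera motion;
    real filters reproject `history` with motion vectors first and
    clamp it against the current frame's neighbourhood to limit lag.
    """
    return (1.0 - alpha) * history + alpha * current

# Noisy 1-sample estimates of a constant signal converge over frames.
rng = np.random.default_rng(0)
truth = 0.5
history = np.zeros((4, 4))
for frame in range(60):
    noisy = truth + rng.normal(0.0, 0.2, size=(4, 4))
    history = temporal_accumulate(history, noisy)
print(f"after 60 frames: mean={history.mean():.3f} (truth={truth})")
```
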
    One thing to remember though is that while the visual quality of Lumion improved drastically with Version 8, bringing it on par with (or in some cases surpassing) Enscape, it cheats a lot to get there.


    Enscape still renders in real time, while Lumion has increasingly become like a traditional offline renderer (albeit a very fast one), taking multiple seconds to render a single image. With Enscape, what you see is what you get. With Lumion, what you see is a rough preview with poor GI and simplified geometry - not exactly real time (which is quite ironic given that Lumion was the original real-time visualization tool). To be fair, images still take around the same time to render in Lumion as they always have; it's just that rather than increase speed (which might have allowed them to support things like VR, which they've so far eschewed), they chose to increase the rendering quality instead (in an effort to reach the same visual quality as V-Ray etc.). After all, their biggest knock has always been the cartoonish visuals.


    You can't have everything in life, after all, so developers have to choose how to balance speed and quality. Increase one, and you reduce the other. While Lumion was getting prettier, traditional render engines like V-Ray have gotten much faster thanks to powerful graphics cards and things like AI denoising. The way things are heading, they'll converge at some point.


    This is why the new RTX cards are so exciting - they promise to close the gap even further, and perhaps with all the recent advances in AI, even bridge it entirely. (https://www.unrealengine.com/e…-in-real-time-ray-tracing )

    That's odd - it shouldn't be showing the skybox, and you should be picking up screen-space reflections for objects within your field of view, with simplified geometry for objects behind the camera.
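
    For anyone curious what "screen-space reflections" means mechanically: the renderer marches the reflected ray across the depth buffer and reuses whatever on-screen pixel it hits. A toy sketch of that march (illustrative only, not Enscape's implementation):

```python
import numpy as np

def ssr_march(depth, start, direction, steps=64):
    """March a reflected ray across a (H, W) depth buffer in screen space.

    Returns the first pixel whose stored depth is closer than the ray's
    depth (the surface the reflection hits), or None - the miss case,
    where a real renderer falls back to a cube map or simplified
    off-screen geometry rather than showing the raw skybox.
    """
    h, w = depth.shape
    pos = np.array(start, dtype=float)              # (x, y, ray_depth)
    step = np.array(direction, dtype=float) / steps
    for _ in range(steps):
        pos += step
        x, y = int(pos[0]), int(pos[1])
        if not (0 <= x < w and 0 <= y < h):
            return None                             # ray left the screen
        if depth[y, x] < pos[2]:
            return (x, y)                           # hit: reuse this pixel
    return None

# Tiny demo: floor at depth 5, with a nearer "wall" occupying x >= 12.
depth = np.full((16, 16), 5.0)
depth[:, 12:] = 2.0
print(ssr_march(depth, start=(2, 8, 5.0), direction=(12, 0, -3)))  # (12, 8)
```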

    You can download and install the previous version again to test it. There's a link to previous versions on the download page.


    I would hope that light leaking is getting better, not worse...


    Edit: Removed direct download link. (CM)

    And speaking of edges - having a slider for rounded edges like Lumion introduced in their last version would be another great way to control hard edges (since objects rarely have razor-sharp corners, and modeling tiny bevels usually isn't practical).

    Another informative article from the same site:


    Ray Tracing 101: What It Is & Why NVIDIA Is Betting On It


    "keeping in mind that rasterization is a hack, it’s good to periodically look at what that hack is trying to achieve and whether that hack is worth the trade-offs.

    Or to put this another way: if you’re going to put in this much effort just to cheat, maybe it would be better to put that effort into accurately rendering a scene to begin with?


    Now in 2018, the computing industry as a whole is starting to ask just that question. Ray tracing is still expensive, but then so are highly accurate rasterization methods. So at some point it may make more sense to just do ray tracing at certain points rather than to hack it. And it’s this train of thought that NVIDIA is pursuing with great gusto for Turing."


    A very informative article on how the new cards work (warning, for geeks only)


    The RTX Recap: A Brief Overview of the Turing RTX Platform


    It sounds like, to take advantage of AI denoising (which is still necessary with real-time ray tracing despite the 10x speedup in ray casting over the previous, non-RTX generation of cards), Enscape may need to submit a library of images for the neural nets to learn from - though general examples from other render engines may suffice. I know neural nets need huge content libraries to train effectively, i.e. thousands or millions of renderings.

    What we need is the recently released 5K video codec that's optimized to take full advantage of the resolution of the headset (5120x5120 px @ 60 fps):


    https://developer.oculus.com/b…rmack-5k-immersive-video/


    "The playback engine simultaneously decodes four video streams – the background and the three strips centered in your current view. As you look around, the three strips switch sources at each gop transition point."


    Most 360 videos look terribly pixelated, but these are nice and crisp.
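
    Conceptually, the view-dependent switching could look something like the sketch below (hypothetical numbers - the strip count and GOP length are my assumptions, not Oculus's): pick the strips centered on the current head yaw, but only swap sources on a GOP boundary, since a decoder can only start cleanly at a keyframe.

```python
NUM_STRIPS = 8     # hypothetical: the full 360 split into 8 vertical strips
GOP_FRAMES = 30    # hypothetical: streams can only switch at a GOP boundary

def visible_strips(yaw_degrees):
    """Map head yaw to the 3 strips centred on the current view."""
    center = round(yaw_degrees / (360 / NUM_STRIPS)) % NUM_STRIPS
    return [(center - 1) % NUM_STRIPS, center, (center + 1) % NUM_STRIPS]

active = visible_strips(0)
for frame, yaw in enumerate(range(0, 360, 4)):      # user slowly turning
    wanted = visible_strips(yaw)
    if wanted != active and frame % GOP_FRAMES == 0:
        active = wanted                             # switch on a keyframe
    # decode: the low-res background stream + the 3 strips in `active`
    print(f"frame {frame:3d}  yaw {yaw:3d}  strips {active}")
```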

    Keep in mind that the RTX showcases were handcrafted, small toy scenes with a few objects. RTX has nothing to do with reflection quality, it just speeds up any ray casts. This could also bring better reflection quality and detail for Enscape, but it's not the holy grail feature that automatically makes all reflections beautiful, fast and realistic.

    I wouldn't call this a toy scene though (granted, it's still part of a highly optimized game).

    That video is mostly targeted at gamers, for whom the RTX features aren't immediately useful, and therefore not enough to justify the high cost. Similarly, gamers are whining about the lower framerates that RTX necessitates, whereas we'll gladly sacrifice a few frames for better visuals (except in VR). It's still probably wise to wait a little while before purchasing unless you're a developer yourself, given there's no way for us to take advantage of RTX yet either, but if you really want the increased horsepower and money isn't an issue, they're much faster than the previous generation even when you exclude AI and RTX.

    Have you ordered your RTX cards yet, Enscape team? :D


    I'm looking forward to accurate shadows and reflections, and hopefully improved GI as well, and I know you guys will be the ones that push the cards to their limits :thumbup: Very exciting indeed!