Posts by TowerPower

    This article may provide some answers: https://gpuopen.com/deferred-path-tracing-enscape/


    It conveys just how complex Enscape already is, and the advanced rendering methods they've been using to achieve real-time path tracing on GPUs before RTX even existed. I consider myself relatively well versed in rendering lingo, and I didn't understand a good chunk of that article...


    One can infer a few things though - the fact that Enscape already utilizes path tracing in some form would appear to set it up well to take advantage of the dedicated ray tracing cores in RTX, which are said to provide a 6-10x speedup over the last generation cards in ray tracing tasks. I'm sure it's a lot more complicated than that though.


    Here's another interesting article that explains a little about how someone else managed to implement full path tracing in their game: https://wccftech.com/q2vkpt-first-entirely-raytraced-game/

    "the limiting factor of path tracing is not primarily raytracing or geometric complexity... they mainly depend on the number of (indirect) light scattering computations and the number of light sources... the number of light scattering events does not depend on scene complexity... It is therefore thinkable that the techniques we use could well scale up to more recent games."


    They're using some cheats as well though:


    "However, while elegant and very powerful, naive path tracing is very costly and takes a long time to produce stable images. This project uses a smart adaptive filter that re-uses as much information as possible across many frames and pixels in order to produce robust and stable images."

    One thing to remember though is that while the visual quality of Lumion improved drastically with Version 8, bringing it on par with (or in some cases surpassing) Enscape, it cheats a lot to get there.


    Enscape still renders in realtime, while Lumion has increasingly become like a traditional offline renderer (albeit a very fast one), taking multiple seconds to render a single image. With Enscape, what you see is what you get. With Lumion, what you see is a rough preview with poor GI and simplified geometry - not exactly realtime (which is quite ironic given that Lumion was the original realtime visualization tool). To be fair, images still take around the same time to render in Lumion as they always have; it's just that rather than increase speed (which might have allowed them to support things like VR, which they've so far eschewed), they chose to increase the rendering quality instead (in an effort to reach the same visual quality as V-Ray etc.). After all, their biggest knock has always been the cartoonish visuals.


    You can't have everything in life, after all, so developers have to choose how to balance speed and quality. Increase one, and you reduce the other. While Lumion was getting prettier, traditional render engines like V-Ray have gotten much faster thanks to powerful graphics cards and things like AI denoising. The way things are heading, they'll converge at some point.


    This is why the new RTX cards are so exciting - they promise to close the gap even further, and perhaps with all the recent advances in AI, even bridge it entirely. (https://www.unrealengine.com/e…-in-real-time-ray-tracing )

    That's odd - it shouldn't be showing the skybox; you should be picking up screen-space reflections for objects within your field of view, with simplified geometry for objects behind the camera.
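    For context, realtime reflections typically fall back through a chain of sources. A rough sketch of that selection logic - purely illustrative, not Enscape's actual pipeline:

        def reflection_source(hit_on_screen, proxy_hit):
            # Typical realtime fallback chain: use screen-space data when the
            # reflected point is visible in the current frame, fall back to
            # simplified proxy geometry when it isn't, and only use the skybox
            # when nothing else is available.
            if hit_on_screen:
                return "screen-space reflection"
            if proxy_hit:
                return "simplified proxy geometry"
            return "skybox"

        # Seeing the skybox for an object behind the camera suggests the
        # proxy-geometry step is being skipped somewhere.
        print(reflection_source(hit_on_screen=False, proxy_hit=True))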

    You can download and install the previous version again to test it. There's a link to previous versions on the download page.


    I would hope that light leaking is getting better, not worse...


    Edit: Removed direct download link. (CM)

    And speaking of edges - having a slider for rounded edges like Lumion introduced in their last version would be another great way to control hard edges (since objects rarely have razor-sharp corners, and modeling tiny bevels usually isn't practical).

    Another informative article from the same site:


    Ray Tracing 101: What It Is & Why NVIDIA Is Betting On It


    "keeping in mind that rasterization is a hack, it’s good to periodically look at what that hack is trying to achieve and whether that hack is worth the trade-offs.

    Or to put this another way: if you’re going to put in this much effort just to cheat, maybe it would be better to put that effort into accurately rendering a scene to begin with?


    Now in 2018, the computing industry as a whole is starting to ask just that question. Ray tracing is still expensive, but then so are highly accurate rasterization methods. So at some point it may make more sense to just do ray tracing at certain points rather than to hack it. And it’s this train of thought that NVIDIA is pursuing with great gusto for Turing."


    A very informative article on how the new cards work (warning, for geeks only)


    The RTX Recap: A Brief Overview of the Turing RTX Platform


    Sounds like, to take advantage of the AI denoising (which is still necessary with realtime raytracing despite the 10x speedup in raycasting over the previous non-RTX generation of cards), Enscape may need to submit a library of images for the neural nets to learn from (though general examples from other render engines may suffice, since neural nets need huge content libraries to train effectively, i.e. thousands or millions of renderings).





    What we need is the recently released 5K video codec that's optimized for taking full advantage of the resolution of the headset (5120x5120 px @ 60fps).


    https://developer.oculus.com/b…rmack-5k-immersive-video/


    "The playback engine simultaneously decodes four video streams – the background and the three strips centered in your current view. As you look around, the three strips switch sources at each gop transition point."


    Most 360 videos look terribly pixelated, but these are nice and crisp.

    Keep in mind that the RTX showcases were handcrafted, small toy scenes with a few objects. RTX has nothing to do with reflection quality; it just speeds up ray casts. This could also bring better reflection quality and detail for Enscape, but it's not the holy grail feature that automatically makes all reflections beautiful, fast and realistic.

    I wouldn't call this a toy scene though (granted it's still a part of a highly optimized game).



    That video is mostly targeted at gamers, for whom the RTX features aren't immediately useful, and therefore not enough to justify the high cost. Similarly, gamers are whining about the lower framerates that RTX necessitates, whereas we'll gladly sacrifice a few frames for better visuals (except in VR). It's still probably wise to wait a little while before purchasing unless you're a developer yourself, given there's no way for us to take advantage of RTX yet either, but if you really want the increased horsepower and the money isn't an issue, they're much faster than the previous generation even when you exclude AI and RTX.

    Have you ordered your RTX cards yet, Enscape team? :D


    I'm looking forward to accurate shadows and reflections, and hopefully improved GI as well, and I know you guys will be the ones to push the cards to their limits :thumbup: Very exciting indeed!


    Tabletop mode would be awesome and something I suggested a long time ago because of how cool it looks in VR (anyone who has tried Google Earth VR knows this).


    IrisVR's is pretty limited as I recall though. It would be better if we could manually adjust the scale and choose whether there's any background at all (I don't think a room is necessary, or even a table necessarily, although it's good to have the option to bring the model up to an elevated level). Perhaps you could use the controllers to scale the model up and down, similar to how it works with Tilt Brush and other software.
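    A two-handed grab gesture would be the natural fit: scale the model by the change in distance between the controllers. A rough sketch of the math (positions and names are illustrative, not any particular SDK's API):

        import math

        def two_hand_scale(prev_left, prev_right, left, right, scale):
            # Scale by the ratio of controller separation: pulling your hands
            # apart grows the model; bringing them together shrinks it.
            prev_sep = math.dist(prev_left, prev_right)
            if prev_sep < 1e-6:             # guard against divide-by-zero
                return scale
            return scale * (math.dist(left, right) / prev_sep)

        # Pulling the hands apart from 0.4 m to 0.8 m doubles the model scale.
        print(two_hand_scale((0, 1, 0), (0.4, 1, 0), (0, 1, 0), (0.8, 1, 0), 1.0))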

    Thanks! Right now we only have Quadro cards, but I'm planning on a new workstation with a GTX (or RTX) when we buy the VR.


    Any news on the Vive (or Vive Pro) supporting "real" walkthrough now? I don't think we'll be using roomscale VR, so perhaps Oculus is the way to go. Still, it's a couple of years old and a new version should probably be released soon, because it feels like the resolution will make a big difference? Isn't the Vive Pro a big step up from the Vive? Maybe we'll wait until spring and then buy an Oculus 2 and an RTX card.

    Don't count on the new Oculus Rift showing up this Spring - most people agree it won't be coming out until at least 2020.


    This Spring will likely see the release of their Santa Cruz hybrid headset, which is standalone like the Oculus Go, but also includes positional tracking and dual controllers like the Rift (through camera tracking built into the headset, eliminating the need for external sensors, similar to how the Samsung Odyssey and other Windows MR headsets work). The big difference is that, like the Go, it's a mobile headset (i.e. no cord or computer powering it), so performance will be an order of magnitude less, and any version of Enscape capable of running on it would likely be more cartoony than realistic.

    Sure, it's like you say. I can't see any issue with this; it just means better performance and better rendering quality for these GPUs, as it should be, while older GPUs keep working just as they do today :)

    Just one last question: nowadays, would you recommend waiting to buy a new 2080 Ti (which seems to offer not-so-good performance for the price), or should I buy an Asus Strix 1080 Ti 11 GB for about 750€ (new) [$867.94]? My 980 Ti is a good GPU, but I think it's not quick enough for VR rendering. Thank you a lot.

    You should really do some research on the new generation of cards; they're fundamentally different, with roughly a quarter of the chip dedicated to ray tracing and another quarter to AI. So while the traditional shader portion of the chip may only be incrementally better than Pascal (i.e. the 1080 Ti) and not worth the upgrade, that only represents half the card.




    When you combine all these new technologies within RTX, real-time ray tracing performance increases dramatically. Just check out the graph below: 13 TFLOPS for the 1080 Ti vs. 80 TFLOPS for the 2080 Ti (i.e. about 6 times faster!).



    Now of course, the major caveat is that this all depends on applications actually taking advantage of the new cores, which will take time, but given NVIDIA's dominance in the GPU market, I don't see any reason to believe that developers won't try to utilize all the new free horsepower.

    I was actually very impressed by the grass, it looks just like a dying lawn in real life. You could ask for it to be watered better I suppose ;)