Posts by Ilias Kapouranis

Reminder: If you encounter any issues with Enscape or your subscription please reach out to our dedicated support team through the Help Center or by using the Feedback button as detailed here.

    Hello there! I would like to add a detail to manage expectations regarding Enscape's displacement mapping.

    Our technique operates in screen space and doesn't alter the geometry. This means we employ some tricks so that the surfaces of displaced materials appear to have depth, but the geometry stays exactly the same. Even if you find a seamless texture with displacement, the walls will still have straight edges at their boundaries.

    A seamless texture will help hide this when two surfaces with the same material are connected but it won't help when two surfaces with different materials are next to each other. In the image below I have emphasized the edges that stay straight with displacement maps.
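    To illustrate the kind of screen-space trick involved, here is a generic parallax-mapping-style offset (an illustrative sketch, not Enscape's actual implementation): the texture lookup is shifted based on a height value and the view direction, while the mesh vertices, and therefore the silhouette, never move.

```python
def parallax_offset_uv(uv, view_dir, height, scale=0.05):
    """Shift a texture coordinate based on a height sample and the view
    direction (the classic parallax-mapping trick). Only the texture lookup
    moves; the vertices -- and so the wall's straight silhouette -- stay
    exactly where they are. Illustrative only, not Enscape's code."""
    u, v = uv
    vx, vy, vz = view_dir  # view direction in tangent space, vz != 0
    # Offset the sample point opposite the viewing angle, scaled by height.
    return (u + scale * height * vx / vz, v + scale * height * vy / vz)
```

    Viewed head-on (view_dir = (0, 0, 1)) the offset is zero; the effect only fakes depth at an angle and can never bend the actual edge of the wall.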


    Hope that helps!


    Yes, this is a copy-paste from the 3.5.6 service pack. These smaller releases contain bug fixes, so the rendering quality might have improved, but this is not stated explicitly. That's also why "Automatic disabling of ray-traced sun shadows" appears in every version's changelog.

    Hello Ofri and happy new year!


    Video with an alpha channel was never properly supported, and in the versions where it "works", it was actually "a bug" on our side, which was "fixed", so newer versions no longer show video transparency. There are discussions on how to support it properly, because transparency in videos is not well defined across formats. No ETA for when this will be implemented, though.

    Hello cookiemonster. Since HW-accelerated ray tracing is disabled, we mainly rely on screen-space reflections and a very limited software-emulated ray tracing technique. The problem with screen-space reflections is that we can only reflect what is already on the screen; for everything outside of the screen we need ray tracing. In the following video you can see how the reflections of the trees are no longer visible once the trees are no longer inside the frame.


    (Embedded YouTube video)
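    The limitation can be sketched in a few lines: a screen-space reflection marches the reflected ray through the depth buffer, and the moment the ray leaves the screen there is simply no data left to reflect. This is an illustrative sketch, not Enscape's actual code.

```python
def trace_screen_space(origin, direction, depth_buffer, steps=64):
    """Minimal screen-space ray march. Walks along the reflected direction
    in pixel space and returns the first hit still covered by the depth
    buffer, or None if the ray leaves the screen -- exactly the case where
    the tree reflections vanish. Illustrative only."""
    h = len(depth_buffer)
    w = len(depth_buffer[0])
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(steps):
        x += dx; y += dy; z += dz
        xi, yi = int(x), int(y)
        if not (0 <= xi < w and 0 <= yi < h):
            return None          # ray left the screen: no data to reflect
        if z >= depth_buffer[yi][xi]:
            return (xi, yi)      # ray hit visible geometry: reflect it
    return None
```

    Hardware ray tracing removes this restriction because it traces against the actual scene geometry instead of the on-screen depth buffer.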


    The only setting that could help you is enabling HW-accelerated ray tracing, and probably disabling HW-accelerated sun shadows (from the view you shared you won't get much benefit from them, and your GPU has 6 GB of VRAM).

    Other than that, you could change the view so that more slats are in the image and can be picked up by screen-space reflections.

    Hello cookiemonster. From the get go:
    - what is the Enscape version that you are using?
    - what GPU are you using?
    - do you have hardware accelerated ray tracing enabled?

    From the image you shared, I assume that the slats are modelled as geometry.

    One reason the reflections fade out is that the project has a lot of geometry and not every object is picked as a candidate for reflections. Since the slats are very thin, their "importance" may be lower than that of other objects, which is why we don't see the slats that are further away from the glass.

    After asking around a bit internally, I was informed that there is a workaround that you could try by setting an Enscape startup variable to choose the GPU you want. To do that:


    1. Create a text file named userPre.cfg (not userPre.cfg.txt) at the location C:\Users\your_username\AppData\Roaming\Enscape\userPre.cfg

    2. Edit that file, add the line r_vkDeviceSelectionOverride 0, and save it. This line selects the first GPU reported.

    3. Open Enscape and observe which GPU is used for rendering.

    4. If the GPU is not the one you want, close Enscape (you might also need to restart the CAD application), increase the number by 1, e.g. r_vkDeviceSelectionOverride 1, save, and try again until you reach the GPU you want.
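    If you prefer to script the steps above, here is a small sketch that writes the file. The path and variable name are the ones from the steps; the helper function itself is hypothetical, not an official Enscape tool.

```python
import os

def write_device_override(index, appdata=None):
    """Write userPre.cfg with an r_vkDeviceSelectionOverride line, mirroring
    the manual steps above. Defaults to %APPDATA%\\Enscape on Windows."""
    appdata = appdata or os.environ.get("APPDATA", ".")
    cfg_dir = os.path.join(appdata, "Enscape")
    os.makedirs(cfg_dir, exist_ok=True)
    cfg_path = os.path.join(cfg_dir, "userPre.cfg")
    with open(cfg_path, "w") as f:
        # Index 0 selects the first GPU reported, 1 the second, and so on.
        f.write("r_vkDeviceSelectionOverride %d\n" % index)
    return cfg_path
```

    Run it with increasing indices (0, 1, ...) between Enscape restarts until the GPU you want is selected.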


    Hope that helps!

    Thanks for reporting your results here!


    The OpenGL check had very little chance of working because Enscape is now based solely on Vulkan (it was based on OpenGL until about two years ago, before the ray tracing features were added). The only place we still use OpenGL is the web standalones. SketchUp's renderer is still OpenGL-based, so it makes sense that the check worked there.


    I would recommend creating a feature request based on the directions here, in case this needs UI/UX changes, and I will also raise it internally to see if we can find a robust solution that could be done in the background.

    Hello! Enscape reads the list of GPUs as reported by Windows and selects the first one that meets the requirements we have set. It is a good idea to show a list of qualifying GPUs during Enscape's start so that the user can select which one they want to use.


    I guess you could try swapping the slots your GPUs occupy on the motherboard to see if that changes the order in which they are reported to us. I don't know whether it will work, though.

    I saw in the release notes of the latest alpha that you split up the previous denoiser into a GI Denoiser and a Shadow Denoiser. Is that supposed to help with the issue from the thread or should I expect to see improvement in some other area?

    Now that we have separate denoisers we can optimize their settings for each use case. So you can expect better (but not wildly different) results in walkthrough and captures but not much difference in VR yet.

    Hello, do you notice these slowdowns only with Hardware Accelerated Ray Tracing enabled? These slowdowns are expected: whenever the camera jumps abruptly (rather than moving smoothly) from one position to another, we rebuild the acceleration structures that let us perform ray tracing on the GPU.

    If you render a still, the camera doesn't move, so fewer things need to be processed for the capture.


    If you think that this slowdown is not acceptable, I will ping Demian Gutberlet to continue the discussion so that this is reported properly.

    There is another reason I don't use VR High and Ultra and stick with Medium. It's a little hard to describe, but I'll try. Whenever I turn my head to a new spot, the image is grainier, and it takes a couple of seconds before it "loads" additional pixels and gets rid of some of that grain. Like a classic TV-show picture loading animation. It is distinct from the noise from the diffuse and specular reflections we discussed above, because that is permanent. This goes away after a few seconds of looking in the same direction. Any idea?

    This is the global illumination algorithm warming up when new geometry enters the view or the lighting of the space changes drastically. In simple terms, to be able to emulate a lot of bounces for global illumination, we store the calculated lighting for the surfaces that have entered the view and reuse that for other bounces. The stored value is updated as more rays enter the scene and light gets scattered.
    This "loading" that you see is just every new surface converging to the correct lighting after a lot of bounces. Since we trace fewer rays in VR, this convergence effect is more pronounced there, but you can see it on desktop too if you rotate or move the camera very fast and new objects come into view.
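    The warm-up can be sketched as a running average: each frame blends a new ray-traced sample into the cached per-surface lighting, so the cache converges over several frames. This is an illustrative sketch (the blend factor alpha is made up), not Enscape's actual algorithm.

```python
def update_cached_lighting(cached, new_sample, alpha=0.1):
    """Blend a new ray-traced lighting sample into the cached per-surface
    value. Over many frames the cache converges to the true lighting; the
    'loading' grain on newly visible surfaces is this convergence in
    progress. Illustrative only."""
    return cached + alpha * (new_sample - cached)
```

    After a handful of frames the cached value is still far from the true lighting, which is the grain you see; after enough frames it has effectively converged. Since VR traces fewer samples per frame, convergence takes longer and the effect is more visible.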


    If you can't manage to reproduce I can test again on the Index.

    I managed to reproduce it on an Index and filed a bug for it, thanks for forwarding your test scene again!


    It would be nice to have a packaging/export feature which would package everything for optimal performance.

    This is actually a great idea that has come up in discussions among the devs on and off, but since it is a very big feature, it hasn't been scheduled yet. What I can do is link the Enscape Roadmap Portal so that you can raise your voice about this feature there:

    Enscape Roadmap Portal (How to forward/upvote Feature/Asset Requests directly)

    Any plans to use it to improve VR performance?

    We generally don't see geometry complexity high enough to justify the time required to implement such a technique, so I can't promise anything for now. We do know the technology, that's for certain.


    But I'd like to sometimes switch to Ultra to get the prettiest picture in VR. Currently that gives me 50 FPS or so, but I don't use it because it looks worse than Medium. Ultra and High both look worse than Medium in VR.

    With 3.5 we did a lot of changes in lighting even in VR with the new Global Illumination system. Please give that a try too.


    I want a "real desktop quality" version in VR, even if I only get 3 FPS. I do architecture, so mostly I just navigate to a spot and look at it with my head still.

    That is a controversial topic, and I understand where you are coming from. In my previous role I was working on VR for mechanical engineers, and they, too, wanted the best possible image and didn't care about FPS; they just wanted to stay still and look at the point of interest. The point of controversy is that Enscape is designed to be real time and to at least try to stay at specific framerates. If we provided that "real" ultra quality, customers would think the program is just bad because it can only produce 5-10 FPS at that quality. A quality image in VR is just very hard to produce at high FPS and consistently.

    This is why only a handful of games have great graphics in VR. The effort required to reach that level of detail cannot be afforded by small studios; and those that do know exactly what geometry they use and can optimize/bake it before shipping. Enscape has to deal with anything, on the fly.


    I just tested again, the issue still is present. Maybe you could try reproducing it again?

    Is the issue still present in 3.5.2? I could retry reproducing it then because I couldn't see anything close to that behavior before. What is the resolution that you are using?

    Hello burggraben and thank you very much for the detailed explanations and comparisons! We have some things to cover, so let's go:

    - VR is much more demanding than desktop. Having two 4K screens in VR doesn't mean it just takes double the time of rendering one 4K screen on desktop. A lot of things don't scale well with resolution, so GPU utilization doesn't scale as expected every time. For this reason, we reduce the VR quality by one level compared to the desktop quality; VR Ultra quality is actually desktop High quality.


    When rendering a model at Ultra quality on a 4K desktop @ 60 Hz I typically have less than 25% GPU utilization on a 4090 to render 60 frames per second.

    - In addition, it is not just the GPU work that is doubled; there is a lot of CPU processing to prepare the GPU commands for each eye. We currently don't use any single-pass stereo technique to reduce the CPU overhead of the draw calls.


    After some more testing: it is related to ray tracing. If I turn off ray-traced sun shadows, then the Shelf 010 stops being brightly lit, and in VR the noise is noticeably reduced. When ray tracing is switched off entirely, it is reduced again.

    - The 90s TV effect that you see is actually noise from the diffuse and specular reflections in the scene. In VR we perform far fewer ray tracing calls to try to stay at a specific framerate, and that shows in these areas. Of course we apply a denoising step to the ray tracing results, but that may not be enough in some cases. That's why these artifacts are mostly resolved when you disable hardware-accelerated ray tracing.


    - DLSS is also a bad candidate for VR. DLSS produces weird artifacts that can be almost invisible on desktop but become very obvious in VR. I would suggest keeping it off, because it can also interfere with stereopsis for people with more sensitive eyesight. There are also people who don't notice anything at all.


    - Night time or areas with very low light are always hard to show noise-free; and in VR it becomes an even bigger problem.


    Since we will be resolving the issue of the washed-out image in VR in the next version, I would like to ask you to try that version and come back to us with whatever problems remain.

    So I tested it some more, and I can live without the eye calibration and just run the headset on SteamVR instead of the VarjoOpenXR engine. The only major problem left with 3.5 is the washed-out colors. Does this affect every headset or just a few? Do you need me to provide any log files, or are you able to reproduce it on your end?

    From what we have seen it affects all headsets, and we have reproduced it internally. It has been scheduled for further investigation and resolution, but unfortunately I don't have anything more to share.

    Thanks for sending your logs again; they indicate that there might be a problem and that Enscape can't find the correct runtime. This means that Lenovo VDM is uninstalled, but we don't have the correct path to the Oculus runtime. Let's clean up the different runtimes a bit.


    1. Uninstall SteamVR and Oculus.

    2. Go to the environment variables by following the steps outlined here:
    https://docs.oracle.com/en/dat…D5-48F6-8270-A27EC53807D0


    3. There are two lists of variables, one named "User variables" and one named "System variables". We will need to check in both lists that XR_RUNTIME_JSON is not present. If it is, click on it and then press the Delete button.

    4. Restart your computer.

    5. Install the Oculus software.


    6. Open Enscape.

    7. If the issue is not resolved, then install SteamVR too and set the default OpenXR runtime to SteamVR.

    8. If the issue is not resolved yet again, then I will ask you again to send a new round of feedback and logs to inspect.
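    As a quick sanity check for step 3, here is a hypothetical helper that reports whether XR_RUNTIME_JSON is set in the current process environment. It is only a convenience; the authoritative place to check and delete the variable is still the Environment Variables dialog, and a shell opened before the change won't see it.

```python
import os

def check_xr_runtime_override():
    """Report whether XR_RUNTIME_JSON is set. If it is, it overrides the
    default OpenXR runtime selection, which can point Enscape at a runtime
    that is no longer installed. Hypothetical helper, not an Enscape tool."""
    value = os.environ.get("XR_RUNTIME_JSON")
    if value is None:
        return "XR_RUNTIME_JSON is not set -- nothing to delete."
    return "XR_RUNTIME_JSON points to %s; delete it as in step 3." % value
```

    If the helper reports a path to a runtime you have uninstalled, that matches the symptom in your logs.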

    Thanks for staying with me and trying to resolve this.