Posts by Lucas Cunningham

    +1 for the ultra/extreme still settings (I think I've seen this referred to as "print quality" before). Most are fine with letting one of these renders run for a couple of minutes to produce an image that is as close to photorealistic as possible.

    I would love to be able to create a Dynamo script that utilizes Enscape tools; unfortunately there is no API, so all interaction is manual clicking. (Which means we have to rely on manual entry and not glorious scripts and automation :love:)

    Here's some pseudo-code to illustrate:

    def RenderPanorama(view):
        Enscape.NavigateTo(view.Position)
        newPanorama = Enscape.RenderPanorama(MediumQuality, view.Name + " Panorama")
        return newPanorama

    for view in FavoritedViews:
        RenderPanorama(view)

    videoOut = Enscape.RenderVideo("4K", "30fps", "BluRay", "videopath.xml")
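    To make the idea concrete, here is a runnable sketch of what a batch-rendering script could look like. The entire `EnscapeAPI` class and its methods are hypothetical stand-ins, since Enscape exposes no public API today; a real implementation would write image files instead of returning strings.

```python
# Hypothetical sketch of an Enscape batch-rendering workflow.
# EnscapeAPI and all of its methods are invented for illustration --
# Enscape currently has no public API, so this class is a mock.

class EnscapeAPI:
    """Mock of an imagined Enscape automation interface."""

    def navigate_to(self, position):
        # Move the Enscape camera to the given (x, y, z) position.
        self.position = position

    def render_panorama(self, quality, name):
        # A real API would render and save a panorama file;
        # the mock just records what it would have produced.
        return f"{name} ({quality} quality) rendered at {self.position}"


def render_panoramas(enscape, views, quality="Medium"):
    """Render one panorama per favorited view."""
    results = []
    for view in views:
        enscape.navigate_to(view["position"])
        results.append(enscape.render_panorama(quality, view["name"] + " Panorama"))
    return results


favorited_views = [
    {"name": "Lobby", "position": (0, 0, 1.6)},
    {"name": "Atrium", "position": (12, 4, 1.6)},
]
panoramas = render_panoramas(EnscapeAPI(), favorited_views)
print(panoramas)
```

    With something like this, re-rendering every favorited view after a design change would be one script run instead of an afternoon of clicking.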

    An Enscape API would open up a ton of productivity opportunities, and maybe help when you have to re-render all of your content due to a design change.

    Thanks and keep up the good work,


    iModels will be a way to bridge 3D content and design data into one common location from multiple design applications like:

    • SketchUp
    • Revit
    • MicroStation

    The benefit of this is that a program could support all these platforms and only have to maintain one connection.
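    A rough sketch of that "one connection" idea: each authoring tool exports to a common iModel-style container, and the visualizer only ever maintains one importer. All class and function names below are hypothetical illustrations, not the actual iTwin/iModel SDK.

```python
# Sketch of the single-connection benefit: N exporters feed one neutral
# format, so the visualizer maintains one code path instead of N plugins.
# Everything here is invented for illustration, not real SDK code.

class IModelDocument:
    """Neutral container for geometry + design data from any source app."""

    def __init__(self, source_app, elements):
        self.source_app = source_app
        self.elements = elements


# One thin exporter per authoring tool...
def export_from_revit(model):        return IModelDocument("Revit", model)
def export_from_sketchup(model):     return IModelDocument("SketchUp", model)
def export_from_microstation(model): return IModelDocument("MicroStation", model)


# ...but only ONE importer for the visualization side to maintain.
def visualize(doc):
    return f"Rendering {len(doc.elements)} elements exported from {doc.source_app}"


print(visualize(export_from_revit(["wall", "door", "window"])))
```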

    This new platform (iModel hub/iTwin Services) could definitely use a simple to use high quality visualization/VR software integration from a company like Enscape.

    Here is their SDK page also:


    I would like the ability to create renderings like images, panoramas, and videos from an EXE.

    Opening the EXE is often way faster than opening the Revit model; additionally, opening just the EXE runs smoother and uses fewer computer resources.

    EXEs are really easy to deal with, and this would make it so somebody without Revit experience, like somebody in marketing, could generate graphics without the need for a Revit license or the know-how to operate the program.

    Understandably, when you enter "render mode" you would then require an Enscape license, but being able to make renderings from a standalone without the need for modeling software would make Enscape even more desirable.
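    The licensing split described above could work something like this sketch: walkthroughs are free, and the license check only fires when entering render mode. This is entirely hypothetical; no such standalone renderer or licensing API exists today.

```python
# Hypothetical model of a license-gated standalone viewer: free to walk
# through, but rendering requires an Enscape license. Invented for
# illustration only -- this is not a real Enscape product or API.

class StandaloneViewer:
    def __init__(self, has_enscape_license):
        self.has_enscape_license = has_enscape_license

    def walk_through(self):
        # Navigation never needs a license.
        return "walkthrough running (no license required)"

    def render(self, output):
        # Entering "render mode" is where the license check happens.
        if not self.has_enscape_license:
            raise PermissionError("Render mode requires an Enscape license")
        return f"rendered {output}"


viewer = StandaloneViewer(has_enscape_license=True)
print(viewer.walk_through())
print(viewer.render("panorama.png"))
```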


    Diagnosing problem areas continued

    Oculus/Facebook just released a really in-depth article on how they developed the Insight system and how it works. You can find that here, if you have some reading time.

    But knowing that the system relies on SLAM (Simultaneous Localization and Mapping), I think I have found a way to diagnose problem areas for the tracking system: we can download SLAM-based applications for our smartphones that let us see which areas will register.

    Android link:…t.visualslamtool&hl=en_US

    Apple link:

    It is worth noting that these systems use different sensors and SLAM engines, however the underlying concepts are the same.

    These apps will place points at areas with high contrast that SLAM systems will be able to detect. If you notice that there aren't very many points, that would be cause for concern, because the Insight tracking system won't be able to create references.
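    The intuition behind those points can be shown with a toy example: approximate "trackable features" as pixels where the local intensity gradient is strong. Real SLAM trackers use far more sophisticated detectors (FAST/ORB corners, etc.); this is just a minimal sketch of why a plain wall gives the system nothing to hold on to.

```python
# Toy illustration of why plain surfaces track poorly: count pixels whose
# horizontal or vertical intensity gradient exceeds a threshold, as a crude
# stand-in for the feature points a SLAM system would anchor to.

def count_feature_points(image, threshold=40):
    """Count pixels with a neighboring intensity jump above `threshold`."""
    height, width = len(image), len(image[0])
    count = 0
    for y in range(height - 1):
        for x in range(width - 1):
            dx = abs(image[y][x + 1] - image[y][x])  # horizontal gradient
            dy = abs(image[y + 1][x] - image[y][x])  # vertical gradient
            if max(dx, dy) > threshold:
                count += 1
    return count


# A flat, evenly lit wall: zero gradients, so zero trackable points.
plain_wall = [[128] * 8 for _ in range(8)]

# A high-contrast checkerboard: strong gradients at every pixel.
patterned = [[0 if (x + y) % 2 else 255 for x in range(8)] for y in range(8)]

print(count_feature_points(plain_wall))  # few/no points -> poor tracking
print(count_feature_points(patterned))   # many points -> good tracking
```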

    [image: poor tracking (few reference points)]

    [image: good tracking (many reference points)]

    dfersh, I'm not too sure what the best way to do precise height *calibration is.

    Maybe try setting the touch controllers on the ground and walking around a bit during the floor position setup portion of the Guardian setup. This should theoretically give the tracking system more points of reference for where the floor is.

    The Oculus Insight tracking system is a SLAM (Simultaneous Localization and Mapping) system that relies on finding unique edges for tracking, so very basic rooms with just plain colors also aren't the best for it. Read more on that near the bottom of this article:

    One thing I haven't done but would like to try would be to make some stickers with distinct patterns on them to act as markers to assist the tracking system if it is having difficulties.

    The new tracking system is far easier to set up, but unfortunately we have run into a few more precision issues once it is up and running.

    EDIT: Changed "calculation" to "calibration"

    I think the issue is with the new Oculus Insight tracking system itself; we had the same problem when trying to use the headset under a pop-up outside. The ambient sunlight wrecked the tracking system and messed with the floor height and controller tracking. If the issue arises in a room with open blinds or curtains, close them and see if that resolves it. Basically, what happens is that the sunlight overexposes the cameras in charge of tracking, and the system loses its reference points.

    It is strange though that you mention that it is a specific problem while in the Enscape application.

    Hi, I turned auto resolution off, but my EXE window still appears to be rendered at 1080p or lower; it is really noticeable on our 65" 4K TV. Even with auto resolution on it should be rendering in 4K, because we are using a 2080 Ti.

    Are there any known causes/fixes for this?

    The memory is not the problem, the limited performance is.

    Exactly, the GPU is capable of much less on a mobile unit; the rendering was extremely foveated in one of the applications I tried (meaning the outer edges of the display rendered very pixelated).

    I know Enscape uses a relatively small amount of memory for runtime processes, ~86 MB for the .exe, but getting the same thing to work as an .apk may take more. Regardless, if the product rendered on this hardware doesn't match up with Enscape's quality standards, then it wouldn't make sense for them to develop for it at this time.

    I did some further testing yesterday with the Quest and found that it actually gives applications 2.75 GB of RAM to work with.

    With your rendering software needing some of that for normal processes (UI, navigation controls, etc.), we are then left with roughly 1.6 GB for model geometry and textures. The 1.6 GB figure is true for IrisVR, but this number could be pushed toward 2.75 GB if they optimized their runtime memory requirements. This works fine with "small" models and can actually provide a good untethered experience; however, any of your models that take more memory than that to run will crash the application.
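    As a quick back-of-the-envelope check on those figures: the 2.75 GB app budget minus an implied ~1.15 GB of runtime overhead leaves the ~1.6 GB model budget described above. The overhead figure is inferred from the post's numbers, not measured.

```python
# Back-of-the-envelope Quest memory budget, using the figures from the post:
# ~2.75 GB granted to an app, ~1.6 GB left for geometry/textures in IrisVR's
# case. The 1.15 GB runtime overhead is implied by those two numbers.

app_budget_gb = 2.75        # RAM the Quest grants a single application
runtime_overhead_gb = 1.15  # implied: UI, navigation controls, engine, etc.
model_budget_gb = app_budget_gb - runtime_overhead_gb


def model_fits(model_size_gb):
    """Will a model of this size load without crashing the app?"""
    return model_size_gb <= model_budget_gb


print(f"Budget for geometry/textures: {model_budget_gb:.2f} GB")
print(model_fits(1.2))  # small model: fits in the budget
print(model_fits(2.0))  # large model: would crash the application
```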

    One way to maybe work around this would be to cloud-render the image going to the headset, which is subject to latency (which is very bad for VR). Another would be to look at something like a limited draw distance, where the whole file is stored on the device but only the portions nearest to the user are loaded into memory.
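    The limited-draw-distance idea could be sketched as distance-based chunk loading: split the model into spatial chunks and keep only those within a radius of the user resident in memory. The chunk sizes and radius below are illustrative numbers, not anything Enscape actually does.

```python
# Sketch of the "limited draw distance" workaround: the full model stays on
# disk, but only chunks within a radius of the user occupy memory.
# Grid size and radius are made-up numbers for illustration.

import math


def chunks_to_load(chunk_centers, user_pos, radius):
    """Return the chunks whose center lies within `radius` of the user."""
    loaded = []
    for center in chunk_centers:
        if math.dist(center, user_pos) <= radius:
            loaded.append(center)
    return loaded


# A 5x5 grid of chunks spaced 10 m apart; user at the origin, 15 m radius.
chunks = [(x * 10.0, y * 10.0) for x in range(-2, 3) for y in range(-2, 3)]
resident = chunks_to_load(chunks, (0.0, 0.0), 15.0)
print(f"{len(resident)} of {len(chunks)} chunks in memory")
```

    As the user moves, chunks falling outside the radius get evicted and new ones stream in, keeping peak memory bounded regardless of total model size.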

    Like Demian said, I think the Focus and Quest are great first steps but they still have their limitations.

    +1 for Rift S support; ours should be shipping May 21st :). We are hoping that it will be supported in time for a presentation that is coming up fairly soon.

    It looks like Oculus has tried to make the migration process as easy as possible, if it is a process at all. I did a little research, and here is a promising screenshot straight from the Oculus developers site.

    Can anyone confirm when the Rift S will be supported, or if it will just already work with the software?

    -Thanks and keep up the good work.:thumbup: