VR performance

  • We are trying to push the boundaries of our VR experience.

    I am creating a detailed city scene in which we can place our temporary structures, so we can quickly create renders, animations, and walkthroughs.

    We already discussed this a bit in this thread:

    separate objects or one big one

    But does the culling also work when I have a family with a shared object?
    For example, at the moment I am working on some train tracks. I place a beam every 500 mm or so...
    In the family this beam is set to shared. So does Enscape render the complete track at once, or does it only render the elements that are in the viewing area?

    And how does this work with linked files?

  • Hey Arno - Enscape renders everything in the scene that's visible in the design application. Exceptions: Lines, Model Lines, and analytic objects (Section Boxes, Scope Boxes, etc.). I suggest doing what's best for Revit and making exceptions only if it starts to cause issues with Enscape. IMO, render from a view that has everything preemptively turned off except geometry, so Enscape doesn't have to think about it (only to then not render it).

  • I never use in-place families...
    So it is a shared family loaded into a family and then into the project (in this example).

    But I can't see how or what Enscape renders... Phil, you say to try this, but I can't check performance...
    I would love to know these kinds of things, and what is best to render.

  • Mmmm, all of this sounds interesting for gaining VR performance. A while ago, when I modelled Unreal Engine projects for VR, I used several techniques to maintain the holy-grail 11 ms frame time / 72 fps needed to avoid motion sickness. The main one was the engine's frustum culling, which avoids rendering anything outside the camera view (for direct rendering, though not for reflections). But if one actor, a big one, had only a part in the view and the rest outside, then all the faces of that actor went through the rendering pipeline. This is the reason to use many individual actors. Other tricks were Level of Detail (LOD) definitions and Hierarchical Level of Detail (HLOD, very useful for unifying many actors under a single one when certain distance-to-camera conditions occur). I assume Enscape uses less aggressive tricks, but it could be useful in the future to give us a layer to optimize massive scenes this way, especially if some day we get the chance to package APKs for uploading to a Quest Pro or Quest 3 for standalone playing.
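
    The frustum-culling point above (a big actor that is only partly in view still costs all of its faces, while small actors can be culled individually) can be sketched with a toy sphere-vs-frustum test. This is illustrative Python, not how Enscape or Unreal actually implement culling; the frustum planes and the 200 m "track" are made-up numbers:

```python
# Illustrative sphere-vs-frustum culling test (not real engine code).
# Shows why many small objects cull better than one big one:
# a single huge bound counts as "visible" whenever any part of it is in view.
from dataclasses import dataclass

@dataclass
class Sphere:
    cx: float; cy: float; cz: float; r: float  # bounding sphere: center + radius

@dataclass
class Plane:
    nx: float; ny: float; nz: float; d: float  # nx*x + ny*y + nz*z + d = 0, normal points inward

def visible(s: Sphere, planes) -> bool:
    """Cull the sphere as soon as it lies fully outside any frustum plane."""
    for p in planes:
        if p.nx * s.cx + p.ny * s.cy + p.nz * s.cz + p.d < -s.r:
            return False
    return True

# A crude frustum: camera at the origin looking down +x.
frustum = [
    Plane( 1,  0, 0,  -1),   # near plane at x = 1
    Plane(-1,  0, 0, 100),   # far plane at x = 100
    Plane( 1,  1, 0,   0),   # left
    Plane( 1, -1, 0,   0),   # right
]

# 200 m of track: one big object vs. the same track split into 1 m segments.
whole_track = Sphere(100, 0, 0, 100)                        # one bound spanning everything
segments = [Sphere(x + 0.5, 0, 0, 0.5) for x in range(200)]

drawn_whole = 1 if visible(whole_track, frustum) else 0
drawn_segments = sum(visible(s, frustum) for s in segments)
print(drawn_whole, "big object drawn;", drawn_segments, "of", len(segments), "segments drawn")
```

    With one big object, the whole track is submitted to the renderer whenever any part of it overlaps the frustum; split into segments, everything behind the far plane drops out, which is the argument for separate objects over one big one.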