Video Animation Quality

Reminder: If you encounter any issues with Enscape or your subscription, please reach out to our dedicated support team through the Help Center or by using the Feedback button, as detailed here.
  • Hi All,

    I am producing a few video flythroughs with Enscape for Revit. I have experience with still renders, and I'm noticing that the quality of the video output does not match the still-image quality. The lighting does not align, especially the subtle shadows and highlights that you can adjust for still captures and see in the Enscape viewport. Once the video is rendered, the lighting is more washed out and loses the depth of shadow you get when rendering a still with the same settings. Is this a known limitation of Enscape video rendering?

    • Official Post

    david.mcgowan90, there can be some differences in quality between still renders and videos. In this case, is there a chance you could provide a video as an example, perhaps alongside some stills? Also, just to make sure, please install our latest release, Enscape 2.8.1, if you haven't already.

  • To this same point (though I haven't done A/B comparisons of stills vs. keyframes), one thing I find immediately evident in video outputs is the "speed" it takes for the automatic exposure to "catch up" with a physical transition* between scenes of contrasting brightness.

    *Time-stepped stills seem to handle it well enough when "only" the lighting is changing, but add geometry changes into the mix and it gets laggy.

    I used to know the correct optical term but have forgotten it; essentially it's the effect we see a lot in FPS games nowadays when moving indoors/outdoors. And whilst it's "technically" correct (to experience a bloom/blinding), and arguably perfect in a walkaround, in an output video, where the movement might be exaggerated (faster), the renderer sometimes appears to "stutter" or "skip" in resolving the difference, resulting in over-blown or too-dark scenes, undesirably, quite a few frames later.

    DOF transitions seem to suffer similarly.

    Is it that each frame is "only" given a set time to finalise/gather? It's certainly more noticeable when attempting fast changes or cuts.
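    The adaptation lag described above can be sketched with a simple model. To be clear, this is a generic auto-exposure sketch, not Enscape's actual implementation; the time constant `tau`, the luminance values, and the 5% settling threshold are all made-up illustrative numbers:

```python
import math

# Generic auto-exposure sketch (NOT Enscape's implementation): adapted
# luminance is typically an exponential moving average of the current scene
# luminance, so a sudden indoor -> outdoor jump takes many frames to settle.

def adapt_exposure(adapted, target, dt, tau=0.8):
    """Exponentially blend adapted luminance toward the target.

    tau is an assumed adaptation time constant in seconds.
    """
    alpha = 1.0 - math.exp(-dt / tau)
    return adapted + (target - adapted) * alpha

# Camera cuts from a dim interior (luminance 0.05) to a bright exterior (1.0).
adapted = 0.05
dt = 1.0 / 30.0          # 30 fps video
frames_to_settle = 0
while abs(adapted - 1.0) > 0.05:   # "settled" = within 5% of target
    adapted = adapt_exposure(adapted, 1.0, dt)
    frames_to_settle += 1

print(frames_to_settle)  # ~71 frames (over two seconds at 30 fps) stay off-exposed
```

    At playback speed a walkthrough masks those frames, but a fast cut in an exported video leaves the first couple of seconds visibly over- or under-exposed, which matches the "stutter" described above.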

    For now we've resolved to speed up certain scenes in post (which is where I believe most video chopping should happen anyway). But instead of requiring us to artificially stretch our keyframe time markers (and increase the number of in-fill frames to be rendered), could there be a slider adjustment for "per-frame finalisation" on video outputs where, GPU memory (and project time) allowing, we might set a "number of passes" to work in conjunction with our view setting profile(s)?

    i.e. a bit like Handbrake's predefined profiles...

    view setting: "ULTRA" + video setting: "FAST" output = min. compute-time per frame

    view setting: "ULTRA" + video setting: "SLOW" output = max. compute-time per frame

    view setting: "DRAFT" + video setting: "FAST" output = min. compute-time per frame

    view setting: "DRAFT" + video setting: "SLOW" output = max. compute-time per frame
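    As a rough illustration of the trade-off being proposed (purely hypothetical numbers; `render_cost`, `secs_per_pass`, and a per-frame passes setting are not real Enscape features):

```python
# Hypothetical back-of-envelope comparison: stretching keyframe timing so the
# exposure can settle (more frames rendered, sped back up in post) versus a
# per-frame "number of passes" setting (same frame count, more compute each).

def render_cost(duration_s, fps, passes_per_frame, secs_per_pass=0.5):
    """Total render time in seconds; secs_per_pass is an assumed constant."""
    frames = int(duration_s * fps)
    return frames * passes_per_frame * secs_per_pass

# Option A: stretch a 10 s scene to 20 s, 1 pass per frame, speed up 2x in post.
cost_stretch = render_cost(20, 30, passes_per_frame=1)

# Option B: keep the 10 s scene but give each frame 2 passes to converge.
cost_passes = render_cost(10, 30, passes_per_frame=2)

print(cost_stretch, cost_passes)  # 300.0 300.0 under these assumptions
```

    Under these assumed numbers the compute cost comes out the same, but Option B avoids rendering in-between frames that only get discarded in post, which is the appeal of a passes slider.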

  • I've also noted the difference in quality between stills and videos, and between videos and the real-time preview, especially when it comes to time changes and day-to-night transitions. The real-time preview shows a hugely different (and more accurate) light transition from light to dark, but the video doesn't capture the same...