Posts by landrvr1

Tnx Demian. Hopefully this will be possible. It's a very awkward workflow when we send an exe file to a customer and then have to ask them to go in and tweak certain settings before taking their virtual tour! Most times they simply won't do it, or they struggle to find the settings - and then the odd FoV/exposure/etc. becomes a POOR reflection of our work in the mind of the customer.

    The end users shouldn't have to do anything but enjoy the experience.
    Does that make sense?

Okay, per my other thread I managed to get Preview 2.7 up and running - sort of.

    The flicker problem DOES NOT show up.

    However, standard Enscape rectangular lights along a core wall cove are exhibiting an odd behavior:

    This does not occur with 2.6 - although with that version you do sometimes get the tile problem.

Okay, I was able to get the latest 2.7 Preview version working. However, if I close the Enscape render window and then try to reopen it within the same SketchUp session, SketchUp will crash. It DOES work the first time you open the render window.

    I did the following:

    "r_rtx 0" in the userPre.cfg
    No quotes should be included .
    The userPre.cfg file is one I made using a standard notepad file and changing the name.
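For anyone who'd rather script the file creation than rename a Notepad file, here's a minimal Python sketch. The Roaming\Enscape location is my assumption - adjust it to wherever your Enscape install actually reads userPre.cfg from:

```python
from pathlib import Path

# Assumed config location (the roaming folder) - adjust if your
# install reads userPre.cfg from somewhere else.
cfg_dir = Path.home() / "AppData" / "Roaming" / "Enscape"
cfg_dir.mkdir(parents=True, exist_ok=True)

# The setting goes in as plain text - no quotes around it.
(cfg_dir / "userPre.cfg").write_text("r_rtx 0\n", encoding="ascii")
```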

While the build works, it's pretty unstable on my end, with long freezes happening when turning layers on/off, adjusting lighting, etc.

Getting the spinning Blue Wheel of Agony right now... looks like I'll have to do an End Task.

Gadget, thanks. I figured it might be connected to some sort of 'bucket' type rendering process - similar to any offline CPU based engine - just working much faster. What's puzzling is why it's tied to lights that are close to a vertical surface. Hopefully the Devs will see this and take a look.

The taps exhibit that aliasing-like effect that often occurs with realtime rendering. I'll make sure the inside faces have the same material! Tnx

I've got a strange issue with cove/under-cabinet lighting. I'm getting odd flickers that appear as very specific square or rectangular shapes.
There are no co-planar surfaces.
There's no architecture/objects past the face of the affected walls. In other words, the squares showing up aren't related to anything beyond the wall.

This happens with both IES and standard Enscape lights.

    I'm running 2.6.0 11215.

    Here's a video showing the issue. Starts happening at around the 15sec mark.…view/391859798/a2a6fb482f

Scanning a QR code and setting the field of view on the device is not the main answer or solution.

There are two issues at play with field of view, and they are related but highly distinct.

    1. Device Field of View

    2. Original Image Field of View

    With cardboard viewers, you have no control over the field of view. Instead, whatever app you are using may or may not have an onboard FoV setting, or the chance to scan a QR code. This was primarily implemented because different devices have different LENS CURVATURE amounts. The more curvature of the lens (which, by the way, is NEVER a good thing) the more fisheye effect you'll get. The QR codes warp the image in the app in a manner similar to adjusting a field of view in order to compensate for the curvature of the lens. It's shocking how many people have a great set of lenses, but are using the wrong kind of QR code - hence that link from Stefan.

While it's important to dial in the right device FoV / QR code combination, that's not dealing with the root problem - which in the case of Enscape is a field of view that's simply way too high.

    When the FoV is set too high in the original image, it only makes the device FoV issues worse and that much harder to fix. Remember, if you start with a bad egg you are never going to get a great omelet - I don't care how good your cooking skills might be, LOL.

    If I have time later I'll put together a link that shows exactly what I'm talking about. Sorry for the long-winded babbling. haha

Welcome to our forum! :) We are actually in the development process of the first animation feature, which will make vegetation and such move in the wind. Afterward we'll have animated people looking at their phones and such, for example, to make them feel a bit more organic - as a final step we'll implement a dedicated animation system which will allow you to create your own paths.

In this case, point 1 will indeed be developed first I'm afraid. :) It will generally be less complex to implement compared to having dedicated animations via keyframes, but it's also just a matter of time until that's gonna be available. :)

lol. No worries, Demian. It's been mentioned before, but what about integrating with the Animator extension by Fredo6? It seems like a fairly straightforward process. I know it works with V-Ray for SketchUp, but I'm not sure if that's CPU offline only - where there's a definitive 'end of render frame' point that tells Animator (or a keyframe process in Max, Maya, etc.) that it's safe to start rendering the next frame. This is pretty much how Chaos' Project Lavina works in their realtime beta player when you need to save frames with animation. Unreal is trying to do the same, but with very mixed results so far...

    Moving objects is really brand new territory with realtime GPU and realtime ray tracing!

There are really two different things when we talk about animation:

    1. Enscape-driven objects like birds flying, wind in the trees, etc.

    2. User-driven objects animated with keyframe techniques.

    I'd much prefer the 2nd option be developed first, lol.

Regardless, it would probably be a two-step process, correct? Capturing something moving on a frame-by-frame basis for an animation is probably much easier to implement than a true realtime scenario. With keyframed animation, the duration of the image generation would be the same as now - but perhaps with the option of adding a set amount of extra time to each frame (1 to 5 sec?) so that all shadow/reflection/GI solutions have time to resolve. Just spitballing here, haha.
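That "extra settle time per frame" idea is easy to cost out. A quick sketch with assumed figures (a 5 s clip at 30 fps, ~2 s base render per frame, a user-chosen 3 s settle window - all hypothetical numbers):

```python
def render_time_estimate(duration_s, fps, per_frame_s, settle_s):
    # Total wall-clock time if every frame gets extra settle time
    # for shadows/reflections/GI to resolve before capture.
    frames = duration_s * fps
    return frames * (per_frame_s + settle_s)

# 150 frames * (2 s + 3 s) = 750 s, i.e. 12.5 minutes.
total = render_time_estimate(5, 30, 2.0, 3.0)
print(f"{total:.0f} s total ({total / 60:.1f} min)")
```

Even a modest settle window dominates the total, so making it optional and per-frame-configurable (as suggested above) would matter a lot.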

    Great Demian, thanks. One interesting part of our workflow is to isolate different objects and figures in our scenes in order to highlight them with After Effects magic, lol. It also helps to simply have overall post control over individual elements in the scene.

    I fully recognize that this falls firmly in the realm of traditional viz designer work and isn't something that the average Enscape user might need. :)

I'm circling back to this thread because it really would be great to either have a FoV adjustment for Panoramas, or bring the field of view number lower. The default, in every viewing circumstance, is way too wide a FoV. It's probably set to around 100° or so in the code, but the FoV should be much lower for either monoscopic or stereoscopic panos.

    The issue is a simple one: regardless of your delivery platform (PC screen, cardboard, Oculus Go/Quest, GearVR) the scenes look twice as large as they would in the real world. Interior spaces are highly distorted...a bedroom feels like a massive living room, lol.
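To put a rough number on "twice as large": the horizontal extent a camera captures scales with tan(FoV/2), so a ~100° capture shows roughly double the width of a more natural ~60° view. A quick sketch (both FoV figures are my assumptions, not documented Enscape values):

```python
import math

def horizontal_extent(fov_deg: float) -> float:
    # Width of the view frustum at unit distance from the camera.
    return 2 * math.tan(math.radians(fov_deg) / 2)

wide = horizontal_extent(100)   # the FoV I suspect the panos are baked at
narrow = horizontal_extent(60)  # a more natural viewing FoV (assumed)
print(f"100 deg shows {wide / narrow:.2f}x the width of 60 deg")
```

That ~2x ratio lines up with interiors feeling twice their real-world size.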

    Even having the ability to add a bit of override code somewhere in the roaming folder would be great.

    Thanks for that link, I see exactly what you are talking about in your examples, and your proposed solution could very well solve the problem.

I did a quick test with a 5 sec sequence at 30 fps.

Lossless png frames:

5 sec @ 30 fps = 150 frames

1:19 total render time

1.89 sec per frame

mp4 Max Quality:

5 sec @ 30 fps = 150 frames

0:16 total render time

0.1 sec per frame
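Sanity-checking the mp4 side of those numbers (a minimal sketch; the ~18x speedup compares the two stated per-frame figures):

```python
duration_s = 5
fps = 30
frames = duration_s * fps             # 150 frames total

mp4_total_s = 16                      # the 0:16 total render time
mp4_per_frame = mp4_total_s / frames  # ~0.107 s, matching the ~0.1 s figure

# Per-frame speedup of mp4 vs the 1.89 s/frame png figure: roughly 18x.
speedup = 1.89 / mp4_per_frame
print(frames, round(mp4_per_frame, 2), round(speedup, 1))
```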

That's an astounding difference in render time per frame! In fact, it's so different that frankly I'm shocked there's not a bigger difference in fidelity from lossless png to mp4 Max Quality! I have yet to see ghosting in my tests, but that doesn't mean anything. The ghosting could be the result of the Enscape version vs. the individual particulars of the scenes involved (polygon count, lighting rig techniques, bitmap texture sizes, etc.).

I know you said you wanted to avoid creating png files because it's an extra step, but have you at least tested the output quality for your scene? Everyone is different, but even for test films I would never use the onboard Enscape mp4 generator. I can take png frames and, in less than 30 seconds, dump them into Premiere and get a working draft for review purposes. Something to consider...
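As a side note, a throwaway review cut doesn't even need Premiere: assuming ffmpeg is installed and the frames are numbered frame_0001.png, frame_0002.png, and so on (both assumptions on my part), a command like this assembles them into a draft mp4:

```python
# Sketch of assembling lossless png frames into a review mp4 with ffmpeg.
frame_pattern = "frame_%04d.png"  # assumed frame naming scheme
fps = 30

cmd = [
    "ffmpeg",
    "-framerate", str(fps),       # input frame rate
    "-i", frame_pattern,
    "-c:v", "libx264",
    "-crf", "18",                 # visually near-lossless quality
    "-pix_fmt", "yuv420p",        # broad player compatibility
    "draft.mp4",
]
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```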

Hmm. I just rendered a sequence to png and the same scene to mp4 (at the Max setting). I'm seeing a bit of difference in the glass reflections and color grading, but not really any fidelity loss in terms of true GI/reflection/refraction/etc. There is an overall degradation of the image from the mp4 compression - even at the Max setting.

    Here's a quick screen shot: left side is the png, right is the mp4.

    Can you post any examples of what you're seeing? I'd love to understand better.

Absolutely beautiful work. I'm curious about your technique for the glowing cubes... was that post-pro work or straight out of Enscape? GPU based engines are still hampered by things like 2-sided materials that would easily allow that effect in offline renderers. Would love to know more! This issue is similar to proper, glowing lampshades. lol.

I'm puzzled by this post because the opposite seems to be true when rendering a camera path to individual png files at the 'lossless' setting. The final 'settlement' and tracing is always complete before each frame is saved. I'll never use the straight-to-video render solution, so I'm not sure if this resolve/tracing step is shorter for video export. Have you tried png frames instead?