Posts by Micha

    ... unacceptably high "realtime" losses again

    I'm not sure I understand you. Do you mean the workaround will slow down the calculation? I haven't noticed increased render times, and so far I have seen that renderings are needed here only, so this should be no problem.

    is there a possibility of reducing this problem of reflection with specific setting ...

    In your case you could work around it by mirroring the scene behind the mirror and removing the mirror. That way "real" objects are seen and not only reflections. ;)
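    The mirroring workaround boils down to reflecting every object across the mirror plane. A minimal sketch of that transform (my own illustration, not an Enscape feature; the function name is made up):

```python
def mirror_point(p, plane_point, unit_normal):
    """Reflect a 3D point (tuple) across a mirror plane given by a
    point on the plane and the plane's unit normal."""
    # signed distance of p to the plane along the normal
    d = sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, unit_normal))
    # move the point twice that distance back through the plane
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, unit_normal))

# a chair 1 m in front of a mirror in the x = 0 plane lands 1 m behind it
print(mirror_point((1.0, 0.5, 2.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
# -> (-1.0, 0.5, 2.0)
```

    Applying this to all geometry behind the mirror gives real objects where the reflection would be, which is exactly what the renderer then draws at full quality.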

    ... looks like the environment is switched off in the orthographic mode (?), this means we do not get the sky reflections in the glass. Is it possible to maintain the environment settings so the orthographic has the same quality as the perspective?

    Maybe everything works right. Orthographic means that you are looking from an infinite distance with a nearly 0° view angle. This can cause you to see only a very small portion of the BG, so it looks like a constant color. The same happens for the reflections - every coparallel surface shows the same little part of the environment. If you look at your ortho shot, you see that some windows have a white reflection on them, while windows facing another direction show a dark part of the environment, so it looks like there is no reflection. I'm afraid what you ask for isn't possible. Or am I wrong? I'm curious. ;)
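    The effect can be sketched with the standard reflection formula (a toy illustration of the reasoning above; the numbers are made up):

```python
def reflect(v, n):
    """Mirror a view direction v off a surface with unit normal n (3-tuples)."""
    d = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2.0 * d * ni for vi, ni in zip(v, n))

s = 0.5 ** 0.5           # 1/sqrt(2)
n = (0.0, s, s)          # one normal shared by many coparallel windows

# perspective: the view direction differs per pixel, so the reflected
# (environment-sampling) directions differ across the image
left  = reflect((-0.3, 0.0, -0.954), n)
right = reflect(( 0.3, 0.0, -0.954), n)

# orthographic: every pixel uses the same view direction, so every
# coparallel window reflects one and the same environment direction
ortho = reflect((0.0, 0.0, -1.0), n)   # roughly (0, 1, 0)
```

    With one shared reflected direction per facade orientation, each facade samples a single environment color - which is why whole rows of windows look uniformly white or uniformly dark in the ortho shot.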

    Interesting, it looks like heavy usage of RTX raytracing. For example, the shadows look raytraced and not done per shadow map.


    In the YouTube videos there is an interior rendered - 4K in 86 s (2060). This sounds like what I missed at Enscape - a high-quality output based on RTX, not optimized for real time only, but for final stills. The big question for me would be: how much of a scene can be seen in mirrors? Are objects skipped like in Enscape?

    Is it not a problem of your viewing device? The eye distance of the virtual camera influences the stereo effect only. A large eye distance of the virtual camera causes a stronger depth effect. A funny side effect: if you chose a very wide eye distance, the stereo image would show a doll-house effect. But it should not cause an overlay problem.
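    The reasoning can be made concrete with a tiny parallax model (a sketch of the geometry, not any renderer's actual code; all distances in meters are invented):

```python
def screen_parallax(eye_sep, screen_dist, obj_dist):
    """On-screen parallax of a point straight ahead of the viewer.

    An eye at x = +/- eye_sep/2 projects the point onto a screen plane at
    screen_dist to x * (1 - screen_dist/obj_dist), so the left/right
    difference is eye_sep * (obj_dist - screen_dist) / obj_dist.
    """
    return eye_sep * (obj_dist - screen_dist) / obj_dist

# doubling the virtual eye distance doubles every parallax in the image,
# i.e. a stronger depth effect; an extreme value rescales the whole scene
# relative to the viewer, which reads as the doll-house look
print(screen_parallax(0.065, 2.0, 4.0))   # ~ 0.0325
print(screen_parallax(0.130, 2.0, 4.0))   # ~ 0.065
```

    Because the parallax only scales, the two half-images still line up; an actual overlay or ghosting problem points at the display or its settings, not at the eye-distance parameter.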

    The Clara scene looks like the scene isn't really flat - the street is higher at the curved side than at the straight one. Another problem could be that the pano isn't 100% leveled.


    If you want to know whether Enscape uses a wrong horizon or not, you could do a test. Attached is a "pano" with a horizontal line at 50% image height. I checked it here and it looks perfect - a large flat disk at height 0 matches the environment horizon line. But I found that a self-shot pano is higher in one direction than in the other. I thought it was aligned, but it is not. Maybe it helps if you fine-adjust the horizon line of your pano. But it could be difficult. For example, I use Affinity, and there I can adjust the horizon in 1° steps only, not finer.
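    Such a test pano is easy to generate yourself, since in an equirectangular pano the horizon (0° elevation) maps to exactly half the image height. A small sketch that writes a plain binary PPM file (sizes and filename are arbitrary):

```python
def make_horizon_test_pano(width, height, path):
    """Write a white pano as a binary PPM (P6) with a black stripe
    centered at 50% image height, i.e. on the equirectangular horizon."""
    mid = height // 2
    white = b"\xff\xff\xff" * width
    black = b"\x00\x00\x00" * width
    with open(path, "wb") as f:
        f.write(b"P6\n%d %d\n255\n" % (width, height))
        for y in range(height):
            # a 4-pixel-tall stripe so the line survives resampling
            f.write(black if mid - 2 <= y < mid + 2 else white)

# a real test pano would use a 2:1 size such as (4096, 2048)
make_horizon_test_pano(64, 32, "horizon_test.ppm")
```

    Load the result as the environment: if a large flat plane at height 0 does not meet the black line, the renderer (or your pano's leveling) is off.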

    I'm testing the usage of self-made panoramas too and I'm quite happy with the results. Could you post some images of your issues?

    I'm afraid you can't expect so much from manipulating the view in the render software. As far as I understand, there isn't much possible. During shooting the pano you decide how the view will look. For example, if you shot a pano in the park 70 cm above the ground, you can't expect to render an interior, look out of the window, and see the park as if from 3 m above the ground.


    Adjusting the horizon height is critical - the horizons of the model and the image need to match. Attached is an example where I mapped the pano to a sphere and moved the sphere up. It looks wrong.

    The second screenshot shows my virtual camera at approximately the same height as during shooting the pano - approx. 4 m above the ground. It works.

    Pete Chamberlain I wish that sometimes someone from the team would think more about features as an advocate of the users; in this case he should come to the conclusion that the currently implemented way isn't intuitive and useful, and a solution is needed. I don't know why more users don't ask the same question again and again - maybe other users think: oh, the problem is reported, so why should I beg again?


    You have some professional early adopters who tell you that there is a serious problem and a fix is needed. If you ignore them, you waste a worthwhile resource over years. You also waste the time of other users who run into the same situation, get frustrated, look for a solution, write posts ... and get nothing. Believe me, the beg system of upvotes is a pain on the users' side and doesn't lead to a well-adjusted software.


    The problem is that it is impossible to keep the head at standard height if you navigate by mouse.

    That's still the case: the distance from a glass pixel to the respective background steers the diffusion intensity.
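    A toy model of that behavior, assuming the blur radius grows with the glass-to-background distance but is clamped at some maximum (all numbers are invented for illustration, not Enscape's actual values):

```python
def frost_blur_radius(glass_to_background_m, strength=2.0, max_radius_px=8.0):
    """Toy model of depth-steered frosted-glass blur: the kernel radius
    grows with the distance between glass and background, clamped at a
    maximum - which is why a far background can't blur any further."""
    return min(strength * glass_to_background_m, max_radius_px)

# an object close behind the glass gets only a mild blur ...
print(frost_blur_radius(0.4))   # 0.8
# ... while beyond the clamp distance the blur stops growing
print(frost_blur_radius(10.0))  # 8.0
```

    The clamp would also explain why there is a "maximum possible" frost effect: once the background is far enough away, more distance buys no more blur.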

    In the past I got the effect that an additionally placed object somewhere in the scene, far in the background, helped to get more blur. But I tested it now and this behavior isn't there anymore.


    Here is an example - this is the maximum possible frost effect. The man is approx. 40 cm behind the glass. More blur could be really useful.


    It's a well-known issue that has been ignored for a long time. It makes VR tours on the PC screen very uncomfortable. I suppose we will need to wait longer for better PC-screen navigation, but I'm sure you will get an upvote accordingly. ;)


    Thank you for asking for it again - maybe one day it will be solved. It sounds like a must-be-there feature.


    2019:

    Camera movement too slow


    2018:

    Enscape - experiences of the last weeks (bugs and missed features)


    2017:

    Will the Oculus Remote work with Enscape?

    Macker In the past I found that the blur effect depends on the scene depth; maybe it helps to place an object far away in the background of the scene. But I'm not sure, since I last did it some months ago. Also, adding a noise bump map can help to increase the blur, which is sometimes not strong enough. But this workaround can cause some noise.

    So it's not a simple equation of taking 2x the geometry and receiving 2x rendering time. Memory is even more critical - if the resources don't fit into your memory, there's not that much you can do.

    Theoretically this may be true; practically, the user can remove the mirror glass of a small/medium bathroom scene and mirror the scene as a raw workaround. Everything is rendered perfectly and fast without any technical problem. Unfortunately this workaround doesn't work for every scene, because of the mirror-room relations. Also, my impression was that Enscape can render really complex and heavy scenes, but fails to render a mirror in a small, simple bathroom.

    The problem was, and I think still is, that even in simple lightweight scenes not all geometry is shown in mirrors, even if the user owns a card with 11 GB or more. So I save two seconds and my memory stays empty, but my client isn't happy. That's the reason why I mostly found no use for Enscape; but I see great potential in this engine, and so I'm waiting for the day when the quality reaches 100% - my clients don't accept less. My clients pay me per hour; seconds don't matter. ;)

    Maybe, as you said, an option like this would allow you to see and adjust the resulting web-viewer look before uploading. It sounds essential for optimizing a scene for web use.