Posts by Macker

    import those into SketchUp to particular layers/tags, then xref these sub-files (per-building basis, usually) into a master file. It means we can have multiple artists working on the various sub-files, while the head artist works within the master setting up cameras, getting the lighting right... all the good stuff.

    How are you doing this? Sketchup doesn't have a referencing system?

    Quote

    Lastly - how are you importing your Revit files?

    We use the SketchUp exporter plugin from SimLab Soft for this task, going from both Revit and 3ds Max. It's very, very good (though it only brings through standard materials from Max, not V-Ray).

    Quote

    Great work, has got me inspired for our next job with Enscape.

    Many thanks :D

    Very well done - an elegant way to suggest lively movement.

    Out of curiosity - why do you need to export the Revit model into Sketchup as Enscape is available in Revit as well?

    Good question.

    The primary reason is simply that our 3D team don't use Revit. We could of course use Enscape within Revit (and have done many times), but making changes to the model becomes an issue when it's the working model (imagine us ruining a Revit model of this scale!), and when we need to tweak things for visual purposes (UV mapping, for example) it becomes extremely unwieldy. Also, despite trying many, many methods with our in-house Revit guru, we've yet to find a decent way to bring high-quality assets in from 3ds Max with their textures all still intact.

    Revit simply isn't built for visualisation as its primary function, and it never will be. If you want to venture outside of the Enscape asset library in Revit and model some custom bits of furniture, etc., you're going to have a fairly hard time in comparison to SketchUp.

    Nice! The way you’ve implied motion in the people is a very clever solution.

    Out of curiosity, how many people have worked on the model? Is it just yourself, or were others involved? I ask because I'm 'the 3D guy' and do everything, and if you're the same then that's one hell of a big project (modelling, rendering, video).

    Hi Paul,

    It was myself and my junior colleague (who also animated the map/diagram), and it was quite the challenge due to the design still being in flux as we were modelling it, plus, as you said, the scale of it.

    Most of the site/buildings had been modelled in Revit, which we then exported into SketchUp. As you can imagine, navigating a site of this size with all the buildings slowed SketchUp and Enscape to a snail's pace, so our solution was to storyboard the proposed camera shots (PDF attached - it also shows some shots that weren't used) to get client approval. Once that had been done, we could create a SketchUp model for each shot, deleting all of the surplus geometry that was out of shot/not required. This had the additional benefit of letting us spread the workload between us quite easily, as we would each just pick a shot and work on it.


    One of the issues we faced when bringing things through from Revit is that all of the textures read as being 2.54mm (even though they don't display as that) - so a lot had to be re-textured, which was a real pain. Also, because Revit doesn't have any proper UV mapping facilities, a lot of the brick arches needed redoing properly in 3ds Max and then exporting back into SketchUp.

    The next challenge was getting assets of a high enough quality for the vignettes, etc. We have quite an extensive 3ds Max library, but sadly our SketchUp library lacked the quality that was needed. We ended up exporting lots of our shop interiors/windows and plants into SketchUp and going through the laborious task of re-assigning all the various bump and opacity maps.

    Our client then requested that we have animated people (shock horror: we had initially quoted for that, but it would have meant rendering out of 3ds Max/V-Ray, which is considerably more expensive, so they had said they were happy to go with Enscape instead). So the challenge was: how do we get animated people into this?! We had many conversations and meetings about it, eventually arriving at the solution that we would treat each shot as if it were a time lapse, fading different people in and out in post. This meant every camera path had to be rendered up to 4 times, each with a different set of people! Thankfully, because Enscape renders so quickly, this wasn't a major headache.
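    In case it helps picture the trick: the post fade is conceptually just a per-frame cross-blend between the rendered passes. A minimal sketch in Python (file names, frame counts and fade length are made up for illustration; in practice the fades were done in post, not with a script like this):

        # Cross-fade two rendered passes of the same camera path, frame by frame,
        # so the different sets of people appear to fade in and out over time.
        import numpy as np
        import imageio

        PASS_A = "pass_a/frame_%04d.png"   # hypothetical output folders from Enscape
        PASS_B = "pass_b/frame_%04d.png"
        NUM_FRAMES = 300
        FADE_FRAMES = 25                   # length of the cross-fade

        def blend_weight(frame, total, fade):
            """0.0 = all pass A, 1.0 = all pass B, ramping linearly around the midpoint."""
            mid = total // 2
            return float(np.clip((frame - (mid - fade / 2)) / fade, 0.0, 1.0))

        for f in range(NUM_FRAMES):
            a = imageio.imread(PASS_A % f).astype(np.float32)
            b = imageio.imread(PASS_B % f).astype(np.float32)
            w = blend_weight(f, NUM_FRAMES, FADE_FRAMES)
            out = (1.0 - w) * a + w * b    # simple linear blend of the two frames
            imageio.imwrite("out/frame_%04d.png" % f, out.astype(np.uint8))

    With up to four passes per path, the same pairwise blend just repeats along the timeline.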

    We did trial some other ideas too, such as rendering just the people from 3ds Max with a matched camera, which had some interesting results - but ultimately it wasn't the look our client wanted to go for.

    For me personally it was a really fantastic opportunity to showcase what we could offer in terms of visualisation; I'm sure I'm not the only one who is bored of the traditional loooooong camera path that takes you round the entire development! We really tried to squeeze in as many little cut-aways/B-roll/vignettes/whatever you want to call them as possible, because there were so many nice details within the development that we could focus on. This, plus the faked movement of the people, is what kept the pace of it a bit more interesting than the standard camera path, I think.

    [edit] The still images of the development that it cuts to in places were rendered in 3ds Max/V-Ray.


    Hi guys,


    I submitted a support request (00811560) but haven't heard back yet.

    I installed the latest Enscape update (2.9, I believe) and it asked me for my license number. I entered my license number and it told me it has been activated too many times.

    Surely I shouldn't have to input my license for an update? And surely, it shouldn't count as an activation?

    Can someone from the Enscape team please reset it, as I'm currently unable to do any work?

    Many thanks,

    Chris

    If you transition quickly from day to night, the auto exposure seems to react too slowly, resulting in an initially too-dark night portion of the exported video. To solve this, a "look ahead" on the auto exposure for video export would be needed.

    That sounds exactly like what is happening. Does Enscape not evaluate exposure frame by frame? It would be good to be able to control how many frames it averages over.
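    To illustrate what I mean by controlling the averaging: a "look ahead" would base each frame's exposure on the frames coming up, so the ramp into the dark section starts early enough. How Enscape actually meters exposure isn't public, so this is only a sketch of the idea with made-up numbers:

        # Look-ahead exposure smoothing: frame i's exposure target is the mean of the
        # next `window` frames, so a sudden day-to-night cut is anticipated in advance.
        import numpy as np

        def look_ahead_exposure(per_frame_exposure, window=30):
            values = np.asarray(per_frame_exposure, dtype=float)
            smoothed = np.empty_like(values)
            for i in range(len(values)):
                smoothed[i] = values[i:i + window].mean()
            return smoothed

        # 100 bright frames followed by 100 dark ones (a hard day-to-night transition)
        raw = [1.0] * 100 + [0.05] * 100
        print(look_ahead_exposure(raw, window=30)[90:106])  # exposure eases down before the cut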

    Hi all,


    I'm animating a day-to-night shot. In the viewport I set the daytime shot up as I want it, then I move the time to around 8pm and set up the nighttime shot how I want it. Both look excellent.


    Then I render it out (with auto exposure on), and whilst the daytime shot looks fine, the night shot looks much darker than it did when I was setting it up in the viewport. Why is this?

    Hi all,


    This begins with a question, which could end up as a suggestion if there's no suitable way to achieve it...


    I'm putting together an animation in Enscape which requires hundreds of animated people (just greyed-out/semi-transparent), and obviously Enscape doesn't do this. It's something that I could quickly and easily achieve in 3ds Max, which raises the question: can I export the Enscape camera paths in such a way that I can import them into Max? The alternative for me at the moment is having to track the Enscape footage (uuggghhhhhhh!!!), which is something I really, really do not want to do.


    Is the current camera path export format a proprietary one? Or is it based on FBX and possible to import into Max, i.e. with a simple change of file extension?

    If this isn't possible, what is the likelihood of you guys implementing a camera path export that is compatible with other software?
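    For what it's worth, the part inside Max is fairly mechanical once the keyframes exist in some readable form. This doesn't answer the file-format question - it just assumes you already have the keyframes as (frame, eye position, look-at target) values, and rebuilds them as an animated target camera via Max's Python wrapper (pymxs), run from inside Max:

        # Rebuild a camera path in 3ds Max from keyframe data obtained elsewhere.
        # The keyframe values below are placeholders, not Enscape's actual export.
        import pymxs
        rt = pymxs.runtime

        keys = [
            (0,   (0.0, -2000.0, 160.0), (0.0, 0.0, 160.0)),
            (100, (500.0, -1500.0, 180.0), (50.0, 100.0, 170.0)),
        ]

        cam = rt.TargetCamera(target=rt.TargetObject(), name="Enscape_path")
        for frame, eye, target in keys:
            # set a key on both the camera and its target at the given frame
            with pymxs.attime(frame), pymxs.animate(True):
                cam.position = rt.Point3(*eye)
                cam.target.position = rt.Point3(*target)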


    Kind regards,

    Chris

    Hi all,


    I'm currently working on a large site and I'm already concerned about the size of the files I'm creating, which I'm sure will eventually give me issues in Enscape. I'd love to be able to see a breakdown of which textures are taking up the most RAM on the GPU so that I can look at reducing them. Is this something that could be implemented?
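    In the meantime, a rough stand-in is to scan the texture files themselves and estimate their uncompressed footprint - real usage depends on how Enscape compresses and mips them, so treat the numbers as relative rather than exact. A quick sketch (the folder name is hypothetical):

        # List the largest textures in a folder by estimated GPU memory
        # (uncompressed RGBA8 plus ~33% for a full mip chain).
        import os
        from PIL import Image  # pip install pillow

        def estimate_vram(folder, top=20):
            rows = []
            for name in os.listdir(folder):
                if not name.lower().endswith((".jpg", ".jpeg", ".png", ".tif", ".tiff", ".bmp")):
                    continue
                with Image.open(os.path.join(folder, name)) as img:
                    w, h = img.size
                mb = w * h * 4 * 1.33 / (1024 * 1024)
                rows.append((mb, name, f"{w}x{h}"))
            for mb, name, size in sorted(rows, reverse=True)[:top]:
                print(f"{mb:8.1f} MB  {size:>11}  {name}")

        estimate_vram("textures")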

    Kind regards,

    Chris


    Okay, got it working! Once you start the Oculus app on the PC, go into the Enscape menu and enable VR Headset; then you have to press the SketchUp button in the Quest environment once you've got the headset on... I was missing that last step.

    Again, I'm using the Link method with a USB3 cable and NOT the wireless hack referenced above.

    I can say this: the Link method looks utterly fantastic for a VR experience. Great visual fidelity - shadows, reflections, etc. Navigating and the menu options work great so far...

    Well, that's a positive on its own. How's the tracking (as it doesn't use base stations)?

    Would you be kind enough to give the wireless hack a go?

    Kind regards,

    Chris