Using Google Maps 3D data in Enscape

  • Hi all

    I just wanted to share my newest technique/tool :-) I've managed to get real 3D data from Google Earth (or Maps, if you will...) into SketchUp, and thereby into Enscape. This is extremely useful for site models and bird's-eye renders, as it allows us to get correct, detailed 3D models of the surroundings MUCH quicker than building it all yourself.

    It has taken a LOT of trial and error to get to work, but when it does, it looks awesome :-)

    The process is quite complicated, and requires a bunch of different software, but once you have learned the process it can be done quite quickly.

    Here is a short screen capture video showing the different stages it goes through (without details on how to actually do it):

    Let me know if you want the details on how to achieve this :-)

  • Awesome Herbo, thanks so much for sharing! :) I think a lot of people would love to know more. I've already put it into the resource collection as well, and I'll be linking your little guide to anyone asking for this in the future. I appreciate your time.

  • That would be wonderful! I tried months ago to export the Google Maps data myself, but I failed. I can follow from the point where you use Transmutr (a super program! Can recommend it), but Blender in the step before that is completely new to me!

    Edit: A list of the programs would be a good start, and a tutorial from you would be awesome!!!

  • I am thrilled by your video! :thumbsup:

    It's exactly what I'm looking for right now for our projects. We work on infrastructure construction projects and would like to make the city and its surroundings visible. Google actually has all the data we would like to use.

    Can you give us more details about your workflow?

  • It looks very interesting indeed. I tried already but did not succeed myself -> didn't get the data into RenderDoc at all...

    And could it also be used in a Revit environment?

    Thanks Herbo!

  • This is mind-blowing, Herbo. Demian Gutberlet, I'm sure there is some budget to hire this talent? Or at least to let him prepare and share a TUT?

  • Would love to learn this process, Herbo. I think I have most of the software needed (Google Earth Pro, Blender, Transmutr/Skimp?, SketchUp), but what is the software at 0:15?

  • I'd love to make a small program to automate this. But is this legal?

    Almost certainly isn't... it has clearly occurred to Google that their 3D dataset would be useful outside of their application. If Google wanted to provide this capability, they would have. SketchUp would have been a logical connection, but they chose to only allow 2D import. More recently, Google Earth Studio has provided great access for this type of use case (via a different methodology), but Google again chose not to allow export of the 3D data.

    It is unlikely they will go after a user like Herbo. A software developer enabling this type of use to be scaled up might raise eyebrows.

  • Superb workflow and really great results!

    Unfortunately this is true:

    Almost certainly isn't... it has clearly occurred to Google that their 3D dataset would be useful outside of their application. If Google wanted to provide this capability, they would have. SketchUp would have been a logical connection, but they chose to only allow 2D import. More recently, Google Earth Studio has provided great access for this type of use case (via a different methodology), but Google again chose not to allow export of the 3D data.

    It is unlikely they will go after a user like Herbo. A software developer enabling this type of use to be scaled up might raise eyebrows.

    I'd assume that Google's TOS prohibit the use of their data outside their ecosystem, so using this data, especially as a professional, might call Google to action. Which is really a shame, considering the nice stuff you could achieve with this... :(

  • Thank you so much for all the feedback! I'm sorry for the late reply, but I've been busy writing this massive post, hehe... so here we go:

    Okay, here is a step by step guide on how to achieve this. It is long and a bit complicated, but bear with me. When you have done it a couple of times, it makes sense. There are basically 5 steps in achieving this:

    • Capturing the data
    • Getting it into Blender (a free, powerful 3D program)
    • Combining and optimizing the captures in Blender
    • Getting the OBJ into SketchUp
    • Working with the model in SketchUp

    The first two steps are largely covered in this great video, which helped me a lot in the beginning! I still urge you to read my description as well though, as some problems and tips are not covered in detail. But it's a great starting point!:

    1. Capturing the data

    Software used: RenderDoc, a modified Chrome shortcut

    The first step is to capture the 3D data. This is done with a program called RenderDoc, an open-source (MIT-licensed) graphics debugger. The way it works (very simplified) is by reading the actual data stream going through the graphics card in a certain application. In this case, the browser.

    This requires a modified shortcut to Chrome that allows RenderDoc to intercept the data stream going to the graphics card. We then inject RenderDoc into Chrome's GPU process so we can capture that stream. How all of this is done is described in detail in the video linked above!
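
    A quick note on the modified shortcut: the two flags below are what the MapsModelsImporter documentation suggests at the time of writing, so double-check against the video in case they have changed. The target line of the Chrome shortcut typically looks like this:

    ```
    "C:\Program Files\Google\Chrome\Application\chrome.exe" --disable-gpu-sandbox --gpu-startup-dialog
    ```

    --disable-gpu-sandbox lets RenderDoc hook into Chrome's GPU process, and --gpu-startup-dialog pauses Chrome at startup and shows you the GPU process ID, so you can inject RenderDoc into that process before clicking OK.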

    You then navigate to “” and use RenderDoc to capture the desired area(s). Google Earth works with a number of different LODs (levels of detail), meaning that the further you zoom out, the more simplified the 3D mesh becomes, and the lower the texture resolution is.

    With that in mind, it can be beneficial to do a number of captures with the view zoomed in, and then combine them later in Blender. That way you can cover a larger area in higher fidelity.

    Be aware that I've run into several problems with the actual captures and their readability in the next step. In my experience, captures larger than 120 MB will be hard to handle. If one is larger, try to zoom in a tad more, and the size should go down :-) It is also very important that you really move the camera around during the actual seconds the capture happens! I use the “capture with delay” function set to 3 seconds, as this makes this much easier.

    2. Getting it into Blender

    Software used: Blender, and the Blender plugin “MapsModelsImporter”.

    The next step is to get the captures into Blender. Blender is an extremely powerful 3D program that is completely free. It has a number of very specific plugins that allow us to read and work with the captured data we just made. The first of these is “MapsModelsImporter”, which allows us to read the .rdc files that RenderDoc created.

    When you import a capture, you will notice that it has a lot of overlapping geometry, and that it probably looks a lot larger than you imagined. This is because it has saved all the LODs in the view. Simply delete all of the large planes containing the rough geometry you don't need, until you are left with just the detailed area you actually wanted to capture. It looks something like this:

    3. Combining and optimizing the captures in Blender

    Software used: Blender, Lily Capture Merger and Lily Texture Packer. Both can be bought cheaply here:

    In this step we will try to combine all our captures into a single, seamless model, and then clean it up and optimize it for exporting.

    It is important to keep each capture in its own “collection” folder in Blender, as this will make the merging much easier. We import one capture at a time and align it to the previous one before moving on to the next.

    We will use the plugin called Lily Capture Merger to align two collections together. This video shows how it works:

    When you are all done, you should have a number of collections in Blender, and all the models should be perfectly aligned. Something like this:

    Notice the 10 collections in the top right, each containing a single capture representing a part of the complete model.

    It is now time to erase overlapping geometry between the collections. This will remove flickering in Enscape later on, and it will greatly reduce file sizes as well.

    You do this by simply turning on one collection at a time and deleting the squares that are already present in another collection. This is quite straightforward. When you are done, no part of any collection should overlap with a different one. You will notice that the captures are made up of smaller squares, and this makes it very easy to delete the parts you don't need.
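
    If you want to sanity-check this cleanup, the overlap test itself is nothing more than comparing bounding boxes. Here is a small sketch in plain Python (hypothetical tile coordinates, not a Blender script) of flagging the squares in a new capture that are already covered by an existing collection:

    ```python
    def boxes_overlap(a, b):
        """Axis-aligned 2D overlap test; a box is (min_x, min_y, max_x, max_y)."""
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    def tiles_to_delete(new_tiles, existing_tiles):
        """Return indices of tiles in the new capture that are already covered
        by an existing collection, i.e. the ones you would delete."""
        return [i for i, t in enumerate(new_tiles)
                if any(boxes_overlap(t, e) for e in existing_tiles)]

    existing = [(0, 0, 10, 10), (10, 0, 20, 10)]   # tiles already kept
    new      = [(5, 5, 15, 15), (25, 0, 35, 10)]   # tiles from the next capture
    print(tiles_to_delete(new, existing))          # -> [0]: only the first overlaps
    ```

    In Blender you do this by eye, of course; the sketch is just to show that “overlap” here means plain 2D bounding-box intersection (tiles that merely touch edges are fine).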

    Now it is time to optimize the textures. When you make a capture with RenderDoc, all the textures are individual, resulting in more than a hundred different images for a single capture. We would like to reduce that to just one per collection! For this, we use the plugin “Lily Texture Packer”.
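
    Conceptually, a texture packer lays all those small images out into one big atlas and then rewrites the UVs to point at the new locations. Here is a toy version of the layout step in plain Python (a simple “shelf” packer; not how Lily Texture Packer works internally, just the idea):

    ```python
    def shelf_pack(sizes, atlas_width):
        """Place (w, h) rectangles left-to-right in rows ('shelves').
        Returns the (x, y) offset of each rectangle and the total atlas height."""
        placements, x, y, shelf_h = [], 0, 0, 0
        for w, h in sizes:
            if x + w > atlas_width:          # shelf full: start a new row
                x, y, shelf_h = 0, y + shelf_h, 0
            placements.append((x, y))
            x += w
            shelf_h = max(shelf_h, h)        # row height = tallest image in it
        return placements, y + shelf_h

    sizes = [(256, 256), (256, 128), (512, 256), (128, 128)]
    spots, height = shelf_pack(sizes, atlas_width=512)
    print(spots, height)   # -> [(0, 0), (256, 0), (0, 256), (0, 512)] 640
    ```

    Each rectangle gets an (x, y) offset in the atlas, and the UV coordinates of the geometry are then shifted and scaled accordingly; that is why the plugin has to touch both the images and the meshes.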

    We create and save textures for one collection at a time, turning off all the collections we are not working with.

    This video is a good introduction:

    Remember to save the generated textures (go to the UV editor, find the newly packed image in the dropdown, and save it to a location).

    Now it is time to link the new texture to the geometry. To do this, simply go back to the 3D viewport, select all the geometry in the collection, press Ctrl+L and select “Materials”.

    Repeat this until every collection has ONE texture, all textures are saved, and the new textures are linked to the geometry.

    Now we have to join all the individual squares in each collection into one single mesh.

    To do this, simply select all the geometry in each collection and press Ctrl+J (join). Now each collection should be selectable as one single mesh per collection.

    It is now time for some cleaning! RenderDoc creates a lot of overlapping geometry, and that makes the models very heavy, and hard to work with. It is fortunately very easy to fix in Blender.

    Simply select each of the joined collections, press Tab (to go into edit mode), right-click the mesh and select “Merge Vertices – By Distance”. This merges all the overlapping vertices (and with them, the duplicated faces). At the bottom of the screen you should see Blender tell you how many vertices it has removed. It is a lot! But for some reason it does not always work in one go, so you have to repeat the command. If it says “0 vertices removed” the next time, you are all good. If it still reports a number, keep repeating until it says zero.
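
    For the curious, “Merge Vertices – By Distance” is essentially deduplicating points that sit closer together than a threshold. A minimal sketch of the idea in plain Python (a grid-snapping approach; Blender's actual implementation is more careful, e.g. a grid like this can miss near-duplicates that straddle a cell border):

    ```python
    def merge_by_distance(vertices, threshold):
        """Collapse vertices closer than `threshold` by snapping each one to a
        grid of that cell size and keeping one representative per cell."""
        merged, cell_to_index = [], {}
        for v in vertices:
            key = tuple(round(c / threshold) for c in v)
            if key not in cell_to_index:
                cell_to_index[key] = len(merged)   # remap table for faces
                merged.append(v)
        return merged

    verts = [(0.0, 0.0, 0.0), (0.0001, 0.0, 0.0),   # duplicates where captures meet
             (1.0, 2.0, 0.0), (1.0, 2.0001, 0.0)]
    print(len(verts) - len(merge_by_distance(verts, 0.001)), "vertices removed")
    ```

    The duplicated seams between captures are exactly this kind of near-identical vertex pair, which is why the removal counts in Blender are so large.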

    When this has been done for all collections, you are done with Blender. At this point you should have a number of collections (perfectly merged together), each containing one capture with one packed texture and cleaned geometry.

    Now, simply select “File – Export – Wavefront (.obj)” and save the model as an .obj 3D file.

  • 4. Getting the .obj into SketchUp

    Software used: Transmutr

    This one is simple. Use Transmutr to open the OBJ and save it as a SketchUp file. If you haven't used Transmutr before, you have been missing out on an awesome program. It is basically a companion app for SketchUp that allows you to import various 3D formats and convert them to .skp files, with a lot of options, including automatic Enscape material setup, an Enscape proxy generator, an advanced poly reduction system and much more.

    During this phase there is one more thing we can do to optimize our model (and its size!)... namely, the packed textures. These were probably saved as heavy PNGs, and they take up a lot of video memory in Enscape. I therefore tend to edit my textures in Photoshop and save them as .jpgs with a medium quality of around 7-8. This drastically reduces file sizes, and it is barely noticeable in the final model. Another trick is to apply an “Unsharp Mask” filter to the image. This increases the crispness when viewed in Enscape later on.
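
    If you are wondering what an unsharp mask actually does: it adds back a scaled difference between the original and a blurred copy, which exaggerates edges. A tiny one-dimensional illustration in plain Python (Photoshop does this in 2D with a Gaussian blur, but the principle is the same):

    ```python
    def box_blur(signal):
        """3-tap box blur with edge clamping."""
        n = len(signal)
        return [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3
                for i in range(n)]

    def unsharp_mask(signal, amount=1.0):
        """sharpened = original + amount * (original - blurred)."""
        blurred = box_blur(signal)
        return [s + amount * (s - b) for s, b in zip(signal, blurred)]

    edge = [10, 10, 10, 200, 200, 200]   # a soft edge in pixel values
    print(unsharp_mask(edge, amount=0.5))
    ```

    Notice how the values dip below 10 just before the edge and overshoot 200 just after it; that over/undershoot along edges is what reads as extra crispness in the render.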

    The good thing about changing the textures in Transmutr's material editor instead of in SketchUp is simply that Transmutr handles the heavy model much better, so it is quicker and easier.

    And now it is just a question of pressing “Transmute”, and your brand new SketchUp site model is complete :-)

    5. Working with the model in SketchUp
    Software used: SketchUp and Enscape

    The generated model can be extremely heavy (above 1 GB is not uncommon for larger sets), and it is therefore almost impossible to work directly in the model. What I do to combat this is utilize the “proxy” functionality of Enscape, which allows you to place a linked model in SketchUp. That way our heavy site model is only represented by a wire box, and it won't hamper performance.

    To achieve this, I open the heavy site model and make a box in the shape of the plot my new building is located on. I then intersect that shape with the site model, thereby generating lines that allow me to select the plot itself. I then make the plot a group, and the rest of the site model a component. It has to be a component, as this allows us to right-click it and select “Save as external model for Enscape”. This converts the component to a wireframe proxy placed in the EXACT location of the original model, thereby aligning perfectly with our plot.

    We now have a light and nimble model to sketch in, and when we launch Enscape, our awesome new Google Maps surroundings are rendered beautifully.

    And you can even (as I have done in the original movie) redo the whole process for a much more zoomed-out model that surrounds the first detailed model, thereby extending the “draw distance”. This model can be lighter and simpler in geometry and textures, as it will never be seen up close.

    I know this was a lot, but I really hope some of it makes sense. The whole process requires a lot of trial and error, but it can produce some truly awesome results. Let me know if you get stuck at any of the steps :-) Maybe I've encountered the same issues and have a workaround, hehe.

    And if you come up with further tips, please share as many as you can! I'm looking forward to seeing some great site models, hehe.