Hello,
I have several scans captured with a mobile-phone photogrammetry workflow, and I'm looking for the best method to merge them in CloudCompare, combining the very best of each scan into one model.
Should I sample points from each scan before merging? How can I filter outliers? Is there a way to preserve the color information? And is there a way to perform a merge that starts with the most complete scan, using the other scans to fill in only the data that is missing?
I'm also interested in using filtering tools to isolate the areas that are missing from my most complete scan. Or maybe there's a way to apply weighting to the merge so that the data from one scan is prioritized over the others? Can scalar fields be useful in this kind of merge workflow, to understand the deviation between models? And lastly, would computing the octree help with this merging?
Best,
Damani
Merging Multiple Scans - Best Practice
Attachments: Image 001.jpg (876.92 KiB)
Re: Merging Multiple Scans - Best Practice
Ah, that's a huge topic ;)
Here it seems you have meshes... CloudCompare is not super strong with meshes, especially when it comes to merging them. But meshes are good for registration!
So I would:
1) keep one mesh as is, and use it as the reference to register the others (the widest one, I guess)
2) sample a lot of points on the others (Edit > Mesh > Sample Points). Keep the normals and the colors when doing so.
3) manually move the clouds so that they are roughly aligned with the reference mesh (see https://www.cloudcompare.org/doc/wiki/i ... sformation)
4) Use the ICP tool to register each cloud with the reference mesh. You can first segment the clouds so as to remove the obvious outliers. But you can (and should) also set a rather low 'Final overlap' parameter, so as to indicate to CloudCompare that not all parts of the clouds have correspondences in the reference mesh. Something like 70%, looking at your screenshot.
5) Once all the clouds are registered, you can now sample points on the reference mesh
6) Merge all the clouds
7) Segment the merged cloud if there are still parts that you don't want
8) Finally, use the PoissonRecon tool to re-mesh this cloud (to fill the holes, etc.). Make sure to output the 'Density', so as to control the resulting mesh extents and hole filling (see https://www.cloudcompare.org/doc/wiki/i ... n_(plugin))
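For intuition about what the 'Final overlap' parameter in step 4 does, here is a minimal point-to-point ICP sketch in Python/numpy that keeps only the closest 70% of correspondences at each iteration. This is only a conceptual illustration (the function name and details are mine); CloudCompare's actual ICP implementation is more sophisticated:

```python
import numpy as np

def icp_trimmed(src, ref, overlap=0.7, iters=30):
    """Minimal point-to-point ICP that keeps only the closest `overlap`
    fraction of correspondences (a rough analogue of 'Final overlap')."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # Brute-force nearest neighbour in the reference cloud
        d2 = ((moved[:, None, :] - ref[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        dist = d2[np.arange(len(src)), nn]
        # Trim: keep only the best `overlap` fraction of pairs
        keep = np.argsort(dist)[: int(overlap * len(src))]
        a, b = moved[keep], ref[nn[keep]]
        # Best rigid transform for the kept pairs (Kabsch / SVD)
        ca, cb = a.mean(0), b.mean(0)
        U, _, Vt = np.linalg.svd((a - ca).T @ (b - cb))
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ S @ U.T
        R, t = dR @ R, dR @ t + (cb - dR @ ca)
    return R, t
```

With the overlap set to 70%, the worst 30% of point pairings (typically the parts of a scan that have no counterpart in the reference) are simply ignored when estimating the transform, which is why setting the value too high drags the registration toward non-overlapping areas.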
And I realize that maybe the main drawback of this method is that the colors on the resulting mesh will now be per-vertex colors, not textures, which might not be supported by the other software tools you use!
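On the 'sample a lot of points' advice: sampling draws points randomly over the triangle surfaces (area-weighted), so the sampled count is independent of, and can far exceed, the vertex count. A minimal sketch of the idea, assuming numpy (the function name is mine; a real Sample Points tool would also interpolate normals and colors with the same barycentric weights):

```python
import numpy as np

def sample_mesh(vertices, faces, n_points, rng=None):
    """Area-weighted uniform sampling on a triangle mesh: the number of
    sampled points has nothing to do with the vertex count."""
    rng = np.random.default_rng(rng)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Triangle areas -> probability of landing on each face
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    face_idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Random barycentric coordinates, uniform over each triangle
    r1, r2 = rng.random(n_points), rng.random(n_points)
    s = np.sqrt(r1)
    u, v = 1 - s, s * (1 - r2)
    w = 1 - u - v
    return (u[:, None] * v0[face_idx] + v[:, None] * v1[face_idx]
            + w[:, None] * v2[face_idx])
```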
Daniel, CloudCompare admin
Re: Merging Multiple Scans - Best Practice
This is awesome. Thank you so much. I have a few follow-up questions.
1) Why should the "widest" mesh be the one I use as reference? My instinct was to start with the most complete mesh, or something along those lines. When you say widest, do you mean along the x axis?
2) For rough alignment I usually line the scans up with the basic transform tools and then use the point-to-point matching method. Is that OK?
3) The 'Final overlap' parameter really confuses me. Am I supposed to estimate how much the scans deviate, so that CloudCompare knows how much of them should actually overlap? What counts as a low or high final overlap value, and how should I use this parameter to get the best result?
4) Is a merge technique like this able to improve accuracy by combining data, or is the result more of an average?
5) You made a distinction between meshes and point clouds, but what's the difference? I was under the impression that they were merely expressions of the same information, and that meshes could always easily be converted to point clouds and vice versa. But I guess the issue is that normal information is lost in doing this? Then again, I have calculated normals before. Are these calculated normals different from the normals I get from processing images into a mesh? I'm also curious about the difference between calculating normals per vertex and per face.
6) Last question, about sampling. When converting a mesh to points, I assumed it was just a matter of disconnecting the faces to get the vertices; I thought the vertices and the vertex count were what made up the point cloud. But sampling implies something else, correct? Can the number of points sampled from a mesh exceed the vertex count? And can I use the vertex count as a guide for how many points to sample, since you said to "sample a lot of points"?
Re: Merging Multiple Scans - Best Practice
Sorry, one more clarifying question. The workflow you suggest involves aligning sampled point clouds from each mesh to a reference mesh, correct? I just want to make sure: am I aligning meshes to the one I designate as the "reference mesh", or am I aligning point clouds to the single designated reference mesh? And the reference should always be marked as fixed too, right?
Re: Merging Multiple Scans - Best Practice
I was thinking about registering point clouds to a single reference mesh (which indeed should not move).
But you could also register the meshes directly (still with one single 'reference' mesh). In this case, however, CC will only consider the vertices of the meshes for the registration, so you just need to make sure there are a lot of them (and with a good density).
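If you go the mesh-to-mesh route, a quick proxy for "good density" is the mean nearest-neighbour spacing of the vertices. A brute-force sketch, assuming numpy (a hypothetical helper, fine for up to a few thousand vertices):

```python
import numpy as np

def mean_vertex_spacing(vertices):
    """Mean nearest-neighbour distance of a point set: a quick density
    proxy for deciding whether mesh vertices are dense enough for ICP."""
    d2 = ((vertices[:, None, :] - vertices[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)   # ignore self-distances
    return float(np.sqrt(d2.min(axis=1)).mean())
```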
Daniel, CloudCompare admin