
3D Capture | Substance 3D Sampler

3D Capture

Getting Started

What is photogrammetry?

Sampler uses photogrammetry to transform images into a textured mesh. Photogrammetry is the science of making measurements from images: it extracts information from photographs to create 3D models and textures. The process involves taking multiple photographs of an object from different angles, then processing the images to extract the shape and location of features in them.

The goal is to match corresponding features between the images to establish the relative positions of the camera for each image. From the matched features, a 3D model of the object is reconstructed. The final step is to project the textures onto the 3D model.

Hardware requirements

3D Capture is available on Windows and on macOS Monterey or Ventura.

Windows/Linux

We recommend:

  • GPU with 8 GB of VRAM
  • 16 GB of RAM; ideally 32 GB or 64 GB
  • Minimum of 10 GB of disk space

Linux configuration

Mac

  • Apple Silicon devices are strongly recommended (M1 or M2)
  • Intel-based Macs require an AMD GPU with at least 4 GB of VRAM and ray tracing support

Start a new 3D capture

Import your dataset

Dataset Preparation

Drag and drop your photos, or click to browse with your OS file explorer.

Note:

Dataset recommendations

We recommend a dataset of at least 20 images for the 3D Capture to run smoothly.

For iPhone users, the .HEIC format is not yet supported. You can use Lightroom to convert your photos to .jpeg.

On macOS, you can use Quick Actions to convert your images.

For camera RAW formats, we recommend using Lightroom to convert your photos to .jpeg.

Note:

Dataset limitations

Windows: Your dataset has to be smaller than 6G pixels (6,000,000,000 pixels) in total, which corresponds to 500 photos of 12M pixels each.
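The pixel budget above is simple arithmetic: the sum of width × height over all photos must stay under the limit. A minimal sketch (the photo sizes below are hypothetical examples, not anything read from Sampler):

```python
# Check a dataset against the 6-gigapixel limit on Windows.
MAX_TOTAL_PIXELS = 6_000_000_000  # 6G pixels

def dataset_pixel_count(photo_sizes):
    """Sum width * height over all photos, given as (width, height) tuples."""
    return sum(w * h for w, h in photo_sizes)

# 500 photos of 12M pixels (4000 x 3000) sit exactly at the limit:
photos = [(4000, 3000)] * 500
total = dataset_pixel_count(photos)
print(total, total <= MAX_TOTAL_PIXELS)  # 6000000000 True
```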

Once the photos are imported, you can click on a photo to see it in full.

Photogroup definition:

Your dataset can be split into several photogroups. Photogroups group photos by shared properties (sensor size, focal length, rotation, …).
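Conceptually, a photogroup is just a bucket of photos keyed by shared camera properties. A toy sketch of that grouping (the property names and file names are illustrative, not Sampler's API):

```python
from collections import defaultdict

def split_into_photogroups(photos):
    """Group photos that share sensor size, focal length and rotation."""
    groups = defaultdict(list)
    for photo in photos:
        key = (photo["sensor_size"], photo["focal_length"], photo["rotation"])
        groups[key].append(photo["name"])
    return dict(groups)

photos = [
    {"name": "a.jpg", "sensor_size": "APS-C", "focal_length": 50, "rotation": 0},
    {"name": "b.jpg", "sensor_size": "APS-C", "focal_length": 50, "rotation": 0},
    {"name": "c.jpg", "sensor_size": "APS-C", "focal_length": 35, "rotation": 0},
]
groups = split_into_photogroups(photos)
print(len(groups))  # 2 photogroups: one per distinct property combination
```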

Masking

Using masks has many advantages: it lets the photogrammetry process detect features and reconstruct only the non-masked areas.

Masks also make it possible to move the object during the capture, since they hide the background in every photo.

To use masks, select a photogroup and open the Mask tab on the right.

You can import masks by following this naming convention:

  • Image: [image_name].file_extension
  • Mask: [image_name]_mask.file_extension

You can also automatically generate masks for your photos using our AI-powered technology.
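The naming convention above is easy to check in a pipeline script before importing. A minimal sketch (the file names and helper functions are hypothetical, not part of Sampler):

```python
from pathlib import Path

def mask_name_for(image_path):
    """photo_01.jpg -> photo_01_mask.jpg (same extension, '_mask' suffix)."""
    p = Path(image_path)
    return p.with_name(f"{p.stem}_mask{p.suffix}")

def pair_masks(image_names, all_names):
    """Return {image: mask} for every image whose mask file exists."""
    files = set(all_names)
    return {img: str(mask_name_for(img))
            for img in image_names
            if str(mask_name_for(img)) in files}

files = ["photo_01.jpg", "photo_01_mask.jpg", "photo_02.jpg"]
print(pair_masks(["photo_01.jpg", "photo_02.jpg"], files))
# {'photo_01.jpg': 'photo_01_mask.jpg'}
```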

Alignment

The alignment step processes all images to extract and match corresponding features, establishing the relative position of the camera for each image.

Settings

Precision

There are two options, low and high.

  • Low: advised for most datasets.
  • High: increases the number of extracted points; advised to match more photos when the subject has insufficient texture or the photos are small. This setting makes processing slower, so we recommend trying the Low option first.

Photo ordering

There are two options, default and sequence.

The matching pairs may be selected using different strategies:

  • Default: selection is based on several criteria, among which similarity between images.
  • Sequence: only matches neighboring images within a given distance; advised for processing a single sequence of photos when the Default mode has failed. The photo insertion order must correspond to the sequence order.

Point cloud and camera positions

The result of the alignment step is a sparse point cloud of all detected features, plus the position of every camera.

If the image outline is green, the image was correctly aligned.

If the image outline is orange, the image was not correctly aligned and no feature was extracted from this image.

You can click on an image in the left panel to frame the point cloud on the associated camera.

You can click on a camera to frame the point cloud on it.

Reconstruction

The reconstruction step generates a 3D model of the object from the matched features and projects the textures onto it.

Settings

Geometry details: this option specifies the precision level used on the input photos, which results in more or less detail in the computed 3D model.

Region of interest

Before generating the 3D model, you can set the region to reconstruct around the point cloud with the bounding box.

You can translate, scale and rotate the box on all three axes.

By pressing Shift while scaling, you scale the box from its center.
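In one dimension, the two scaling behaviours can be sketched like this (a toy illustration; the default-drag behaviour shown is an assumption, not Sampler's documented math):

```python
def scale_from_side(lo, hi, factor):
    """Keep one side fixed and scale the extent (assumed default drag)."""
    return lo, lo + (hi - lo) * factor

def scale_from_center(lo, hi, factor):
    """Shift-drag: grow or shrink symmetrically around the center."""
    center = (lo + hi) / 2
    half = (hi - lo) / 2 * factor
    return center - half, center + half

print(scale_from_side(0, 10, 2))    # (0, 20)
print(scale_from_center(0, 10, 2))  # (-5.0, 15.0)
```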

Post-processing

Post-processing helps you adapt and optimize your mesh and textures to your needs and intended use.

The reconstruction can generate a mesh with millions of polygons and up to 16K textures, which is often not optimized for rendering, real-time or AR experiences.

You will need to post-process the result to reduce the number of polygons without losing detail.

The post-processing step chains 4 steps automatically:

  • Decimation: reduces the number of polygons to the face count you define
  • UV unwrap: automatically defines seams, then unwraps and packs the UVs of the decimated mesh
  • Reprojection: reprojects the color texture of the photogrammetry mesh onto the decimated mesh
  • Baking: bakes normal, height and AO details from the photogrammetry mesh onto the decimated mesh, transferring the mesh details lost during decimation into texture maps
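The chain above can be sketched as a toy pipeline. The mesh model and step functions are illustrative stand-ins for what Sampler does internally, not a scripting API:

```python
def decimate(mesh, target_faces):
    # Reduce the polygon count to the requested face budget.
    return {**mesh, "faces": min(mesh["faces"], target_faces)}

def uv_unwrap(mesh):
    # Define seams, then unwrap and pack UVs of the decimated mesh.
    return {**mesh, "uvs": True}

def reproject_color(source, mesh):
    # Reproject the color texture of the source photogrammetry mesh.
    return {**mesh, "maps": ["basecolor"]}

def bake_details(source, mesh):
    # Bake normal, height and AO from the source mesh into texture maps,
    # recovering detail lost during decimation.
    return {**mesh, "maps": mesh["maps"] + ["normal", "height", "ao"]}

def post_process(source, target_faces):
    mesh = decimate(source, target_faces)
    mesh = uv_unwrap(mesh)
    mesh = reproject_color(source, mesh)
    return bake_details(source, mesh)

raw = {"faces": 5_000_000, "uvs": False, "maps": []}
result = post_process(raw, target_faces=100_000)
print(result["faces"], result["maps"])
# 100000 ['basecolor', 'normal', 'height', 'ao']
```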

Version

To easily iterate and test different post-processing options, you can create several versions and select the one to add to your project.

To help you compare them, you can visualize the mesh in different modes:

Solid mode

Wireframe mode

UV Grid mode

Non-destructive workflow

Once a version is added to the project, a layer stack is created with several layers.

The first layer is the reconstruction result.

The second layer (if you applied any post-processing) is the mesh post-processing layer, with the values defined in the 3D Capture window. You can still edit these parameters at this step if you want to use other settings.

The third layer is a mesh transform layer to scale, translate and rotate your 3D object.

At this stage, you can add the filters you usually apply to materials to edit the textures on the 3D object.

Export

In the export window, you can define the mesh format and material settings (the same settings as when exporting a material).

Tutorials

FAQ

What are the best capture conditions for photogrammetry?

For photogrammetry to produce accurate results, it's important to follow certain best practices when capturing images.

  1. Lighting: Photogrammetry works best when images are captured in good lighting conditions. Avoid taking images in low light or high contrast lighting, as these can make it difficult to accurately extract features from the images. The best lighting conditions for photogrammetry are overcast days or shaded areas.
  2. Overlap: To ensure that there is enough information in the images to accurately extract features, it's important to capture images with significant overlap. A general rule of thumb is to have at least 60% overlap between images, both horizontally and vertically.
  3. Camera: Use a high-resolution camera and a lens with good image quality and sharpness. Avoid fisheye or wide-angle lenses, as they cause geometric distortion that can affect the final results.
  4. Orientation: When taking images, try to keep the camera level and perpendicular to the ground. Images taken at an angle can make it difficult to accurately extract features and may lead to distorted results.
  5. Camera calibration: Make sure the camera is calibrated prior to taking images. This process corrects lens distortion and other errors that can affect the accuracy of the final results.
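The 60% overlap rule of thumb (point 2 above) can be turned into a rough shot count for an orbit around an object: each new shot advances by the non-overlapping part of the field of view. A minimal sketch, where the FOV value is an assumption for illustration:

```python
import math

def shots_for_orbit(fov_degrees, overlap=0.6):
    """Approximate photos needed for a full 360-degree orbit, advancing
    by the fraction of the field of view not shared with the last shot."""
    step = fov_degrees * (1 - overlap)  # new ground covered per shot
    return math.ceil(360 / step)

# A 50mm full-frame lens has roughly a 40-degree horizontal FOV (assumed):
print(shots_for_orbit(40))  # 23 shots at 60% overlap
```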

How does it work for specular and reflective objects?

Photogrammetry can be challenging when working with highly specular or reflective objects, as the bright reflections can make it difficult to extract features from the images. Here are a few strategies that can be used to overcome these challenges:

  1. Lighting: When capturing images of highly reflective objects, try to avoid direct sunlight and instead capture images in overcast or shaded conditions. This can help to reduce the intensity of reflections and make it easier to extract features from the images.
  2. Matte finish: Applying a matte finish to the reflective surfaces can help to reduce the intensity of reflections and make it easier to extract features from the images.
  3. Capture multiple images: Capturing multiple images of the same object from different angles can help to reduce the impact of reflections and increase the chances of being able to extract features from at least some of the images.
  4. Image editing: In post-processing, image editing software such as Lightroom can be used to reduce reflections and enhance features in the images, for example by increasing contrast or applying color correction.

Keep in mind that reflective objects may need more elaborate setup and treatments, and it may not be possible to get perfect results in all cases. It's a good idea to experiment with different techniques.

What is the recommendation between a mobile phone and DSLR camera for photogrammetry?

Both mobile phones and DSLR cameras can be used for photogrammetry, but they have different strengths and weaknesses. Here are a few things to consider when deciding which type of camera to use:

  1. Resolution: DSLR cameras typically have much higher resolution than mobile phones, which can lead to more detailed and accurate results. However, with recent advancements in mobile phone cameras, some high-end models have resolution and image quality comparable to lower-end DSLR cameras.
  2. Camera calibration: Photogrammetry relies on accurate camera calibration, which is typically more difficult to achieve with mobile phone cameras than with DSLR cameras. Some mobile phone cameras have built-in calibration parameters that you can use, but they may not be as accurate as a proper calibration of a DSLR camera.
  3. Battery life and storage: Mobile phone cameras have a more limited battery life compared to DSLR cameras, so plan on charging the phone or carrying extra batteries while working. Additionally, make sure the phone has enough storage capacity to handle large image files.
  4. Cost: DSLR cameras are generally more expensive than mobile phones, and they also require additional accessories, such as tripods and external flash units.
  5. Portability: A mobile phone is more portable than a DSLR camera, and it's more likely that you'll have your phone with you when you come across an interesting object or scene that you want to capture for photogrammetry.

In summary, it really depends on your specific needs and the characteristics of the project. For lower resolution projects, a mobile phone may be sufficient. However, if high accuracy and high resolution is needed, a DSLR camera may be a better choice. Additionally, if you are planning to take photos on a regular basis or for a long-term project, investing in a DSLR camera may be a more cost-effective solution in the long run.

How should I calibrate my camera to limit the blur on my object?

Camera calibration is an important step in the photogrammetry process that helps to correct for lens distortion and other errors that can affect the accuracy of the final results. Here are a few steps you can take to calibrate your camera and limit blur on your object:

  1. Use a tripod: To keep the camera stable and reduce blur, it's important to use a tripod when capturing images for photogrammetry. This will ensure that the camera is in the same position for each shot and will help to minimize camera movement.
  2. Use a remote shutter release: To further reduce camera movement, you can use a remote shutter release or self-timer function on the camera to take the images. This will help to minimize any camera shake caused by pressing the shutter button.
  3. Adjust the shutter speed: To reduce blur caused by camera movement, you should use a fast shutter speed. A general rule of thumb is to use a shutter speed that is at least as fast as the reciprocal of the focal length of the lens. For example, if you're using a 50mm lens, you should use a shutter speed of at least 1/50th of a second.
  4. Use a high ISO: In low light conditions, you may need to use a higher ISO to maintain a fast shutter speed and reduce blur. However, keep in mind that a high ISO can also increase noise in the image, which can affect the accuracy of the final results.
  5. Use a flash: In some situations, using a flash can help to reduce blur caused by low light. Keep in mind that flash can also cause reflections and other issues in some cases, so be sure to experiment with flash and non-flash shots to see which works best for your specific application.

Remember that calibration is an iterative process and might require multiple attempts to achieve good results.
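The reciprocal rule from point 3 above is a one-line calculation. A minimal sketch:

```python
from fractions import Fraction

def min_shutter_speed(focal_length_mm):
    """Reciprocal rule: shutter speed no slower than 1/focal_length seconds."""
    return Fraction(1, focal_length_mm)

print(min_shutter_speed(50))   # 1/50 (of a second, for a 50mm lens)
print(min_shutter_speed(200))  # 1/200 (for a 200mm lens)
```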

Can I move the object during the capture for photogrammetry?

In most cases, it is not recommended to move the object during the capture for photogrammetry. The process of photogrammetry relies on the object being in a fixed position for each image, as the software uses the relative positions of features in the images to reconstruct a 3D model of the object.

If the object is moved during the capture, it will appear in a different position in each image, making it difficult for the software to match corresponding features between images. This can lead to inaccuracies in the final 3D model and can also make the image matching step difficult or impossible.

However, there are some cases where moving the object can be beneficial. For example, in the case of small objects, where it is difficult to take images with significant overlap, it's possible to use a turntable and rotate the object to ensure that all features are captured from multiple angles.
