Tag: Python

PySLM Release – Version 0.6

After a long period, PySLM version 0.6 is released. The delay coincides with the intensity of commitments as a new academic at the University of Nottingham over the past year. The release has mainly focused on improvements and enhancements to the underlying codebase rather than the addition of entirely new features. There are several substantial changes to the underlying dependencies that contribute improvements and performance throughout, which PySLM users will benefit from.

Dependency changes

With the release of the ClipperLib2 library, additional Python bindings were exposed and released as a separate library, PyClipr. These were created using the PyBind11 headers and provide the core functionality required to perform offsetting and clipping of path segments and hatch vectors. There are no substantial feature improvements inherited from the change, but a noticeable performance improvement can be observed. Another benefit is that PySLM no longer requires compilation via Cython and is now a full source distribution available via the PyPI repositories.

Another significant dependency change is the use of the manifold mesh Boolean library. This offers a substantial improvement to the mesh manipulation operations that are fundamental to successful support generation for use in metal L-PBF. The library provides robust intersection of watertight meshes and is computationally efficient compared to the previous Boolean library, which was based on code from over a decade ago. This significantly improves the quality of the volumes generated in BlockSupportBase and those derived from it, such as GridTrussSupport, which provide perforations and teeth for metal L-PBF. Additionally, this removes a dependency that required maintenance by myself, and it is more cross-platform than what could previously be offered.

Further incremental changes that will not affect users include the migration to the Shapely 2.0 and Trimesh 4.0 libraries, which required some internal changes to maintain compatibility.

Support Generation Improvements:

The support generation has been improved to be more robust and reliable compared to the initial release in version 0.5.0. Further robustness checks are implemented in the ray-tracing method developed in version 0.5 to correctly identify the support projection height maps, which are used to identify the boundaries of the support volume. Further use has been explored in applied research by TWI – see the open access paper (An Interactive Web-Based Platform for Support Generation and Optimisation for Metal Laser Powder Bed Fusion) by Dimopoulos et al.

By default, all BlockSupports now have smoothed boundaries generated by spline fitting, which was previously only applied to self-intersecting supports. Smooth boundaries significantly improve the quality of the final GridBlockSupport, because the perforated grid truss skin can more smoothly conform to the boundary of the support volume.


Smoothed boundaries generated for all SupportVolumes for a complex part: including both self-intersecting supports and those only connected to the build-plate

As a recommendation to users, care must be taken not to smooth the boundaries too much, or they will not correctly conform to the original geometry, causing the ray-projection algorithm to fail. A recommended starting point for the spline simplification factor is between 5-30, but this is dependent on the relative part scale. Coinciding with the use of the manifold3d library, there is an appreciable improvement in the speed of generating the support volumes.
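The boundary smoothing described above can be sketched using SciPy's periodic spline fitting. This is an illustrative example rather than PySLM's internal implementation; the helper name smoothBoundary, its parameters, and the jagged circular test boundary are all assumptions for demonstration.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smoothBoundary(poly: np.ndarray, smoothFactor: float = 10.0, n: int = 200) -> np.ndarray:
    """Fit a periodic B-spline through a closed 2D boundary (n_pts, 2) and
    resample it at n points. smoothFactor plays the role of the spline
    simplification factor: larger values give smoother boundaries that
    conform less closely to the original geometry."""
    pts = np.vstack([poly, poly[:1]])  # close the loop for a periodic fit
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=smoothFactor, per=True)
    u = np.linspace(0.0, 1.0, n)
    x, y = splev(u, tck)
    return np.column_stack([x, y])

# Hypothetical jagged, roughly circular support boundary
rng = np.random.default_rng(1)
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
radii = 10.0 + 0.2 * rng.random(100)
boundary = np.column_stack([radii * np.cos(theta), radii * np.sin(theta)])

smoothed = smoothBoundary(boundary, smoothFactor=5.0)
```

Increasing smoothFactor relaxes how closely the spline must follow the input points, mirroring the trade-off described above: over-smoothing drifts away from the original geometry.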

Another improvement is the more configurable parameters for grid truss support generation. This includes further enhancement and control over the perforated teeth, across both the upper and lower support volume surfaces. These are fully customisable by a user function, which ensures that a repeating shape conforms in 3D across the surface profiles of the support volume.
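As an illustration of such a user function, the sketch below generates a simple repeating trapezoidal tooth profile from a boundary arc-length coordinate. The function name toothProfile and its parameters are hypothetical and not part of the PySLM API.

```python
import numpy as np

def toothProfile(u: np.ndarray, pitch: float = 2.0, height: float = 1.0,
                 flat: float = 0.4) -> np.ndarray:
    """Hypothetical repeating tooth profile: maps a boundary arc-length
    coordinate u to a height offset, producing a trapezoidal tooth every
    `pitch` units with a flat tip of width `flat`."""
    t = np.mod(u, pitch) / pitch                 # position within one tooth, 0..1
    ramp = (pitch - flat) / (2.0 * pitch)        # fraction of the pitch spent rising
    up = np.clip(t / ramp, 0.0, 1.0)             # rising edge
    down = np.clip((1.0 - t) / ramp, 0.0, 1.0)   # falling edge
    return height * np.minimum(up, down)
```

A function of this shape can be evaluated along the perimeter of the support skin, so the teeth repeat consistently regardless of the boundary length.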

Finally, a significant enhancement is correctly pre-sorting the scan vectors within the sliced support regions to take advantage of the line segments when scanning by the beam source. This significantly improves build productivity by minimising jumps across adjacent segments and ensures that the galvo-mirror movement remains mostly in the same direction.
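A minimal sketch of this kind of pre-sorting is a greedy nearest-endpoint ordering that also flips segments so the nearer endpoint is scanned first. This illustrates the general idea of minimising the jump distance; it is not PySLM's internal sorting, and sortScanVectors is a hypothetical helper.

```python
import numpy as np

def sortScanVectors(segments: np.ndarray) -> np.ndarray:
    """Greedy nearest-neighbour ordering of hatch segments, flipping a
    segment when its end point is nearer than its start point.
    `segments` has shape (n, 2, 2): n segments with 2D start/end points."""
    remaining = list(range(len(segments)))
    ordered = []
    pos = np.zeros(2)  # current beam position (start at the origin)
    while remaining:
        # Distance from the current position to each candidate's endpoints
        starts = segments[remaining, 0, :]
        ends = segments[remaining, 1, :]
        dStart = np.linalg.norm(starts - pos, axis=1)
        dEnd = np.linalg.norm(ends - pos, axis=1)
        i = int(np.argmin(np.minimum(dStart, dEnd)))
        seg = segments[remaining.pop(i)]
        if dEnd[i] < dStart[i]:
            seg = seg[::-1]  # flip so the nearer endpoint is scanned first
        ordered.append(seg)
        pos = seg[1]
    return np.array(ordered)
```

For a set of parallel hatch lines this produces the familiar serpentine order, reducing the jump between consecutive vectors to the hatch spacing.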

A layer showing the order of scanning across all grid truss supports generated for a complex topology optimised part. The jump distance is a total of 2056 mm, with a total scan vector length of 1377 mm.

Documentation Improvements

Further improvements to the inline documentation have been included, alongside improvements and examples that are now provided on readthedocs. These provide basic information and guides for using PySLM, some of which is consolidated from these blog entries to aid new users of the library. Over time these will be further enhanced and amended to support researchers and users wishing to use PySLM in their work.

Conclusions & Change Log

This release has taken a while to arrive, but overall it has received a level of polish and refinement that should help it find use in commercially focused R&D projects and academic research. There are other developments still in the pipeline, but much of the focus was on providing a long-term stable release for users. The full changelog can be found here.

Pyclipr – Python Polygon Clipping and Offsetting Library

Pyclipr is a Python library offering the functionality of the Clipper2 polygon clipping and offsetting library and is built upon pybind11. The underlying Clipper2 library performs intersection, union, difference and XOR Boolean operations on both simple and complex polygons, and also performs offsetting of polygons and inflation of un-connected paths. Unfortunately, the contracted name (pyclipr) is very close to that of the previous library (pyclipper).

Unlike pyclipper, this library is not built using Cython; previously, pyclipper was integrated directly into PySLM with custom modifications to provide ordering of scan vectors. Instead, the full capability of the pybind11 binding library is exploited, which offers great flexibility and control over defining data structures. This library aims to provide convenient access to the modifications and new functionality offered by the Clipper2 library for Python users, especially given its prevalent usage across most open source 3D printing packages (e.g. Cura) and other computer graphics applications.

Summary of key ClipperLib2 Features Relevant to AM and their use in PySLM

  • Improved performance and numerical robustness
  • Simplification of open-path clipping – no requirement to use PolyPath
  • Built-in numerical scaling between floating point and the internal Int64
  • Additional point attributes built-in directly (Z-attribute)

Summary of Implementation

The structure follows the ClipperLib2 API closely in most cases, but some of the naming has been adapted to be more Pythonic and consistent when typing.

An added benefit over the original pyclipper library is that pyclipr can take numpy arrays and native Python lists directly, because these are implicitly converted by pybind11 into the internal vector format. A significant addition is the ability to accept 2D paths with additional ‘Z’ attributes (currently floating point) without using separate functions, taking advantage of Python’s duck typing. Open paths and the optionally defined Z attributes are returned by passing the corresponding arguments to the execute function of the clipping utilities. Below is a summary of the key operations.

Path Offsetting

Path offsetting is accomplished relatively straightforwardly. Paths are added to the ClipperOffset object and the join and end types are set. The delta or offset distance is then provided in the execute function.

import numpy as np
import pyclipr

# Tuple definition of a path
path = [(0.0, 0.), (0, 105.1234), (100, 105.1234), (100, 0), (0, 0)]
path2 = [(0, 0), (0, 50), (100, 50), (100, 0), (0,0)]

# Create an offsetting object
po = pyclipr.ClipperOffset()

# Set the scale factor to convert to internal integer representation
po.scaleFactor = int(1000)

# add the path - ensuring to use Polygon for the endType argument
po.addPath(np.array(path), pyclipr.Miter, pyclipr.Polygon)

# Apply the offsetting operation using a delta.
offsetSquare = po.execute(10.0)

Polygon Intersection

Polygon intersection can be performed by using the Clipper object. This requires adding individual paths (or lists of paths) and designating these as the subject or clip. The execute call is then used and can return multiple outputs depending on the clipping operation, including open paths or Z-attribute information.

# continued 

# Create a clipping object
pc = pyclipr.Clipper()
pc.scaleFactor = int(1000) # Scale factor is the precision offered by the native Clipperlib2 libraries

# Add the paths to the clipping object. Ensure the subject and clip arguments are set to differentiate
# the paths during the Boolean operation. The final argument specifies if the path is
# open.

pc.addPaths(offsetSquare, pyclipr.Subject)
pc.addPath(np.array(path2), pyclipr.Clip)

""" Polygon Clipping """
# Below returns paths of various clipping modes
outIntersect  = pc.execute(pyclipr.Intersection)
outUnion = pc.execute(pyclipr.Union)
outDifference = pc.execute(pyclipr.Difference, pyclipr.EvenOdd) # Polygon ordering can be set in the final argument
outXor = pc.execute(pyclipr.Xor, pyclipr.EvenOdd)

# Using execute2 returns a PolyTree structure that provides hierarchical information
# if the paths are interior or exterior

outPoly = pc.execute2(pyclipr.Intersection, pyclipr.EvenOdd)

Open Path Clipping

Open-path clipping (e.g. of line segments) may be performed natively within pyclipr, but by default this is disabled. Within the execute function, the returnOpenPaths argument should be set to True.


""" Open Path Clipping """
# Pyclipr can be used for clipping open paths.  This remains simple to complete using the Clipper2 library

pc2 = pyclipr.Clipper()
pc2.scaleFactor = int(1e5)

# The open path is added as a subject (note the final argument is set to True to indicate Open Path)
pc2.addPath( ((50,-10),(50,110)), pyclipr.Subject, True)

# The clipping object is usually set to the Polygon
pc2.addPaths(offsetSquare, pyclipr.Clip, False)

""" Test the return types for open path clipping with option enabled"""
# The returnOpenPaths argument is set to True to return the open paths. Note this function only works
# well using the Boolean intersection option

outC = pc2.execute(pyclipr.Intersection, pyclipr.NonZero)
outC2, openPathsC = pc2.execute(pyclipr.Intersection, pyclipr.NonZero, returnOpenPaths=True)

Z-Attributes

The final script of note demonstrates the in-built Z attributes that are embedded within pyclipr. Z attributes (float64) can be attached to each point across a path or set of polygons. During the intersection of segments or edges, these Z attributes are passed to the resultant clipped paths. These are returned as a separate list in the output.

""" Test Open Path Clipping """

pc3 = pyclipr.Clipper()
pc3.scaleFactor = int(1e6)

pc3.addPath(openPathPolyClipper, pyclipr.Clip, False)

# Add the hatch lines (note these are open-paths)
pc3.addPath( ((50.0,-20, 3.0),
              (50.0 ,150,3.0)), pyclipr.Subject, True) # Open path with z-attribute of 3 at each path point

""" Test the return types for open path clipping with different options selected """
hatchClip = pc3.execute(pyclipr.Intersection, pyclipr.EvenOdd, returnOpenPaths=True)

# Clip but return with the associated z-attributes
hatchClipWithZ = pc3.execute(pyclipr.Intersection, pyclipr.EvenOdd, returnOpenPaths=True, returnZ=True)

Usage in PySLM

Pyclipr has been refactored for use in the next release of PySLM (v0.6). This has improved the readability of the code, and in some cases there are performance improvements due to inherent optimisations within ClipperLib2. This also includes the removal of unnecessary transformations and scaling factors performed within Python, which were previously required to convert between paths generated in PySLM (shapely) and pyclipper. In particular, avoiding the use of PolyNodes throughout was especially useful. Modifications have been applied across the entire set of modules, including the hatching and support modules. PySLM also benefits by becoming a pure source distribution, with the clipping and offsetting functions moved into a separate package, so no additional compilation is required during installation.

Combining Mitsuba with Python for Photorealistic Renders

In the past, I have used ParaView for visualising voxel models and finite element meshes. ParaView is geared towards scientific visualisation, with ray-tracing capability for volumes and meshes provided by OSPRay. Unfortunately, despite options to automate the preparation of models using packages such as PyVista, it often required additional manual image editing. I wanted to find an alternative that semi-automates some of the model visualisation and also improves its quality, whilst providing some additional ‘glossy’ pictures for this website.

Mitsuba 3 is a cross-platform open source photorealistic renderer originating from Wenzel Jakob. It sits alongside several other photo-realistic rendering codes that are available and can operate within Python. For most purposes these codes are targeted at academic education and at research use for developing accurate physical light rendering models. Some academic papers have also used it for rendering their 3D models with a better aesthetic quality than mainstream mesh editing programs or visualisation software can offer.

Mitsuba is straightforward to set up and install within a standard Python environment (pip install mitsuba). Mitsuba’s documentation is reasonable to follow in terms of its plugins and API; however, the number of examples and Python excerpts was a little lacking beyond the bundled ‘scene.xml’ files.

For reference, below is an excerpt that can be used to assist with rendering some objects. This involves creating a scene definition using a Python dict object. Be aware that additional ‘realistic’ material BSDF models are available, such as metal, plastic and transparent/translucent materials. The scene is relatively simple, consisting of a circular disc (z=0) to cast shadows upon. Careful selection of light emitters (area lights) should provide good illumination in the scene, otherwise convergence and noise artefacts can become present in the render. Crucially, in this script the mesh provided is rescaled and positioned above the disc plane based on the Z-height.

  • Note: an average of the bounding box could be taken. This is done in order to simplify the alignment and position of the perspective camera in the scene and make it independent of the geometry provided.
import os
import numpy as np
import matplotlib.pyplot as plt
import mitsuba as mi
from mitsuba import ScalarTransform4f as T

import trimesh
from trimesh.transformations import rotation_matrix

mi.set_variant('scalar_rgb')

# Load the mesh file here
myMesh = trimesh.load('bracket.stl')

# Scale the mesh to approximately one unit based on the height
sf = 1.
myMesh.apply_scale(sf/myMesh.extents[2])
myMesh = myMesh.apply_transform(rotation_matrix(np.deg2rad(90), [1.0,0.0,0]))

# Translate the mesh so that it's centroid is at the origin and rests on the ground plane
myMesh.apply_translation([-myMesh.bounds[0,0] - myMesh.extents[0] / 2.0,
                          -myMesh.bounds[0,1] - myMesh.extents[1] / 2.0,
                          -myMesh.bounds[0,2]])

# Fix the mesh normals for the mesh
myMesh.fix_normals()

# Write the mesh to an external file (Wavefront .obj)
with open('mesh.obj', 'w') as f:
    f.write(trimesh.exchange.export.export_obj(myMesh,include_normals=True ))

#Create a sensor that is used for rendering the scene
def load_sensor(r, phi, theta):
    # Apply two rotations to convert from spherical coordinates to world 3D coordinates.
    origin = T.rotate([0, 0, 1], phi).rotate([0, 1, 0], theta) @ mi.ScalarPoint3f([0, 0, r])

    return mi.load_dict({
        'type': 'perspective',
        'fov': 40.0,
        'to_world': T.look_at(
            origin=origin,
            target=[0, 0, myMesh.extents[2]/2],
            up=[0, 0, 1]
        ),
        'sampler': {
            'type': 'independent',
            'sample_count': 16
        },
        'film': {
            'type': 'hdrfilm',
            'width': 1024,
            'height': 768,
            'rfilter': {
                'type': 'tent',
            },
            'pixel_format': 'rgb',
        },
    })

# Scene parameters
relativeLightHeight = 5.0

# A scene dictionary contains the description of the rendering scene.
scene2 = mi.load_dict({
    'type': 'scene',
    # The keys below correspond to object IDs and can be chosen arbitrarily
    'integrator': {'type': 'path'},

    'mesh': {
        'type': 'obj',
        'filename': 'mesh.obj',
        'face_normals': True, # This prevents smoothing of sharp-corners by discarding surface-normals. Useful for engineering CAD.
        'bsdf': {
            'type': 'diffuse',
            'reflectance': {
                'type': 'rgb',
                'value': [0.1, 0.27, 0.86]
            }
        }
    },

    # A general emitter is used for illuminating the entire scene (renders the background white)
    'light': {'type': 'constant', 'radiance': 1.0},
    'areaLight': {
        'type': 'rectangle',
        # The height of the light can be adjusted below
        'to_world': T.translate([0,0.0,myMesh.bounds[1,2] + relativeLightHeight]).scale(1.0).rotate([1,0,0], 5.0),
        'flip_normals': True,
        'emitter': {
            'type': 'area',
            'radiance': {
                'type': 'spectrum',
                'value': 25.0,
            }
        }
    },

    'floor': {
        'type': 'disk',
        'to_world': T.scale(3).translate([0.0,0.0,0.0]),
        'material': {
            'type': 'diffuse',
            'reflectance': {'type': 'rgb', 'value': 0.75},
        }
    }
})

sensor_count = 1

radius = 8
phis = [70.0]
theta = 60.0

sensors = [load_sensor(radius, phi, theta) for phi in phis]

"""
Render the Scene
The render samples are specified in spp
"""
image = mi.render(scene2, sensor=sensors[0], spp=256)

# Write the output
mi.util.write_bitmap("my_first_render.png", image)
mi.util.write_bitmap("my_first_render.exr", image)

# Display the output in an Image
plt.imshow(image** (1.0 / 2.2))
plt.axis('off')

A sample output is produced below of a topology optimised bracket used with the above script. In the scene, a flat disc (z=0) is illuminated by a global ‘constant’ light emitter and an area emitter to provide soft shadows.

3D Printing Topology Optimised Component Rendered using Mitsuba
Render of a topology optimised bracket using the Mitsuba 3 Renderer.

On a laptop, this took approximately 30 s to render a 1024×768 image at 128 samples per pixel (SPP). Renders are relatively quick to generate on modern multi-core computer systems using just their CPUs.

Rendering Meshes with Vertex Colours

Plotting just geometry in a single colour isn’t very interesting, especially when we have results or extra data stored within the mesh. Mitsuba has the option to render colours assigned to each vertex. These can be extracted relatively easily from finite element meshes, or from other functions. This normally would be trivial, but a subtle trick is required: adapting the script to export to the .ply format and separately assigning the vertex colour attribute within the scene structure. This can be done by setting the option within the BSDF diffuse material reflectance property, as documented:

'bsdf': {
    'type': 'diffuse',
    'reflectance': {
        'type': 'mesh_attribute',
        'name': 'vertex_color'
    }
}
Trimesh, to date, exports a 4-component (RGBA) vertex-colour attribute to the .ply format. This 4-component attribute is unfortunately incompatible with Mitsuba, so the data array must be attached separately to the loaded scene using the traverse function. This can be done by accessing the ‘mesh’ object within the declared Mitsuba scene.

Note: Mitsuba uses Dr.Jit for its data representation, but this directly interoperates with numpy arrays. For updating the internal buffer data in the scene, a flat data structure must be supplied to the vertex attribute property.

myMitsubaMesh = scene2.shapes()[2] # Access the .ply mesh object in the loaded scene

# Add a separate 3-component vertex colour attribute with the same number of vertices as the mesh
N = myMesh.vertices.shape[0]

# Note: the attribute buffer requires one value per component, i.e. 3*N entries
myMitsubaMesh.add_attribute('vertex_color', 3, [0] * (3 * N))

# Use Mitsuba traverse function to modify data in the scene graph/structure
meshParams = mi.traverse(myMitsubaMesh)

# Generate a colour mapping based solely on the z-coordinate of the mesh
vertColor = trimesh.visual.color.interpolate(myMesh.vertices[:,2], 'Paired') [:,:3] / 255.0 

# Update the vertex colour data buffer/array in mitsuba associated with the .ply mesh
meshParams["vertex_color"] = vertColor.ravel()
meshParams.update()

The full example excerpt is presented below

import os
import numpy as np
import mitsuba as mi
import matplotlib.pyplot as plt

import drjit as dr
mi.set_variant('scalar_rgb')

from mitsuba import ScalarTransform4f as T

import trimesh
from trimesh.transformations import rotation_matrix

# Load the mesh file here
myMesh = trimesh.load('bracket.stl')

# Scale the mesh to approximately one unit based on the height
sf = 1.
myMesh.apply_scale(sf/myMesh.extents[2])
myMesh = myMesh.apply_transform(rotation_matrix(np.deg2rad(90), [1.0,0.0,0]))

# Translate the mesh so that it's centroid is at the origin and rests on the ground plane
myMesh.apply_translation([-myMesh.bounds[0,0] - myMesh.extents[0] / 2.0,
                          -myMesh.bounds[0,1] - myMesh.extents[1] / 2.0,
                          -myMesh.bounds[0,2]])

# Fix the mesh normals for the mesh
myMesh.fix_normals()

# Write the mesh to an external file (Wavefront .obj)
with open('mesh.ply', 'wb') as f:
    f.write(trimesh.exchange.export.export_ply(myMesh))

#Create a sensor that is used for rendering the scene
def load_sensor(r, phi, theta):
    # Apply two rotations to convert from spherical coordinates to world 3D coordinates.
    origin = T.rotate([0, 0, 1], phi).rotate([0, 1, 0], theta) @ mi.ScalarPoint3f([0, 0, r])

    return mi.load_dict({
        'type': 'perspective',
        'fov': 40.0,
        'to_world': T.look_at(
            origin=origin,
            target=[0, 0, myMesh.extents[2]/2],
            up=[0, 0, 1]
        ),
        'sampler': {
            'type': 'independent',
            'sample_count': 16
        },
        'film': {
            'type': 'hdrfilm',
            'width': 1024,
            'height': 768,
            'rfilter': {
                'type': 'tent',
            },
            'pixel_format': 'rgb',
        },
    })

# Scene parameters
relativeLightHeight = 5.0

# A scene dictionary contains the description of the rendering scene.
scene2 = mi.load_dict({
    'type': 'scene',
    # The keys below correspond to object IDs and can be chosen arbitrarily
    'integrator': {'type': 'path'},

    'mesh': {
        'type': 'ply',
        'filename': 'mesh.ply',
        'face_normals': True,
        'bsdf': {
            'type': 'diffuse',
            'reflectance': {
                'type': 'mesh_attribute',
                'name': 'vertex_color'
            }
        }
    },

    # A general emitter is used for illuminating the entire scene (renders the background white)
    'light': {'type': 'constant', 'radiance': 1.0},
    'areaLight': {
        'type': 'rectangle',
        # The height of the light can be adjusted below
        'to_world': T.translate([0,0.0,myMesh.bounds[1,2] + relativeLightHeight]).scale(1.0).rotate([1,0,0], 5),
        'flip_normals': True,
        'emitter': {
            'type': 'area',
            'radiance': {
                'type': 'spectrum',
                'value': 25.0,
            }
        }
    },

    'floor': {
        'type': 'disk',
        'to_world': T.scale(3).translate([0.0,0.0,0.0]),
        'material': {
            'type': 'diffuse',
            'reflectance': {'type': 'rgb', 'value': 0.75},
        }
    }
})


myMitsubaMesh = scene2.shapes()[2] # Access the .ply mesh object in the loaded scene
N = myMesh.vertices.shape[0]

# Note: the attribute buffer requires one value per component, i.e. 3*N entries
myMitsubaMesh.add_attribute('vertex_color', 3, [0] * (3 * N))

meshParams = mi.traverse(myMitsubaMesh)
vertColor = trimesh.visual.color.interpolate(myMesh.vertices[:,2], 'Paired') [:,:3] / 255.0 #paired / plasma
meshParams["vertex_color"] = vertColor.ravel()
meshParams.update()

sensor_count = 1

radius = 8
phis = [70.0]
theta = 60.0

sensors = [load_sensor(radius, phi, theta) for phi in phis]

"""
Render the Scene
The render samples are specified in spp
"""
image = mi.render(scene2, sensor=sensors[0], spp=256)

# Write the output
mi.util.write_bitmap("my_first_render.png", image)
mi.util.write_bitmap("my_first_render.exr", image)

plt.imshow(image** (1.0 / 2.2))
plt.axis('off')

The output of this is shown below, which uses a stratified colour map to render the z-position. On closer inspection it can be observed that the Z position is interpolated across each triangle, so the precise isolevel boundaries are not exact. For most purposes, using a high-resolution mesh, this would not cause concern.

Rendering of a topology optimised bracket with Z component isolevels generated and attached as vertex_color attribute

Rendering Volumetric Textures

Mitsuba 3 provides the opportunity both to render volumes and, interestingly, to apply volumes as interpolated volumetric surface textures. In this situation, a 3-component (RGB) voxel grid can be generated, and the values that intersect the mesh surface are used as the colour information.

Render of a topology optimised bracket with a volume field attached as a surface texture

Mitsuba uses its own simple .vol format for storing voxel grid information, although a convenient Python function does not exist for writing it. The definition of the current file format is presented in the table below. This is relatively simple to generate and export using Python’s built-in file handling functions. More efficient approaches would simply write the numpy array directly to the file by correctly ordering the data in the array, with the channel data (last axis) moving fastest across the flattened array.

Position [Bytes]   Content
1-3                ASCII bytes ’V’, ’O’, and ’L’
4                  File format version number (currently 3)
5-8                Encoding identifier (32-bit integer). 1 = Float32
9-12               Number of cells along the X axis (32-bit integer)
13-16              Number of cells along the Y axis (32-bit integer)
17-20              Number of cells along the Z axis (32-bit integer)
21-24              Number of channels (32-bit integer; supported values: 1, 3 or 6)
25-48              Axis-aligned bounding box of the data stored in single precision (order: xmin, ymin, zmin, xmax, ymax, zmax)
49-*               Binary data of the volume stored in the specified encoding. The data are ordered so that the following C-style indexing operation makes sense after the file has been loaded into memory: data[((zpos*yres + ypos)*xres + xpos)*channels + chan], where (xpos, ypos, zpos, chan) denotes the lookup location.

Structure of the .vol file format used by Mitsuba (version 3.0)

Below is an excerpt of a function that can export a numpy 4D (m × n × p × 3) array with three colour channels to this file format.


import struct

import numpy as np

def writeVol(filename, vol: np.ndarray, bbox):

    def int8(val) -> bytes:
        return struct.pack('b', val)

    def int32(val) -> bytes:
        return struct.pack('i', val)

    def float32(val) -> bytes:
        return struct.pack('f', val)

    with open(filename, 'wb') as f:
        f.write('VOL'.encode('ascii'))
        f.write(int8(3))      # File format version
        f.write(int32(1))     # Encoding: 1 = Float32
        f.write(int32(vol.shape[0]))  # X grid size
        f.write(int32(vol.shape[1]))  # Y grid size
        f.write(int32(vol.shape[2]))  # Z grid size

        f.write(int32(vol.shape[3]))  # Number of channels

        # Write the bounding box of the grid
        # Values [x0, y0, z0, x1, y1, z1]
        f.write(bbox.astype(np.float32).tobytes())

        for k in range(vol.shape[2]):
            for j in range(vol.shape[1]):
                for i in range(vol.shape[0]):
                    for m in range(vol.shape[3]):
                        f.write(float32(vol[i, j, k, m]))

An excerpt is presented below for generating a TPMS gyroid U field. A volume is generated with an equal number of unit cells across each dimension, covering the bounding box of the mesh. The values of the U field are transformed using a matplotlib colourmap via trimesh.visual.color.interpolate as before, and these are exported in the Mitsuba volume format using the inline function writeVol(). Later in the scene definition, this is transformed to align with the original mesh by using the ‘to_world‘ attribute.

import os
import numpy as np
import mitsuba as mi
import drjit as dr
import struct
mi.set_variant('scalar_rgb')

from mitsuba import ScalarTransform4f as T

import matplotlib.pyplot as plt

import trimesh
from trimesh.transformations import rotation_matrix

def writeVol(filename, vol: np.ndarray, bbox):

    def int8(val) -> bytes:
        return struct.pack('b', val)

    def int32(val) -> bytes:
        return struct.pack('i', val)

    def float32(val) -> bytes:
        return struct.pack('f', val)

    with open(filename, 'wb') as f:
        f.write('VOL'.encode('ascii'))
        f.write(int8(3))      # File format version
        f.write(int32(1))     # Encoding: 1 = Float32
        f.write(int32(vol.shape[0]))  # X grid size
        f.write(int32(vol.shape[1]))  # Y grid size
        f.write(int32(vol.shape[2]))  # Z grid size

        f.write(int32(vol.shape[3]))  # Number of channels

        # Write the bounding box of the grid
        # Values [x0, y0, z0, x1, y1, z1]
        print('bounding box', bbox)
        f.write(bbox.astype(np.float32).tobytes())

        for k in range(vol.shape[2]):
            for j in range(vol.shape[1]):
                for i in range(vol.shape[0]):
                    for m in range(vol.shape[3]):
                        f.write(float32(vol[i, j, k, m]))


# Load the mesh file here
myMesh = trimesh.load('bracket.stl')
myMesh = myMesh.apply_transform(rotation_matrix(np.deg2rad(90), [1.0,0.0,0]))
#myMesh.apply_scale([2.5,1,3.0])

# Resolution of the Lattice Grid (note if this is set too low, Mitsuba has render issues)...
res = 0.3499

# Create a gyroid field
Lx = myMesh.extents[0]
Ly = myMesh.extents[1]
Lz = myMesh.extents[2]

""" Number of lattice unit cells"""
cellLength = 5.0

kx = Lx/cellLength
ky = Ly/cellLength
kz = Lz/cellLength

""" Create the computational grid - note np operates with k(z) numerical indexing unlike the default matlab equivalent"""
x,y,z = np.meshgrid(np.arange(0.0, Lx, res),
                    np.arange(0.0, Ly, res),
                    np.arange(0.0, Lz, res))


""" 
Calculating the Gyroid TPMS
"""
Tg = 0.7

U = ( np.cos(kx*2*np.pi*(x/Lx))*np.sin(ky*2*np.pi*(y/Ly))
    + np.cos(ky*2*np.pi*(y/Ly))*np.sin(kz*2*np.pi*(z/Lz))
    + np.cos(kz*2*np.pi*(z/Lz))*np.sin(kx*2*np.pi*(x/Lx)) )**2 - Tg**2

vol = trimesh.visual.color.interpolate(U, 'plasma').reshape(list(U.shape) + [4])[:,:,:,:3] / 255.0

# Delete the temporary variables
del x,y,z, U

# Scale the mesh to approximately one unit based on the height
sf = 2.5
myMesh.apply_scale(sf/myMesh.extents[2])

# Translate the mesh so that it's centroid is at the origin and rests on the ground plane
myMesh.apply_translation([-myMesh.bounds[0,0] - myMesh.extents[0] / 2.0,
                          -myMesh.bounds[0,1] - myMesh.extents[1] / 2.0,
                          -myMesh.bounds[0,2]])


# Fix the mesh normals for the mesh
myMesh.fix_normals()

# The volume grid uses a normalised unit bounding box
bounds = np.array([0, 0, 0, 1, 1, 1])

# Write out the volume
writeVol('out.vol', vol, bounds)


# Write the mesh to an external file (Stanford .ply)
with open('mesh.ply', 'wb') as f:
    f.write(trimesh.exchange.export.export_ply(myMesh))

# Create a sensor that is used for rendering the scene
def load_sensor(r, phi, theta):
    # Apply two rotations to convert from spherical coordinates to world 3D coordinates.
    origin = T.rotate([0, 0, 1], phi).rotate([0, 1, 0], theta) @ mi.ScalarPoint3f([0, 0, r])

    return mi.load_dict({
        'type': 'perspective',
        'fov': 40.0,
        'to_world': T.look_at(
            origin=origin,
            target=[0, 0, myMesh.extents[2]/2],
            up=[0, 0, 1]
        ),
        'sampler': {
            'type': 'independent',
            'sample_count': 30,

        },
        'film': {
            'type': 'hdrfilm',
            'width': 1024,
            'height': 768,

            'pixel_format': 'rgb',
        },
    })

# Scene parameters
relativeLightHeight = 5.0

# A scene dictionary contains the description of the rendering scene.
scene2 = mi.load_dict({
    'type': 'scene',
    # The keys below correspond to object IDs and can be chosen arbitrarily
    'integrator': {'type': 'path'},


    'mesh': {
        'type': 'ply',
        'filename': 'mesh.ply',
        'face_normals': True,
        'bsdf': {
            'type': 'diffuse',
            'reflectance': {
                'type': 'volume',
                'volume': {
                    'to_world': T.translate([-myMesh.extents[0]/2.0,-myMesh.extents[1]/2.0,0.0]).scale([myMesh.extents[0], myMesh.extents[1], myMesh.extents[2]]),
                    'type': 'gridvolume',
                    'filename': 'out.vol',
                }
            }
        }
    },

    # A general emitter is used for illuminating the entire scene (renders the background white)
    'light': {'type': 'constant', 'radiance': 1.0},
    'areaLight': {
        'type': 'rectangle',
        # The height of the light can be adjusted below
        'to_world': T.translate([0,0.0,myMesh.bounds[1,2] + relativeLightHeight]).scale(1.0).rotate([1,0,0], 5),
        'flip_normals': True,
        'emitter': {
            'type': 'area',
            'radiance': {
                'type': 'spectrum',
                'value': 25.0,
            }
        }
    },

    'floor': {
        'type': 'disk',
        'to_world': T.scale(3).translate([0.0,0.0,0.0]),
        'material': {
            'type': 'diffuse',
            'reflectance': {'type': 'rgb', 'value': 0.75},
        }
    }
})


sensor_count = 1

radius = 8
phis = [70.0]
theta = 60.0

sensors = [load_sensor(radius, phi, theta) for phi in phis]

"""
Render the Scene
The render samples are specified in spp
"""
image = mi.render(scene2, sensor=sensors[0], spp=256)

# Write the output
mi.util.write_bitmap("my_first_render.png", image)
mi.util.write_bitmap("my_first_render.exr", image)

plt.imshow(image** (1.0 / 2.2))
plt.axis('off')

Conclusions

Hopefully, these excerpts and explanations can assist those who wish to render nice images of their models directly within Python, without the cumbersome additional steps required by external rendering programs.

Multi-threading Slicing & Hatching in PySLM

In PySLM, the slicing and hatching problem is inherently parallelisable, simply because the geometry can be discretised into discrete layers that, in most situations, behave independently. However, the underlying algorithms for slicing, offsetting the boundaries, and clipping the hatch vectors are serial (single-threaded). In order to significantly reduce the processing time, multi-threaded processing is very desirable.

Multi-threading in Python

Unfortunately, Python, like most scripting or interpreted languages of the past, is not inherently designed to be multi-threaded. Perhaps this may change in the future, but other languages may fill this computational void (Rust, Julia, Go). Python intentionally limits multi-threaded use in scripts with a construct known as the GIL – the Global Interpreter Lock. The situation is shared by other common scripting environments such as Matlab (ParPool) and Javascript (Worker), where the parallel computing capability of multi-core CPUs cannot be exploited in a straightforward manner.

To some extent, in special distributions such as Anaconda, core processing libraries such as numpy, scipy and the scikit libraries are internally multi-threaded and support vectorisation via native CPU extensions. More computational mathematical operations and algorithms can, to some extent, be optimised to run in parallel automatically using numba or numexpr; however, this cannot cover broader multi-functional algorithms, such as those used in PySLM.

Python has the threading module for multi-threaded processing; however, for CPU-bound processing it has very limited use. This is because Python's Global Interpreter Lock (GIL) only allows one thread to execute Python bytecode at any instance. Threading is mainly useful for asynchronous IO (network or filesystem) operations, which can be processed in the background.

Use of Multiprocessing Library in PySLM

The other option is to use the multiprocessing library built into the core Python distribution. Without going into too much of the formalities, multiprocessing spawns multiple Python processes and assigns batches of work to them. I found the following programming article a useful reference to the pitfalls of using the library.

In this implementation, the Pool and Manager modules are used to process the geometry more optimally. The most important step is to initialise the multiprocessing library with the ‘spawn’ start method, which stops random interruptions during the operation, as discussed in the previous article.

from multiprocessing import Manager
from multiprocessing.pool import Pool
from multiprocessing import set_start_method

set_start_method("spawn")

The Manager.dict acts as a ‘proxy’ object used to more efficiently store data shared between each process that is launched. Without using a Manager, a copy of the objects passed is made for each process launched. This is important for the geometry or Part object, which would become expensive to copy if it contained a lattice or a set of complex surfaces.

d = Manager().dict()
d['part'] = solidPart
d['layerThickness'] = layerThickness # [mm]

A Pool object is used to create a set number of processes via the parameter processes=8 (typically one per CPU core). This fixed pool is re-used across batches throughout the entire computation, which removes the cost of copying and initialising many additional process instances. A series of z slice levels is created, representing the layer z-id. These are then merged into a list of tuple pairs with the Manager dictionary and stored in processList.

Pool.map is used to perform the slice function (calculateLayer) and collect all computed layers following the computation.

p = Pool(processes=8)

numLayers = int(solidPart.boundingBox[5] / layerThickness)
z = np.arange(0, numLayers).tolist()

processList = list(zip([d] * len(z), z))

# Run the pool across the layer batch and collect the computed layers
layers = p.map(calculateLayer, processList)

The slicing function is fairly straightforward: it unpacks the arguments and performs the slicing and hatching operations. Note: each layer currently needs to initialise its own instance of a Hatcher class, because this is not shared across the processes. This carries a small cost, but means each layer can be processed entirely independently; in this example, the change across layers is the hatchAngle. The layer position is calculated using the layer index (zid) and layerThickness.

def calculateLayer(input):
    d = input[0]
    zid = input[1]

    layerThickness = d['layerThickness']
    solidPart = d['part']

    # Create a Hatcher object for performing any hatching operations
    myHatcher = hatching.Hatcher()

    # Set the base hatching parameters which are generated within Hatcher.
    # The hatch angle is typically rotated globally by 66.7 degrees per layer
    layerAngleOffset = 66.7
    myHatcher.hatchAngle = 10 + zid * layerAngleOffset
    myHatcher.volumeOffsetHatch = 0.08
    myHatcher.spotCompensation = 0.06
    myHatcher.numInnerContours = 2
    myHatcher.numOuterContours = 1
    myHatcher.hatchSortMethod = hatching.AlternateSort()

    # Slice the boundary
    geomSlice = solidPart.getVectorSlice(zid * layerThickness)

    # Hatch the boundary using myHatcher
    layer = myHatcher.hatch(geomSlice)

    # The layer height is set in integer increments of microns to ensure no rounding error during manufacturing
    layer.z = int(zid * layerThickness * 1000)
    layer.layerId = int(zid)

    return layer

The final step to use multiprocessing in Python is the inclusion of the Python __name__ guard, i.e.:

if __name__ == '__main__':
   main()

The above is unfortunate because it makes debugging slightly more tedious in editors, but it is the price paid for the extra performance.

Performance Improvement

The performance improvement using the multiprocessing library is shown in the table below for a modest 4-core laptop (my budget doesn’t stretch that far).

PySLM: A Matplotlib figure showing hatching and slicing across multiple layers for a cubic geometry using the Python multiprocessing library.
Matplotlib figure showing every subsequent 10 layers of hatching for the geometry, shown at a reduced scale.

This was performed on the examples/inversePyramid.stl geometry with an overall bounding box of 90 × 90 × 60 mm, a hatch distance h_d = 0.08 mm, and the layer thickness set at 40 μm.

Number of Processes            Run Time [s]
Baseline (simple for loop)     121
1                              108
2                              65.4
4                              42
6                              37.1
8                              31.8

Approximate timings on a 4-core Intel i7 CPU using the multiprocessing library.

Given these are approximate timings, the performance improvement is nearly linear for this simple example. Interestingly, choosing more processes than physical cores does squeeze out some extra performance – perhaps due to Intel’s Hyper-Threading. Below shows that the CPU is fully utilised.

PySLM: Multi-threading options

Conclusions:

This post shows how one can adapt existing routines to multi-process slicing and hatching with PySLM. In the future, it would be desirable to explore a more integrated class structure for hooking functions onto. Another area of interest to explore is the use of GPU computing to parallelise some of the fundamental algorithms.

Example

The example demonstrating this can be run from example_3d_multithread.py in the GitHub repository.

Slicing and Hatching for Selective Laser Melting (L-PBF)

Much of the slicing and hatching process is taken for granted in commercial software, mostly offered by the OEMs of these systems, and is rarely discussed in academic research. In practice, we already observe the implications of direct control over laser parameters and scan strategy on the quality of the bulk material – reduction in defects, minimising distortion due to residual stress, and the surface quality of parts manufactured using these processes. Additionally, it can have a profound impact on metallic phase generation and micro-structural texture, driven via physics-informed models [1], allow grading of the bulk properties, and offer precise control over manufacturing intricate features such as thin-wall or lattice structures [2].

This post hopefully highlights, for those unfamiliar, some of the basic processes encountered in the generation of machine build files used in AM systems, and gives a better understanding of the operation behind PySLM. I have tried my best to generalise this as much as possible, but I imagine there are subtleties I have not come across.

This post provides some reference on how hatches or scan vectors are generated for use in AM processes such as selective laser melting (SLM), which uses a point energy source to raster across a medium. Some people prefer to classify the family of processes more generally using the ASTM F42 committee standards 52900 and 52911 – Powder Bed Fusion (PBF). I won’t go into the basics of the manufacturing processes such as EBM, SLM, SLA and BJF, as there are many excellent articles that already explain these in far greater detail.

Machine Build Files

AM processes require a digital representation to manufacture an object. These tend to be computed offline – separate from the 3D printer – using specialist or dedicated pre-processing software. I expect this will become a closed-loop system in the future, such that the manufacturing preparation is integrated directly into the machine.

For some AM process families, the control operations may be exceedingly granular – i.e. G-code. G-code formats state specific instructions or functional commands for the 3D printer to execute sequentially. These tend to fit with deposition methods such as filament extrusion, direct-ink-writing (robo-casting) and directed energy deposition (DED) methods. Typically, these are deposition-based machine systems, which require coordination of physical motion in conjunction with some mechanised actuation to deposit or fuse material.

Machine Build File Formats for L-PBF

For exposure-based (laser, electron-beam) AM processes, commercial systems use a compact notation solely for representing the scan path the exposure source will traverse. The formats are often binary to aid their compactness.

To summarise, within these build files, an intermediate representation consists of index-based referenceable parameters for the build. The remainder consists of a series of layers that contain geometric entities (points, vectors) used to control the exposure for the border or contour, or to raster or infill the interior region. For L-PBF processes, these digital files, commonly referred to as “machine build files”, come in various flavours dependent on the machine manufacturer:

  • Renishaw .mtt,
  • SLM Solution .slm,
  • DMG Mori Realizer .rea
  • EOS .sli
  • Aconity .cli+ or .ilt wrapper

Some file formats, such as the Open Beam Path format, can specify bezier curves [3]. Another recently proposed open-source format, created by RWTH Aachen in 2022, is the OpenVector Format, based on Google’s Protobuf schema. The format aims to offer a specification universally compatible across a swathe of PBF processes and to supplement existing commercial formats with additional build-process meta-data (e.g. build and platform temperature, dosing) and more detailed definitions to accommodate further advancements in the process, such as multi-beam builds.

Build-File Formats

Higher-level representations describe the distribution of material(s) defining the geometry – this could be bitmap slices or even a 3D model. Processes such as Jetting, BJF, High Speed Sintering and DLP vat-polymerisation currently make this a reality. With time, polymer and metal processes will evolve to become 2D: diode area melting [4] or more areal scanning based on holographic additive manufacturing methods, such as those proposed by Seurat AM [5] based off research at LLNL, and recently at the University of Cambridge [6]. In the future, we can already observe the exciting prospect of new processes such as computed axial lithography [7] that will provide near-instantaneous volumetric additive manufacturing.

For the imminent future, however, single and multi-point exposure systems will remain with us as the currently available processes. PySLM uses an intermediate representation – specifying a set of points and lines to control the exposure of energy into a layer.
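As a sketch of such a layer-based intermediate representation (hypothetical class and field names for illustration, not PySLM's actual API), the build file is essentially a parameter table plus a list of layers of geometry that reference it by index:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BuildStyle:
    # Index-based referenceable laser parameters shared by a group of vectors
    bid: int
    laserPower: float   # [W]
    laserSpeed: float   # [mm/s]

@dataclass
class HatchGeometry:
    bid: int                            # references a BuildStyle by index
    coords: List[Tuple[float, float]]   # alternating start/end points [mm]

@dataclass
class Layer:
    z: int                              # layer height [microns]
    geometry: List[HatchGeometry] = field(default_factory=list)

# A minimal 'build file': one parameter set, one layer, one hatch vector
styles = [BuildStyle(bid=1, laserPower=200.0, laserSpeed=800.0)]
layer = Layer(z=40)
layer.geometry.append(HatchGeometry(bid=1, coords=[(0.0, 0.0), (10.0, 0.0)]))
print(len(layer.geometry))  # 1
```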

The Slicing and Hatching Process in L-PBF

Nearly every conventional 3D printing process begins with a 3D representation of a solid volume or geometry. 2D planar slices or layers are extracted from a 3D mesh or a B-Rep surface in CAD by taking cross-sections through the geometry. Each slice layer consists of a set of boundaries and holes describing the cross-section of the object. Note: non-planar deposition does exist for DED/filament processes, such as Curved Layer Fused Deposition Modeling [ref] and a spherical slicing technique [8].
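As a minimal illustration of the cross-sectioning step (a sketch, not PySLM's implementation), each mesh triangle whose edges cross the slice plane contributes one 2D line segment; chaining these segments into closed boundary polygons is a further step:

```python
import numpy as np

def sliceTrianglesAtZ(tris, z):
    """Intersect a triangle soup (n, 3, 3 array) with the plane z = const,
    returning the 2D line segments that form the slice boundary."""
    segments = []
    for tri in tris:
        pts = []
        for i in range(3):
            p0, p1 = tri[i], tri[(i + 1) % 3]
            d0, d1 = p0[2] - z, p1[2] - z
            if d0 * d1 < 0.0:  # the edge crosses the slice plane
                t = d0 / (d0 - d1)  # linear interpolation parameter
                pts.append((p0 + t * (p1 - p0))[:2])
        if len(pts) == 2:
            segments.append(np.array(pts))
    return segments

# One triangle spanning z = 0.5 yields a single segment from (0.5, 0) to (0, 0.5)
tri = np.array([[[0., 0., 0.], [1., 0., 1.], [0., 1., 1.]]])
print(sliceTrianglesAtZ(tri, 0.5))
```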

For consolidating material, an exposure beam must raster across the surface medium (metal or polymer powder, or a photo-polymer resin) depending on the process. Currently this is a single or multiple point which moves at a velocity v with a power P across the surface. The designated exposure or energy deposited into the medium is principally a function of these two parameters, depending on the type of laser:

  • (Quasi-)Continuous Wave: the laser remains switched on (typically modulated using a form of PWM) across the entire length of the scan vector
  • Pulsed Mode (Q-Switched): the laser is pulsed at set distances and exposure times across the scan vector

Numerous experiments tend to result in parametric power/speed maps of the achieved bulk part density, which usually yield optimal processing windows that produce stable and consistent melt-tracks [9][10]. More recently, process maps based on a non-dimensional parameter such as the normalised enthalpy approach more reliably assist in selecting a suitable process window [11].
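As a first-order illustration of how these parameters combine (the simple volumetric energy density, rather than the normalised enthalpy approach cited above; the parameter values are purely illustrative):

```python
# Volumetric energy density: E = P / (v * h_d * t)  [J/mm^3]
P = 200.0    # laser power [W]
v = 800.0    # scan speed [mm/s]
h_d = 0.08   # hatch distance [mm]
t = 0.04     # layer thickness [mm] (40 um)

E = P / (v * h_d * t)
print(E)  # 78.125 J/mm^3
```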

Illustration of a scan vector commonly used in Laser Powder-Bed Fusion (SLM)

However, the complexity of the process extends further and is related to many additional variables dependent on the process, such as layer thickness, absorption coefficient (powder and material), exposure beam profile, etc. Additionally, the cumulative energy deposited spatially over a period of time must consider the overlap of scan vectors within an area.

Scan Vector Generation

Each boundary polygon is initially offset to account for the radius of the beam exposure, which is termed a ‘spot compensation factor’. Some processes such as SLS or BJF account for global part shrinkage volumetrically throughout the part by applying a global scale factor or a deformed mesh to compensate for non-uniform shrinkage across the part.
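A minimal sketch of this boundary offsetting step using Shapely (a library PySLM already depends on; PySLM itself performs offsetting via PyClipr, and the spot compensation value here is illustrative):

```python
from shapely.geometry import Polygon

# A 20 x 20 mm square boundary taken from a slice
boundary = Polygon([(0, 0), (20, 0), (20, 20), (0, 20)])

# Offset the boundary inwards by the spot compensation factor
spotCompensation = 0.06  # [mm] - illustrative, roughly the beam spot radius
outerContour = boundary.buffer(-spotCompensation, join_style=2)  # mitred corners

print(boundary.area)      # 400.0
print(outerContour.area)  # (20 - 2 * 0.06)^2 ~ 395.21
```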

The composition of laser scan vectors used in a slice or layer for L-PBF or Selective Laser Melting. The boundary is offset multiple times, with the interior or core filled with hatch vectors.
The typical composition of a layer used for scanning in exposure based processes. This consists of outer and inner contours, with the core interior filled with hatches.

The first offset produces the outer-contour, which is visible on the exterior of the part. This contour will have a different set of laser parameters in order to optimise and improve the surface roughness of the part obtained. A further offset is applied to generate a set of inner-contours before hatching begins.

Depending on the orientation of the surface (e.g. up-skin or down-skin), the boundary and interior region may be intersected to fine-tune the laser parameters to provide better surface texture or surface roughness – typically varying between Ra = 3–13 μm [12] – primarily determined by the surface angle and a combination of process variables including:

  • the powder feedstock (bulk material, powder size distribution)
  • laser parameters
  • layer thickness (pre-dominantly fixed or constant for most AM processes)

Overhang regions and surfaces with low overhang angles tend to be susceptible to high surface roughness. Roller re-coater L-PBF systems – available only on 3D Systems or AddUp systems – tend to offer far superior surface quality on low-inclined or overhang regions. Additionally, the progressive advancement and maturity of laser parameter optimisation, including approaches computationally driven using the part geometry [13], are able to further enhance the quality and potentially eliminate the need for support structures. Depending on the machine platform, these regions are identified by sampling across two to three layers. Overhang regions obviously require support geometry, which is an entirely different topic discussed in this post.
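A simple sketch of identifying overhang face candidates from the face normals alone (PySLM's support module performs a more complete analysis; the function name and threshold here are illustrative):

```python
import numpy as np

def overhangFaces(triangles, overhangAngle=45.0):
    """Flag triangles whose normal lies within overhangAngle degrees of the
    downward build direction (-z). triangles has shape (n, 3, 3)."""
    v0, v1, v2 = triangles[:, 0], triangles[:, 1], triangles[:, 2]
    normals = np.cross(v1 - v0, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    # Angle between the face normal and the downward vertical (-z)
    angles = np.degrees(np.arccos(np.clip(-normals[:, 2], -1.0, 1.0)))
    return angles < overhangAngle

tris = np.array([
    [[0., 0., 0.], [0., 1., 0.], [1., 0., 0.]],  # downward-facing face
    [[0., 0., 0.], [1., 0., 0.], [1., 0., 1.]],  # vertical wall
])
print(overhangFaces(tris))  # the first face is an overhang candidate, the wall is not
```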

Laser parameters in SLM (L-PBF) can be optimised based on the adjacent surface regions. Special regions include the up-skin, down-skin and overhang regions.
Laser parameters can be optimised based on the adjacent surface regions. Special regions include the up-skin, down-skin and overhang regions, needed to improve the surface roughness and reduce porosity in these regions.

Following the generation of the contours, the inner core region requires filling with hatches. Hatches are a series of parallel scan vectors placed adjacent to each other at a set hatch distance, h_d. This parameter is optimised according to the material processed, but is essentially related to the spot radius of the exposure point r_s in order to reduce inter-track and inter-layer porosity. Across each layer these tend to be placed at a particular orientation θ_h, which is then incrementally rotated globally for subsequent layers, typically by 66.7°. This rotation aims to smooth out the build process in order to minimise inter-track porosity, generate homogeneous material, and, in the case of SLM, mitigate the effects of anisotropic residual stress generation.
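A minimal sketch of hatch generation (not PySLM's Hatcher implementation, which clips via PyClipr): parallel lines at spacing h_d are generated at the hatch angle and clipped against the slice boundary, here using Shapely for a compact illustration:

```python
import numpy as np
from shapely.geometry import LineString, Polygon

boundary = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])  # a 10 x 10 mm slice
hd = 0.08                  # hatch distance [mm]
theta = np.deg2rad(66.7)   # hatch angle for this layer

# Hatch direction and its perpendicular
d = np.array([np.cos(theta), np.sin(theta)])
n = np.array([-d[1], d[0]])
centre = np.array([5.0, 5.0])

hatches = []
half = 20.0  # long enough to span the boundary before clipping
for o in np.arange(-10.0, 10.0, hd):
    line = LineString([centre + o * n - half * d, centre + o * n + half * d])
    seg = line.intersection(boundary)  # clip the infinite hatch to the slice
    if not seg.is_empty:
        hatches.append(seg)

# The total hatch length approximates area / hatch distance
totalLength = sum(seg.length for seg in hatches)
print(totalLength)  # ~ 100 / 0.08 = 1250 mm
```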

The composition and terminology (hatch distance, hatch spacing, hatch angle) used in L-PBF. The Layer Geometry objects used to scan across a Layer in Selective Laser Melting (L-PBF). The various parameters such as the hatch distance and hatch angle are shown.
A general composition of the various LayerGeometry objects used to scan across a Layer. The various parameters such as the hatch distance, spacing and hatch angle are shown.

The distribution (position, length, rotation) of these hatch vectors is arranged using a laser scan strategy. The most common include the simple alternating hatch, stripe, and island or checkerboard scan strategies.
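As an example of a simple ordering applied by a scan strategy (a sketch of the alternating idea only, not PySLM's hatching.AlternateSort implementation; the array layout is illustrative), every other hatch vector is reversed so the laser scans back and forth:

```python
import numpy as np

def alternateSort(coords):
    """coords has shape (n, 2, 2): n hatch vectors, each a start and end point.
    Reverse the direction of every other hatch so adjacent scans alternate."""
    out = coords.copy()
    out[1::2] = coords[1::2, ::-1]  # swap start/end of every odd-indexed hatch
    return out

hatches = np.array([[[0., 0.], [10., 0.]],
                    [[0., 1.], [10., 1.]],
                    [[0., 2.], [10., 2.]]])
print(alternateSort(hatches)[1])  # the second hatch now runs from (10, 1) to (0, 1)
```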

Each set or group of scan vectors is stored together in a LayerGeometry, depending on the type (either a set of point exposures, contour or hatch vectors). These LayerGeometry groups usually share a set of exposure parameters: power, laser scan speed (or point exposure time and point distance for a pulsed laser), and focus position.

Some systems offer a greater degree of control and can set individual power levels across the scan vectors. Others can fine-tune the acceleration and modulate the power along the scan vectors to support techniques known as ‘skywriting’. For instance, in SLM it has been proposed that careful tuning of the laser parameters towards the end of the scan vector, i.e. during turning, can reduce porosity by preventing premature collapse of the keyhole phenomenon [14]. In theory, PySLM could be extended to provide greater control over the electro-optic systems used in the process if so desired.

Hopefully, this provides enough background for those who are interested and engaged in working with developing scan strategies and material development using PySLM.

References

1 Plotkowski, A., Ferguson, J., Stump, B., Halsey, W., Paquit, V., Joslin, C., Babu, S. S., Marquez Rossy, A., Kirka, M. M., & Dehoff, R. R. (2021). A stochastic scan strategy for grain structure control in complex geometries using electron beam powder bed fusion. Additive Manufacturing, 46. https://doi.org/10.1016/j.addma.2021.102092
2 Ghouse, S., Babu, S., van Arkel, R. J., Nai, K., Hooper, P. A., & Jeffers, J. R. T. (2017). The influence of laser parameters and scanning strategies on the mechanical properties of a stochastic porous material. Materials and Design, 131, 498–508. https://doi.org/10.1016/j.matdes.2017.06.041
3 Open Beam Path – Freemelt, https://gitlab.com/freemelt/openmelt/obplib-python
4 Zavala Arredondo, Miguel Angel (2017) Diode Area Melting Use of High Power Diode Lasers in Additive Manufacturing of Metallic Components. PhD thesis, University of Sheffield.
5 Seurat AM. https://www.seuratech.com/
6 https://www.theengineer.co.uk/holographic-additive-manufacturing-lasers/
7 Kelly, B., Bhattacharya, I., Shusteff, M., Panas, R. M., Taylor, H. K., & Spadaccini, C. M. (2017). Computed Axial Lithography (CAL): Toward Single Step 3D Printing of Arbitrary Geometries. Retrieved from http://arxiv.org/abs/1705.05893
8 Yigit, I. E., & Lazoglu, I. (2020). Spherical slicing method and its application on robotic additive manufacturing. Progress in Additive Manufacturing, 5(4), 387–394. https://doi.org/10.1007/s40964-020-00135-5
9 Yadroitsev, I., & Smurov, I. (2010). Selective laser melting technology: From the single laser melted track stability to 3D parts of complex shape. Physics Procedia, 5(Part 2), 551–560. https://doi.org/10.1016/j.phpro.2010.08.083
10 Maamoun, A. H., Xue, Y. F., Elbestawi, M. A., & Veldhuis, S. C. (2018). Effect of selective laser melting process parameters on the quality of al alloy parts: Powder characterization, density, surface roughness, and dimensional accuracy. Materials, 11(12). https://doi.org/10.3390/ma11122343
11 Ferro, P., Meneghello, R., Savio, G., & Berto, F. (2020). A modified volumetric energy density–based approach for porosity assessment in additive manufacturing process design. International Journal of Advanced Manufacturing Technology, 110(7–8), 1911–1921. https://doi.org/10.1007/s00170-020-05949-9
12 Ni, C., Shi, Y., & Liu, J. (2019). Effects of inclination angle on surface roughness and corrosion properties of selective laser melted 316L stainless steel. Materials Research Express, 6(3). https://doi.org/10.1088/2053-1591/aaf2d3
13 Velo3D Sapphire Printer – SupportFree Technology. https://blog.velo3d.com/blog/supportfree-what-does-it-mean-why-is-it-important
14 Martin, A. A., Calta, N. P., Khairallah, S. A., Wang, J., Depond, P. J., Fong, A. Y., … Matthews, M. J. (2019). Dynamics of pore formation during laser powder bed fusion additive manufacturing. Nature Communications, 10(1), 1–10. https://doi.org/10.1038/s41467-019-10009-2