PySLM 0.4

PySLM 0.4 is a more frequent update to the library with few new additions, as it mainly resolves issues and fixes several bugs, incorporating suggestions and feedback from several users along the way. I wish to thank them personally for their support, as it helps improve the PySLM library for everyone.

The release primarily includes a new analysis Iterator feature that provides a set of classes for iterating across the layer geometry definitions. This builds upon previous work in the earlier version of the C++ libSLM library, but has now been implemented in Python for reference. It includes iterating across individual LayerGeometry regions via LayerGeometryIterator and across scan vectors via ScanVectorIterator. Particularly useful for simulation and numerical studies is the ScanIterator class, which aims to replicate the spatial position and laser parameters of the exposure source over time, given the laser input. This is especially useful in numerical simulations for predicting the thermal and thermo-mechanical behaviour during the process. The exposure points generated by the ScanIterator class are based on a timestep, which can be controlled directly via the analysis or exported to a .csv file for use by an external program. The infrastructure behind the iterators is designed for efficient generation and computation, using a tree structure for caching data.

Another nice feature of note is the correct visualisation of scan paths based on their true scanning order across both hatch and contour scan geometry, using the new visualisation function plotSequential, as described in detail in a previous post. This includes a nicer visualisation of jump vectors that was previously missing.

Another example for generating parametric studies is included to support research on the development of process parameters for new materials on machine systems, in conjunction with the DOE tools available in Python.

Once again, the support module has been put on hold whilst investigating and researching a more efficient approach for isolating support regions and generating their assigned support geometry regions.

The release also marks a small achievement: an automatic build system is now in place using GitHub Actions for compiling and publishing the Python packages to the PyPI repository, for most versions of Python (3.5-3.9) on both Windows and Mac OS X.

Finally, the complete release log may be found on GitHub at pyslm/CHANGELOG.MD.

PySLM Scan Path Iterator

Historical Background

An upcoming key feature in PySLM is the set of Iterator classes, primarily useful for simulation studies, such as predicting the thermo-mechanical behaviour of scan strategies. Much of this builds upon ideas from former work during my PhD investigating the generation of residual stress in selective laser melting. In that study, MSC Marc, a commercial finite element analysis package, was used to predict the residual stresses generated during the process. The discretised position and laser parameters of the exposure source were controlled by a combination of Fortran user subroutines and libSLM, the former C++ library.

Prior to running, a configuration file was passed to the simulation specifying the SLM machine build file used for simulating the scan paths. libSLM parsed the compatible build file and then, based on the current time and increment, interpolated the position of the exposure point and the laser parameters during firing. Beyond this main functionality, additional housekeeping was required for running the simulation, including passing information between the programs, and additional tools to efficiently seek the state and position of the laser at an arbitrary point in time. This was necessary for restarting simulations on an HPC and for the adaptive time-stepping required to maintain numerical stability. For efficient seeking across the layers, each layer geometry structure was cached within a tree that could be parsed on demand if necessary. Much of this infrastructure was excessive, although the implementation had to be written in C++ or Fortran in order to be integrated with the commercial solver.

Although it is difficult to perceive the full benefit of having a Python version of the same functionality, there are some instances and analysis codes where this could be of benefit for modelling this and other processes.

Implementation of PySLM Iterators

The implementation builds upon the existing design from the original libSLM library. For all the Iterator classes, similar to most of the other tools in the pyslm.analysis module, the list of Layers and the Models containing the laser parameters should be passed:

# Iterates across layer geometries
layerGeomIter = Iterator(models, layerList) 

# Iterates across individual scan vectors - currently only ContourGeometry/HatchGeometry
scanVectorIter = ScanVectorIterator(models, layerList) 

# Generates a scan exposure point iterator
scanIter = ScanIterator(models, layerList) 

The first stage is building a time cache tree across each LayerGeometry. In practice, the cache tree structure is not necessary if the scan iterator simply increments forward in time; however, having a cache structure enables non-linear movement of the iterator across the entire build. It also provides fast random-access lookup to seek to a specific Layer or LayerGeometry for use in simulations or analyses.

This structure is formed by iteratively measuring the total time taken to scan an individual LayerGeometry group, which is stored in a tree node (TimeNode). The time taken to scan each LayerGeometry TimeNode is accumulated to provide the total scan time across the Layer. Each TimeNode can be assigned child and parent nodes using the attributes (TimeNode.parent and TimeNode.children) in order to navigate across the entire tree. Each TimeNode also provides a key-value pair (id, value) to store a reference to the LayerGeometry or Layer for simplified access.
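
For illustration, a minimal sketch of what such a node might look like is shown below; the attribute names follow the description above, but the actual TimeNode class in pyslm.analysis may differ in detail:

class TimeNode:
    """ An illustrative sketch of a time-cache tree node (not the library's implementation). """

    def __init__(self, parent=None, id=0, value=None):
        self.parent = parent    # Parent TimeNode (e.g. the Layer node or the tree root)
        self.children = []      # Child TimeNodes (e.g. the LayerGeometry nodes)
        self.id = id            # Key used to reference the stored Layer or LayerGeometry
        self.value = value      # The stored Layer or LayerGeometry object
        self.time = 0.0         # Time taken to scan this node's geometry

    def getTotalTime(self) -> float:
        """ Total scan time: this node's own time plus that of all its children. """
        return self.time + sum(child.getTotalTime() for child in self.children)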

The cache tree is generated and stored in the base class, Iterator: it is built in the private method (Iterator._generateCache) and stored in the attribute Iterator.tree.

def _generateCache(self) -> None:

    self._tree = TimeNode()

    for layerId, layer in enumerate(self.layers):
        # Create the layer node
        layerNode = TimeNode(self._tree, id=layerId, value=layer)
        self._tree.children.append(layerNode)

        for layerGeomId, layerGeom in enumerate(layer.geometry):
            geomNode = TimeNode(layerNode, id=layerGeomId, value=layerGeom)
            geomNode.time = getLayerGeometryTime(layerGeom, self._models)
            layerNode.children.append(geomNode)

    self._cacheValid = True
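
As an illustration of how this cache tree enables random access in time, the following sketch (a simplified example, not the library's actual seek implementation) walks the layer nodes and accumulates their scan times to locate the Layer active at an arbitrary time:

def findLayerAtTime(tree, time):
    """ Illustrative sketch: locate the Layer being scanned at a given time. """
    offset = 0.0
    for layerNode in tree.children:
        layerTime = layerNode.getTotalTime()  # Cumulative time of the layer's geometry nodes
        if offset + layerTime > time:
            # Return the Layer and the local time offset within it
            return layerNode.value, time - offset
        offset += layerTime

    return None, 0.0  # The time lies beyond the end of the build (dwell times ignored here)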

The Iterator class has many useful facilities, such as build-time estimation and seeking access to the Layer or LayerGeometry at an arbitrary point in time. The class stores additional information such as the layer dwellTime – this can be re-implemented in a derived class. For implementing the iterator behaviour used across all the dependent classes, it also stores the current time and reference pointers to the current Layer and LayerGeometry. Essentially, the Iterator class can be used to iterate across each LayerGeometry within a build and acts as a foundation for the other classes. Each of these Iterator classes builds upon the magic methods available in Python: __iter__ and __next__. The __iter__ method simply sets up the object and re-initialises the Iterator's attributes. Once the cache tree is generated internally, there is no penalty in creating a new iterator. Below is an excerpt taken from the ScanVectorIterator:

def __iter__(self):
    self._time = 0.0
    self._layerGeomTime = 0.0
    self._layerInc = 0
    self._layerGeomInc = 0

    return self

def __next__(self):
    if self._layerScanVecIt < len(self._layerScanVectors):
        scanVector = self._layerScanVectors[self._layerScanVecIt]
        self._layerScanVecIt += 1
        return scanVector
    else:
        # New layer
        if self._layerIt < len(self._layers):
            layerVectors = self.getLayerVectors(self._layers[self._layerIt])
            self._layerScanVectors = self.reshapeVectors(layerVectors)
            self._layerScanVecIt = 0
            self._layerIt += 1
            return self.__next__()
        else:
            raise StopIteration

The Iterator and ScanVectorIterator classes do not require much further attention, since only the pointer to the geometry is incremented. The ScanIterator class, however, is more useful for simulation and will be discussed further.

Scan Iterator Class

The ScanIterator class is used for incrementally advancing the exposure source across each scan vector. This is particularly important for visualising or simulating the AM process. The time increment is based on a chosen, adjustable timestep, and the laser parameters across each scan vector (i.e. the effective scan velocity) are obtained from the assigned BuildStyle.

The exposure point is linearly interpolated across each scan vector based on the current time within the LayerGeometry, depending on its type. To identify the position, the cumulative distance along the scan vectors is computed, and the current timeOffset for the layer geometry is used to estimate the distance covered by the exposure source across the entire LayerGeometry section. For simplicity, this assumes no acceleration terms and uses a constant velocity profile. Based on the timeOffset, the scan vector is obtained and the final position is then interpolated along that scan vector.

laserVelocity = getEffectiveLaserSpeed(buildStyle)

# Find the cumulative distance across the scan vectors in the LayerGeometry (Contour)
delta = np.diff(layerGeom.coords, axis=0)
dist = np.hypot(delta[:,0], delta[:,1])
cumDist = np.cumsum(dist)
cumDist2 = np.insert(cumDist, 0, 0)

# If the calculated offsetDist is outside the cumulative distance then an error has occurred
if offsetDist > cumDist2[-1]:
    raise Exception('Error offset distance > cumDist {:.3f}, {:.3f}'.format(offsetDist, cumDist2[-1]))

id = 0

# Find the id of the current scan vector given the time offset
for i, vec in enumerate(cumDist2):
    if offsetDist < vec:
        id = i
        break

# interpolate the position based on offset distance in the scan vector
linearOffset = (offsetDist - cumDist2[id-1]) / dist[id-1]
point = layerGeom.coords[id-1] + delta[id-1] * linearOffset

The above example is specifically for contour geometry. Note that the for loop is not particularly efficient, but it serves its purpose for identifying the Iterator's current scan vector.
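
For longer contours, the same lookup could be vectorised with a binary search using np.searchsorted, which returns the index of the first cumulative distance exceeding the offset distance. This is an alternative sketch rather than the library's implementation:

# Binary search for the first entry of cumDist2 greater than offsetDist
id = int(np.searchsorted(cumDist2, offsetDist, side='right'))

# Interpolate the position along the identified scan vector as before
linearOffset = (offsetDist - cumDist2[id - 1]) / dist[id - 1]
point = layerGeom.coords[id - 1] + delta[id - 1] * linearOffset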

Iterator Use:

Each iterator can subsequently be used in a variety of Pythonic ways after creating an iterator with the built-in iter function:

# Create a scan vector iterator
scanVectorIter = ScanVectorIterator(models, layerList)

# Create a python iter object from the ScanVectorIterator
scanIter = iter(scanVectorIter)

# Get a single scan vector
firstScanVec = next(scanIter)

# Collect all the remaining scan vectors
scanVectors = np.array([point for point in scanIter])
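
The ScanIterator can be consumed in the same fashion to obtain the interpolated exposure points, for example to export them to a .csv file as mentioned in the release notes. The sketch below assumes that the iterator yields the exposure position at each increment and that a timestep attribute controls the increment; check the example script for the exact attributes and output format:

# Create a scan (exposure point) iterator
scanIter = ScanIterator(models, layerList)
scanIter.timestep = 1e-3  # [s] assumed attribute controlling the time increment

# Collect the interpolated exposure positions across the build
exposurePoints = np.array([point for point in scanIter])

# Export the exposure points to a .csv file for use by an external program
np.savetxt('exposurePoints.csv', exposurePoints, delimiter=',')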

Current Limitations:

Note that the current implementation of the iterators only considers ContourGeometry and HatchGeometry and does not include PointGeometry groups. The jump vectors are also ignored, which will have a small, and in most situations negligible, effect on the overall accuracy of the timing used by the iterators.

Another obvious limitation is that this only accounts for single exposure source systems. It is not known to me how multiple-exposure systems scan (i.e. whether they truly work in parallel based on the laser number), or whether there is some built-in machine heuristic which balances the scanning across all laser sources spatially – e.g. to prevent overheating. This depends on the SLM system, such as whether the multiple exposure sources are limited to zones or have full areal access to the bed. Anyone's comments or experiences on this aspect would be sincerely welcomed.

Example

An example showing the basic usage and functions available with the Iterator classes is available in the GitHub repository at examples/example_laser_iterator.py.

Finding Overhangs in 3D Printing using PySLM

Determining overhang regions is crucial for detecting potential failure points in unsupported regions, which is necessary for generating support structures in 3D printing. Typically, for triangular meshes, this is calculated by taking the dot product between each triangle normal and the vertical build direction and collecting the values across the entire mesh.

# Calculate the angle between each face normal and the downward build direction
v0 = np.array([[0., 0., -1.0]])
v1 = part.geometry.face_normals
theta = np.arccos(np.clip(np.dot(v0, v1.T), -1.0, 1.0))
theta = np.degrees(theta).flatten()

The approach is not particularly complicated. However, some geometries, especially those arising from topology optimisation, tend to have 'noise' in the surface triangles, so isolated patches can appear within areas that would generally be considered to require support.

A close-up of the overhang region for a topology optimised bracket, showing disconnected 'noisy' regions in the surface normals. This is a result of surfaces lying slightly above the overhang angle tolerance despite their neighbours being below it.

One approach is to use the surface connectivity information to smooth or average the surface normal or overhang angle in order to reduce these discontinuities. The approach taken gathers the adjacent triangle faces and uses this information to identify the overhang angle. This option is enabled by passing the useConnectivity=True argument to pyslm.support.getOverhangMesh (which will become available at a later date).
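
For reference, the intended usage would look roughly as follows; the argument names are provisional and may change before the function is released:

# Provisional usage of the connectivity-based overhang detection (argument names may change)
overhangMesh = pyslm.support.getOverhangMesh(part, overhangAngle=45.0, useConnectivity=True)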

The approach uses the face adjacency information built into Trimesh's meshes and collects this in order to map the connectivity between faces. The list of adjacent faces is stored as a Python dict so that it can be mapped on demand.

def getAdjacentFaces(part: Part):
    mesh = part.geometry

    import networkx as nx

    graph = nx.Graph()
    graph.add_edges_from(mesh.face_adjacency)

    adjacentFaces = {node: list(graph.neighbors(node)) for node in graph.nodes}
    return adjacentFaces

Once the face connectivity is found, each face within the mesh is iterated over. The average overhang angle is calculated by collecting the faces from the adjacency list into conFaces and then using this to index the NumPy array directly.

theta = np.degrees(theta).flatten()   
thetaAvg = theta.copy()
adjacencyList = getAdjacentFaces(part)
for face in adjacencyList.keys():
    conFaces = [face] + adjacencyList[face]
    thetaAvg[face] = np.mean(theta[conFaces])

The difference is highlighted below.

Overhang regions identified in a 3D printed part without the mesh connectivity option
Overhang regions identified in a 3D printed part with the mesh connectivity considered

Ray Projection Investigation

The support generation, described in this post, builds upon a ray projection or 'ray casting' method to identify intersecting support areas. The idea was to look at how one could identify supports that are only in contact with the base/build plate, which is typically acceptable. For reasons explained later, it can be better to work beyond the overhang regions identified from the triangular mesh alone, because these can have jagged edges, even with the connectivity approach. It is also not trivial to interpolate across the surface or to use polygon offsetting algorithms.

The idea is simple: rays are uniformly distributed and projected underneath the mesh to identify the intercepts with the mesh and find the distance from the build plate. A polygon of the bounding box is generated from the mesh and then discretised with points using the rasterize function of trimesh.path.Path2D. The rays are projected from the seed points using Trimesh's in-built ray projection facility (note: the PyEmbree wrapper library, available as an auxiliary method, does not provide enough accuracy to do this).

# Generate a polygon covering the part's bounding box
offsetPoly  = trimesh.load_path(geometry.generatePolygonBoundingBox(mesh.bounds.reshape(2,3)))
 
# Rasterise the surface of overhang to generate projection points
supportArea = np.array(offsetPoly.rasterize(resolution, offsetPoly.bounds[0, :])).T

coords = np.argwhere(supportArea).astype(np.float32) * resolution

# Project upwards to intersect with the upper surface
# Set the z-coordinates for the ray origins well below the part
coords = np.insert(coords, 2, values=-1e5, axis=1)

# The upward build direction used for the ray projection
upVec = np.array([0., 0., 1.0])
rays = np.repeat([upVec], coords.shape[0], axis=0)

# Find the first location of any triangles which intersect with the part
hitLoc, index_ray, index_tri = mesh.ray.intersects_location(ray_origins=coords,
                                                            ray_directions=rays,
                                                            multiple_hits=False)
print('\t - finished projecting rays')

The first hit locations with the mesh are stored, and the heights are then transformed back into a height map based on the original bounding box and the resolution used.

heightMap = np.ones(supportArea.shape) * -1.0

if len(hitLoc) > 0:
    hitLocCpy = hitLoc.copy()
    hitLocCpy[:, :2] -= offsetPoly.bounds[0, :]
    hitLocCpy[:, :2] /= resolution

    hitLocIdx = np.ceil(hitLocCpy[:, :2]).astype(np.int32)

    # Assign the heights
    heightMap[hitLocIdx[:, 0], hitLocIdx[:, 1]] = hitLoc[:, 2]
The height or depth map (projection along the Z-direction) created by the ray projection method, used for identifying overhang regions.

From the height map, the gradients along each direction are calculated using the finite-difference stencil in NumPy's np.gradient function. The gradient magnitude is found and the overhang angle is calculated based on the resolution.

gradX = np.gradient(heightMapFix, axis=1)
gradY = np.gradient(heightMapFix, axis=0)
mag = np.hypot(gradX,gradY)
angle = np.degrees(np.arctan(mag/resolution))

Once the angle field is obtained, the projected overhang regions can be found using the marching squares algorithm built into scikit-image (skimage.measure.find_contours). The isolevel is chosen based on the overhang angle and the resolution, alongside a carefully selected fudge factor. A mask is used to exclude the background (non-hit) areas, identified from the background value assigned in the height map.

isolevel = 1.1 * np.tan(np.deg2rad(overhangAngle)) * resolution
contours = skimage.measure.find_contours(mag, isolevel, mask=heightMapFix > 1e-3)

The result can be seen below.

The projected overhang region (45°) isolevel overlaid on the calculated overhang angle, showing the unsupported regions identified using PySLM.

The benefits of this approach are not immediately apparent. However, the contour information can be used and smoothed when projecting supports back onto the part. The limitation of the approach is that the resolution of the projection determines how well the overhang angle is captured, and it is obviously based on line of sight – so any supports that would self-intersect with the part are neglected. The approach could be explored further using GLSL shaders in order to remove the bottleneck of ray tracing and achieve the same result.

This method is included as a utility function (pyslm.support.generateHeightMap), and both functions will be made available later with the introduction of the support module.