Category: PySLM

Improving Performance of Island Checkerboard Scan Strategy Hatching in PySLM

The hatching performance of PySLM using ClipperLib via PyClipper is reasonably good considering the age of the library and its use of the Vatti polygon clipping algorithm. Without attempting to optimise the underlying library and clipping algorithm, the hatch clipping process should be sufficient for most use cases. Future investigation will explore alternative clipping algorithms to further improve the performance of this computationally intensive process.

For those unfamiliar with the basic hatching process of a single layer, the laser or electron beam (a 1D single point source) must scan across an areal (2D) region. This is done by creating a series of lines/vectors which infill or raster across the surface.

The most basic form of hatch infill for bulk regions is an alternating, meander, or, in some locales, serpentine scan strategy. This tends to be undesirable in SLM due to the creation of localised heat build-up [1] resulting in porosity, poor surface finish [2], residual stress with the resultant distortion, and anisotropy due to preferential grain growth [3]. Stripe or island scan strategies are employed in an attempt to mitigate these by limiting the length of the scan vectors used across a region [4][5][6]. Within the layer, the hatch vectors for each island are oriented orthogonal to each other, and the scan vector length can be precisely controlled in order to reduce the magnitude of the residual stresses generated [7].

However, when the user desires a stripe or an island scan strategy, the number of clipping operations for the individual hatch vectors increases drastically, because the area is divided into fixed-size regions corresponding to the desired scan vector length (typically 5 mm):

  • Standard Meander Scan Strategy: n_{clip} \propto \frac{A}{hatchDistance(h_d)}
  • Stripe Scan Strategy: n_{clip} \propto \frac{A}{StripeWidth}
  • Island Scan Strategy: n_{clip} \propto \frac{A}{IslandWidth^2}

As can be observed, the performance of hatching with an island scan strategy degrades rapidly due to the reciprocal square term. As a result, using a naive approach, hatching a very large planar region with an island scan strategy can quickly result in 100,000+ clipping operations for a single layer, irrespective of the sparsity of the layer geometry. The way the hatch filling approach works in PySLM, the maximum extent of a contour/polygon region is found, a circle is projected based on this maximum extent, and an outer bounding box covering this circle is filled with hatch vectors. This is explained in a previous post.
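
As a rough back-of-the-envelope check (the values below are nominal assumptions for illustration, not taken from the benchmark later in this post), the scale of the naive clipping workload can be estimated:

# Illustrative estimate of the naive clipping workload for an island scan strategy
A = 200.0 * 200.0   # planar region area [mm^2] (assumed)
h_d = 0.08          # hatch distance [mm] (assumed)
islandWidth = 5.0   # island width [mm]

numIslands = A / islandWidth ** 2                     # ~1,600 islands tile the region
vectorsPerIsland = islandWidth / h_d                  # ~62 hatch vectors per island
naiveClipOperations = numIslands * vectorsPerIsland   # ~100,000 vectors to clip naively

print(int(numIslands), int(vectorsPerIsland), int(naiveClipOperations))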

The scan vectors are tiled across this region to guarantee complete coverage irrespective of the chosen hatch angle, \theta_h, across the layer, which largely simplifies the computation. The issue is that many regions will lie outside the boundary of the part, and sparse regions – both void and solid – will not require additional clipping.

The Proposed Technique:

In summary, the proposed technique takes advantage of the fact that each island is regular, and therefore each island can be used to discretise the region. This can be used to perform intersection tests for regions that may require clipping, whilst recycling the existing hatch vectors for those entirely within the interior of the boundary.

Given that an island scan strategy essentially provides a structured grid, this can easily be transformed into a method for selecting regions. Using the shapely library, each island boundary, consisting of 4 edges, can be quickly tested to check whether it overlaps internally with the solid part and whether it intersects the boundary. This is an efficient operation to perform, despite shapely (libGEOS) not being as efficient as PyClipper.

from shapely.geometry.polygon import LinearRing, Polygon

# poly is the shapely Polygon of the slice boundary; islands is the list of island regions
# generated across the bounding box of the layer.
intersectIslands = []
overlapIslands = []

intersectIslandsSet = set()
overlapIslandsSet = set()

for i in range(len(islands)):

    island = islands[i]
    s = Polygon(LinearRing(island[:-1]))

    # Islands which cross the boundary (partially inside, partially outside)
    if poly.overlaps(s):
        overlapIslandsSet.add(i)  # id
        overlapIslands.append(island)

    # Islands which touch the solid region at all (inside or crossing)
    if poly.intersects(s):
        intersectIslandsSet.add(i)  # id
        intersectIslands.append(island)

# Perform the difference between the python sets: islands inside the boundary not requiring clipping
unTouchedIslandSet = intersectIslandsSet - overlapIslandsSet
unTouchedIslands = [islands[i] for i in unTouchedIslandSet]

Shapely is used here because the same polygon can be re-tested consecutively without re-building the polygon state, unlike in ClipperLib. Ultimately, this presents three unique cases:

  1. Non-Intersecting (shapely.polygon.intersects(island) == False) – The island resides outside of the boundary and is discarded,
  2. Intersecting (shapely.polygon.intersects(island) == True) – The island lies within the solid region, but may also be clipped by the boundary,
  3. Clipped (shapely.polygon.overlaps(island) == True) – The island crosses the boundary and requires clipping.

PySLM - Clipping of island regions when generating Island Scan Strategies for Selective Laser Melting
The result is shown here for a simple 200 mm square filled with 5 mm islands:

Taking the difference between cases 2) and 3), the islands whose hatch scan vectors can be generated without clipping are identified, avoiding unnecessary clipping of the interior scan vectors. As a result, this significantly reduces the computational effort required.

Although extreme, the previous example generated a total of 2209 5 mm islands to cover the entire region. The breakdown of the island intersections is:

  1. Non-intersecting islands: 1591 (72%),
  2. Non-clipped islands: 419 (19%),
  3. Clipped islands: 199 (9%).

With respect to solid regions, the clipped islands account for 32% of the total area. The overall result is shown below. The total area of the region that was hatched is 1.97 \times 10^5 \ mm^2, which is equivalent to a square of side length 445 mm – significantly larger than what is possible on most commercial SLM systems. Using an island size of 5 mm with an 80 μm hatch spacing, the approximate hatching time is 6.5 s on a modest laptop system. For this example, 780,000 hatch vectors were generated.

A close-up view showing the clipped scan vectors using the more efficient island scan strategy: 5 mm islands with a 0.8 mm hatch distance. Blue lines show the overall path traversed by the laser beam. The total time taken for hatching was approximately 8 seconds.

The order in which the hatches are scanned is shown by the blue lines, which trace the midpoints of the vectors. Hatches inside each island are scanned sequentially. The order of scanning in this case is chosen to go vertically upwards and then horizontally across, using the in-built Python 3 sorting function with a lambda expression. Remarkably, this is all performed in one line:

sortedIslands = sorted(islands, key=lambda island: (island.posId[0], island.posId[1]) )

A future post will elaborate further methods for sorting hatch vectors and island groups.

Comparison to Original Implementation:

The following is a non-scientific benchmark performed to illustrate the performance profile of the proposed method in PySLM.

Island Size [mm]    Original Method Time [s]    Proposed Method Time [s]
3                   466                         5.3
5                   258                         6.5
10                  121                         7.9
20                  75                          8.23

Approximate benchmark comparing island hatching techniques in PySLM

It is clearly evident that the proposed method reduces the overall time for hatching a region by 1–2 orders of magnitude. What is curious is that, with the proposed method, the overall time increases with the island size.

Generally, the number of clipping operations n_{clip} is expected to scale as:

n_{clip} \propto \frac{Perimeter}{IslandWidth}

Potentially, this allows bespoke, complex ‘sub-island’ scan strategies to be employed without significant additional cost, because the scan vectors within un-clipped island regions can be very quickly replicated across the layer, as the sketch below illustrates.
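
As a hedged sketch of the idea (assuming each island stores an integer grid position posId, as used in the sorting snippet above, and that the hatch vectors of a single template island generated at the origin are available as an (n, 2, 2) numpy array):

import numpy as np

def replicateIslandHatches(templateVectors, unTouchedIslands, islandWidth):
    """ Translate the hatch vectors of a template island onto every un-clipped island. """
    allVectors = []

    for island in unTouchedIslands:
        # Convert the integer grid position into a spatial offset for this island
        offset = np.array(island.posId) * islandWidth

        # Broadcast the translation onto both end points of every template scan vector
        allVectors.append(templateVectors + offset)

    return np.vstack(allVectors) if allVectors else np.empty((0, 2, 2))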

Other Benefits

The other benefit of taking this approach is that it enables a more modular, object-oriented approach for generating island-based strategies which do not have to follow regular, structured patterns. A future article will further explain the procedures for generating these.

The example can be seen and run in examples/example_island_hatcher.py in the Github repository.

References
1 Parry, L. A., Ashcroft, I. A., & Wildman, R. D. (2019). Geometrical effects on residual stress in selective laser melting. Additive Manufacturing, 25. https://doi.org/10.1016/j.addma.2018.09.026
2 Valente, E. H., Gundlach, C., Christiansen, T. L., & Somers, M. A. J. (2019). Effect of scanning strategy during selective laser melting on surface topography, porosity, and microstructure of additively manufactured Ti-6Al-4V. Applied Sciences (Switzerland), 9(24). https://doi.org/10.3390/app9245554
3, 4 Zhang, W., Tong, M., & Harrison, N. M. (2020). Scanning strategies effect on temperature, residual stress and deformation by multi-laser beam powder bed fusion manufacturing. Additive Manufacturing, 36(June), 101507. https://doi.org/10.1016/j.addma.2020.101507
5 Ali, H., Ghadbeigi, H., & Mumtaz, K. (2018). Effect of scanning strategies on residual stress and mechanical properties of Selective Laser Melted Ti6Al4V. Materials Science and Engineering A, 712(October 2017), 175–187. https://doi.org/10.1016/j.msea.2017.11.103
6 Robinson, J., Ashton, I., Fox, P., Jones, E., & Sutcliffe, C. (2018). Determination of the effect of scan strategy on residual stress in laser powder bed fusion additive manufacturing. Additive Manufacturing, 23(February), 13–24. https://doi.org/10.1016/j.addma.2018.07.001
7 Mercelis, P., & Kruth, J.-P. (2006). Residual stresses in selective laser sintering and selective laser melting. Rapid Prototyping Journal, 12(5), 254–265. https://doi.org/10.1108/13552540610707013

Build Time Estimation in L-PBF (SLM) Using PySLM (Part I)

Build-Time = Cost

This quantity is arguably the greatest driver of individual part cost for the majority of additively manufactured parts (excluding the additional costs of post-processing). It inherently relates to the proportional utilisation of the AM system, which has a fixed capital cost at purchase amortised over an assumed operational life (an estimate is around 6-10 years).

Predicting this quickly and effectively for parts built using powder bed fusion processes may initially sound simple, but there aren’t many free or opensource tools that provide a utility to predict it, and the required data isn’t easily obtainable without some inputs. In the literature, investigations into build-time estimation, embodied energy consumption and the analysis of costs associated with powder-bed fusion for both SLM and EBM have been undertaken [1][2][3][4].

Estimation usually involves submitting your design to an online portal, or building up a spreadsheet and calculating some values. A large part of the cost of a part designed for AM is related to its build-time, and this value alone can indicate the relative cost of the AM part.

Build-time, as a ‘lump’ measure, is quintessentially the most significant factor in determining the ultimate cost of parts manufactured on powder-bed fusion systems. Obviously, this is oblivious to other factors such as the post-processing of parts (i.e. heat-treatment, post-machining), surface coatings, post-inspection and part-level qualification, which are usually essential parts of the entire manufacturing process for an AM part.

The reference to a ‘lump’ cost value coincides with various parameters inherent to the part that are driven by design decisions made to meet the functional requirements/performance. The primary factors affecting this are:

  • Material alloy
  • Geometrical shape of the part
  • Machine system

These may be further specified as a set of chosen parameters:

  • Part Orientation
  • Build Volume Packing (i.e. number of parts within the build)
  • Number of laser beams in the SLM system
  • Recoater time
  • Material Alloy laser parameters (i.e. effective laser scan speed)
  • Part Volume (V)

From the build-time, the cost estimate solely for building the piece part can be calculated across ‘batches’ or a number of builds, which largely takes into account fixed costs such as capital investment in the machine and those direct costs associated with material inputs, consumables and energy consumption [5].

In this post, additional factors intrinsic to the machine operation, such as build-chamber warm-up and cool-down time and out-gassing time, are ignored. When exploring the economics of the process, these should be accounted for, because in some processes, e.g. Selective Laser Sintering (SLS) and High-Speed Sintering (HSS) of polymers, they can account for a significant contribution to the actual ‘accumulated‘ build time within the machine.

Calculation of the Build Time in L-PBF

There are many different approaches for calculating the estimate of the build-time depending on the accuracy required.

Build Bulk Volume Method

The build volume method is the crudest form of calculating the build time, t_{build}. The method takes the total volume of the part(s) within a build, V, and divides it by the machine’s build volume rate \dot{V} – a lumped empirical value corresponding to a specific material deposited or manufactured by an AM system.

t_{build}=\frac{V}{\dot{V}}

This is very approximate, and therefore limited, because the prediction ignores the build height within the chamber, which is a primary contributor to the build time. It also ignores build volume packing – the density of the numerous parts packed inside a chamber – which for each build contributes a fixed cost. However, it is a good measure for accounting for the cost of the part based simply on its mass – potentially a useful indicator early in the design conceptualisation phase.
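
A minimal sketch of this estimate, where the numerical values below are assumed placeholders rather than figures for any particular machine:

# Bulk volume build-time estimate: t_build = V / Vdot
partVolume = 5.0e4          # total part volume within the build, V [mm^3] (assumed)
buildVolumeRate = 5.0e3     # lumped machine build volume rate, Vdot [mm^3/hr] (assumed)

buildTimeEstimate = partVolume / buildVolumeRate
print('Approximate build time: {:.1f} hr'.format(buildTimeEstimate))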

Layer-wise Method

This approach accounts for the actual geometry of the part as part of the estimation. It slices the part and accounts for the area and boundaries of each layer, which may be assigned separate laser scan speeds. This has been implemented as a multi-threaded/multi-process example in order to demonstrate how one can analyse the cost of a part relatively quickly and simply, using this as a template.

The entire part is sliced at a constant layer thickness L_t in the function calculateLayer(). In this function, the part is sliced using getVectorSlice() at the particular z-height; disabling the returnCoordPaths parameter returns a list of shapely.geometry.Polygon objects.

def calculateLayer(input):
    # Unpack the shared dictionary and the layer index
    d = input[0]
    zid = input[1]

    layerThickness = d['layerThickness']
    solidPart = d['part']

    # Slice the boundary at this z-height, returning the shapely polygons
    geomSlice = solidPart.getVectorSlice(zid*layerThickness, returnCoordPaths=False)

    return geomSlice

The slice represents the boundaries across the layer. Each boundary is a shapely Polygon, which can easily be queried for its boundary length and area. This is performed later, after the python multiprocessing map call:

from multiprocessing import Manager, Pool
import time
import numpy as np

# Share the part and layer thickness with the worker processes
d = Manager().dict()
d['part'] = solidPart
d['layerThickness'] = layerThickness

# Rather than give the z position, we give a z index to calculate the z from.
numLayers = int(solidPart.boundingBox[5] / layerThickness)
z = np.arange(0, numLayers).tolist()

# The layer id and manager shared dict are zipped into a list of tuple pairs
processList = list(zip([d] * len(z), z))

startTime = time.time()

p = Pool()
layers = p.map(calculateLayer, processList)
p.close()
print('multiprocessing time', time.time()-startTime)

# Flatten the list of per-layer polygons into a single list
polys = []
for layer in layers:
    for poly in layer:
        polys.append(poly)

layers = polys

"""
Calculate total layer statistics:
"""
totalHeight = solidPart.boundingBox[5]
totalVolume = solidPart.volume

# numCountourOffsets is the number of contour/border offsets scanned per boundary
totalPerimeter = np.sum([layer.length for layer in layers]) * numCountourOffsets
totalArea = np.sum([layer.area for layer in layers])

Once the total part area and perimeter have been summed, the total scan time can be calculated from them. The approximate scan time across the part volume (bulk region) relates the total scan area accumulated across each layer of the part, A, the hatch distance, h_d, and the laser scan speed v_{bulk}:

t_{hatch} = \frac{A}{h_d v_{bulk}}

Similarly, the scan time across the boundary for contour scans (typically scanned at a lower speed) is simply the total perimeter length L divided by the contour scan speed v_{contour}:

t_{boundary} = \frac{L}{v_{contour}}

Finally, the re-coating time is simply the number of layers multiplied by the re-coat time per layer.

"""
Calculate the time estimates
"""
hatchTimeEstimate = totalArea / hatchDistance / hatchLaserScanSpeed
boundaryTimeEstimate = totalPerimeter / contourLaserScanSpeed
scanTime = hatchTimeEstimate + boundaryTimeEstimate
recoaterTimeEstimate = numLayers * layerRecoatTime

totalTime = hatchTimeEstimate + boundaryTimeEstimate + recoaterTimeEstimate

Compound approach using Surface and Volume

In fact, it may be possible to deduce that much of this is unnecessary for finding the approximate scanning time. Instead, a simpler formulation can be derived: the scan time can be deduced simply from the volume V and the total surface area of the part S:

t_{total}=\frac{V}{L_t h_d v_{bulk}} + \frac{S}{L_t v_{contour}} + N*t_{recoat},

where N=h_{build}/L_t. After realising this and looking further into the literature, it turns out this was proposed by Giannatsis et al. back in 2001 for SLA time estimation [6]. Surprisingly, I hadn’t come across it before. They propose that taking the vertical projection of the surface better represents the true area of the boundary under the slicing process:

t_{total}=\frac{V}{L_t h_d v_{bulk}} + \frac{S_P}{L_t v_{contour}} + N*t_{recoat}

The projected area is calculated by taking the dot product of the vertical vector v_{up} = (0,0,1)^T and the surface normal \hat{n} of each triangle, using the relation a\cdot b = \|a\| \|b\| \cos(\theta), and then obtaining the sine component via the identity \cos^2(\theta) + \sin^2(\theta) = 1 to project each triangle’s area onto the vertical extent.

""" Projected Area"""
# Calculate the angle between each face normal and the vertical direction
v0 = np.array([[0., 0., 1.0]])
v1 = solidPart.geometry.face_normals

sin_theta = np.sqrt(1 - np.dot(v0, v1.T)**2)
triAreas = solidPart.geometry.area_faces * sin_theta
projectedArea = np.sum(triAreas)
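
Putting the compound formula together, a minimal sketch of the estimate is given below. The parameter names follow the earlier excerpts; the function itself is illustrative rather than part of PySLM:

def compoundBuildTimeEstimate(volume, projectedArea, layerThickness, hatchDistance,
                              hatchLaserScanSpeed, contourLaserScanSpeed,
                              layerRecoatTime, buildHeight):
    """ Compound build-time estimate using the part volume and projected surface area. """
    numLayers = buildHeight / layerThickness
    hatchTime = volume / (layerThickness * hatchDistance * hatchLaserScanSpeed)
    boundaryTime = projectedArea / (layerThickness * contourLaserScanSpeed)
    return hatchTime + boundaryTime + numLayers * layerRecoatTime

totalTimeApprox = compoundBuildTimeEstimate(solidPart.volume, projectedArea, layerThickness,
                                            hatchDistance, hatchLaserScanSpeed,
                                            contourLaserScanSpeed, layerRecoatTime,
                                            solidPart.boundingBox[5])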

Comparison between build time estimation approaches

The difference in scan time with the approximation is relatively close for a simple example:

  • Discretised Layer Scan Time – 4.996 hr
  • Approximate Scan Time – 5.126 hr
  • Approximate Scan Time (with projection) – 4.996 hr

Arriving at this rather simple result may not be particularly interesting, but given that most cost models do not state it, it may hopefully be useful for some. It is useful in that it accounts for the complexity of the boundary rather than simply the volume and the build-height, whilst factoring in the laser parameters used – typically available for most materials on commercial systems.

The second part of this post will share more details about measuring the scan time more precisely using the analysis tools available in PySLM.

References
1 Baumers, M., Tuck, C., Wildman, R., Ashcroft, I., & Hague, R. (2017). Shape Complexity and Process Energy Consumption in Electron Beam Melting: A Case of Something for Nothing in Additive Manufacturing? Journal of Industrial Ecology, 21(S1), S157–S167. https://doi.org/10.1111/jiec.12397
2 Baumers, M., Dickens, P., Tuck, C., & Hague, R. (2016). The cost of additive manufacturing: Machine productivity, economies of scale and technology-push. Technological Forecasting and Social Change, 102, 193–201. https://doi.org/10.1016/j.techfore.2015.02.015
3 Faludi, J., Baumers, M., Maskery, I., & Hague, R. (2017). Environmental Impacts of Selective Laser Melting: Do Printer, Powder, Or Power Dominate? Journal of Industrial Ecology, 21(S1), S144–S156. https://doi.org/10.1111/jiec.12528
4 Liu, Z. Y., Li, C., Fang, X. Y., & Guo, Y. B. (2018). Energy Consumption in Additive Manufacturing of Metal Parts. Procedia Manufacturing, 26, 834–845. https://doi.org/10.1016/j.promfg.2018.07.104
5 Leach, R., & Carmignato, S. (Eds.). (2020). Precision Metal Additive Manufacturing. https://doi.org/10.1201/9780429436543
6 Giannatsis, J., Dedoussis, V., & Laios, L. (2001). A study of the build-time estimation problem for Stereolithography systems. Robotics and Computer-Integrated Manufacturing, 17(4), 295–304. https://doi.org/10.1016/S0736-5845(01)00007-2

Optimising and Improving PyClipper (ClipperLib) Hatch Clipping Performance for AM

In previous incarnations throughout my time, the renowned ClipperLib has been used to provide the polygon and line clipping and offsetting operations for generating the hatching used in Selective Laser Melting. This library is also used as part of the slicing engine in the popular Cura software used with Ultimaker filament 3D printers. This opensource C++ polygon clipping library is at present very efficient, robust and arguably does its specific job very well.

The First Attempt

The first implementation was done through a custom MEX extension for Matlab – in fact, there is now a MEX extension available on MathWorks. The clipping of hatch vectors with a polygon boundary was done on an individual basis: one hatch vector at a time. This was necessary because clipped lines in ClipperLib are not guaranteed to be returned in the same sequential order as they were passed, due to the underlying Vatti algorithm, which works using a ‘scan-line’ approach.

Unfortunately, this was a very inefficient operation, requiring re-initialisation of the boundary polygon for each clipped line. Through brute force and persistence, this method did a satisfactory job of hatching simple geometries for research purposes.
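
For context, a minimal sketch of that naive per-vector clipping, expressed here with PyClipper (coordinates must be scaled to integers for ClipperLib; note how the clipper state containing the boundary is rebuilt for every single hatch line):

import pyclipper

def clipHatchVectorsNaively(boundary, hatchLines):
    """ Clip each open hatch line individually against a closed boundary polygon.
        Clipping one line at a time preserves the order, but re-initialises the
        clipper state for every hatch vector - the overhead described above. """
    clipped = []

    for line in hatchLines:
        pc = pyclipper.Pyclipper()
        pc.AddPath(pyclipper.scale_to_clipper(boundary), pyclipper.PT_CLIP, True)
        pc.AddPath(pyclipper.scale_to_clipper(line), pyclipper.PT_SUBJECT, False)

        # Open (line) subjects are returned via a PolyTree
        tree = pc.Execute2(pyclipper.CT_INTERSECTION,
                           pyclipper.PFT_NONZERO, pyclipper.PFT_NONZERO)
        clipped += [pyclipper.scale_from_clipper(p)
                    for p in pyclipper.OpenPathsFromPolyTree(tree)]

    return clipped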

The Second Attempt

The next progression was transitioning to Python, using PyClipper. Appreciating the challenges of trying to run hatching on embedded hardware, clipping each hatch vector individually didn’t fare well. The second approach was to clip all the hatch vectors together to remove the overhead of re-initialising the ClipperLib state. After clipping, the hatch vectors were sorted using the following method.

The inner product (dot product) between the midpoint of each hatch vector and the unit vector of the hatch angle is taken. The projected distance is then used to sort the relative line positions. This can be seen in the LinearSort class, or in the following code excerpt:

theta_h = np.deg2rad(hatchAngle)

# Find the unit vector normal based on the hatch angle
norm = np.array([np.cos(theta_h), np.sin(theta_h)])

midPoints = np.mean(scanVectors, axis=1)

# Project the midpoint onto the hatch angle vector
idx2 = norm.dot(midPoints.T)

# Sort the scan vectors
idx3 = np.argsort(idx2)
sortIdx = np.arange(len(midPoints))[idx3]

return scanVectors[sortIdx]

This worked fairly well. However, it has a severe drawback in that it is limited to hatching and sorting across a single region and relies on a constant hatch direction. The sorting mechanism severely constrains the approaches that can be used for scan strategy design, e.g. island scan strategies.

The current approach in PySLM

The current implementation builds on a clever trick built into ClipperLib that unfortunately is not exposed in PyClipper. Within ClipperLib, there is an option to include the z coordinate as part of the IntPoint struct by defining the use_xyz preprocessor directive. This allows the user to pass an additional integer for each point. In fact, this can represent any information, and here it represents the unique hatch id, which defines the order of scanning. Note: the library could be further modified to use any data-type, but this is unnecessary.

Each hatch vector coordinate is given an individual id. During the clipping operation with the boundary, the intersection point is assigned the id from the original ‘z’ coordinate.

During clipping, the z-component is not actually used for the clipping itself, and ClipperLib does not know which z-component should be used at the intersection between edges: that of the boundary or that of the hatch line. A callback function, maxZFillFunc, defined in ClipperLib.cpp, resolves this ambiguity by finding the maximum z component from all intersecting edge points and storing it in the newly generated point pt. As a result, the original index used for sorting is preserved in the clipped result.


void maxZFillFunc(const IntPoint& e1bot, IntPoint& e1top, IntPoint& e2bot, IntPoint& e2top, IntPoint& pt) {
    // Find the maximum z value from all points. This provides a non-ambiguous case, as for PySLM the
    // background contour has a value of zero.
    long maxZ = -1;

    if(e1bot.Z > maxZ)
        maxZ = e1bot.Z;

    if(e1top.Z > maxZ)
        maxZ = e1top.Z;

    if(e2top.Z > maxZ)
        maxZ = e2top.Z;

    if(e2bot.Z > maxZ)
        maxZ = e2bot.Z;

    // Assign the maximum z value to the generated intersection point
    pt.Z = maxZ;
}

void Clipper::ZFillFunction(ZFillCallback zFillFunc)
{
  m_ZFill = zFillFunc;
}

Further changes to the cython pyrex definition – pyclipper.pyx – were needed to enable this functionality; hence, PyClipper needs to be compiled internally for this project.

Back in Python, the order index is assigned to each pair of coordinates representing a hatch vector, resulting in a 2n\times3 array, where n is the number of hatches:

# Repeat the vector index for both end points of each hatch and group into a 2n x 3 array
id = np.arange(len(x) // 2).repeat(2)
hatchArray = np.hstack([x.reshape(-1, 1), y.reshape(-1, 1), id.reshape(-1, 1)])

After clipping, the generated hatch vectors require a quick sort based on their id. Typically this isn’t expensive, because most of the clipped hatch vectors are returned in the same order.

clippedLines = self.clipperToHatchArray(clippedPaths)

# Keep the x-y coordinates and order index, then sort the hatch vectors based on the
# pseudo-order stored in the z component of their first point.
clippedLines = clippedLines[:, :, :3]
id = np.argsort(clippedLines[:, 0, 2])
clippedLines = clippedLines[id, :, :]

Conclusions

Throughout PySLM, the user simply has to generate the sequence of hatch vectors in the same order they wish them to be scanned; an increasing unique index is applied to each hatch vector.

This proposed technique is remarkably simple, efficient and effective for clipping. It is rather trivial, but given the limited number of polygon clipping libraries available, it has proven particularly useful.

In the future, I plan to explore storing additional data associated with the hatch vectors. This itself may not be necessary because this data can be looked up from the global hatch data array, but it may result in simplification of the code.

Basics of Hatching in PySLM

The generation of hatch infills in PySLM is relatively straightforward and can be adapted to suit the user’s needs. Anecdotally, this was a remarkable improvement in performance and simplicity over the pre-existing code used previously during university, which relied on a combination of Matlab and the ClipperLib library.

The hatch vectors generated are nearly always a series of adjacent parallel scan vectors that infill a bounded area. Some unusual approaches for infills have been tried, such as spiral [1][2] and fractal [3][4][5] scan strategies. Generally, however, exploring different raster patterns across an SLM part remains largely unexplored territory, due to the unavailability of opensource tools and code and limited open access to machine platforms in research environments. Potentially, this could unlock the ability to tailor micro-structural development in some metal alloys; Attard et al. demonstrate this by modifying the island size throughout the part to promote different microstructures [6].

The first step of generating the hatch is finding the region that ensures the generated scan vectors guarantee full coverage of the boundary for the chosen hatch angle, \theta_h. This is needed irrespective of the scan strategy. My previous approach was rather cumbersome and required trigonometry to calculate the start and end points according to the chosen hatch angle to cover the boundary. A far simpler approach was conceived.

Basic method:

The approach simply relies on covering the entire boundary irrespective of hatch angle and letting the polygon clipping library do the actual heavy lifting.

In this method, the bounding box for a closed region is found – a rather simple calculation made by taking the min and max of the boundary coordinates, easily obtained via shapely.

The maximum extent, or diagonal length r, from the bounding box corner to the bounding box centre is then calculated. This forms a circular locus which, for any chosen hatch angle, guarantees that all hatch lines after rotation will fully cover the boundary. Remarkably simple.

The basic code can be found in hatching.py:

# Find the bounding box centre
bboxCentre = np.mean(bbox.reshape(2,2), axis=0)

# Calculate the radius of the circular locus: the diagonal length from the centre to a corner
diagonal = bbox[2:] - bboxCentre
bboxRadius = np.sqrt(diagonal.dot(diagonal))

From this locus, we assume a local hatch coordinate system with an origin at (-r,-r). A series of parallel hatch vectors with hatch distance h_d can then be conveniently generated covering the entire square region. This can be done with numpy using np.tile:

# Construct a square which wraps the radius
x = np.tile(np.arange(-bboxRadius, bboxRadius, hatchSpacing, dtype=np.float32).reshape(-1, 1), (2)).flatten()
y = np.array([-bboxRadius, bboxRadius])
y = np.resize(y, x.shape)

Unfortunately, ClipperLib by default does not guarantee the order in which lines are clipped. The naive approach is to clip them individually in sequential order, which introduces significant overhead. Alternatively, the clipped hatch lines can be sorted. To achieve this, the PyClipper library was customised to guarantee the sequential order of the scan vectors, which is explained in this post.

The approach to guarantee the order is to provide a z-coordinate which defines the order of scanning. This should be an ascending sequence in which each value appears twice, grouping the two coordinates of each scan vector, i.e. 0,0,1,1,2,2, etc. Finally, before being clipped, an n\times3 coordinate matrix is formed.

# Generate a linear sorting according to the order of scanning.
# i.e. 0,0,1,1,..,n-1,n-1,n,n 
z = np.arange(0, x.shape[0] / 2, 0.5).astype(np.int64)

# Group the XY coordinates and z order 
coords = np.hstack([x.reshape(-1, 1),
                    y.reshape(-1, 1),
                    z.reshape(-1, 1)])

From this, a 2D rotation matrix, R(\theta_h), and translation to the bounding box centre can be applied to the coordinates of the unclipped hatch lines.

# Create the 2D rotation matrix
c, s = np.cos(theta_h), np.sin(theta_h)
R = np.array([(c, -s, 0),
              (s, c, 0),
              (0, 0, 1.0)])

# Apply the rotation matrix and translate to bounding box centre
coords = np.matmul(R, coords.T)
coords = coords.T + np.hstack([bboxCentre, 0.0])
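
The transformed hatch lines are then handed over to the clipping library. With stock PyClipper (i.e. without the z-ordering patch described in the previous post), a minimal clipping sketch over the whole set of open hatch paths could look like the following, where boundaryCoords is assumed to hold the closed boundary of the region; note that the order of the returned lines is not guaranteed, which is precisely why the patched version carries the order index in the z component:

import pyclipper

# Pair consecutive coordinates back into 2-point open paths (one per hatch vector)
hatchLines = coords[:, :2].reshape(-1, 2, 2)

pc = pyclipper.Pyclipper()
pc.AddPath(pyclipper.scale_to_clipper(boundaryCoords), pyclipper.PT_CLIP, True)  # boundaryCoords is assumed
pc.AddPaths(pyclipper.scale_to_clipper(hatchLines.tolist()), pyclipper.PT_SUBJECT, False)

tree = pc.Execute2(pyclipper.CT_INTERSECTION, pyclipper.PFT_NONZERO, pyclipper.PFT_NONZERO)
clippedHatches = [pyclipper.scale_from_clipper(path)
                  for path in pyclipper.OpenPathsFromPolyTree(tree)]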

There is one issue with this method when using region/island based scan strategies: many of the regions are not clipped at all, so for large regions it can become inefficient. However, assuming a normal hatch in-fill is used, the clipping operation mostly amortises the additional cost.

References
1 Zhang, W., Tong, M., & Harrison, N. M. (2020). Scanning strategies effect on temperature, residual stress and deformation by multi-laser beam powder bed fusion manufacturing. Additive Manufacturing, 36(June), 101507. https://doi.org/10.1016/j.addma.2020.101507
2 Qian, B., Shi, Y. S., Wei, Q. S., & Wang, H. B. (2012). The helix scan strategy applied to the selective laser melting. International Journal of Advanced Manufacturing Technology, 63(5–8), 631–640. https://doi.org/10.1007/s00170-012-3922-9
3 Ma, L., & Bin, H. (2006). Temperature and stress analysis and simulation in fractal scanning-based laser sintering. The International Journal of Advanced Manufacturing Technology, 34(9–10), 898–903. https://doi.org/10.1007/s00170-006-0665-5
4 Catchpole-Smith, S., Aboulkhair, N., Parry, L., Tuck, C., Ashcroft, I., & Clare, A. (2017). Fractal scan strategies for selective laser melting of ‘unweldable’ nickel superalloys. Additive Manufacturing, 15, 113–122. https://doi.org/10.1016/j.addma.2017.02.002
5 Sebastian, R., Catchpole-Smith, S., Simonelli, M., Rushworth, A., Chen, H., & Clare, A. (2020). ‘Unit cell’ type scan strategies for powder bed fusion: The Hilbert fractal. Additive Manufacturing, 36(July), 101588. https://doi.org/10.1016/j.addma.2020.101588
6 Attard, B., Cruchley, S., Beetz, C., Megahed, M., Chiu, Y. L., & Attallah, M. M. (2020). Microstructural control during laser powder fusion to create graded microstructure Ni-superalloy components. Additive Manufacturing, 36, 101432. https://doi.org/10.1016/j.addma.2020.101432

Slicing and Hatching for Selective Laser Melting (L-PBF)

Much of the slicing and hatching process is taken for granted in commercial software, mostly offered by the OEMs of these systems, and rarely discussed amongst academic research. In practice, we already observe the implications of direct control over laser parameters and scan strategy on the quality of the bulk material – reduction in defects, minimisation of distortion due to residual stress, and the surface quality of parts manufactured using these processes. Additionally, it can have a profound impact on metallic phase generation and micro-structural texture, driven via physics-informed models [1], enable grading of the bulk properties, and offer precise control over manufacturing intricate features such as thin-wall or lattice structures [2].

This post hopefully highlights, to those unfamiliar, some of the basic processes encountered in the generation of machine build files used in AM systems, and gives a better understanding of the operation behind PySLM. I have tried my best to generalise this as much as possible, but I imagine there are subtleties I have not come across.

This post provides some reference on how hatches or scan vectors are generated for use in AM processes such as selective laser melting (SLM), which uses a point energy source to raster across a medium. Some people prefer to classify this family of processes more generally using the technical ASTM F42 committee standards 52900 and 52911 – Powder Bed Fusion (PBF). I won’t go into the basics of the manufacturing processes such as EBM, SLM, SLA, BJF, as there are many excellent articles that already explain these in far greater detail.

Machine Build Files

AM processes require a digital representation to manufacture an object. These tend to be computed offline – separate from the 3D printer – using specialist or dedicated pre-processing software. I expect this will become a closed-loop system in the future, such that the build preparation is integrated directly into the machine.

For some AM process families, the control operations may be exceedingly granular – i.e. G-code. G-code formats state specific instructions or functional commands for the 3D printer to execute sequentially or linearly. These tend to fit with deposition methods such as Filament Extrusion, Direct-Ink-Writing (robo-casting) and directed energy deposition (DED) methods. Typically, these are machine systems which require the coordination of physical motion in conjunction with some mechanised actuation to deposit/fuse material.

Machine Build File Formats for L-PBF

For exposure-based (laser, electron-beam) AM processes, commercial systems use a compact notation solely for representing the scan path the exposure source will traverse. The formats are often binary to aid their compactness.

To summarise, within these build files an intermediate representation consists of index-based referenceable parameters for the build. The remainder consists of a series of layers that contain geometric entities (points, vectors) used to control the exposure along the border or contour, or to raster or infill the interior region. For L-PBF processes, these digital files, commonly referred to as “machine build files”, come in various flavours dependent on the machine manufacturer:

  • Renishaw .mtt,
  • SLM Solutions .slm,
  • DMG Mori Realizer .rea
  • EOS .sli
  • Aconity .cli+ or .ilt wrapper

Some file formats, such as the Open Beam Path format, can specify Bézier curves [3]. Another recently proposed open source format, created by RWTH Aachen in 2022 and called the OpenVector Format, is based on Google’s Protobuf schema. The format aims to offer a specification universally compatible across a swathe of PBF processes and to supplement existing commercial formats with additional build-process meta-data (e.g. build and platform temperature, dosing) and more detailed definitions for further advancements in the process, such as multi-beam builds.

Build-File Formats

Higher-level representations describe the distribution of material(s) defining the geometry – this could be bitmap slices or even a 3D model. Processes such as Jetting, BJF, High Speed Sintering and DLP Vat-polymerisation currently make this a reality. With time, polymer and metal processes will evolve to become 2D: diode area melting [4], or more areal scanning based on holographic additive manufacturing methods, such as those proposed by Seurat AM [5] based off research at LLNL, and recently at the University of Cambridge [6]. In the future, we can already observe the exciting prospect of new processes such as computed axial lithography [7] that will provide near-instantaneous volumetric additive manufacturing.

For now and the imminent future, single and multi-point exposure systems will remain with us as the currently available processes. PySLM uses an intermediate representation – specifying a set of points and lines to control the exposure of energy into a layer.

The Slicing and Hatching Process in L-PBF

As with nearly all conventional 3D printing processes, it begins with a 3D representation of a solid volume or geometry. 2D planar slices or layers are extracted from a 3D mesh, or from a B-Rep surface in CAD, by taking cross-sections through the geometry. Each slice layer consists of a set of boundaries and holes describing the cross-section of the object. Note: non-planar deposition does exist for DED/filament processes, such as Curved Layer Fused Deposition Modeling [ref] and a spherical slicing technique [8].
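
As a brief illustration of this cross-sectioning step, here is a minimal sketch using the trimesh library directly (which PySLM’s Part geometry exposes, as seen in the earlier build-time example); the file name and slice height are placeholders:

import trimesh

# Load a watertight mesh and take a planar cross-section at a chosen build height
mesh = trimesh.load_mesh('part.stl')   # placeholder file
section = mesh.section(plane_origin=[0.0, 0.0, 5.0], plane_normal=[0.0, 0.0, 1.0])

# Project the 3D section onto the slicing plane to obtain the closed 2D boundaries (with holes)
slice2D, toPlanarTransform = section.to_planar()
print(len(slice2D.polygons_full), 'closed boundary polygon(s) in this layer')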

For consolidating material, an exposure beam must raster across the surface of the medium (metal or polymer powder, or a photo-polymer resin, depending on the process). Currently this is a single point, or multiple points, which moves at a velocity v with a power P across the surface. The designated exposure or energy deposited into the medium is principally a function of these two parameters (a simple lumped measure combining them is noted after the list below), depending on the type of laser:

  • (Quasi-)Continuous Wave: the laser remains switched on (typically modulated using a form of PWM) across the entire length of the scan vector
  • Pulsed Mode (Q-Switched): the laser is pulsed at set distances and exposure times across the scan vector
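
A commonly used, if simplified, lumped measure that combines these parameters with the hatch distance and the layer thickness is the volumetric energy density, E_v = \frac{P}{v \, h_d \, L_t}, which is frequently reported alongside process maps, although on its own it does not capture the underlying melt-pool physics.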

Numerous experiments tend to result in parametric power/speed maps of the achieved part bulk density, which identify the usually optimal processing windows that produce stable and consistent melt-tracks [9][10]. More recently, process maps based on a non-dimensional parameter, such as the normalised enthalpy approach, more reliably assist in selecting a suitable process window [11].

Illustration of a scan vector commonly used in Laser Powder-Bed Fusion (SLM)

However, the complexity of the process extends further and is related to many additional variables dependent on the process, such as the layer thickness, the absorption coefficient (of the powder and material) and the exposure beam profile. Additionally, the cumulative energy deposited spatially over a period of time must consider the overlap of scan vectors within an area.

Scan Vector Generation

Each boundary polygon is initially offset to account for the radius of the beam exposure, which is termed the ‘spot compensation factor‘. Some processes, such as SLS or BJF, account for global part shrinkage volumetrically throughout the part by applying a global scale factor or a deformed mesh to compensate for non-uniform shrinkage across the part.

The typical composition of a layer used for scanning in exposure-based processes (L-PBF / Selective Laser Melting): the boundary is offset multiple times into outer and inner contours, with the core interior filled with hatch vectors.

The first initial offset is the outer contour, which will be visible on the exterior of the part. This contour will have a different set of laser parameters in order to optimise and improve the surface roughness obtained. A further offset is applied to generate a set of inner contours before hatching begins.
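
As an illustrative sketch of these offsets (shown here with shapely’s buffer operation for brevity, rather than the ClipperLib offsetting used elsewhere; the numerical values are placeholders):

from shapely.geometry import Polygon

spotCompensation = 0.08   # placeholder: offset compensating for the beam radius [mm]
contourOffset = 0.1       # placeholder: spacing between successive contours [mm]

boundary = Polygon(boundaryCoords)                  # closed slice boundary (assumed available)
outerContour = boundary.buffer(-spotCompensation)   # first offset - the visible outer contour
innerContour = outerContour.buffer(-contourOffset)  # inner contour(s) generated before hatching
hatchRegion = innerContour                          # interior region subsequently filled with hatches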

Depending on the orientation of the surface (e.g. up-skin or down-skin), the boundary and interior region may be intersected with these surface regions to fine-tune the laser parameters and provide better surface texture or surface roughness – typically varying between Ra = 3-13 μm [12] – primarily determined by the surface angle and a combination of process variables including:

  • the powder feedstock (bulk material, powder size distribution)
  • laser parameters
  • layer thickness (predominantly fixed or constant for most AM processes)

Overhang regions and surfaces with low overhang angles tend to be susceptible to high surface roughness. Roller re-coater L-PBF systems – available only from 3D Systems or AddUp – tend to offer far superior surface quality on low-inclination or overhang regions. Additionally, the progressive advancement and maturity of laser parameter optimisation, including approaches computationally driven using the part geometry [13], are able to further enhance the quality and potentially eliminate the need for support structures. Depending on the machine platform, these regions are identified by sampling across two to three layers. Overhang regions obviously require support geometry, which is an entirely different topic discussed in this post.

Laser parameters in SLM (L-PBF) can be optimised based on the adjacent surface regions. Special regions include the up-skin, down-skin and overhang regions, which require tuning to improve the surface roughness and control the density in these regions.
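
As a simplified illustration of how such regions can be identified from the mesh alone (reusing the trimesh-backed part geometry from the earlier build-time example, rather than the layer-sampling approach used by machines; the 45° threshold is an assumption):

import numpy as np

# Angle of each face normal relative to the build direction (+z)
buildDir = np.array([0.0, 0.0, 1.0])
cosAngles = solidPart.geometry.face_normals.dot(buildDir)
faceAngles = np.degrees(np.arccos(np.clip(cosAngles, -1.0, 1.0)))

overhangAngle = 45.0                                 # assumed critical angle from the horizontal plane
downSkinFaces = cosAngles < 0.0                      # any downward-facing face (down-skin candidates)
overhangFaces = faceAngles > (90.0 + overhangAngle)  # down-skin faces shallower than the critical angle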

Following the generation of the contours, the inner core region requires filling with hatches. Hatches are a series of parallel scan vectors placed adjacent to each other at a set hatch distance, h_d. This parameter is optimised according to the material processed, and is essentially related to the spot radius of the exposure point r_s in order to reduce inter-track and inter-layer porosity. Across each layer these tend to be placed at a particular orientation \theta_h, which is then incrementally rotated globally for subsequent layers, typically by 66.6°. This rotation aims to smooth out the build process in order to minimise inter-track porosity, generate homogeneous material and, in the case of SLM, mitigate the effects of anisotropic residual stress generation.

A general composition of the various LayerGeometry objects used to scan across a layer in Selective Laser Melting (L-PBF). The various parameters, such as the hatch distance, spacing and hatch angle, are shown.
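
For example, a minimal sketch of incrementing the global hatch angle per layer (layerId and the starting angle are assumptions; the 66.6° increment follows the value mentioned above):

baseHatchAngle = 10.0        # starting hatch angle for the first layer [deg] (assumed)
hatchAngleIncrement = 66.6   # global rotation applied between successive layers [deg]

hatchAngle = (baseHatchAngle + layerId * hatchAngleIncrement) % 360.0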

The distribution (position, length, rotation) of these hatch vectors is arranged using a laser scan strategy. The most common include the simple alternating hatch, stripe, and island or checkerboard scan strategies.

Each set or group of scan vectors is stored together in a LayerGeometry, depending on the type (either a set of point exposures, contour vectors or hatch vectors). These LayerGeometry groups usually share a set of exposure parameters – power, laser scan speed (point exposure time and point distance for a pulsed laser), and focus position.
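
To make this grouping concrete, the following is a hypothetical sketch only – it does not reproduce PySLM’s actual class definitions:

from dataclasses import dataclass, field
import numpy as np

@dataclass
class BuildStyleSketch:
    """ Illustrative exposure parameter set referenced by a group of scan vectors. """
    laserPower: float          # [W]
    laserSpeed: float          # [mm/s], or derived from point exposure for a pulsed laser
    pointDistance: float       # [um], pulsed exposure only
    pointExposureTime: float   # [us], pulsed exposure only

@dataclass
class HatchGeometrySketch:
    """ Illustrative group of hatch vectors sharing one set of exposure parameters. """
    buildStyleId: int                    # index of the shared BuildStyleSketch
    coords: np.ndarray = field(default_factory=lambda: np.empty((0, 2, 2)))  # (n, 2, 2) start/end points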

Some systems offer a greater degree of control and can set the power individually across the scan vectors. Others can fine-tune the acceleration and modulate the power along the scan vectors to support techniques known as ‘skywriting‘. For instance, in SLM it has been proposed that careful tuning of the laser parameters towards the end of the scan vector, i.e. during turning, can reduce porosity by preventing premature collapse of the keyhole [14]. In theory, PySLM could be extended to provide greater control of the electro-optic systems used in the process, if so desired.

Hopefully, this provides enough background for those who are interested and engaged in working with developing scan strategies and material development using PySLM.

References
1 Plotkowski, A., Ferguson, J., Stump, B., Halsey, W., Paquit, V., Joslin, C., Babu, S. S., Marquez Rossy, A., Kirka, M. M., & Dehoff, R. R. (2021). A stochastic scan strategy for grain structure control in complex geometries using electron beam powder bed fusion. Additive Manufacturing, 46. https://doi.org/10.1016/j.addma.2021.102092
2 Ghouse, S., Babu, S., van Arkel, R. J., Nai, K., Hooper, P. A., & Jeffers, J. R. T. (2017). The influence of laser parameters and scanning strategies on the mechanical properties of a stochastic porous material. Materials and Design, 131, 498–508. https://doi.org/10.1016/j.matdes.2017.06.041
3 Open Beam Path – Freemelt, https://gitlab.com/freemelt/openmelt/obplib-python
4 Zavala Arredondo, Miguel Angel (2017) Diode Area Melting Use of High Power Diode Lasers in Additive Manufacturing of Metallic Components. PhD thesis, University of Sheffield.
5 Seurat AM. https://www.seuratech.com/
6 https://www.theengineer.co.uk/holographic-additive-manufacturing-lasers/
7 Kelly, B., Bhattacharya, I., Shusteff, M., Panas, R. M., Taylor, H. K., & Spadaccini, C. M. (2017). Computed Axial Lithography (CAL): Toward Single Step 3D Printing of Arbitrary Geometries. Retrieved from http://arxiv.org/abs/1705.05893
8 Yigit, I. E., & Lazoglu, I. (2020). Spherical slicing method and its application on robotic additive manufacturing. Progress in Additive Manufacturing, 5(4), 387–394. https://doi.org/10.1007/s40964-020-00135-5
9 Yadroitsev, I., & Smurov, I. (2010). Selective laser melting technology: From the single laser melted track stability to 3D parts of complex shape. Physics Procedia, 5(Part 2), 551–560. https://doi.org/10.1016/j.phpro.2010.08.083
10 Maamoun, A. H., Xue, Y. F., Elbestawi, M. A., & Veldhuis, S. C. (2018). Effect of selective laser melting process parameters on the quality of al alloy parts: Powder characterization, density, surface roughness, and dimensional accuracy. Materials, 11(12). https://doi.org/10.3390/ma11122343
11 Ferro, P., Meneghello, R., Savio, G., & Berto, F. (2020). A modified volumetric energy density–based approach for porosity assessment in additive manufacturing process design. International Journal of Advanced Manufacturing Technology, 110(7–8), 1911–1921. https://doi.org/10.1007/s00170-020-05949-9
12 Ni, C., Shi, Y., & Liu, J. (2019). Effects of inclination angle on surface roughness and corrosion properties of selective laser melted 316L stainless steel. Materials Research Express, 6(3). https://doi.org/10.1088/2053-1591/aaf2d3
13 Velo3D Sapphire Printer – SupportFree Technology. https://blog.velo3d.com/blog/supportfree-what-does-it-mean-why-is-it-important
14 Martin, A. A., Calta, N. P., Khairallah, S. A., Wang, J., Depond, P. J., Fong, A. Y., … Matthews, M. J. (2019). Dynamics of pore formation during laser powder bed fusion additive manufacturing. Nature Communications, 10(1), 1–10. https://doi.org/10.1038/s41467-019-10009-2