Build-Time = Cost
This quantity is arguably the greatest driver of individual part cost for the majority of additively manufactured parts (excluding the additional costs of post-processing). It inherently relates to the proportional utilisation of the AM system, which has a fixed capital cost at purchase amortised over an assumed operating life (typically estimated at 6–10 years).
Predicting this quickly and effectively for parts built using Powder Bed Fusion processes may initially sound simple, but there are few free or open-source tools that provide a utility to predict it, and the required data is not easily obtainable without some inputs. In the literature, investigations into build-time estimation, embodied energy consumption and the analysis of costs associated with powder-bed fusion for both SLM and EBM have been undertaken [1][2][3][4].
Estimation usually involves submitting your design to an online portal or building up a spreadsheet and calculating some values. A large part of the cost of a part designed for AM is related to its build time, and this value alone can indicate the relative cost of the AM part.
Build time, as a ‘lump’ measure, is the most significant factor in determining the ultimate cost of parts manufactured on powder-bed fusion systems. This, of course, ignores other factors such as post-processing of parts (i.e. heat-treatment, post-machining), surface coatings, post-inspection and part-level qualification, which usually form an essential part of the overall manufacturing process for an AM part.
This ‘lump’ cost value depends on various parameters inherent to the part, which are driven by design decisions made to meet the functional requirements and performance. The primary factors affecting it are:
- Material alloy
- Geometrical shape of the part
- Machine system
These may be further specified as a set of chosen parameters:
- Part Orientation
- Build Volume Packing (i.e. number of parts within the build)
- Number of laser beams in the SLM system
- Recoater time
- Material alloy laser parameters (i.e. effective laser scan speed)
- Part Volume (V)
From the build time, the cost estimate solely for building the piece part can be calculated across ‘batches’, or a number of builds, which largely takes into account fixed costs such as the capital investment in the machine and the direct costs associated with material inputs, consumables and energy consumption [5].
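As a minimal illustration of how such a per-part cost allocation could look, the sketch below divides the machine, material and consumable costs of a build across the parts it contains. All of the rates and quantities are purely hypothetical and not taken from any particular machine or alloy.

# Hypothetical per-part cost allocation sketch – all values are illustrative only
machineRate = 45.0      # assumed machine hourly rate amortising capital over its life [€/hr]
buildTime = 18.5        # estimated build time for the whole build [hr]
materialCost = 220.0    # powder consumed by the build, including supports and losses [€]
consumables = 60.0      # filters, build plate refurbishment, inert gas, energy [€]
partsPerBuild = 8       # number of parts packed into the build volume

costPerPart = (machineRate * buildTime + materialCost + consumables) / partsPerBuild
print('Approximate cost per part: {:.2f}'.format(costPerPart))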
In this post, additional factors intrinsic to the machine operation, such as build-chamber warm-up and cool-down time and out-gassing time, are ignored. When exploring the economics of the process, these should be accounted for, because in some processes, e.g. Selective Laser Sintering (SLS) and High-Speed Sintering (HSS) of polymers, they can make a significant contribution to the actual ‘accumulated‘ build time within the machine.
Calculation of the Build Time in L-PBF
There are many different approaches for estimating the build time, depending on the accuracy required.
Build Bulk Volume Method
The build volume method is the crudest form of calculating the build time, t_{build}. The method takes the total volume of the part(s) within a build, V, and divides it by the machine’s build volume rate \dot{V} – a lumped empirical value corresponding to a specific material deposited or manufactured by an AM system.
t_{build}=\frac{V}{\dot{V}}
This is very approximate and therefore limited, because the prediction ignores the build height within the chamber, which is a primary contributor to the build time. It also ignores build volume packing – the density with which numerous parts are packed inside the chamber – which for each build contributes to the fixed cost. However, it is a good measure for accounting for the cost of a part based simply on its mass – potentially a useful indicator early during the design conceptualisation phase.
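As a sketch, the bulk volume estimate reduces to a single division. The build volume rate below is an assumed lumped value; in practice it would be taken from machine or material datasheets.

# Hypothetical bulk volume build-time estimate – the build rate is an assumed lumped value
partVolume = 120e3       # total part volume within the build [mm^3]
buildVolumeRate = 8e3    # assumed machine build rate for the alloy [mm^3/hr]

buildTimeEstimate = partVolume / buildVolumeRate  # t_build = V / V_dot [hr]
print('Bulk volume build time estimate: {:.1f} hr'.format(buildTimeEstimate))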
Layer-wise Method
This approach accounts for the actual geometry of the part as part of the estimation. It performs slicing of the part and accounts for the area and boundaries of the part, which may be assigned separate laser scan speeds. This has been implemented as a multi-threaded/multi-process example to demonstrate how one can analyse the cost of a part relatively quickly and simply, using this as a template.
The entire part is sliced at a constant layer thickness L_t in the function calculateLayer(). In this function, the part is sliced using getVectorSlice() at the particular z-height; disabling the returnCoordPaths parameter returns a list of shapely.geometry.Polygon objects.
import time
import numpy as np
from multiprocessing import Manager, Pool

def calculateLayer(input):
    # Unpack the shared dictionary and the layer index
    d = input[0]
    zid = input[1]
    layerThickness = d['layerThickness']
    solidPart = d['part']
    # Slice the boundary at the z-height given by the layer index
    geomSlice = solidPart.getVectorSlice(zid * layerThickness, returnCoordPaths=False)
    return geomSlice
The slice represents the boundaries across the layer. Each boundary is a shapely.geometry.Polygon, which can be easily queried for its boundary length and area. This is performed later, after the Python multiprocessing map call:
# A managed dict is used to share the part and slicing parameters across worker processes
d = Manager().dict()
d['part'] = solidPart
d['layerThickness'] = layerThickness

# Rather than give the z position, we give a z index to calculate the z from.
numLayers = int(solidPart.boundingBox[5] / layerThickness)
z = np.arange(0, numLayers).tolist()

# The layer id and manager shared dict are zipped into a list of tuple pairs
processList = list(zip([d] * len(z), z))

p = Pool()
startTime = time.time()
layers = p.map(calculateLayer, processList)
p.close()
print('multiprocessing time', time.time() - startTime)
# Flatten the list of layers into a single list of polygons
polys = []
for layer in layers:
    for poly in layer:
        polys.append(poly)

layers = polys

"""
Calculate total layer statistics:
"""
numCountourOffsets = 1  # assumed number of contour (border) offsets scanned per layer

totalHeight = solidPart.boundingBox[5]
totalVolume = solidPart.volume

totalPerimeter = np.sum([layer.length for layer in layers]) * numCountourOffsets
totalArea = np.sum([layer.area for layer in layers])
Once the total part area and perimeter have been summed, the total scan time can be calculated from them. The approximate scan time across the part volume (bulk region) is given by the total scan area A accumulated across each layer of the part, the hatch distance h_d and the laser scan speed v_{bulk}.
t_{hatch} = \frac{A}{h_d v_{bulk}}
Similarly, the scan time across the boundary for contour scans (typically scanned at a lower speed) is simply the total perimeter length L divided by the contour scan speed v_{contour}:
t_{boundary} = \frac{L}{v_{contour}}
Finally, the recoating time is simply the per-layer recoat time multiplied by the number of layers.
"""
Calculate the time estimates
"""
hatchTimeEstimate = totalArea / hatchDistance / hatchLaserScanSpeed
boundaryTimeEstimate = totalPerimeter / contourLaserScanSpeed
scanTime = hatchTimeEstimate + boundaryTimeEstimate
recoaterTimeEstimate = numLayers * layerRecoatTime
totalTime = hatchTimeEstimate + boundaryTimeEstimate + recoaterTimeEstimate
Compound approach using Surface and Volume
In fact, it may be possible to deduce that much of this is unnecessary for finding the approximate scanning time; a simpler formulation can be derived instead. The scan time can be deduced simply from the volume V and the total surface area of the part S:
t_{total}=\frac{V}{L_t h_d v_{bulk}} + \frac{S}{L_t v_{contour}} + N*t_{recoat},
where N=h_{build}/L_t. After realising this and looking further into the literature, it turns out this was proposed by Giannatsis et al. back in 2001 for SLA build-time estimation [6]; surprisingly, I hadn’t come across it before. They propose that taking the vertical projection of the surface better represents the true area of the boundary under the slicing process.
t_{total}=\frac{V}{L_t h_d v_{bulk}} + \frac{S_P}{L_t v_{contour}} + N*t_{recoat}
The projected area is calculated by taking the dot product of the vertical vector v_{up} = (0,0,1)^T with each triangle’s surface normal \hat{n}, using the relation a\cdot b = \|a\| \|b\| \cos(\theta), and then obtaining the sine component via the identity \cos^2(\theta) + \sin^2(\theta) = 1 to project each triangle’s area across the vertical extent.
""" Projected Area"""
# Calculate the vertical face angles
v0 = np.array([[0., 0., 1.0]])
v1 = solidPart.geometry.face_normals
sin_theta = np.sqrt((1-np.dot(v0, v1.T)**2))
triAreas = solidPart.geometry.area_faces *sin_theta
projectedArea = np.sum(triAreas)
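Putting the compound formulation together, a sketch of the total time estimate using the part volume and the projected surface area could look like the following. The scan parameters are assumed values for illustration only, not taken from any particular machine; the volume is in mm³ and speeds in mm/s, so the resulting times are in seconds.

# Assumed process parameters for illustration only
layerThickness = 0.03          # L_t [mm]
hatchDistance = 0.08           # h_d [mm]
hatchLaserScanSpeed = 800.0    # v_bulk [mm/s]
contourLaserScanSpeed = 400.0  # v_contour [mm/s]
layerRecoatTime = 10.0         # t_recoat per layer [s]

numLayers = int(solidPart.boundingBox[5] / layerThickness)

# t_total = V / (L_t h_d v_bulk) + S_P / (L_t v_contour) + N * t_recoat
hatchTime = solidPart.volume / (layerThickness * hatchDistance * hatchLaserScanSpeed)
contourTime = projectedArea / (layerThickness * contourLaserScanSpeed)
recoatTime = numLayers * layerRecoatTime

totalTimeEstimate = hatchTime + contourTime + recoatTime
print('Approximate build time: {:.2f} hr'.format(totalTimeEstimate / 3600))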
Comparison between build time estimation approaches
The scan times predicted by the approximations are relatively close for a simple example:
- Discretised Layer Scan Time – 4.996 hr
- Approximate Scan Time – 5.126 hr
- Approximate Scan Time (with projection) – 4.996 hr
Arriving at this rather simple result may not be especially interesting, but given that most cost models do not state it, it will hopefully be useful to some. It is useful in that it can account for the complexity of the boundary, rather than simply the volume and the build height, whilst factoring in the laser parameters used – typically available for most materials on commercial systems.
The second part of this post will share more details about measuring the scan time more precisely using the analysis tools available in PySLM.
References
[1] Baumers, M., Tuck, C., Wildman, R., Ashcroft, I., & Hague, R. (2017). Shape Complexity and Process Energy Consumption in Electron Beam Melting: A Case of Something for Nothing in Additive Manufacturing? Journal of Industrial Ecology, 21(S1), S157–S167. https://doi.org/10.1111/jiec.12397
[2] Baumers, M., Dickens, P., Tuck, C., & Hague, R. (2016). The cost of additive manufacturing: Machine productivity, economies of scale and technology-push. Technological Forecasting and Social Change, 102, 193–201. https://doi.org/10.1016/j.techfore.2015.02.015
[3] Faludi, J., Baumers, M., Maskery, I., & Hague, R. (2017). Environmental Impacts of Selective Laser Melting: Do Printer, Powder, Or Power Dominate? Journal of Industrial Ecology, 21(S1), S144–S156. https://doi.org/10.1111/jiec.12528
[4] Liu, Z. Y., Li, C., Fang, X. Y., & Guo, Y. B. (2018). Energy Consumption in Additive Manufacturing of Metal Parts. Procedia Manufacturing, 26, 834–845. https://doi.org/10.1016/j.promfg.2018.07.104
[5] Leach, R., & Carmignato, S. (Eds.). (2020). Precision Metal Additive Manufacturing. CRC Press. https://doi.org/10.1201/9780429436543
[6] Giannatsis, J., Dedoussis, V., & Laios, L. (2001). A study of the build-time estimation problem for Stereolithography systems. Robotics and Computer-Integrated Manufacturing, 17(4), 295–304. https://doi.org/10.1016/S0736-5845(01)00007-2