The in-built visualisation for scan paths in PySLM leverages matplotlib – refer to a previous post. This is sufficient for most users' needs when interpreting and visualising the scan paths generated in PySLM, or those imported from a slice taken from an existing machine build file. Extending this across multiple layers or large parts becomes trickier when factoring in the visualisation of additional parameters (e.g. laser power, effective scan speed). Admittedly, the performance of matplotlib becomes a limitation when exploring the intricacies and complexity embedded within the scan vectors.
For scientific research, fusing scan vector geometry with volumetric datasets, such as X-ray CT collected during post-inspection of parts and samples, or with data generated within the build process (pyrometry, thermal imaging), offers the ability to increase our understanding of how the process affects the material produced using L-PBF. GPU-based visualisation libraries such as VisPy could accelerate rendering performance, but they are less user-friendly, offer limited interactivity when manipulating views and data, and are often cumbersome for the volumetric datasets typically encountered in Additive Manufacturing. Paraview is a cross-platform, open-source scientific visualisation tool that is especially powerful for processing, interacting with and visualising large-scale scientific datasets.
Paraview and the underlying VTK library offer an alternative, ready-made solution for visualising this information and, most importantly, are hardware accelerated, with optional ray tracing provided by OSPRay and OptiX for the latest NVIDIA RTX cards that include ray-tracing (RT) cores. Additionally, the data can be augmented and processed using parallelised filters and tools within Paraview.
VTK File Format
Ignoring the HDF5 variations that are most useful for structured data, the underlying format within VTK used for storing vector-based data and point-cloud data is the .vtp file format. The modern VTK file formats use an XML schema, unlike the legacy format, to store a structured series of geometry (volumetric data, lines, polygons, 3D elements and point clouds). The internal data can be stored using ASCII encoding or binary. Binary data can be incorporated directly within a parsable XML file using Base64 encoding and may additionally incorporate internal compression. Alternatively, data can be stored in an appended data section located at the footer of the file, which treats the data section as a contiguous block of raw data. Different sub-formats exist that are appropriate for different types of data, e.g. volumetric, element-based (Finite Volume / Finite Element derived) or polygon-based. For exporting scan vector geometry, the .vtp format is most suitable.
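To make the structure concrete, the simplified skeleton below sketches the layout of a line-based .vtp file using the appended raw binary option; the attribute values are placeholders and the specific arrays are discussed in the sections that follow.

<VTKFile type="PolyData" version="1.0" byte_order="LittleEndian" header_type="UInt32">
  <PolyData>
    <Piece NumberOfPoints="..." NumberOfLines="..." ...>
      <Points>
        <DataArray type="Float32" NumberOfComponents="3" format="appended" offset="..." />
      </Points>
      <PointData>
        <!-- per-point attributes, e.g. a scan-order id -->
      </PointData>
      <Lines>
        <DataArray Name="connectivity" format="appended" offset="..." />
        <DataArray Name="offsets" format="appended" offset="..." />
      </Lines>
    </Piece>
  </PolyData>
  <AppendedData encoding="raw">_(raw binary blocks)</AppendedData>
</VTKFile>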
The data stored in the VTK PolyData (.vtp) file consists of:
- 3D points coordinates
- Data attributes stored at each point location
- Geometric elements (lines, polygons) defining connectivity with reference to the list of point coordinates
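As a minimal illustration of how these three pieces fit together, consider two disconnected scan vectors: four points produce two line cells, described by a connectivity list of point indices and an offsets list marking where each cell ends. The names mirror the VTK arrays; the values are hypothetical.

points:       p0, p1, p2, p3   # four 3D coordinates
connectivity: [0, 1, 2, 3]     # point indices referenced by the line cells
offsets:      [2, 4]           # cumulative point count at the end of each line cell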
Paraview Exporter Implementation
The Paraview exporter is simplistic, because data compression is currently ignored. The process is similar to the technique used in the function pyslm.visualise.plotSequential, whereby hatch and contour vectors are merged and re-processed so that they always represent a series of discrete lines (an n x 2 x 2 array). This is not the most efficient option for ContourGeometry (border scans), where scan vectors are continuously joined together, but it simplifies processing when working with the data.
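A minimal sketch of this re-processing step for a contour, assuming an open polyline stored as an n x 2 array of vertices (the variable names are illustrative):

import numpy as np

# Hypothetical contour with three vertices
coords = np.array([[0., 0.], [1., 0.], [1., 1.]])

# Pair each vertex with its successor, then drop the wrap-around pair,
# yielding an (n-1) x 2 x 2 array of individual line segments
segments = np.hstack([coords, np.roll(coords, -1, axis=0)])[:-1, :].reshape(-1, 2, 2)
# segments[0] -> [[0., 0.], [1., 0.]]
# segments[1] -> [[1., 0.], [1., 1.]]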
Once the scan vector coordinates and the relevant data are packaged into a single array, the data is written into the sub-sections of the XML file. Data is stored as floating-point or integer values accordingly, in a binary representation. The data used to represent coordinates and indices for each vector is stored with the 'appended' option within the <DataArray> element of each section. The raw data is collected and then written into the <AppendedData> element at the end of the file, with the raw encoding option chosen. The byte offsets for the position of each 'chunk' of data referenced by a <DataArray> element are collected and stored incrementally.
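A minimal sketch of this offset bookkeeping, assuming each packed block of bytes is prefixed by a 4-byte UInt32 size header (matching header_type="UInt32" in the file header); the names are illustrative:

# Offsets are measured in bytes from the character following the leading '_'
offset = 0
offsets = []
for block in rawBlocks:          # rawBlocks: packed bytes, one entry per <DataArray>
    offsets.append(offset)       # value written to that DataArray's 'offset' attribute
    offset += 4 + len(block)     # 4-byte size prefix plus the payload itself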
For reference, the following information is provided for writing the raw data section, because this was difficult to obtain directly from the VTK documentation.
<AppendedData encoding="raw">             | Start of the raw data section
_                                         | Underscore character marking the starting location for reading the raw data
Section size (UInt32/UInt64)              | Integer giving the size in bytes of the following data section (consistent with the offsets provided); the integer type must match the header_type declared in the <VTKFile> element
Raw data (e.g. Int32, Float32, Float64)   | The packed binary data itself
....                                      | The above two rows are repeated for each referenced data section
</AppendedData>                           | End of the raw data section
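As a small worked example of the two repeated rows, packing three Float32 values produces a 12-byte payload, preceded by its UInt32 size prefix:

import struct

payload = struct.pack('<3f', 1.0, 2.0, 3.0)         # 12 bytes of raw Float32 data
block = struct.pack('<I', len(payload)) + payload   # UInt32 size prefix, then the payload
# block[:4] == b'\x0c\x00\x00\x00'  (12, little-endian)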
Example Scan Vector Data exported to VTK
An example Aconity .ILT file was imported into PySLM and then exported to a .vtp VTK file, which was processed in Paraview. The scan order is visualised by the colour map, with each vertex assigned a global ID. The 'Tube' filter was applied to each scan vector in order to improve its visibility.
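For scripted workflows, the same steps can be reproduced with Paraview's Python interface (paraview.simple). This is a hedged sketch: the file name and radius are illustrative, and filter property names may differ slightly between Paraview versions.

from paraview.simple import XMLPolyDataReader, Tube, Show, Render

reader = XMLPolyDataReader(FileName=['scanVectors.vtp'])  # illustrative file name
tube = Tube(Input=reader)   # wrap each scan vector in a cylindrical tube
tube.Radius = 0.02          # tube radius in model units
Show(tube)
Render()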

The script excerpt can currently be found in a Gist. It will later be included in a future version of PySLM, along with other importers and exporters.
from typing import List

import struct
import numpy as np

from pyslm import geometry as slm


def writeVTKLayers(layers: List[slm.Layer], filename: str, scaleFactor: float = 1e3):
    """
    Exports a list of pyslm Layers to a VTK .vtp (PolyData) file using the appended raw binary format.

    :param layers: List of layers to export to VTK
    :param filename: The filename of the VTK .vtp file to write to
    :param scaleFactor: The scale factor to use for the Z coordinates
    """
    fileScanVectors = []
    scanData = []
    appendData = []
    appendDataOffsets = []

    with open(filename, 'w') as fp:

        fp.write('<VTKFile type="PolyData" version="1.0" byte_order="LittleEndian" header_type="UInt32">\n')

        for layer in layers:

            scanVectors = []

            for geom in layer.geometry:
                if isinstance(geom, slm.HatchGeometry):
                    coords = geom.coords.reshape(-1, 2, 2)
                elif isinstance(geom, slm.ContourGeometry):
                    # Pair each vertex with its successor and drop the wrap-around pair so that the
                    # continuously joined contour becomes a series of individual line segments
                    coords = np.hstack([geom.coords, np.roll(geom.coords, -1, axis=0)])[:-1, :].reshape(-1, 2, 2)
                elif isinstance(geom, slm.PointsGeometry):
                    # Duplicate each coordinate so that point exposures emulate zero-length hatch vectors
                    coords = np.tile(geom.coords.reshape(-1, 2), (1, 2)).reshape(-1, 2, 2)

                scanVectors.append(coords)

            if len(scanVectors) == 0:
                continue

            scanVectors = np.vstack(scanVectors).reshape(-1, 2)

            # Append the layer's z coordinate to the scan vectors
            scanVectors = np.hstack([scanVectors, np.ones((len(scanVectors), 1)) * layer.z / scaleFactor])

            # Append the current layer's scan vectors to the accumulated list of scan vectors
            fileScanVectors.append(scanVectors)

            """
            Obtain the sequential index of each scan vector by accumulating the distance across all
            the scan vectors, so that a colourmap can later be normalised based on the distance
            covered. Note: this is collected per layer but not currently exported.
            """
            scanVectors2 = scanVectors[:, :2].reshape(-1, 2, 2)
            delta = scanVectors2[:, 1, :] - scanVectors2[:, 0, :]
            dist = np.sqrt(delta[:, 0] * delta[:, 0] + delta[:, 1] * delta[:, 1])
            cumDist = np.cumsum(dist)
            scanData.append(cumDist)

        fileScanVectors = np.vstack(fileScanVectors)

        # Each pair of consecutive points forms one line cell
        fp.write('\t<PolyData>\n')
        fp.write('\t\t<Piece NumberOfPoints="{:d}" NumberOfVerts="0" NumberOfLines="{:d}" NumberOfStrips="0" NumberOfPolys="0">\n'.format(
            fileScanVectors.shape[0], fileScanVectors.shape[0] // 2))

        # Write the points array
        fp.write('\t\t\t<Points>\n')
        fp.write('\t\t\t\t<DataArray type="Float32" NumberOfComponents="3" Name="Points" format="appended" offset="0" />\n')
        fp.write('\t\t\t</Points>\n')

        """
        Pack the coordinate data
        """
        s = fileScanVectors.astype(np.float32)
        b = struct.pack('=%sf' % s.size, *s.flatten())
        appendData.append(b)
        appendDataOffsets.append(len(b) + 4)  # Increment by 4 bytes for the UInt32 block-size header

        # Write the point data: one sequential global id per point to visualise the scan order
        fp.write('\t\t\t<PointData Scalars="GlobalNodeID">\n')
        orderId = np.arange(0, len(fileScanVectors))

        """
        Pack the point data
        """
        fp.write('\t\t\t\t<DataArray type="Int32" NumberOfComponents="1" '
                 'Name="GlobalNodeID" format="appended" offset="{:d}" />\n'.format(appendDataOffsets[-1]))
        s = orderId.astype(np.int32)
        b = struct.pack('=%si' % s.size, *s.flatten())
        appendData.append(b)
        appendDataOffsets.append(appendDataOffsets[-1] + len(b) + 4)

        fp.write('\t\t\t</PointData>\n')

        """
        Write out the individual hatch vectors
        """
        con = np.arange(0, len(fileScanVectors))

        fp.write('\t\t\t<Lines>\n')
        fp.write('\t\t\t\t<DataArray type="Int32" NumberOfComponents="1" '
                 'Name="connectivity" format="appended" offset="{:d}" />\n'.format(appendDataOffsets[-1]))
        s = con.astype(np.int32)
        b = struct.pack('=%si' % s.size, *s.flatten())
        appendData.append(b)
        appendDataOffsets.append(appendDataOffsets[-1] + len(b) + 4)

        fp.write('\t\t\t\t<DataArray type="Int32" NumberOfComponents="1" '
                 'Name="offsets" format="appended" offset="{:d}" />\n'.format(appendDataOffsets[-1]))
        offsets = np.arange(2, len(fileScanVectors) + 1, 2)
        s = offsets.astype(np.int32)
        b = struct.pack('=%si' % s.size, *s.flatten())
        appendData.append(b)
        appendDataOffsets.append(appendDataOffsets[-1] + len(b) + 4)

        fp.write('\t\t\t</Lines>\n')
        fp.write('\t\t\t</Piece>\n')
        fp.write('\t</PolyData>\n')
        fp.write('<AppendedData encoding="raw">\n')

    """
    The file must be re-opened with the binary append flag to permit the raw data to be written
    into the <AppendedData> section.
    """
    with open(filename, 'ab') as fp:
        # Inside the <AppendedData> section, write the underscore character to indicate the start
        # of the raw data section
        fp.write(bytes("\t_", 'ascii'))

        # Iterate across the stored data sections and write both the size (bytes) and the raw data
        # into the file's appended section
        for data in appendData:
            # Write the size of the data section as a UInt32 (matching the declared header_type)
            fp.write(struct.pack('I', len(data)))
            # Write the raw data section
            fp.write(data)

        # Write the end of the raw data section and close the XML file
        fp.write(bytes('</AppendedData>\n', 'ascii'))
        fp.write(bytes('</VTKFile>\n', 'ascii'))


"""
Create an example data structure for a build file and export this to VTK
"""
testLayer = slm.Layer()
testLayer.z = 1000  # z position (in microns), scaled to 1.0 by the default scaleFactor

testGeom = slm.HatchGeometry()
testGeom.coords = np.array([[1.0, 2.0],
                            [10.0, 2.0],
                            [5.0, 4.0],
                            [15.0, 4.0]])

testLayer.geometry.append(testGeom)

# Write the layer geometry to the VTK (.vtp) file format
writeVTKLayers([testLayer], './testFile.vtp')
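As a quick sanity check, assuming the vtk Python package is installed, the exported file can be read back and the point and line counts inspected; this sketch is not part of the exporter itself.

import vtk

reader = vtk.vtkXMLPolyDataReader()
reader.SetFileName('./testFile.vtp')
reader.Update()

polyData = reader.GetOutput()
# The example hatch geometry above should yield 4 points and 2 lines
print(polyData.GetNumberOfPoints(), polyData.GetNumberOfLines())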