ovito.io
This module primarily provides two high-level functions for reading and writing external data files:
ovito.io.export_file(data, file, format, **params)

High-level function that exports data to a file. See the Data export section for an overview of this topic.

Parameters:
- data – The object to be exported. See below for options.
- file (str) – The output file path.
- format (str) – The type of file to write. See below for options.
Data to export

Various kinds of objects are accepted by the function as the data argument:

Pipeline: Exports the dynamically generated output of a data pipeline. Since pipelines can be evaluated at different animation times, multi-frame sequences can be produced when passing a Pipeline object to the export_file() function.

DataCollection: Exports the static data of a data collection. Data objects contained in the collection that are not compatible with the chosen output format are ignored.

DataObject: Exports just the data object as if it were the only part of a DataCollection. The provided data object must be compatible with the selected output format. For example, when exporting to the "txt/table" format (see below), a DataTable object should be passed to the export_file() function.

None: All pipelines that are part of the current scene (see ovito.Scene.pipelines) are exported. This option makes sense for scene description formats such as the POV-Ray format.
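As a sketch of the DataObject case above, a DataTable produced by a modifier can be passed directly to export_file(). The input file name is a placeholder, and the table key 'coordination-rdf' assumes the table generated by CoordinationAnalysisModifier:

```python
from ovito.io import import_file, export_file
from ovito.modifiers import CoordinationAnalysisModifier

# Build a pipeline whose output contains a DataTable (the RDF histogram):
pipeline = import_file("input.dump")  # placeholder input file
pipeline.modifiers.append(CoordinationAnalysisModifier(cutoff=5.0))

# Pass just the DataTable object to export_file() when using the
# "txt/table" output format:
rdf_table = pipeline.compute().tables['coordination-rdf']
export_file(rdf_table, "rdf.txt", "txt/table")
```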
Output format

The format parameter determines the type of file to write; the filename suffix is ignored. However, for filenames that end with .gz, automatic gzip compression is activated if the selected format is text-based. The following format strings are supported:

Format string     Description
"txt/attr"        Export global attributes to a text file (see below)
"txt/table"       Export a DataTable to a text file
"lammps/dump"     LAMMPS text-based dump format
"lammps/data"     LAMMPS data format
"imd"             IMD format
"vasp"            POSCAR format
"xyz"             XYZ format
"fhi-aims"        FHI-aims format
"gsd/hoomd"       GSD format used by the HOOMD simulation code
"netcdf/amber"    Binary format for MD data following the AMBER format convention
"vtk/trimesh"     ParaView VTK format for exporting SurfaceMesh objects
"vtk/disloc"      ParaView VTK format for exporting DislocationNetwork objects
"vtk/grid"        ParaView VTK format for exporting VoxelGrid objects
"ca"              Crystal Analysis (CA) text-based format for dislocation lines
"povray"          POV-Ray scene format
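Since gzip compression is keyed off the .gz filename suffix for text-based formats, a compressed export can be sketched as follows (file names are placeholders):

```python
from ovito.io import import_file, export_file

pipeline = import_file("input.dump")  # placeholder input file

# The ".gz" suffix activates automatic gzip compression, because
# "xyz" is a text-based format:
export_file(pipeline, "output.xyz.gz", "xyz",
            columns=["Particle Type", "Position.X", "Position.Y", "Position.Z"])
```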
Depending on the selected output format, additional keyword arguments must be passed to export_file(), as documented below.

File columns
For the output formats lammps/dump, xyz, imd, and netcdf/amber, you must specify the set of particle properties to export using the columns keyword parameter:

export_file(pipeline, "output.xyz", "xyz",
            columns=["Particle Identifier", "Particle Type", "Position.X", "Position.Y", "Position.Z"])
You can export the standard particle properties and any user-defined properties present in the pipeline’s output DataCollection. For vector properties, the component name must be appended to the base name, as demonstrated above for the Position property.

Exporting several simulation frames
By default, only the current animation frame (frame 0 by default) is exported by the function. To export a different frame, pass the frame keyword parameter to the export_file() function. Alternatively, you can export all frames of an animation sequence at once by passing multiple_frames=True. Refined control of the exported frame sequence is available through the keyword arguments start_frame, end_frame, and every_nth_frame.

The lammps/dump and xyz file formats can store multiple frames in a single output file. For other formats, or if you intentionally want to generate one file per frame, you must pass a wildcard filename to export_file(). This filename must contain exactly one * character, as in the following example; it will be replaced with the animation frame number:

export_file(pipeline, "output.*.dump", "lammps/dump", multiple_frames=True)

The above call is equivalent to the following for-loop:

for i in range(pipeline.source.num_frames):
    export_file(pipeline, "output.%i.dump" % i, "lammps/dump", frame=i)
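The frame-range keywords start_frame, end_frame, and every_nth_frame mentioned above can be combined with multiple_frames; a sketch with placeholder file names and values:

```python
from ovito.io import import_file, export_file

pipeline = import_file("input.*.dump")  # placeholder file sequence

# Export every 10th frame of the range 0..100 to a single
# multi-frame dump file:
export_file(pipeline, "subset.dump", "lammps/dump",
            columns=["Particle Identifier", "Position.X", "Position.Y", "Position.Z"],
            multiple_frames=True,
            start_frame=0, end_frame=100, every_nth_frame=10)
```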
Floating-point number precision

For text-based file formats, you can set the desired formatting precision for floating-point values using the precision keyword parameter. The default output precision is 10 digits; the maximum is 17.

LAMMPS atom style
When writing files in the lammps/data format, the LAMMPS atom style “atomic” is used by default. If you want to output a data file with a different LAMMPS atom style, specify it with the atom_style keyword parameter:

export_file(pipeline, "output.data", "lammps/data", atom_style="bond")
export_file(pipeline, "output.data", "lammps/data", atom_style="hybrid", atom_substyles=("template", "charge"))
If at least one ParticleType of the exported model has a non-zero mass value, OVITO will write a Masses section to the LAMMPS data file. You can suppress it by passing omit_masses=True to the export function. Furthermore, the keyword argument ignore_identifiers=True replaces any existing atom IDs (particle property Particle Identifier) with a contiguous sequence of numbers during file export.

VASP (POSCAR) format
When exporting to the vasp file format, OVITO will output atomic positions and velocities in Cartesian coordinates by default. You can request output in reduced cell coordinates instead by specifying the reduced keyword parameter:

export_file(pipeline, "structure.poscar", "vasp", reduced=True)
Global attributes

The txt/attr file format allows you to export global quantities computed by the data pipeline to a text file. For example, to write out the number of FCC atoms identified by a CommonNeighborAnalysisModifier as a function of simulation time, one would use the following:

export_file(pipeline, "data.txt", "txt/attr",
            columns=["Timestep", "CommonNeighborAnalysis.counts.FCC"],
            multiple_frames=True)

See the documentation of the individual modifiers to find out which global quantities they generate. You can also determine at runtime which attributes are available in the output data collection of a Pipeline:

print(pipeline.compute().attributes)
ovito.io.import_file(location, **params)

Imports data from an external file. This Python function corresponds to the Load File menu command in OVITO’s user interface. The format of the imported file is automatically detected (see the list of supported formats). Depending on the file’s format, additional keyword parameters may be required to specify how the data should be interpreted. These keyword parameters are documented below.

Parameters:
- location – The file to import. This can be a local file path or a remote sftp:// or https:// URL.

Returns:
- The new Pipeline that has been created for the imported data.
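A minimal usage sketch; the file path is a placeholder:

```python
from ovito.io import import_file

# Create a new pipeline that reads its input from the given file:
pipeline = import_file("input/simulation.dump")  # placeholder path

# Evaluate the pipeline to obtain the loaded data:
data = pipeline.compute()
print(data.particles.count)

# Optionally insert the pipeline into the scene so the data appears
# in rendered images and the interactive viewports:
pipeline.add_to_scene()
```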
The function creates and returns a new Pipeline object, which uses the contents of the external data file as input. The pipeline will be wired to a FileSource, which reads the input data from the external file and passes it on to the pipeline. You can access the data by calling the Pipeline.compute() method or, alternatively, FileSource.compute() on the data source. As long as the new Pipeline contains no modifiers yet, both methods will return the same data.

Note that the Pipeline is not automatically inserted into the three-dimensional scene. That means the loaded data won’t appear in rendered images or the interactive viewports of OVITO by default. For that to happen, you need to explicitly insert the pipeline into the scene by calling its add_to_scene() method.

Furthermore, note that you can re-use the returned Pipeline if you want to load a different data file later on. Instead of calling import_file() again to load another file, you can use the pipeline.source.load(...) method to replace the input file of the already existing pipeline.

File columns
When importing simple-format XYZ files or legacy binary LAMMPS dump files, the mapping of file columns to particle properties in OVITO must be specified using the columns keyword parameter:

pipeline = import_file('file.xyz', columns=['Particle Identifier', 'Particle Type', 'Position.X', 'Position.Y', 'Position.Z'])

The number of column strings must match the actual number of data columns in the input file. See this table for standard particle property names. Alternatively, you can specify user-defined names for file columns that should be read as custom particle properties by OVITO. For vector properties, the component name must be appended to the property’s base name, as demonstrated for the Position property in the example above. To ignore a file column during import, use None as the entry in the columns list.

For LAMMPS dump files or extended-format XYZ files, OVITO automatically determines a reasonable column-to-property mapping, but you may override it using the columns keyword. This can make sense, for example, if the file columns containing the particle coordinates do not follow the standard naming scheme x, y, and z (as is the case when importing time-averaged atomic positions computed by LAMMPS).

Frame sequences
OVITO automatically detects if the imported file contains multiple data frames (timesteps). Alternatively (and additionally), it is possible to load a sequence of files in the same directory by using the * wildcard character in the filename. Note that * may appear only once, only in the filename component of the path, and only in place of numeric digits. Furthermore, it is possible to pass an explicit list of file paths to the import_file() function, which will be loaded as an animatable sequence. All variants can be combined. For example, to load two file sets from different directories as one consecutive sequence:

import_file('sim.xyz')                               # Load all frames contained in the given file
import_file('sim.*.xyz')                             # Load 'sim.0.xyz', 'sim.100.xyz', 'sim.200.xyz', etc.
import_file(['sim_a.xyz', 'sim_b.xyz'])              # Load an explicit list of snapshot files
import_file(['dir_a/sim.*.xyz', 'dir_b/sim.*.xyz'])  # Load several file sequences from different directories

The number of frames found in the input file(s) is reported by the num_frames attribute of the pipeline’s FileSource. You can step through the frames with a for-loop as follows:

from ovito.io import import_file

# Import a sequence of files.
pipeline = import_file('input/simulation.*.dump')

# Loop over all frames of the sequence.
for frame_index in range(pipeline.source.num_frames):

    # Calling FileSource.compute() loads the requested frame
    # from the sequence into memory and returns the data as a new
    # DataCollection:
    data = pipeline.source.compute(frame_index)

    # The source path and the index of the current frame
    # are attached as attributes to the data collection:
    print('Frame source:', data.attributes['SourceFile'])
    print('Frame index:', data.attributes['SourceFrame'])

    # Accessing the loaded frame data, e.g. the particle positions:
    print(data.particles.positions[...])
LAMMPS atom style
When loading a LAMMPS data file, the atom style may have to be specified using the atom_style keyword parameter unless the file contains a hint string, which allows OVITO to detect the style automatically. Data files written by the LAMMPS write_data command or by OVITO contain such a hint, for example. For data files not containing a hint, the atom style must be specified explicitly, as in these examples:

import_file('full_model.data', atom_style='full')
import_file('hybrid_model.data', atom_style='hybrid', atom_substyles=('template', 'charge'))
Particle ordering
Particles are read and stored by OVITO in the same order as they are listed in the input file. Some file formats contain unique particle identifiers or tags, which allow OVITO to track individual particles over time even if the storage order changes from frame to frame. OVITO will automatically make use of that information where appropriate without touching the original storage order. However, in some situations it may be desirable to explicitly have the particles sorted with respect to the IDs. You can request this reordering by passing the sort_particles=True option to import_file(). Note that this option has no effect if the input file contains no particle identifiers.

Topology and trajectory files
Some simulation codes write a topology file and a separate trajectory file. The former contains only static information, such as the bonding between atoms and the atom types, which does not change during a simulation run, while the latter stores the varying data (primarily the atomic trajectories). To load such a topology-trajectory pair of files, first read the topology file with the import_file() function, then insert a LoadTrajectoryModifier into the returned Pipeline to also load the trajectory data.
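The two-step procedure described above can be sketched as follows; the file names and the atom style are placeholders:

```python
from ovito.io import import_file
from ovito.modifiers import LoadTrajectoryModifier

# Step 1: Read the static topology (bonds, atom types, ...):
pipeline = import_file('topology.data', atom_style='bond')  # placeholder file

# Step 2: Load the trajectory frames and combine them with the topology:
traj_mod = LoadTrajectoryModifier()
traj_mod.source.load('trajectory.dump')  # placeholder file
pipeline.modifiers.append(traj_mod)

# The pipeline now yields the time-dependent particle positions
# together with the static topology:
print(traj_mod.source.num_frames)
```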