API Documentation
highdicom package
- class highdicom.AlgorithmIdentificationSequence(name, family, version, source=None, parameters=None)
Bases:
Sequence
Sequence of data elements describing information useful for identification of an algorithm.
- Parameters
name (str) – Name of the algorithm
family (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept]) – Kind of algorithm family
version (str) – Version of the algorithm
source (str, optional) – Source of the algorithm, e.g. name of the algorithm manufacturer
parameters (Dict[str, str], optional) – Name and actual value of the parameters with which the algorithm was invoked
- property family: CodedConcept
Kind of the algorithm family.
- Type
highdicom.sr.CodedConcept
- Return type
highdicom.sr.CodedConcept
- classmethod from_sequence(sequence, copy=True)
Construct instance from an existing data element sequence.
- Parameters
sequence (pydicom.sequence.Sequence) – Data element sequence representing the Algorithm Identification Sequence
copy (bool) – If True, the underlying sequence is deep-copied such that the original sequence remains intact. If False, this operation will alter the original sequence in place.
- Returns
Algorithm Identification Sequence
- Return type
highdicom.seg.content.AlgorithmIdentificationSequence
- property name: str
Name of the algorithm.
- Type
str
- Return type
str
- property parameters: dict[str, str] | None
Dictionary mapping algorithm parameter names to values, if any.
- Type
Dict[str, str] | None
- Return type
Dict[str, str] | None
- property source: str | None
Source of the algorithm, e.g. name of the algorithm manufacturer, if any.
- Type
str | None
- Return type
str | None
- property version: str
Version of the algorithm.
- Type
str
- Return type
str
- class highdicom.AnatomicalOrientationTypeValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for Anatomical Orientation Type attribute.
- BIPED = 'BIPED'
- QUADRUPED = 'QUADRUPED'
- class highdicom.AxisHandedness(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for axis handedness.
Axis handedness refers to a property of a mapping between voxel indices and their corresponding coordinates in the frame-of-reference coordinate system, as represented by the affine matrix.
- LEFT_HANDED = 'LEFT_HANDED'
The unit vectors of the first, second and third axes form a left hand when drawn in the frame-of-reference coordinate system with the thumb representing the first vector, the index finger representing the second vector, and the middle finger representing the third vector.
- RIGHT_HANDED = 'RIGHT_HANDED'
The unit vectors of the first, second and third axes form a right hand when drawn in the frame-of-reference coordinate system with the thumb representing the first vector, the index finger representing the second vector, and the middle finger representing the third vector.
- class highdicom.ChannelDescriptor(identifier, is_custom=False, value_type=None)
Bases:
object
Descriptor of a channel (non-spatial) dimension within a Volume.
A channel dimension may be described either using a standard DICOM attribute (preferable where possible) or a custom descriptor that defines the quantity or characteristic that varies along the dimension.
- Parameters
identifier (str | int | highdicom.ChannelDescriptor) – Identifier of the attribute. May be a DICOM attribute identified either by its keyword or integer tag value. Alternatively, if is_custom is True, an arbitrary string used to identify the dimension.
is_custom (bool) – Whether the identifier is a custom identifier, as opposed to a DICOM attribute.
value_type (type | None) – The Python type of the values that vary along the dimension. Should be provided if and only if a custom identifier is used. Only int, float, str, enum.Enum, or their subclasses are allowed.
- property is_custom: bool
Whether the descriptor is custom, as opposed to using a DICOM attribute.
- Type
bool
- Return type
bool
- property is_enumerated: bool
Whether the value type is enumerated.
- Type
bool
- Return type
bool
- property keyword: str
The DICOM keyword or custom string for the descriptor.
- Type
str
- Return type
str
- property tag: pydicom.tag.BaseTag | None
The DICOM tag for the attribute, or None for custom descriptors.
- Type
pydicom.tag.BaseTag | None
- Return type
pydicom.tag.BaseTag | None
- property value_type: type
Python type of the quantity that varies along the dimension.
- Type
type
- Return type
type
- class highdicom.ContentCreatorIdentificationCodeSequence(person_identification_codes, institution_name, person_address=None, person_telephone_numbers=None, person_telecom_information=None, institution_code=None, institution_address=None, institutional_department_name=None, institutional_department_type_code=None)
Bases:
Sequence
Sequence of data elements for identifying the person who created content.
- Parameters
person_identification_codes (Sequence[Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept]]) – Coded description(s) identifying the person.
institution_name (str) – Name of the institution to which the identified individual is responsible or accountable.
person_address (Union[str, None]) – Mailing address of the person.
person_telephone_numbers (Union[Sequence[str], None], optional) – Person’s telephone number(s).
person_telecom_information (Union[str, None], optional) – The person’s telecommunication contact information, including email or other addresses.
institution_code (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept, None], optional) – Coded concept identifying the institution.
institution_address (Union[str, None], optional) – Mailing address of the institution.
institutional_department_name (Union[str, None], optional) – Name of the department, unit or service within the healthcare facility.
institutional_department_type_code (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept, None], optional) – A coded description of the type of Department or Service.
- class highdicom.ContentQualificationValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for Content Qualification attribute.
- PRODUCT = 'PRODUCT'
- RESEARCH = 'RESEARCH'
- SERVICE = 'SERVICE'
- class highdicom.CoordinateSystemNames(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for coordinate system names.
- PATIENT = 'PATIENT'
- SLIDE = 'SLIDE'
- class highdicom.DimensionOrganizationTypeValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for Dimension Organization Type attribute.
- THREE_DIMENSIONAL = '3D'
- THREE_DIMENSIONAL_TEMPORAL = '3D_TEMPORAL'
- TILED_FULL = 'TILED_FULL'
- TILED_SPARSE = 'TILED_SPARSE'
- class highdicom.Image(*args, **kwargs)
Bases:
_Image
Class representing a general DICOM image.
An “image” is any object representing an Image Information Entity.
Note that this does not correspond to a particular SOP class in DICOM, but instead captures behavior that is common to a number of SOP classes. It provides various methods to access the frames in the image, apply transforms specified in the dataset to the pixels, and arrange them spatially.
The class may not be instantiated directly, but should be created from an existing dataset.
- Parameters
study_instance_uid (str) – UID of the study
series_instance_uid (str) – UID of the series
series_number (int) – Number of the series within the study
sop_instance_uid (str) – UID that should be assigned to the instance
instance_number (int) – Number that should be assigned to the instance
modality (str) – Name of the modality
manufacturer (Union[str, None], optional) – Name of the manufacturer (developer) of the device (software) that creates the instance
transfer_syntax_uid (Union[str, None], optional) – UID of transfer syntax that should be used for encoding of data elements. Defaults to Implicit VR Little Endian (UID "1.2.840.10008.1.2").
patient_id (Union[str, None], optional) – ID of the patient (medical record number)
patient_name (Union[str, pydicom.valuerep.PersonName, None], optional) – Name of the patient
patient_birth_date (Union[str, None], optional) – Patient’s birth date
patient_sex (Union[str, highdicom.PatientSexValues, None], optional) – Patient’s sex
study_id (Union[str, None], optional) – ID of the study
accession_number (Union[str, None], optional) – Accession number of the study
study_date (Union[str, datetime.date, None], optional) – Date of study creation
study_time (Union[str, datetime.time, None], optional) – Time of study creation
referring_physician_name (Union[str, pydicom.valuerep.PersonName, None], optional) – Name of the referring physician
content_qualification (Union[str, highdicom.ContentQualificationValues, None], optional) – Indicator of content qualification
coding_schemes (Union[Sequence[highdicom.sr.CodingSchemeIdentificationItem], None], optional) – Private or public coding schemes that are not part of the DICOM standard
series_description (Union[str, None], optional) – Human readable description of the series
manufacturer_model_name (Union[str, None], optional) – Name of the device model (name of the software library or application) that creates the instance
software_versions (Union[str, Tuple[str]]) – Version(s) of the software that creates the instance
device_serial_number (str) – Manufacturer’s serial number of the device
institution_name (Union[str, None], optional) – Name of the institution of the person or device that creates the instance.
institutional_department_name (Union[str, None], optional) – Name of the department of the person or device that creates the instance.
Note
The constructor only provides attributes that are required by the standard (type 1 and 2) as part of the Patient, General Study, Patient Study, General Series, General Equipment and SOP Common modules. Derived classes are responsible for providing additional attributes required by the corresponding Information Object Definition (IOD). Additional optional attributes can subsequently be added to the dataset.
- are_dimension_indices_unique(dimension_index_pointers)
Check if a list of index pointers uniquely identifies frames.
For a given list of dimension index pointers, check whether every combination of index values for these pointers identifies a unique image frame. This is a pre-requisite for indexing using this list of dimension index pointers.
- Parameters
dimension_index_pointers (Sequence[Union[int, pydicom.tag.BaseTag, str]]) – Sequence of tags serving as dimension index pointers. If strings, the items are interpreted as keywords.
- Returns
True if dimension indices are unique.
- Return type
bool
- property coordinate_system: highdicom.enum.CoordinateSystemNames | None
Frame-of-reference coordinate system, if any, within which the image exists.
- Type
highdicom.CoordinateSystemNames | None
- Return type
highdicom.CoordinateSystemNames | None
- copy_patient_and_study_information(dataset)
Copies patient- and study-related metadata from dataset that are defined in the following modules: Patient, General Study, Patient Study, Clinical Trial Subject and Clinical Trial Study.
- Parameters
dataset (pydicom.dataset.Dataset) – DICOM Data Set from which attributes should be copied
- Return type
None
- copy_specimen_information(dataset)
Copies specimen-related metadata from dataset that are defined in the Specimen module.
- Parameters
dataset (pydicom.dataset.Dataset) – DICOM Data Set from which attributes should be copied
- Return type
None
- property dimension_index_pointers: list[pydicom.tag.BaseTag]
List of tags used as dimension indices.
- Return type
List[pydicom.tag.BaseTag]
- classmethod from_dataset(dataset, copy=True)
Create an Image from an existing pydicom Dataset.
- Parameters
dataset (pydicom.Dataset) – Dataset representing an image.
copy (bool) – If True, the underlying dataset is deep-copied such that the original dataset remains intact. If False, this operation will alter the original dataset in place.
- Returns
Image object from the input dataset.
- Return type
Self
- classmethod from_file(fp, lazy_frame_retrieval=False)
Read an image stored in DICOM File Format.
- Parameters
fp (Union[str, bytes, os.PathLike]) – Any file-like object representing a DICOM file containing an image.
lazy_frame_retrieval (bool) – If True, the returned image will retrieve frames from the file as requested, rather than loading in the entire object to memory initially. This may be a good idea if file reading is slow and you are likely to need only a subset of the frames in the image.
- Returns
Image read from the file.
- Return type
Self
- get_frame(frame_number, as_index=False, *, dtype=<class 'numpy.float64'>, apply_real_world_transform=None, real_world_value_map_selector=0, apply_modality_transform=None, apply_voi_transform=False, voi_transform_selector=0, voi_output_range=(0.0, 1.0), apply_presentation_lut=True, apply_palette_color_lut=None, apply_icc_profile=None)
Get a single frame of pixels, with transforms applied.
This method retrieves a frame of stored values and applies various intensity transforms specified within the dataset to them, depending on the options provided.
- Parameters
frame_number (int) – Number of the frame to retrieve. Under the default behavior, this is interpreted as a 1-based frame number (i.e. the first frame is numbered 1). This matches the convention used within DICOM when referring to frames within an image. To use a 0-based index instead (as is more common in Python), use the as_index parameter.
as_index (bool) – Interpret the input frame_number as a 0-based index, instead of the default behavior of interpreting it as a 1-based frame number.
dtype (Union[type, str, numpy.dtype], optional) – Data type of the output array.
apply_real_world_transform (bool | None, optional) – Whether to apply a real-world value map to the frame. A real-world value map converts stored pixel values to output values with a real-world meaning, either using a LUT or a linear slope and intercept. If True, the transform is applied if present, and an error is raised if it is not present. If False, the transform is not applied, regardless of whether it is present. If None, the transform is applied if present, but no error is raised if it is not present. Note that if the dataset contains both a modality LUT and a real world value map, the real world value map is applied preferentially. This also implies that setting both apply_real_world_transform and apply_modality_transform to True is not permitted.
real_world_value_map_selector (int | str | pydicom.sr.coding.Code | highdicom.sr.coding.CodedConcept, optional) – Specification of the real world value map to use (multiple may be present in the dataset). If an int, it is used to index the list of available maps. A negative integer may be used to index from the end of the list following standard Python indexing convention. If a str, the string is used to match the "LUTLabel" attribute to select the map. If a pydicom.sr.coding.Code or highdicom.sr.coding.CodedConcept, it is used to match the units (contained in the "MeasurementUnitsCodeSequence" attribute).
apply_modality_transform (bool | None, optional) – Whether to apply the modality transform (if present in the dataset) to the frame. The modality transform maps stored pixel values to output values, either using a LUT or rescale slope and intercept. If True, the transform is applied if present, and an error is raised if it is not present. If False, the transform is not applied, regardless of whether it is present. If None, the transform is applied if it is present and no real world value map takes precedence, but no error is raised if it is not present.
apply_voi_transform (bool | None, optional) – Apply the value-of-interest (VOI) transform (if present in the dataset), which limits the range of pixel values to a particular range of interest using either a windowing operation or a LUT. If True, the transform is applied if present, and an error is raised if it is not present. If False, the transform is not applied, regardless of whether it is present. If None, the transform is applied if it is present and no real world value map takes precedence, but no error is raised if it is not present.
voi_transform_selector (int | str | highdicom.content.VOILUTTransformation, optional) – Specification of the VOI transform to select (multiple may be present). May either be an int or a str. If an int, it is interpreted as a (zero-based) index into the list of VOI transforms to apply. A negative integer may be used to index from the end of the list following standard Python indexing convention. If a str, the string is used to match the "WindowCenterWidthExplanation" or the "LUTExplanation" attributes to choose from multiple VOI transforms. Note that such explanations are optional according to the standard and therefore may not be present. Ignored if apply_voi_transform is False or no VOI transform is included in the dataset. Alternatively, a user-defined highdicom.content.VOILUTTransformation may be supplied. This will override any such transform specified in the dataset.
voi_output_range (Tuple[float, float], optional) – Range of output values to which the VOI range is mapped. Only relevant if apply_voi_transform is True and a VOI transform is present.
apply_palette_color_lut (bool | None, optional) – Apply the palette color LUT, if present in the dataset. The palette color LUT maps a single sample for each pixel stored in the dataset to a 3-samples-per-pixel color image.
apply_presentation_lut (bool, optional) – Apply the presentation LUT transform to invert the pixel values. If the PresentationLUTShape is present with the value 'INVERSE', or the PresentationLUTShape is not present but the Photometric Interpretation is MONOCHROME1, the range of the output pixels is converted to correspond to MONOCHROME2 (in which high values represent white and low values represent black). Ignored if PhotometricInterpretation is not MONOCHROME1 and the PresentationLUTShape is not present, or if a real world value transform is applied.
apply_icc_profile (bool | None, optional) – Whether colors should be corrected by applying an ICC transform. Will only be performed if metadata contain an ICC Profile.
- Returns
Numpy array of stored values. This will have shape (Rows, Columns) for a grayscale image, or (Rows, Columns, 3) for a color image. The data type will depend on how the pixels are stored in the file, and may be signed or unsigned integers or float.
- Return type
numpy.ndarray
- get_raw_frame(frame_number, as_index=False)
Get the raw data for an encoded frame as bytes.
- Parameters
frame_number (int) – Number of the frame to retrieve. Under the default behavior, this is interpreted as a 1-based frame number (i.e. the first frame is numbered 1). This matches the convention used within DICOM when referring to frames within an image. To use a 0-based index instead (as is more common in Python), use the as_index parameter.
as_index (bool) – Interpret the input frame_number as a 0-based index, instead of the default 1-based index.
- Returns
Raw encoded data relating to the requested frame.
- Return type
bytes
Note
In some situations, where the number of bits allocated is 1, the transfer syntax is not encapsulated (i.e. is native), and the number of pixels per frame is not a multiple of 8, frame boundaries are not aligned with byte boundaries in the raw bytes. In this situation, the returned bytes will contain the minimum range of bytes required to entirely contain the requested frame; however, some bits may need to be stripped from the start and/or end to obtain the bits belonging to the requested frame.
- get_source_image_uids()
Get UIDs of source image instances referenced in the image.
- Returns
(Study Instance UID, Series Instance UID, SOP Instance UID) triplet for every image instance referenced in the image.
- Return type
List[Tuple[highdicom.UID, highdicom.UID, highdicom.UID]]
- get_stored_frame(frame_number, as_index=False)
Get a single frame of stored values.
Stored values are the pixel values stored within the dataset. They have been decompressed from the raw bytes (if necessary), interpreted as the correct pixel datatype (according to the pixel representation and planar configuration) and reshaped into a 2D (grayscale image) or 3D (color) NumPy array. However, no further pixel transform, such as the modality transform, VOI transforms, palette color LUTs, or ICC profile, has been applied.
To get frames with pixel transforms applied (as is appropriate for most applications), use
highdicom.Image.get_frame()
instead.- Parameters
frame_number (int) – Number of the frame to retrieve. Under the default behavior, this is interpreted as a 1-based frame number (i.e. the first frame is numbered 1). This matches the convention used within DICOM when referring to frames within an image. To use a 0-based index instead (as is more common in Python), use the as_index parameter.
as_index (bool) – Interpret the input frame_number as a 0-based index, instead of the default 1-based index.
- Returns
Numpy array of stored values. This will have shape (Rows, Columns) for a grayscale image, or (Rows, Columns, 3) for a color image. The data type will depend on how the pixels are stored in the file, and may be signed or unsigned integers or float.
- Return type
numpy.ndarray
- get_total_pixel_matrix(row_start=None, row_end=None, column_start=None, column_end=None, dtype=<class 'numpy.float64'>, apply_real_world_transform=None, real_world_value_map_selector=0, apply_modality_transform=None, apply_voi_transform=False, voi_transform_selector=0, voi_output_range=(0.0, 1.0), apply_presentation_lut=True, apply_palette_color_lut=None, apply_icc_profile=None, as_indices=False)
Get the pixel array as a (region of) the total pixel matrix.
This is only possible for tiled images, which are images in which the frames are arranged over a 2D plane (like tiles over a floor) and typically occur in microscopy. This method is not relevant for other types of image.
- Parameters
row_start (int, optional) – 1-based row number in the total pixel matrix of the first row to include in the output array. Alternatively a zero-based row index if as_indices is True. May be negative, in which case the last row is considered index -1. If None, the first row of the output is the first row of the total pixel matrix (regardless of the value of as_indices).
row_end (Union[int, None], optional) – 1-based row index in the total pixel matrix of the first row beyond the last row to include in the output array. A row_end value of n will include rows n - 1 and below, similar to standard Python indexing. If None, rows up until the final row of the total pixel matrix are included. May be negative, in which case the last row is considered index -1.
column_start (int, optional) – 1-based column number in the total pixel matrix of the first column to include in the output array. Alternatively a zero-based column index if as_indices is True. May be negative, in which case the last column is considered index -1.
column_end (Union[int, None], optional) – 1-based column index in the total pixel matrix of the first column beyond the last column to include in the output array. A column_end value of n will include columns n - 1 and below, similar to standard Python indexing. If None, columns up until the final column of the total pixel matrix are included. May be negative, in which case the last column is considered index -1.
dtype (Union[type, str, numpy.dtype], optional) – Data type of the returned array.
apply_real_world_transform (bool | None, optional) – Whether to apply a real-world value map to the frame. A real-world value map converts stored pixel values to output values with a real-world meaning, either using a LUT or a linear slope and intercept. If True, the transform is applied if present, and an error is raised if it is not present. If False, the transform is not applied, regardless of whether it is present. If None, the transform is applied if present, but no error is raised if it is not present. Note that if the dataset contains both a modality LUT and a real world value map, the real world value map is applied preferentially. This also implies that setting both apply_real_world_transform and apply_modality_transform to True is not permitted.
real_world_value_map_selector (int | str | pydicom.sr.coding.Code | highdicom.sr.coding.CodedConcept, optional) – Specification of the real world value map to use (multiple may be present in the dataset). If an int, it is used to index the list of available maps. A negative integer may be used to index from the end of the list following standard Python indexing convention. If a str, the string is used to match the "LUTLabel" attribute to select the map. If a pydicom.sr.coding.Code or highdicom.sr.coding.CodedConcept, it is used to match the units (contained in the "MeasurementUnitsCodeSequence" attribute).
apply_modality_transform (bool | None, optional) – Whether to apply the modality transform (if present in the dataset) to the frame. The modality transform maps stored pixel values to output values, either using a LUT or rescale slope and intercept. If True, the transform is applied if present, and an error is raised if it is not present. If False, the transform is not applied, regardless of whether it is present. If None, the transform is applied if it is present and no real world value map takes precedence, but no error is raised if it is not present.
apply_voi_transform (bool | None, optional) – Apply the value-of-interest (VOI) transform (if present in the dataset), which limits the range of pixel values to a particular range of interest using either a windowing operation or a LUT. If True, the transform is applied if present, and an error is raised if it is not present. If False, the transform is not applied, regardless of whether it is present. If None, the transform is applied if it is present and no real world value map takes precedence, but no error is raised if it is not present.
voi_transform_selector (int | str | highdicom.content.VOILUTTransformation, optional) – Specification of the VOI transform to select (multiple may be present). May either be an int or a str. If an int, it is interpreted as a (zero-based) index into the list of VOI transforms to apply. A negative integer may be used to index from the end of the list following standard Python indexing convention. If a str, the string is used to match the "WindowCenterWidthExplanation" or the "LUTExplanation" attributes to choose from multiple VOI transforms. Note that such explanations are optional according to the standard and therefore may not be present. Ignored if apply_voi_transform is False or no VOI transform is included in the dataset. Alternatively, a user-defined highdicom.content.VOILUTTransformation may be supplied. This will override any such transform specified in the dataset.
voi_output_range (Tuple[float, float], optional) – Range of output values to which the VOI range is mapped. Only relevant if apply_voi_transform is True and a VOI transform is present.
apply_palette_color_lut (bool | None, optional) – Apply the palette color LUT, if present in the dataset. The palette color LUT maps a single sample for each pixel stored in the dataset to a 3-samples-per-pixel color image.
apply_presentation_lut (bool, optional) – Apply the presentation LUT transform to invert the pixel values. If the PresentationLUTShape is present with the value 'INVERSE', or the PresentationLUTShape is not present but the Photometric Interpretation is MONOCHROME1, the range of the output pixels is converted to correspond to MONOCHROME2 (in which high values represent white and low values represent black). Ignored if PhotometricInterpretation is not MONOCHROME1 and the PresentationLUTShape is not present, or if a real world value transform is applied.
apply_icc_profile (bool | None, optional) – Whether colors should be corrected by applying an ICC transform. Will only be performed if metadata contain an ICC Profile. If True, the transform is applied if present, and an error is raised if it is not present. If False, the transform is not applied, regardless of whether it is present. If None, the transform is applied if it is present, but no error is raised if it is not present.
as_indices (bool, optional) – If True, interpret all row/column numbering parameters (row_start, row_end, column_start, and column_end) as zero-based indices as opposed to the default one-based numbers used within DICOM.
- Returns
pixel_array – Pixel array representing the image’s total pixel matrix.
- Return type
numpy.ndarray
Note
By default, this method uses 1-based indexing of rows and columns in order to match the conventions used in the DICOM standard. The first row of the total pixel matrix is row 1, and the last is self.TotalPixelMatrixRows. This is unlike standard Python and NumPy indexing, which is 0-based. For negative indices, the two are equivalent, with the final row/column having index -1. To switch to standard Python behavior, specify as_indices=True.
- get_volume(*, slice_start=None, slice_end=None, row_start=None, row_end=None, column_start=None, column_end=None, as_indices=False, dtype=<class 'numpy.float64'>, apply_real_world_transform=None, real_world_value_map_selector=0, apply_modality_transform=None, apply_voi_transform=False, voi_transform_selector=0, voi_output_range=(0.0, 1.0), apply_presentation_lut=True, apply_palette_color_lut=None, apply_icc_profile=None, allow_missing_positions=False, rtol=None, atol=None)
Create a highdicom.Volume from the image.
This is only possible in two situations: either the image represents a regularly-spaced 3D volume, or a tiled 2D total pixel matrix.
- Parameters
slice_start (int | None, optional) – Zero-based index of the “volume position” of the first slice of the returned volume. The “volume position” refers to the position of slices after sorting spatially, and may correspond to any frame in the segmentation file, depending on its construction. May be negative, in which case standard Python indexing behavior is followed (-1 corresponds to the last volume position, etc.).
slice_end (Union[int, None], optional) – Zero-based index of the “volume position” one beyond the last slice of the returned volume. The “volume position” refers to the position of slices after sorting spatially, and may correspond to any frame in the segmentation file, depending on its construction. May be negative, in which case standard Python indexing behavior is followed (-1 corresponds to the last volume position, etc.). If None, the last volume position is included as the last output slice.
row_start (int, optional) – 1-based row number in the total pixel matrix of the first row to include in the output array. Alternatively a zero-based row index if as_indices is True. May be negative, in which case the last row is considered index -1. If None, the first row of the output is the first row of the total pixel matrix (regardless of the value of as_indices).
row_end (Union[int, None], optional) – 1-based row index in the total pixel matrix of the first row beyond the last row to include in the output array. A row_end value of n will include rows n - 1 and below, similar to standard Python indexing. If None, rows up until the final row of the total pixel matrix are included. May be negative, in which case the last row is considered index -1.
column_start (int, optional) – 1-based column number in the total pixel matrix of the first column to include in the output array. Alternatively a zero-based column index if as_indices is True. May be negative, in which case the last column is considered index -1.
column_end (Union[int, None], optional) – 1-based column index in the total pixel matrix of the first column beyond the last column to include in the output array. A column_end value of n will include columns n - 1 and below, similar to standard Python indexing. If None, columns up until the final column of the total pixel matrix are included. May be negative, in which case the last column is considered index -1.
as_indices (bool, optional) – If True, interpret all slice/row/column numbering parameters (row_start, row_end, column_start, and column_end) as zero-based indices, as opposed to the default one-based numbers used within DICOM.
dtype (Union[type, str, numpy.dtype], optional) – Data type of the returned array.
apply_real_world_transform (bool | None, optional) – Whether to apply a real-world value map to the frame. A real-world value map converts stored pixel values to output values with a real-world meaning, either using a LUT or a linear slope and intercept. If True, the transform is applied if present, and an error is raised if it is not present. If False, the transform is not applied, regardless of whether it is present. If None, the transform is applied if present, but no error is raised if it is not present. Note that if the dataset contains both a modality LUT and a real world value map, the real world value map will be applied preferentially. This also implies that specifying both apply_real_world_transform and apply_modality_transform as True is not permitted.
real_world_value_map_selector (int | str | pydicom.sr.coding.Code | highdicom.sr.coding.CodedConcept, optional) – Specification of the real world value map to use (multiple may be present in the dataset). If an int, it is used to index the list of available maps. A negative integer may be used to index from the end of the list following standard Python indexing convention. If a str, the string will be matched against the "LUTLabel" attribute to select the map. If a pydicom.sr.coding.Code or highdicom.sr.coding.CodedConcept, this will be used to match the units (contained in the "MeasurementUnitsCodeSequence" attribute).
apply_modality_transform (bool | None, optional) – Whether to apply the modality transform (if present in the dataset) to the frame. The modality transform maps stored pixel values to output values, either using a LUT or rescale slope and intercept. If True, the transform is applied if present, and an error is raised if it is not present. If False, the transform is not applied, regardless of whether it is present. If None, the transform is applied if it is present and no real world value map takes precedence, but no error is raised if it is not present.
apply_voi_transform (bool | None, optional) – Apply the value-of-interest (VOI) transform (if present in the dataset), which limits the range of pixel values to a particular range of interest, using either a windowing operation or a LUT. If True, the transform is applied if present, and an error is raised if it is not present. If False, the transform is not applied, regardless of whether it is present. If None, the transform is applied if it is present and no real world value map takes precedence, but no error is raised if it is not present.
voi_transform_selector (int | str | highdicom.content.VOILUTTransformation, optional) – Specification of the VOI transform to select (multiple may be present). May either be an int or a str. If an int, it is interpreted as a (zero-based) index of the list of VOI transforms to apply. A negative integer may be used to index from the end of the list following standard Python indexing convention. If a str, the string will be matched against the "WindowCenterWidthExplanation" or the "LUTExplanation" attributes to choose from multiple VOI transforms. Note that such explanations are optional according to the standard and therefore may not be present. Ignored if apply_voi_transform is False or no VOI transform is included in the dataset. Alternatively, a user-defined highdicom.content.VOILUTTransformation may be supplied. This will override any such transform specified in the dataset.
voi_output_range (Tuple[float, float], optional) – Range of output values to which the VOI range is mapped. Only relevant if apply_voi_transform is True and a VOI transform is present.
apply_palette_color_lut (bool | None, optional) – Apply the palette color LUT, if present in the dataset. The palette color LUT maps a single sample for each pixel stored in the dataset to a 3-samples-per-pixel color image.
apply_presentation_lut (bool, optional) – Apply the presentation LUT transform to invert the pixel values. If the PresentationLUTShape is present with the value 'INVERSE', or the PresentationLUTShape is not present but the Photometric Interpretation is MONOCHROME1, the range of the output pixels is converted to correspond to MONOCHROME2 (in which high values represent white and low values represent black). Ignored if the Photometric Interpretation is not MONOCHROME1 and the PresentationLUTShape is not present, or if a real world value transform is applied.
apply_icc_profile (bool | None, optional) – Whether colors should be corrected by applying an ICC transform. Will only be performed if the metadata contain an ICC Profile. If True, the transform is applied if present, and an error is raised if it is not present. If False, the transform is not applied, regardless of whether it is present. If None, the transform is applied if it is present, but no error is raised if it is not present.
allow_missing_positions (bool, optional) – Allow spatial positions within the output array to be blank because the corresponding frames are omitted from the image. If False and missing positions are found, an error is raised.
rtol (float | None, optional) – Relative tolerance for determining spacing regularity. If slice spacings vary by less than this proportion of the average spacing, they are considered to be regular. If neither rtol nor atol is provided, a default relative tolerance of 0.01 is used.
atol (float | None, optional) – Absolute tolerance for determining spacing regularity. If slice spacings vary by less than this value (in mm), they are considered to be regular. Incompatible with rtol.
- Returns
Volume formed from frames of the image.
- Return type
Note
By default, this method uses 1-based indexing of rows and columns in order to match the conventions used in the DICOM standard. The first row of the total pixel matrix is row 1, and the last is self.TotalPixelMatrixRows. This is unlike standard Python and NumPy indexing, which is 0-based. For negative indices, the two are equivalent, with the final row/column having index -1. To switch to standard Python behavior, specify as_indices=True.
Note
The parameters row_start, row_end, column_start and column_end are provided primarily for the case where the volume is formed from frames tiled into a total pixel matrix. In other scenarios, they will behave as expected, but will not reduce the number of frames that have to be decoded and transformed.
- get_volume_geometry(*, rtol=None, atol=None, allow_missing_positions=False, allow_duplicate_positions=True)
Get geometry of the image in 3D space.
This will succeed in two situations. Either the image consists of a set of frames that are stacked together to give a regularly-spaced 3D volume array (typical of CT, MRI, and PET), or the image is a tiled image consisting of a set of 2D tiles that are placed together in the same plane to form a total pixel matrix.
A single frame image has a volume geometry if it provides any information about its position and orientation within a frame-of-reference coordinate system.
- Parameters
rtol (float | None, optional) – Relative tolerance for determining spacing regularity. If slice spacings vary by less than this proportion of the average spacing, they are considered to be regular. If neither rtol nor atol is provided, a default relative tolerance of 0.01 is used.
atol (float | None, optional) – Absolute tolerance for determining spacing regularity. If slice spacings vary by less than this value (in mm), they are considered to be regular. Incompatible with rtol.
allow_missing_positions (bool, optional) – Allow volume positions for which no frame exists in the image.
allow_duplicate_positions (bool, optional) – Allow multiple slices to occupy the same position within the volume. If False, duplicated image positions will result in failure.
- Returns
Geometry of the volume if the image represents a regularly-spaced 3D volume or tiled total pixel matrix; None otherwise.
- Return type
highdicom.VolumeGeometry | None
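The spacing-regularity condition that rtol controls can be sketched in plain NumPy. This is an illustration of the concept with hypothetical slice positions, not the exact highdicom implementation:

```python
import numpy as np

# Hypothetical positions of slices along the slice normal, in mm.
positions = np.array([0.0, 2.0, 4.0, 6.0])

# Spacings between consecutive slices must agree with their mean
# to within a relative tolerance for the stack to form a regular volume.
spacings = np.diff(positions)
rtol = 0.01  # default relative tolerance mentioned above
is_regular = bool(np.all(np.abs(spacings - spacings.mean()) <= rtol * spacings.mean()))
```

With evenly spaced positions as above, `is_regular` is `True`; perturbing one position by more than 1% of the average spacing would make it `False`.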
- is_indexable_as_total_pixel_matrix()
Whether the image can be indexed as a total pixel matrix.
- Returns
True if the image may be indexed using row and column positions in the total pixel matrix. False otherwise.
- Return type
bool
- property is_tiled: bool
Whether the image is a tiled multi-frame image.
- Type
bool
- Return type
bool
- property number_of_frames: int
Number of frames in the image.
- Type
int
- Return type
int
- property pixel_array
Get the full pixel array of stored values for all frames.
This method is consistent with the behavior of the pydicom Dataset class, but additionally functions correctly when lazy frame retrieval is used.
- Returns
Full pixel array of stored values, ordered by frame number. Shape is (frames, rows, columns, samples). The frame dimension is omitted if it is equal to 1. The samples dimension is omitted for grayscale images and is 3 for color images.
- Return type
numpy.ndarray
- property transfer_syntax_uid: UID
TransferSyntaxUID.
- Type
- Return type
pydicom.uid.UID
- class highdicom.IssuerOfIdentifier(issuer_of_identifier, issuer_of_identifier_type=None)
Bases:
Dataset
Dataset describing the issuer of a specimen or container identifier.
- Parameters
issuer_of_identifier (str) – Identifier of the entity that created the examined specimen
issuer_of_identifier_type (Union[str, highdicom.enum.UniversalEntityIDTypeValues], optional) – Type of identifier of the entity that created the examined specimen (required if issuer_of_specimen_id is a Unique Entity ID)
- classmethod from_dataset(dataset, copy=True)
Construct object from an existing dataset.
- Parameters
dataset (pydicom.dataset.Dataset) – Dataset
copy (bool) – If True, the underlying dataset is deep-copied such that the original dataset remains intact. If False, this operation will alter the original dataset in place.
- Returns
Issuer of identifier
- Return type
- property issuer_of_identifier: str
Identifier of the issuer.
- Type
str
- Return type
str
- property issuer_of_identifier_type: highdicom.enum.UniversalEntityIDTypeValues | None
Type of the issuer.
- Type
- Return type
types.UnionType
[highdicom.enum.UniversalEntityIDTypeValues
,None
]
- class highdicom.LUT(first_mapped_value, lut_data, lut_explanation=None)
Bases:
Dataset
Dataset describing a lookup table (LUT).
- Parameters
first_mapped_value (int) – Pixel value that will be mapped to the first value in the lookup-table.
lut_data (numpy.ndarray) – Lookup table data. Must be of type uint8 or uint16.
lut_explanation (Union[str, None], optional) – Free-form text explanation of the meaning of the LUT.
Note
After the LUT is applied, a pixel in the image with value equal to first_mapped_value is mapped to an output value of lut_data[0], an input value of first_mapped_value + 1 is mapped to lut_data[1], and so on.
- apply(array, dtype=None)
Apply the LUT to a pixel array.
- Parameters
array (numpy.ndarray) – Pixel array to which the LUT should be applied. Can be of any shape but must have an integer datatype.
dtype (Union[type, str, numpy.dtype, None], optional) – Datatype of the output array. If None, an unsigned integer datatype corresponding to the number of bits in the LUT will be used (either numpy.uint8 or numpy.uint16). Only safe casts are permitted.
- Returns
Array with LUT applied.
- Return type
numpy.ndarray
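The lookup semantics described in the Note above can be sketched in plain NumPy. This is an illustration with hypothetical LUT values, not the highdicom implementation:

```python
import numpy as np

# Hypothetical LUT: stored values 100..103 map to the four entries below.
first_mapped_value = 100
lut_data = np.array([0, 85, 170, 255], dtype=np.uint8)

# A pixel equal to first_mapped_value selects lut_data[0],
# first_mapped_value + 1 selects lut_data[1], and so on.
pixels = np.array([[100, 101], [102, 103]])
output = lut_data[pixels - first_mapped_value]
```

The output keeps the shape of the input array and takes the LUT's data type.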
- property bits_per_entry: int
Bits allocated for the lookup table data. 8 or 16.
- Type
int
- Return type
int
- property first_mapped_value: int
Pixel value that will be mapped to the first value in the lookup table.
- Type
int
- Return type
int
- classmethod from_dataset(dataset, copy=True)
Create a LUT from an existing Dataset.
- Parameters
dataset (pydicom.Dataset) – Dataset representing a LUT.
copy (bool) – If True, the underlying dataset is deep-copied such that the original dataset remains intact. If False, this operation will alter the original dataset in place.
- Returns
Constructed object
- Return type
- get_inverted_lut_data()
Get LUT data array with output values inverted within the same range.
This returns the LUT data inverted within its original range. So if the original LUT data has output values in the range 10-20 inclusive, then the entries with output value 10 will be mapped to 20, the entries with output value 11 will be mapped to 19, and so on, until the entries with value 20 are mapped to 10.
- Returns
Inverted LUT data array, with the same size and data type as the original array.
- Return type
numpy.ndarray
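The inversion described above can be sketched in plain NumPy: each output value v is replaced by (max + min - v), keeping the original range and data type. This is an illustrative sketch with hypothetical values, not the highdicom implementation:

```python
import numpy as np

# Hypothetical LUT data with output values in the range 10-20.
lut_data = np.array([10, 11, 15, 20], dtype=np.uint16)

# Invert within the original range: 10 -> 20, 11 -> 19, ..., 20 -> 10.
inverted = (int(lut_data.max()) + int(lut_data.min()) - lut_data).astype(lut_data.dtype)
```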
- get_scaled_lut_data(output_range=(0.0, 1.0), dtype=<class 'numpy.float64'>, invert=False)
Get LUT data array with output values scaled to a given range.
- Parameters
output_range (Tuple[float, float], optional) – Tuple containing (lower, upper) value of the range into which to scale the output values. The lowest value in the LUT data will be mapped to the lower limit, and the highest value will be mapped to the upper limit, with a linear scaling used elsewhere.
dtype (Union[type, str, numpy.dtype, None], optional) – Data type of the returned array (must be a floating point NumPy data type).
invert (bool, optional) – Invert the returned array such that the lowest original value in the LUT is mapped to the upper limit and the highest original value is mapped to the lower limit. This may be used to efficiently combine a LUT with a Presentation transform that inverts the range.
- Returns
Rescaled LUT data array.
- Return type
numpy.ndarray
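The scaling (and optional inversion) described above amounts to a linear remap of the LUT outputs onto the target range. A plain NumPy sketch with hypothetical values (not the highdicom implementation):

```python
import numpy as np

# Hypothetical LUT data to be scaled into the default output range (0.0, 1.0).
lut_data = np.array([0, 50, 100], dtype=np.uint16)
lo, hi = 0.0, 1.0

# Linear scaling: lowest LUT value -> lo, highest -> hi.
scaled = (lut_data - lut_data.min()) / (lut_data.max() - lut_data.min())
scaled = scaled * (hi - lo) + lo

# invert=True flips the mapping within the same output range.
inverted = (hi + lo) - scaled
```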
- property lut_data: ndarray
LUT data
- Type
numpy.ndarray
- Return type
numpy.ndarray
- property number_of_entries: int
Number of entries in the lookup table.
- Type
int
- Return type
int
- class highdicom.LateralityValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for Laterality attribute.
- L = 'L'
Left
- R = 'R'
Right
- class highdicom.ModalityLUT(lut_type, first_mapped_value, lut_data, lut_explanation=None)
Bases:
LUT
Dataset describing an item of the Modality LUT Sequence.
- Parameters
lut_type (Union[highdicom.RescaleTypeValues, str]) – String or enumerated value specifying the units of the output of the LUT operation.
first_mapped_value (int) – Pixel value that will be mapped to the first value in the lookup-table.
lut_data (numpy.ndarray) – Lookup table data. Must be of type uint16.
lut_explanation (Union[str, None], optional) – Free-form text explanation of the meaning of the LUT.
- apply(array, dtype=None)
Apply the LUT to a pixel array.
- Parameters
array (numpy.ndarray) – Pixel array to which the LUT should be applied. Can be of any shape but must have an integer datatype.
dtype (Union[type, str, numpy.dtype, None], optional) – Datatype of the output array. If None, an unsigned integer datatype corresponding to the number of bits in the LUT will be used (either numpy.uint8 or numpy.uint16). Only safe casts are permitted.
- Returns
Array with LUT applied.
- Return type
numpy.ndarray
- property bits_per_entry: int
Bits allocated for the lookup table data. 8 or 16.
- Type
int
- Return type
int
- property first_mapped_value: int
Pixel value that will be mapped to the first value in the lookup table.
- Type
int
- Return type
int
- classmethod from_dataset(dataset, copy=True)
Create a LUT from an existing Dataset.
- Parameters
dataset (pydicom.Dataset) – Dataset representing a LUT.
copy (bool) – If True, the underlying dataset is deep-copied such that the original dataset remains intact. If False, this operation will alter the original dataset in place.
- Returns
Constructed object
- Return type
- get_inverted_lut_data()
Get LUT data array with output values inverted within the same range.
This returns the LUT data inverted within its original range. So if the original LUT data has output values in the range 10-20 inclusive, then the entries with output value 10 will be mapped to 20, the entries with output value 11 will be mapped to 19, and so on, until the entries with value 20 are mapped to 10.
- Returns
Inverted LUT data array, with the same size and data type as the original array.
- Return type
numpy.ndarray
- get_scaled_lut_data(output_range=(0.0, 1.0), dtype=<class 'numpy.float64'>, invert=False)
Get LUT data array with output values scaled to a given range.
- Parameters
output_range (Tuple[float, float], optional) – Tuple containing (lower, upper) value of the range into which to scale the output values. The lowest value in the LUT data will be mapped to the lower limit, and the highest value will be mapped to the upper limit, with a linear scaling used elsewhere.
dtype (Union[type, str, numpy.dtype, None], optional) – Data type of the returned array (must be a floating point NumPy data type).
invert (bool, optional) – Invert the returned array such that the lowest original value in the LUT is mapped to the upper limit and the highest original value is mapped to the lower limit. This may be used to efficiently combine a LUT with a Presentation transform that inverts the range.
- Returns
Rescaled LUT data array.
- Return type
numpy.ndarray
- property lut_data: ndarray
LUT data
- Type
numpy.ndarray
- Return type
numpy.ndarray
- property number_of_entries: int
Number of entries in the lookup table.
- Type
int
- Return type
int
- class highdicom.ModalityLUTTransformation(rescale_intercept=None, rescale_slope=None, rescale_type=None, modality_lut=None)
Bases:
Dataset
Dataset describing the Modality LUT Transformation as part of the Pixel Transformation Sequence to transform the manufacturer dependent pixel values into pixel values that are meaningful for the modality and are manufacturer independent.
- Parameters
rescale_intercept (Union[int, float, None], optional) – Intercept of linear function used for rescaling pixel values.
rescale_slope (Union[int, float, None], optional) – Slope of linear function used for rescaling pixel values.
rescale_type (Union[highdicom.RescaleTypeValues, str, None], optional) – String or enumerated value specifying the units of the output of the Modality LUT or rescale operation.
modality_lut (Union[highdicom.ModalityLUT, None], optional) – Lookup table specifying a pixel rescaling operation to apply to the stored values to give modality values.
Note
Either modality_lut may be specified or all three of rescale_slope, rescale_intercept, and rescale_type may be specified. All four parameters should not be specified simultaneously.
- apply(array, dtype=None)
Apply the transformation to a pixel array.
- Parameters
array (numpy.ndarray) – Pixel array to which the transformation should be applied. Can be of any shape but must have an integer datatype if the transformation uses a LUT.
dtype (Union[type, str, numpy.dtype, None], optional) – Ensure the output type has this value. By default, this will have type numpy.float64 if the transformation uses a rescale operation, or the datatype of the Modality LUT (numpy.uint8 or numpy.uint16) if it uses a LUT. An integer datatype may be specified if a rescale operation is used; however, if Rescale Slope or Rescale Intercept are non-integer values, an error will be raised.
- Returns
Array with transformation applied.
- Return type
numpy.ndarray
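The rescale branch of this transformation is a simple linear map of stored values. A plain NumPy sketch using typical (hypothetical) CT rescale values, not tied to any particular dataset:

```python
import numpy as np

# Hypothetical CT rescale parameters: output = stored * slope + intercept.
rescale_slope = 1.0
rescale_intercept = -1024.0

stored = np.array([[0, 1024], [2048, 3072]], dtype=np.uint16)
modality_values = stored * rescale_slope + rescale_intercept  # float64 result
```

Multiplying by a float slope promotes the result to numpy.float64, matching the default output type described above.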
- has_lut()
Determine whether the transformation contains a lookup table.
- Returns
True if the transformation contains a look-up table. False otherwise, when the mapping is represented by slope and intercept defining a linear relationship.
- Return type
bool
- class highdicom.PadModes(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values of modes to pad an array.
- CONSTANT = 'CONSTANT'
Pad with a specified constant value.
- EDGE = 'EDGE'
Pad with the edge value.
- MAXIMUM = 'MAXIMUM'
Pad with the maximum value.
- MEAN = 'MEAN'
Pad with the mean value.
- MEDIAN = 'MEDIAN'
Pad with the median value.
- MINIMUM = 'MINIMUM'
Pad with the minimum value.
- class highdicom.PaletteColorLUT(first_mapped_value, lut_data, color)
Bases:
Dataset
Dataset describing a palette color lookup table (LUT).
- Parameters
first_mapped_value (int) – Pixel value that will be mapped to the first value in the lookup table.
lut_data (numpy.ndarray) – Lookup table data. Must be of type uint8 or uint16.
color (str) – Text representing the color (red, green, or blue).
Note
After the LUT is applied, a pixel in the image with value equal to first_mapped_value is mapped to an output value of lut_data[0], an input value of first_mapped_value + 1 is mapped to lut_data[1], and so on.
- apply(array, dtype=None)
Apply the LUT to a pixel array.
- Parameters
array (numpy.ndarray) – Pixel array to which the LUT should be applied. Can be of any shape but must have an integer datatype.
dtype (Union[type, str, numpy.dtype, None], optional) – Datatype of the output array. If None, an unsigned integer datatype corresponding to the number of bits in the LUT will be used (either numpy.uint8 or numpy.uint16). Only safe casts are permitted.
- Returns
Array with LUT applied.
- Return type
numpy.ndarray
- property bits_per_entry: int
Bits allocated for the lookup table data. 8 or 16.
- Type
int
- Return type
int
- classmethod extract_from_dataset(dataset, color)
Construct from an existing dataset.
Note that unlike many other from_dataset() methods, this method extracts only the attributes it needs from the original dataset, and always returns a new object.
- Parameters
dataset (pydicom.Dataset) – Dataset containing the attributes of the Palette Color Lookup Table Transformation.
color (str) – Text representing the color (red, green, or blue).
- Returns
New object containing attributes found in dataset.
- Return type
- property first_mapped_value: int
Pixel value that will be mapped to the first value in the lookup table.
- Type
int
- Return type
int
- property lut_data: ndarray
lookup table data
- Type
numpy.ndarray
- Return type
numpy.ndarray
- property number_of_entries: int
Number of entries in the lookup table.
- Type
int
- Return type
int
- class highdicom.PaletteColorLUTTransformation(red_lut, green_lut, blue_lut, palette_color_lut_uid=None)
Bases:
Dataset
Dataset describing the Palette Color LUT Transformation as part of the Pixel Transformation Sequence to transform grayscale into RGB color pixel values.
- Parameters
red_lut (Union[highdicom.PaletteColorLUT, highdicom.SegmentedPaletteColorLUT]) – Lookup table for the red output color channel.
green_lut (Union[highdicom.PaletteColorLUT, highdicom.SegmentedPaletteColorLUT]) – Lookup table for the green output color channel.
blue_lut (Union[highdicom.PaletteColorLUT, highdicom.SegmentedPaletteColorLUT]) – Lookup table for the blue output color channel.
palette_color_lut_uid (Union[highdicom.UID, str, None], optional) – Unique identifier for the palette color lookup table.
- apply(array, dtype=None)
Apply the LUT to a pixel array.
- Parameters
array (numpy.ndarray) – Pixel array to which the LUT should be applied. Can be of any shape but must have an integer datatype.
dtype (Union[type, str, numpy.dtype, None], optional) – Datatype of the output array. If None, an unsigned integer datatype corresponding to the number of bits in the LUT will be used (either numpy.uint8 or numpy.uint16). Only safe casts are permitted.
- Returns
Array with LUT applied. The RGB channels will be stacked along a new final dimension.
- Return type
numpy.ndarray
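The effect of applying a palette color LUT can be sketched in plain NumPy: each grayscale stored value indexes the three single-channel LUTs, and the results are stacked along a new final RGB dimension. The LUT values below are hypothetical; this is an illustration, not the highdicom implementation:

```python
import numpy as np

# Hypothetical per-channel LUT data for 4 input values (0..3).
first_mapped_value = 0
red   = np.array([0, 255, 0, 0], dtype=np.uint8)
green = np.array([0, 0, 255, 0], dtype=np.uint8)
blue  = np.array([0, 0, 0, 255], dtype=np.uint8)

# A small grayscale "label" image; each value selects one LUT entry.
labels = np.array([[0, 1], [2, 3]])
idx = labels - first_mapped_value
rgb = np.stack([red[idx], green[idx], blue[idx]], axis=-1)  # shape (2, 2, 3)
```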
- property bits_per_entry: int
Bits allocated for the lookup table data. 8 or 16.
- Type
int
- Return type
int
- property blue_lut: highdicom.content.PaletteColorLUT | highdicom.content.SegmentedPaletteColorLUT
Union[highdicom.PaletteColorLUT, highdicom.SegmentedPaletteColorLUT]: Lookup table for the blue output color channel
- Return type
types.UnionType
[highdicom.content.PaletteColorLUT
,highdicom.content.SegmentedPaletteColorLUT
]
- property combined_lut_data: ndarray
numpy.ndarray:
A NumPy array of shape (number_of_entries, 3) containing the red, green, and blue LUT data stacked along the final dimension of the array. The data type will be an 8- or 16-bit unsigned integer, depending on the number of bits per entry in the LUT.
- Return type
numpy.ndarray
- classmethod extract_from_dataset(dataset)
Construct from an existing dataset.
Note that unlike many other from_dataset() methods, this method extracts only the attributes it needs from the original dataset, and always returns a new object.
- Parameters
dataset (pydicom.Dataset) – Dataset containing Palette Color LUT information. Note that any number of other attributes may be included and will be ignored (for example allowing an entire image with Palette Color LUT information at the top level to be passed).
- Returns
New object containing attributes found in dataset.
- Return type
- property first_mapped_value: int
Pixel value that will be mapped to the first value in the lookup table.
- Type
int
- Return type
int
- classmethod from_colors(colors, first_mapped_value=0, palette_color_lut_uid=None)
Create a palette color lookup table from a list of colors.
- Parameters
colors (Sequence[str]) –
List of colors. Item i of the list will be used as the color for input value first_mapped_value + i. Each color should be a string understood by PIL’s getrgb() function (see the Pillow documentation for the full list of supported formats). This includes many case-insensitive color names (e.g. "red", "Crimson", or "INDIGO"), hex codes (e.g. "#ff7733"), or decimal integers in the format of this example: "RGB(255, 255, 0)".
first_mapped_value (int) – Pixel value that will be mapped to the first value in the lookup table.
palette_color_lut_uid (Union[highdicom.UID, str, None], optional) – Unique identifier for the palette color lookup table.
Examples
Create a PaletteColorLUTTransformation for a small number of values (4 in this case). This would be typical for a labelmap segmentation.
>>> import highdicom as hd
>>>
>>> lut = hd.PaletteColorLUTTransformation.from_colors(
...     colors=['black', 'red', 'orange', 'yellow'],
...     palette_color_lut_uid=hd.UID(),
... )
- Returns
Palette Color Lookup table created from the given colors. This will always be an 8 bit LUT.
- Return type
- classmethod from_combined_lut(lut_data, first_mapped_value=0, palette_color_lut_uid=None)
Create a palette color lookup table from a combined LUT array.
- Parameters
lut_data (numpy.ndarray) – LUT array with shape (number_of_entries, 3), where the entries are stacked as rows and the 3 columns represent the red, green, and blue channels (in that order). Data type must be numpy.uint8 or numpy.uint16.
first_mapped_value (int) – Input pixel value that will be mapped to the first value in the lookup table.
palette_color_lut_uid (Union[highdicom.UID, str, None], optional) – Unique identifier for the palette color lookup table.
- Returns
Palette Color Lookup table created from the given LUT data. This will be an 8-bit or 16-bit LUT depending on the data type of the input lut_data.
- Return type
Examples
Create a PaletteColorLUTTransformation from a built-in colormap from the well-known matplotlib Python package (must be installed separately).
>>> import numpy as np
>>> from matplotlib import colormaps
>>> import highdicom as hd
>>>
>>> # Use matplotlib's built-in 'gist_rainbow_r' colormap as an example
>>> cmap = colormaps['gist_rainbow_r']
>>>
>>> # Create an 8-bit RGBA LUT array from the colormap
>>> num_entries = 10  # e.g. number of classes in a segmentation
>>> lut_data = cmap(np.arange(num_entries) / (num_entries + 1), bytes=True)
>>>
>>> # Remove the alpha channel (at index 3)
>>> lut_data = lut_data[:, :3]
>>>
>>> lut = hd.PaletteColorLUTTransformation.from_combined_lut(
...     lut_data,
...     palette_color_lut_uid=hd.UID(),
... )
- property green_lut: highdicom.content.PaletteColorLUT | highdicom.content.SegmentedPaletteColorLUT
Union[highdicom.PaletteColorLUT, highdicom.SegmentedPaletteColorLUT]: Lookup table for the green output color channel
- Return type
types.UnionType
[highdicom.content.PaletteColorLUT
,highdicom.content.SegmentedPaletteColorLUT
]
- property is_segmented: bool
True if the transformation is a segmented LUT. False otherwise.
- Type
bool
- Return type
bool
- property number_of_entries: int
Number of entries in the lookup table.
- Type
int
- Return type
int
- property red_lut: highdicom.content.PaletteColorLUT | highdicom.content.SegmentedPaletteColorLUT
Union[highdicom.PaletteColorLUT, highdicom.SegmentedPaletteColorLUT]: Lookup table for the red output color channel
- Return type
types.UnionType
[highdicom.content.PaletteColorLUT
,highdicom.content.SegmentedPaletteColorLUT
]
- class highdicom.PatientOrientationValuesBiped(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for Patient Orientation attribute if Anatomical Orientation Type attribute has value "BIPED".
- A = 'A'
Anterior
- F = 'F'
Foot
- H = 'H'
Head
- L = 'L'
Left
- P = 'P'
Posterior
- R = 'R'
Right
- class highdicom.PatientOrientationValuesQuadruped(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for Patient Orientation attribute if Anatomical Orientation Type attribute has value "QUADRUPED".
- CD = 'CD'
Caudal
- CR = 'CR'
Cranial
- D = 'D'
Dorsal
- DI = 'DI'
Distal
- L = 'L'
Lateral
- LE = 'LE'
Left
- M = 'M'
Medial
- PA = 'PA'
Palmar
- PL = 'PL'
Plantar
- PR = 'PR'
Proximal
- R = 'R'
Rostral
- RT = 'RT'
Right
- V = 'V'
Ventral
- class highdicom.PatientSexValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for Patient’s Sex attribute.
- F = 'F'
Female
- M = 'M'
Male
- O = 'O'
Other
- class highdicom.PhotometricInterpretationValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for Photometric Interpretation attribute.
See Section C.7.6.3.1.2 for more information.
- MONOCHROME1 = 'MONOCHROME1'
- MONOCHROME2 = 'MONOCHROME2'
- PALETTE_COLOR = 'PALETTE COLOR'
- RGB = 'RGB'
- YBR_FULL = 'YBR_FULL'
- YBR_FULL_422 = 'YBR_FULL_422'
- YBR_ICT = 'YBR_ICT'
- YBR_PARTIAL_420 = 'YBR_PARTIAL_420'
- YBR_RCT = 'YBR_RCT'
- class highdicom.PixelIndexDirections(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values used to describe indexing conventions of pixel arrays.
- D = 'D'
Pixel index that increases moving down the columns from top to bottom.
- Type
Down
- L = 'L'
Pixel index that increases moving across the rows from right to left.
- Type
Left
- R = 'R'
Pixel index that increases moving across the rows from left to right.
- Type
Right
- U = 'U'
Pixel index that increases moving up the columns from bottom to top.
- Type
Up
- class highdicom.PixelMeasuresSequence(pixel_spacing, slice_thickness, spacing_between_slices=None)
Bases:
Sequence
Sequence of data elements describing physical spacing of an image based on the Pixel Measures functional group macro.
- Parameters
pixel_spacing (Sequence[float]) – Distance in physical space between neighboring pixels in millimeters along the row and column dimension of the image. First value represents the spacing between rows (vertical) and second value represents the spacing between columns (horizontal).
slice_thickness (Union[float, None]) – Depth of physical space volume the image represents in millimeter.
spacing_between_slices (Union[float, None], optional) – Distance in physical space between two consecutive images in millimeters. Only required for certain modalities, such as MR.
- classmethod from_sequence(sequence, copy=True)
Create a PixelMeasuresSequence from an existing Sequence.
- Parameters
sequence (pydicom.sequence.Sequence) – Sequence to be converted.
copy (bool) – If True, the underlying sequence is deep-copied such that the original sequence remains intact. If False, this operation will alter the original sequence in place.
- Returns
Pixel Measures Sequence.
- Return type
- Raises
TypeError: – If sequence is not of the correct type.
ValueError: – If sequence does not contain exactly one item.
AttributeError: – If sequence does not contain the attributes required for a pixel measures sequence.
- class highdicom.PixelRepresentationValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for the Pixel Representation attribute.
- COMPLEMENT = 1
- UNSIGNED_INTEGER = 0
- class highdicom.PlanarConfigurationValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for the Planar Configuration attribute.
- COLOR_BY_PIXEL = 0
- COLOR_BY_PLANE = 1
- class highdicom.PlaneOrientationSequence(coordinate_system, image_orientation)
Bases:
Sequence
Sequence of data elements describing the image position in the patient or slide coordinate system based on either the Plane Orientation (Patient) or the Plane Orientation (Slide) functional group macro, respectively.
- Parameters
coordinate_system (Union[str, highdicom.CoordinateSystemNames]) – Frame of reference coordinate system
image_orientation (Sequence[float]) – Direction cosines for the first row (first triplet) and the first column (second triplet) of an image with respect to the X, Y, and Z axis of the three-dimensional coordinate system
- classmethod from_sequence(sequence, copy=True)
Create a PlaneOrientationSequence from an existing Sequence.
The coordinate system is inferred from the attributes in the sequence.
- Parameters
sequence (pydicom.sequence.Sequence) – Sequence to be converted.
copy (bool) – If True, the underlying sequence is deep-copied such that the original sequence remains intact. If False, this operation will alter the original sequence in place.
- Returns
Plane Orientation Sequence.
- Return type
- Raises
TypeError: – If sequence is not of the correct type.
ValueError: – If sequence does not contain exactly one item.
AttributeError: – If sequence does not contain the attributes required for a plane orientation sequence.
- class highdicom.PlanePositionSequence(coordinate_system, image_position, pixel_matrix_position=None)
Bases:
Sequence
Sequence of data elements describing the position of an individual plane (frame) in the patient coordinate system based on the Plane Position (Patient) functional group macro or in the slide coordinate system based on the Plane Position (Slide) functional group macro.
- Parameters
coordinate_system (Union[str, highdicom.CoordinateSystemNames]) – Frame of reference coordinate system
image_position (Sequence[float]) – Offset of the first row and first column of the plane (frame) in millimeter along the x, y, and z axis of the three-dimensional patient or slide coordinate system
pixel_matrix_position (Tuple[int, int], optional) – Offset of the first column and first row of the plane (frame) in pixels along the row and column direction of the total pixel matrix (only required if coordinate_system is "SLIDE").
Note
The values of both image_position and pixel_matrix_position are one-based.
- classmethod from_sequence(sequence, copy=True)
Create a PlanePositionSequence from an existing Sequence.
The coordinate system is inferred from the attributes in the sequence.
- Parameters
sequence (pydicom.sequence.Sequence) – Sequence to be converted.
copy (bool) – If True, the underlying sequence is deep-copied such that the original sequence remains intact. If False, this operation will alter the original sequence in place.
- Returns
Plane Position Sequence.
- Return type
- Raises
TypeError: – If sequence is not of the correct type.
ValueError: – If sequence does not contain exactly one item.
AttributeError: – If sequence does not contain the attributes required for a plane position sequence.
- class highdicom.PresentationLUT(first_mapped_value, lut_data, lut_explanation=None)
Bases:
LUT
Dataset describing an item of the Presentation LUT Sequence.
- Parameters
first_mapped_value (int) – Pixel value that will be mapped to the first value in the lookup-table.
lut_data (numpy.ndarray) – Lookup table data. Must be of type uint16.
lut_explanation (Union[str, None], optional) – Free-form text explanation of the meaning of the LUT.
- apply(array, dtype=None)
Apply the LUT to a pixel array.
- Parameters
array (numpy.ndarray) – Pixel array to which the LUT should be applied. Can be of any shape but must have an integer datatype.
dtype (Union[type, str, numpy.dtype, None], optional) – Datatype of the output array. If None, an unsigned integer datatype corresponding to the number of bits in the LUT will be used (either numpy.uint8 or numpy.uint16). Only safe casts are permitted.
- Returns
Array with LUT applied.
- Return type
numpy.ndarray
- property bits_per_entry: int
Bits allocated for the lookup table data. 8 or 16.
- Type
int
- Return type
int
- property first_mapped_value: int
Pixel value that will be mapped to the first value in the lookup table.
- Type
int
- Return type
int
- classmethod from_dataset(dataset, copy=True)
Create a LUT from an existing Dataset.
- Parameters
dataset (pydicom.Dataset) – Dataset representing a LUT.
copy (bool) – If True, the underlying dataset is deep-copied such that the original dataset remains intact. If False, this operation will alter the original dataset in place.
- Returns
Constructed object
- Return type
- get_inverted_lut_data()
Get LUT data array with output values inverted within the same range.
This returns the LUT data inverted within its original range. So if the original LUT data has output values in the range 10-20 inclusive, then the entries with output value 10 will be mapped to 20, the entries with output value 11 will be mapped to value 19, and so on until the entries with value 20 are mapped to 10.
- Returns
Inverted LUT data array, with the same size and data type as the original array.
- Return type
numpy.ndarray
- get_scaled_lut_data(output_range=(0.0, 1.0), dtype=<class 'numpy.float64'>, invert=False)
Get LUT data array with output values scaled to a given range.
- Parameters
output_range (Tuple[float, float], optional) – Tuple containing (lower, upper) value of the range into which to scale the output values. The lowest value in the LUT data will be mapped to the lower limit, and the highest value will be mapped to the upper limit, with a linear scaling used elsewhere.
dtype (Union[type, str, numpy.dtype, None], optional) – Data type of the returned array (must be a floating point NumPy data type).
invert (bool, optional) – Invert the returned array such that the lowest original value in the LUT is mapped to the upper limit and the highest original value is mapped to the lower limit. This may be used to efficiently combine a LUT with a presentation transform that inverts the range.
- Returns
Rescaled LUT data array.
- Return type
numpy.ndarray
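The scaling described above is an ordinary linear rescale. A plain NumPy sketch of the same computation (independent of highdicom's own implementation) looks like this:

```python
import numpy as np

lut = np.array([10, 12, 15, 20], dtype=np.uint16)
lower, upper = 0.0, 1.0

# Map the lowest LUT value to `lower` and the highest to `upper`, linearly
scaled = lower + (lut - lut.min()) * (upper - lower) / (lut.max() - lut.min())

# Inversion flips the mapping within the same output range
inverted = upper + lower - scaled
```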
- property lut_data: ndarray
LUT data
- Type
numpy.ndarray
- Return type
numpy.ndarray
- property number_of_entries: int
Number of entries in the lookup table.
- Type
int
- Return type
int
- class highdicom.PresentationLUTShapeValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for the Presentation LUT Shape attribute.
- IDENTITY = 'IDENTITY'
No further translation of values is performed.
- INVERSE = 'INVERSE'
A value of INVERSE shall mean the same as a value of IDENTITY, except that the minimum output value shall convey the meaning of the maximum available luminance, and the maximum value shall convey the minimum available luminance.
- class highdicom.PresentationLUTTransformation(presentation_lut_shape=None, presentation_lut=None)
Bases:
Dataset
Dataset describing the Presentation LUT Transformation as part of the Pixel Transformation Sequence to transform polarity pixel values into device-independent presentation values (P-Values).
- Parameters
presentation_lut_shape (Union[highdicom.pr.PresentationLUTShapeValues, str, None], optional) – Shape of the presentation LUT
presentation_lut (Optional[highdicom.PresentationLUT], optional) – Presentation LUT
Note
Only one of presentation_lut_shape or presentation_lut should be provided.
- class highdicom.RGBColorChannels(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
- B = 'B'
Blue color channel.
- G = 'G'
Green color channel.
- R = 'R'
Red color channel.
- class highdicom.ReferencedImageSequence(referenced_images=None, referenced_frame_number=None, referenced_segment_number=None)
Bases:
Sequence
Sequence of data elements describing a set of referenced images.
- Parameters
referenced_images (Union[Sequence[pydicom.Dataset], None], optional) – Images to which the VOI LUT described in this dataset applies. Note that if unspecified, the VOI LUT applies to every image referenced in the presentation state object that this dataset is included in.
referenced_frame_number (Union[int, Sequence[int], None], optional) – Frame number(s) within a referenced multiframe image to which this VOI LUT applies.
referenced_segment_number (Union[int, Sequence[int], None], optional) – Segment number(s) within a referenced segmentation image to which this VOI LUT applies.
- class highdicom.RescaleTypeValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for attribute Rescale Type.
This specifies the units of the result of the rescale operation. Other values may be used, but they are not defined by the DICOM standard.
- ED = 'ED'
Electron density in 10^23 electrons/ml.
- EDW = 'EDW'
Electron density normalized to water.
Units are N/Nw where N is number of electrons per unit volume, and Nw is number of electrons in the same unit of water at standard temperature and pressure.
- HU = 'HU'
Hounsfield Units (CT).
- HU_MOD = 'HU_MOD'
Modified Hounsfield Unit.
- MGML = 'MGML'
Milligrams per milliliter.
- OD = 'OD'
The number in the LUT represents thousandths of optical density.
That is, a value of 2140 represents an optical density of 2.140.
- PCT = 'PCT'
Percentage (%)
- US = 'US'
Unspecified.
- Z_EFF = 'Z_EFF'
Effective Atomic Number (i.e., Effective-Z).
- class highdicom.SOPClass(study_instance_uid, series_instance_uid, series_number, sop_instance_uid, sop_class_uid, instance_number, modality, manufacturer=None, transfer_syntax_uid=None, patient_id=None, patient_name=None, patient_birth_date=None, patient_sex=None, accession_number=None, study_id=None, study_date=None, study_time=None, referring_physician_name=None, content_qualification=None, coding_schemes=None, series_description=None, manufacturer_model_name=None, software_versions=None, device_serial_number=None, institution_name=None, institutional_department_name=None)
Bases:
Dataset
Base class for DICOM SOP Instances.
- Parameters
study_instance_uid (str) – UID of the study
series_instance_uid (str) – UID of the series
series_number (int) – Number of the series within the study
sop_instance_uid (str) – UID that should be assigned to the instance
instance_number (int) – Number that should be assigned to the instance
modality (str) – Name of the modality
manufacturer (Union[str, None], optional) – Name of the manufacturer (developer) of the device (software) that creates the instance
transfer_syntax_uid (Union[str, None], optional) – UID of transfer syntax that should be used for encoding of data elements. Defaults to Implicit VR Little Endian (UID "1.2.840.10008.1.2")
patient_id (Union[str, None], optional) – ID of the patient (medical record number)
patient_name (Union[str, pydicom.valuerep.PersonName, None], optional) – Name of the patient
patient_birth_date (Union[str, None], optional) – Patient’s birth date
patient_sex (Union[str, highdicom.PatientSexValues, None], optional) – Patient’s sex
study_id (Union[str, None], optional) – ID of the study
accession_number (Union[str, None], optional) – Accession number of the study
study_date (Union[str, datetime.date, None], optional) – Date of study creation
study_time (Union[str, datetime.time, None], optional) – Time of study creation
referring_physician_name (Union[str, pydicom.valuerep.PersonName, None], optional) – Name of the referring physician
content_qualification (Union[str, highdicom.ContentQualificationValues, None], optional) – Indicator of content qualification
coding_schemes (Union[Sequence[highdicom.sr.CodingSchemeIdentificationItem], None], optional) – Private or public coding schemes that are not part of the DICOM standard
series_description (Union[str, None], optional) – Human readable description of the series
manufacturer_model_name (Union[str, None], optional) – Name of the device model (name of the software library or application) that creates the instance
software_versions (Union[str, Tuple[str]]) – Version(s) of the software that creates the instance
device_serial_number (str) – Manufacturer’s serial number of the device
institution_name (Union[str, None], optional) – Name of the institution of the person or device that creates the SR document instance.
institutional_department_name (Union[str, None], optional) – Name of the department of the person or device that creates the SR document instance.
Note
The constructor only provides attributes that are required by the standard (type 1 and 2) as part of the Patient, General Study, Patient Study, General Series, General Equipment and SOP Common modules. Derived classes are responsible for providing additional attributes required by the corresponding Information Object Definition (IOD). Additional optional attributes can subsequently be added to the dataset.
- copy_patient_and_study_information(dataset)
Copies patient- and study-related metadata from dataset that are defined in the following modules: Patient, General Study, Patient Study, Clinical Trial Subject and Clinical Trial Study.
- Parameters
dataset (pydicom.dataset.Dataset) – DICOM Data Set from which attributes should be copied
- Return type
None
- copy_specimen_information(dataset)
Copies specimen-related metadata from dataset that are defined in the Specimen module.
- Parameters
dataset (pydicom.dataset.Dataset) – DICOM Data Set from which attributes should be copied
- Return type
None
- property transfer_syntax_uid: UID
TransferSyntaxUID.
- Type
- Return type
pydicom.uid.UID
- class highdicom.SegmentedPaletteColorLUT(first_mapped_value, segmented_lut_data, color)
Bases:
Dataset
Dataset describing a segmented palette color lookup table (LUT).
- Parameters
first_mapped_value (int) – Pixel value that will be mapped to the first value in the lookup table.
segmented_lut_data (numpy.ndarray) – Segmented lookup table data. Must be of type uint8 or uint16.
color (str) – Free-form text explanation of the color (red, green, or blue).
Note
After the LUT is applied, a pixel in the image with value equal to first_mapped_value is mapped to an output value of lut_data[0], an input value of first_mapped_value + 1 is mapped to lut_data[1], and so on.
See the DICOM standard for details of how the segmented LUT data is encoded. Highdicom may provide utilities to assist in creating these arrays in a future release.
- property bits_per_entry: int
Bits allocated for the lookup table data. 8 or 16.
- Type
int
- Return type
int
- classmethod extract_from_dataset(dataset, color)
Construct from an existing dataset.
Note that unlike many other from_dataset() methods, this method extracts only the attributes it needs from the original dataset, and always returns a new object.
- Parameters
dataset (pydicom.Dataset) – Dataset containing the attributes of the Palette Color Lookup Table Transformation.
color (str) – Text representing the color (red, green, or blue).
- Returns
New object containing attributes found in dataset.
- Return type
- property first_mapped_value: int
Pixel value that will be mapped to the first value in the lookup table.
- Type
int
- Return type
int
- property lut_data: ndarray
expanded lookup table data
- Type
numpy.ndarray
- Return type
numpy.ndarray
- property number_of_entries: int
Number of entries in the lookup table.
- Type
int
- Return type
int
- property segmented_lut_data: ndarray
segmented lookup table data
- Type
numpy.ndarray
- Return type
numpy.ndarray
- class highdicom.SpecimenCollection(procedure)
Bases:
ContentSequence
Sequence of SR content items describing a specimen collection procedure.
- Parameters
procedure (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept]) – Surgical procedure used to collect the examined specimen
- append(val)
Append a content item to the sequence.
- Parameters
val (highdicom.sr.ContentItem) – SR Content Item
- Return type
None
- extend(val)
Extend the sequence with multiple content items.
- Parameters
val (Union[Iterable[highdicom.sr.ContentItem], highdicom.sr.ContentSequence]) – SR Content Items
- Return type
None
- find(name)
Find contained content items given their name.
- Parameters
name (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept]) – Name of SR Content Items
- Returns
Matched content items
- Return type
- classmethod from_sequence(sequence, is_root=False, is_sr=True, copy=True)
Construct object from a sequence of datasets.
- Parameters
sequence (Sequence[pydicom.dataset.Dataset]) – Datasets representing SR Content Items
is_root (bool, optional) – Whether the sequence is used to contain SR Content Items that are intended to be added to an SR document at the root of the document content tree
is_sr (bool, optional) – Whether the sequence is used to contain SR Content Items that are intended to be added to an SR document as opposed to other types of IODs based on an acquisition, protocol or workflow context template
copy (bool) – If True, the underlying sequence is deep-copied such that the original sequence remains intact. If False, this operation will alter the original sequence in place.
- Returns
Content Sequence containing SR Content Items
- Return type
- get_nodes()
Get content items that represent nodes in the content tree.
A node is hereby defined as a content item that has a ContentSequence attribute.
- Returns
Matched content items
- Return type
- index(val)
Get the index of a given item.
- Parameters
val (highdicom.sr.ContentItem) – SR Content Item
- Returns
Index of the item in the sequence
- Return type
int
- insert(position, val)
Insert a content item into the sequence at a given position.
- Parameters
position (int) – Index position
val (highdicom.sr.ContentItem) – SR Content Item
- Return type
None
- property is_root: bool
whether the sequence is intended for use at the root of the SR content tree.
- Type
bool
- Return type
bool
- property is_sr: bool
whether the sequence is intended for use in an SR document
- Type
bool
- Return type
bool
- property procedure: CodedConcept
Surgical procedure
- Type
- Return type
- class highdicom.SpecimenDescription(specimen_id, specimen_uid, specimen_location=None, specimen_preparation_steps=None, issuer_of_specimen_id=None, primary_anatomic_structures=None, specimen_type=None, specimen_short_description=None, specimen_detailed_description=None)
Bases:
Dataset
Dataset describing a specimen.
- Parameters
specimen_id (str) – Identifier of the examined specimen
specimen_uid (str) – Unique identifier of the examined specimen
specimen_location (Union[str, Tuple[float, float, float]], optional) – Location of the examined specimen relative to the container, provided either in form of text or in form of spatial X, Y, Z coordinates specifying the position (offset) relative to the three-dimensional slide coordinate system in millimeters (X, Y) and micrometers (Z).
specimen_preparation_steps (Sequence[highdicom.SpecimenPreparationStep], optional) – Steps that were applied during the preparation of the examined specimen in the laboratory prior to image acquisition
specimen_type (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept], optional) – The anatomic pathology specimen type of the specimen (see CID 8103 “Anatomic Pathology Specimen Type” for options).
specimen_short_description (str, optional) – Short description of the examined specimen.
specimen_detailed_description (str, optional) – Detailed description of the examined specimen.
issuer_of_specimen_id (highdicom.IssuerOfIdentifier, optional) – Description of the issuer of the specimen identifier
primary_anatomic_structures (Sequence[Union[pydicom.sr.Code, highdicom.sr.CodedConcept]]) – Body site at which specimen was collected
- classmethod from_dataset(dataset)
Construct object from an existing dataset.
- Parameters
dataset (pydicom.dataset.Dataset) – Dataset representing an item of Specimen Description Sequence
- Returns
Constructed object
- Return type
- property issuer_of_specimen_id: highdicom.content.IssuerOfIdentifier | None
Issuer of identifier for the specimen.
- Type
- Return type
types.UnionType
[highdicom.content.IssuerOfIdentifier
,None
]
- property primary_anatomic_structures: list[highdicom.sr.coding.CodedConcept] | None
List of anatomic structures of the specimen.
- Type
- Return type
types.UnionType
[list
[highdicom.sr.coding.CodedConcept
],None
]
- property specimen_detailed_description: str | None
Detailed description of specimen.
- Type
str
- Return type
types.UnionType
[str
,None
]
- property specimen_id: str
Specimen identifier.
- Type
str
- Return type
str
- property specimen_location: str | tuple[float, float, float] | None
Specimen location in container.
- Type
Tuple[float, float, float]
- Return type
types.UnionType
[str
,tuple
[float
,float
,float
],None
]
- property specimen_preparation_steps: list[highdicom.content.SpecimenPreparationStep]
Specimen preparation steps.
- Type
- Return type
- property specimen_short_description: str | None
Short description of specimen.
- Type
str
- Return type
types.UnionType
[str
,None
]
- property specimen_type: highdicom.sr.coding.CodedConcept | None
Specimen type.
- Type
- Return type
types.UnionType
[highdicom.sr.coding.CodedConcept
,None
]
- class highdicom.SpecimenPreparationStep(specimen_id, processing_procedure, processing_description=None, processing_datetime=None, issuer_of_specimen_id=None, fixative=None, embedding_medium=None, specimen_container=None, specimen_type=None)
Bases:
Dataset
Dataset describing a specimen preparation step according to structured reporting template TID 8001 Specimen Preparation.
- Parameters
specimen_id (str) – Identifier of the processed specimen
processing_procedure (Union[highdicom.SpecimenCollection, highdicom.SpecimenSampling, highdicom.SpecimenStaining, highdicom.SpecimenProcessing]) – Procedure used during processing
processing_datetime (datetime.datetime, optional) – Datetime of processing
processing_description (Union[str, pydicom.sr.coding.Code, highdicom.sr.CodedConcept], optional) – Description of processing
issuer_of_specimen_id (highdicom.IssuerOfIdentifier, optional) – The issuer of the identifier of the processed specimen.
fixative (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept], optional) – Fixative used during processing (see CID 8114 “Specimen Fixative” for options).
embedding_medium (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept], optional) – Embedding medium used during processing (see CID 8115 “Specimen Embedding Media” for options).
specimen_container (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept], optional) – Container the specimen resides in (see CID 8101 “Container Type” for options).
specimen_type (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept], optional) – The anatomic pathology specimen type of the specimen (see CID 8103 “Anatomic Pathology Specimen Type” for options).
- property embedding_medium: highdicom.sr.coding.CodedConcept | None
Tissue embedding medium
- Type
- Return type
types.UnionType
[highdicom.sr.coding.CodedConcept
,None
]
- property fixative: highdicom.sr.coding.CodedConcept | None
Tissue fixative
- Type
- Return type
types.UnionType
[highdicom.sr.coding.CodedConcept
,None
]
- classmethod from_dataset(dataset)
Construct object from an existing dataset.
- Parameters
dataset (pydicom.dataset.Dataset) – Dataset
- Returns
Specimen Preparation Step
- Return type
- property issuer_of_specimen_id: str | None
Issuer of specimen id
- Type
str
- Return type
types.UnionType
[str
,None
]
- property processing_datetime: datetime.datetime | None
Processing datetime
- Type
datetime.datetime
- Return type
types.UnionType
[datetime.datetime
,None
]
- property processing_description: str | highdicom.sr.coding.CodedConcept | None
Processing description
- Type
Union[str, highdicom.sr.CodedConcept]
- Return type
types.UnionType
[str
,highdicom.sr.coding.CodedConcept
,None
]
- property processing_procedure: highdicom.content.SpecimenCollection | highdicom.content.SpecimenSampling | highdicom.content.SpecimenStaining | highdicom.content.SpecimenProcessing
Union[highdicom.SpecimenCollection, highdicom.SpecimenSampling, highdicom.SpecimenStaining, highdicom.SpecimenProcessing]:
Procedure used during processing
- Return type
types.UnionType
[highdicom.content.SpecimenCollection
,highdicom.content.SpecimenSampling
,highdicom.content.SpecimenStaining
,highdicom.content.SpecimenProcessing
]
- property processing_type: CodedConcept
Processing type
- Type
- Return type
- property specimen_container: highdicom.sr.coding.CodedConcept | None
Specimen container
- Type
- Return type
types.UnionType
[highdicom.sr.coding.CodedConcept
,None
]
- property specimen_id: str
Specimen identifier
- Type
str
- Return type
str
- property specimen_type: highdicom.sr.coding.CodedConcept | None
Specimen type
- Type
- Return type
types.UnionType
[highdicom.sr.coding.CodedConcept
,None
]
- class highdicom.SpecimenProcessing(description)
Bases:
ContentSequence
Sequence of SR content items describing a specimen processing procedure.
- Parameters
description (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept, str]) – Description of the processing
- append(val)
Append a content item to the sequence.
- Parameters
val (highdicom.sr.ContentItem) – SR Content Item
- Return type
None
- property description: CodedConcept
Processing step description
- Type
- Return type
- extend(val)
Extend the sequence with multiple content items.
- Parameters
val (Union[Iterable[highdicom.sr.ContentItem], highdicom.sr.ContentSequence]) – SR Content Items
- Return type
None
- find(name)
Find contained content items given their name.
- Parameters
name (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept]) – Name of SR Content Items
- Returns
Matched content items
- Return type
- classmethod from_sequence(sequence, is_root=False, is_sr=True, copy=True)
Construct object from a sequence of datasets.
- Parameters
sequence (Sequence[pydicom.dataset.Dataset]) – Datasets representing SR Content Items
is_root (bool, optional) – Whether the sequence is used to contain SR Content Items that are intended to be added to an SR document at the root of the document content tree
is_sr (bool, optional) – Whether the sequence is used to contain SR Content Items that are intended to be added to an SR document as opposed to other types of IODs based on an acquisition, protocol or workflow context template
copy (bool) – If True, the underlying sequence is deep-copied such that the original sequence remains intact. If False, this operation will alter the original sequence in place.
- Returns
Content Sequence containing SR Content Items
- Return type
- get_nodes()
Get content items that represent nodes in the content tree.
A node is hereby defined as a content item that has a ContentSequence attribute.
- Returns
Matched content items
- Return type
- index(val)
Get the index of a given item.
- Parameters
val (highdicom.sr.ContentItem) – SR Content Item
- Returns
Index of the item in the sequence
- Return type
int
- insert(position, val)
Insert a content item into the sequence at a given position.
- Parameters
position (int) – Index position
val (highdicom.sr.ContentItem) – SR Content Item
- Return type
None
- property is_root: bool
whether the sequence is intended for use at the root of the SR content tree.
- Type
bool
- Return type
bool
- property is_sr: bool
whether the sequence is intended for use in an SR document
- Type
bool
- Return type
bool
- class highdicom.SpecimenSampling(method, parent_specimen_id, parent_specimen_type, issuer_of_parent_specimen_id=None)
Bases:
ContentSequence
Sequence of SR content items describing a specimen sampling procedure.
See SR template TID 8002 Specimen Sampling.
- Parameters
method (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept]) – Method used to sample the examined specimen from a parent specimen
parent_specimen_id (str) – Identifier of the parent specimen
parent_specimen_type (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept]) – Type of the parent specimen
issuer_of_parent_specimen_id (highdicom.IssuerOfIdentifier, optional) – Issuer who created the parent specimen
- append(val)
Append a content item to the sequence.
- Parameters
val (highdicom.sr.ContentItem) – SR Content Item
- Return type
None
- extend(val)
Extend the sequence with multiple content items.
- Parameters
val (Union[Iterable[highdicom.sr.ContentItem], highdicom.sr.ContentSequence]) – SR Content Items
- Return type
None
- find(name)
Find contained content items given their name.
- Parameters
name (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept]) – Name of SR Content Items
- Returns
Matched content items
- Return type
- classmethod from_sequence(sequence, is_root=False, is_sr=True, copy=True)
Construct object from a sequence of datasets.
- Parameters
sequence (Sequence[pydicom.dataset.Dataset]) – Datasets representing SR Content Items
is_root (bool, optional) – Whether the sequence is used to contain SR Content Items that are intended to be added to an SR document at the root of the document content tree
is_sr (bool, optional) – Whether the sequence is used to contain SR Content Items that are intended to be added to an SR document as opposed to other types of IODs based on an acquisition, protocol or workflow context template
copy (bool) – If True, the underlying sequence is deep-copied such that the original sequence remains intact. If False, this operation will alter the original sequence in place.
- Returns
Content Sequence containing SR Content Items
- Return type
- get_nodes()
Get content items that represent nodes in the content tree.
A node is hereby defined as a content item that has a ContentSequence attribute.
- Returns
Matched content items
- Return type
- index(val)
Get the index of a given item.
- Parameters
val (highdicom.sr.ContentItem) – SR Content Item
- Returns
Index of the item in the sequence
- Return type
int
- insert(position, val)
Insert a content item into the sequence at a given position.
- Parameters
position (int) – Index position
val (highdicom.sr.ContentItem) – SR Content Item
- Return type
None
- property is_root: bool
whether the sequence is intended for use at the root of the SR content tree.
- Type
bool
- Return type
bool
- property is_sr: bool
whether the sequence is intended for use in an SR document
- Type
bool
- Return type
bool
- property method: CodedConcept
Sampling method
- Type
- Return type
- property parent_specimen_id: str
Parent specimen identifier
- Type
str
- Return type
str
- property parent_specimen_type: CodedConcept
Parent specimen type
- Type
- Return type
- class highdicom.SpecimenStaining(substances)
Bases:
ContentSequence
Sequence of SR content items describing a specimen staining procedure.
See SR template TID 8003 Specimen Staining.
- Parameters
substances (Sequence[Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept, str]]) – Substances used to stain examined specimen(s)
- append(val)
Append a content item to the sequence.
- Parameters
val (highdicom.sr.ContentItem) – SR Content Item
- Return type
None
- extend(val)
Extend the sequence with multiple content items.
- Parameters
val (Union[Iterable[highdicom.sr.ContentItem], highdicom.sr.ContentSequence]) – SR Content Items
- Return type
None
- find(name)
Find contained content items given their name.
- Parameters
name (Union[pydicom.sr.coding.Code, highdicom.sr.CodedConcept]) – Name of SR Content Items
- Returns
Matched content items
- Return type
- classmethod from_sequence(sequence, is_root=False, is_sr=True, copy=True)
Construct object from a sequence of datasets.
- Parameters
sequence (Sequence[pydicom.dataset.Dataset]) – Datasets representing SR Content Items
is_root (bool, optional) – Whether the sequence is used to contain SR Content Items that are intended to be added to an SR document at the root of the document content tree
is_sr (bool, optional) – Whether the sequence is used to contain SR Content Items that are intended to be added to an SR document, as opposed to other types of IODs based on an acquisition, protocol or workflow context template
copy (bool) – If True, the underlying sequence is deep-copied such that the original sequence remains intact. If False, this operation will alter the original sequence in place.
- Returns
Content Sequence containing SR Content Items
- Return type
- get_nodes()
Get content items that represent nodes in the content tree.
A node is hereby defined as a content item that has a ContentSequence attribute.
- Returns
Matched content items
- Return type
- index(val)
Get the index of a given item.
- Parameters
val (highdicom.sr.ContentItem) – SR Content Item
- Returns
Index of the item in the sequence
- Return type
int
- insert(position, val)
Insert a content item into the sequence at a given position.
- Parameters
position (int) – Index position
val (highdicom.sr.ContentItem) – SR Content Item
- Return type
None
- property is_root: bool
Whether the sequence is intended for use at the root of the SR content tree.
- Type
bool
- Return type
bool
- property is_sr: bool
Whether the sequence is intended for use in an SR document.
- Type
bool
- Return type
bool
- property substances: list[highdicom.sr.coding.CodedConcept]
Substances used for staining
- Type
- Return type
- class highdicom.UID(value: str | None = None)
Bases:
UID
Unique DICOM identifier.
If an object is constructed without a value being provided, a value will be automatically generated using the highdicom-specific root.
Set up a new instance of the class.
- Parameters
val (str or pydicom.uid.UID) – The UID string to use to create the UID object.
validation_mode (int) – Defines if values are validated and how validation errors are handled.
- Returns
The UID object.
- Return type
pydicom.uid.UID
- classmethod from_uuid(uuid)
Create a DICOM UID from a UUID using the 2.25 root.
- Parameters
uuid (str) – UUID
- Returns
UID
- Return type
Examples
>>> from uuid import uuid4
>>> import highdicom as hd
>>> uuid = str(uuid4())
>>> uid = hd.UID.from_uuid(uuid)
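Under the 2.25 root, the UUID's 128-bit value is rendered as a decimal integer appended to the `2.25.` prefix (DICOM PS3.5 Annex B.2). A minimal stdlib-only sketch of the conversion; the helper name is illustrative, not part of highdicom:

```python
from uuid import UUID, uuid4

def uid_from_uuid(uuid_str: str) -> str:
    """Convert a UUID string to a DICOM UID under the 2.25 root.

    The UUID's 128-bit value is written as a decimal integer and
    appended to the '2.25.' prefix.
    """
    return '2.25.' + str(UUID(uuid_str).int)

uid = uid_from_uuid(str(uuid4()))
assert uid.startswith('2.25.')
# 2**128 has at most 39 decimal digits, so the result always fits
# within the 64-character DICOM UID limit
assert len(uid) <= 64
```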
- property info: str
Return the UID info from the UID dictionary.
- Return type
str
- property is_compressed: bool
Return True if a compressed transfer syntax UID.
- Return type
bool
- property is_deflated: bool
Return True if a deflated transfer syntax UID.
- Return type
bool
- property is_encapsulated: bool
Return True if an encapsulated transfer syntax UID.
- Return type
bool
- property is_implicit_VR: bool
Return True if an implicit VR transfer syntax UID.
- Return type
bool
- property is_little_endian: bool
Return True if a little endian transfer syntax UID.
- Return type
bool
- property is_private: bool
Return True if the UID isn’t an officially registered DICOM UID.
- Return type
bool
- property is_retired: bool
Return True if the UID is retired, False otherwise or if private.
- Return type
bool
- property is_transfer_syntax: bool
Return True if a transfer syntax UID.
- Return type
bool
- property is_valid: bool
Return True if self is a valid UID, False otherwise.
- Return type
bool
- property keyword: str
Return the UID keyword from the UID dictionary.
- Return type
str
- property name: str
Return the UID name from the UID dictionary.
- Return type
str
- set_private_encoding(implicit_vr, little_endian)
Set the corresponding dataset encoding for a privately defined transfer syntax.
New in version 3.0.
- Parameters
implicit_vr (bool) – True if the corresponding dataset encoding uses implicit VR, False for explicit VR.
little_endian (bool) – True if the corresponding dataset encoding uses little endian byte order, False for big endian byte order.
- Return type
None
- property type: str
Return the UID type from the UID dictionary.
- Return type
str
- class highdicom.UniversalEntityIDTypeValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for Universal Entity ID Type attribute.
- DNS = 'DNS'
An Internet dotted name. Either in ASCII or as integers.
- EUI64 = 'EUI64'
An IEEE Extended Unique Identifier.
- ISO = 'ISO'
An International Standards Organization Object Identifier.
- URI = 'URI'
Uniform Resource Identifier.
- UUID = 'UUID'
The DCE Universal Unique Identifier.
- X400 = 'X400'
An X.400 MHS identifier.
- X500 = 'X500'
An X.500 directory name.
- class highdicom.VOILUT(first_mapped_value, lut_data, lut_explanation=None)
Bases:
LUT
Dataset describing an item of the VOI LUT Sequence.
- Parameters
first_mapped_value (int) – Pixel value that will be mapped to the first value in the lookup-table.
lut_data (numpy.ndarray) – Lookup table data. Must be of type uint16.
lut_explanation (Union[str, None], optional) – Free-form text explanation of the meaning of the LUT.
- apply(array, dtype=None)
Apply the LUT to a pixel array.
- Parameters
array (numpy.ndarray) – Pixel array to which the LUT should be applied. Can be of any shape but must have an integer datatype.
dtype (Union[type, str, numpy.dtype, None], optional) – Datatype of the output array. If None, an unsigned integer datatype corresponding to the number of bits in the LUT will be used (either numpy.uint8 or numpy.uint16). Only safe casts are permitted.
- Returns
Array with LUT applied.
- Return type
numpy.ndarray
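Conceptually, applying a LUT is an array lookup after offsetting the input by the first mapped value, with out-of-range inputs clamped to the ends of the table. A minimal numpy sketch of this behavior, not highdicom's actual implementation:

```python
import numpy as np

def apply_lut(array, lut_data, first_mapped_value):
    """Map integer pixel values through a lookup table.

    Values below first_mapped_value map to the first LUT entry and
    values beyond the last entry map to the final entry (clamping).
    """
    indices = np.clip(
        array.astype(np.int64) - first_mapped_value,
        0,
        len(lut_data) - 1,
    )
    return lut_data[indices]

lut = np.array([0, 10, 20, 30], dtype=np.uint16)
pixels = np.array([99, 100, 102, 200])
print(apply_lut(pixels, lut, first_mapped_value=100))  # [ 0  0 20 30]
```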
- property bits_per_entry: int
Bits allocated for the lookup table data. 8 or 16.
- Type
int
- Return type
int
- property first_mapped_value: int
Pixel value that will be mapped to the first value in the lookup table.
- Type
int
- Return type
int
- classmethod from_dataset(dataset, copy=True)
Create a LUT from an existing Dataset.
- Parameters
dataset (pydicom.Dataset) – Dataset representing a LUT.
copy (bool) – If True, the underlying dataset is deep-copied such that the original dataset remains intact. If False, this operation will alter the original dataset in place.
- Returns
Constructed object
- Return type
- get_inverted_lut_data()
Get LUT data array with output values inverted within the same range.
This returns the LUT data inverted within its original range. So if the original LUT data has output values in the range 10-20 inclusive, then the entries with output value 10 will be mapped to 20, the entries with output value 11 will be mapped to value 19, and so on until the entries with value 20 are mapped to 10.
- Returns
Inverted LUT data array, with the same size and data type as the original array.
- Return type
numpy.ndarray
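The inversion described above reflects each output value about the midpoint of the original range: each entry v maps to min + max - v. A short numpy sketch of the idea (not highdicom's implementation):

```python
import numpy as np

def invert_lut_data(lut_data):
    """Invert LUT output values within their original range."""
    lo = int(lut_data.min())
    hi = int(lut_data.max())
    # lo + hi - v reflects each value about the midpoint of the range;
    # compute in int64 to avoid unsigned overflow, then restore dtype
    return (lo + hi - lut_data.astype(np.int64)).astype(lut_data.dtype)

lut = np.array([10, 11, 15, 20], dtype=np.uint16)
print(invert_lut_data(lut))  # [20 19 15 10]
```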
- get_scaled_lut_data(output_range=(0.0, 1.0), dtype=<class 'numpy.float64'>, invert=False)
Get LUT data array with output values scaled to a given range.
- Parameters
output_range (Tuple[float, float], optional) – Tuple containing (lower, upper) value of the range into which to scale the output values. The lowest value in the LUT data will be mapped to the lower limit, and the highest value will be mapped to the upper limit, with a linear scaling used elsewhere.
dtype (Union[type, str, numpy.dtype, None], optional) – Data type of the returned array (must be a floating point NumPy data type).
invert (bool, optional) – Invert the returned array such that the lowest original value in the LUT is mapped to the upper limit and the highest original value is mapped to the lower limit. This may be used to efficiently combine a LUT with a presentation transform that inverts the range.
- Returns
Rescaled LUT data array.
- Return type
numpy.ndarray
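The scaling described above is a standard min-max normalization into the requested range. A hedged numpy sketch, assuming a non-constant LUT (not highdicom's implementation):

```python
import numpy as np

def scale_lut_data(lut_data, output_range=(0.0, 1.0), dtype=np.float64):
    """Linearly rescale LUT values so they span [lower, upper]."""
    lower, upper = output_range
    lo = float(lut_data.min())
    hi = float(lut_data.max())
    # Map [lo, hi] linearly onto [lower, upper]
    return ((lut_data.astype(dtype) - lo) / (hi - lo)) * (upper - lower) + lower

print(scale_lut_data(np.array([10, 15, 20]), output_range=(0.0, 100.0)))
# [  0.  50. 100.]
```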
- property lut_data: ndarray
LUT data
- Type
numpy.ndarray
- Return type
numpy.ndarray
- property number_of_entries: int
Number of entries in the lookup table.
- Type
int
- Return type
int
- class highdicom.VOILUTFunctionValues(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases:
Enum
Enumerated values for attribute VOI LUT Function.
- LINEAR = 'LINEAR'
- LINEAR_EXACT = 'LINEAR_EXACT'
- SIGMOID = 'SIGMOID'
- class highdicom.VOILUTTransformation(window_center=None, window_width=None, window_explanation=None, voi_lut_function=None, voi_luts=None)
Bases:
Dataset
Dataset describing the VOI LUT Transformation as part of the Pixel Transformation Sequence to transform modality pixel values into pixel values that are of interest to a user or an application.
- Parameters
window_center (Union[float, Sequence[float], None], optional) – Center value of the intensity window used for display.
window_width (Union[float, Sequence[float], None], optional) – Width of the intensity window used for display.
window_explanation (Union[str, Sequence[str], None], optional) – Free-form explanation of the window center and width.
voi_lut_function (Union[highdicom.VOILUTFunctionValues, str, None], optional) – Description of the LUT function parametrized by window_center and window_width.
voi_luts (Union[Sequence[highdicom.VOILUT], None], optional) – Intensity lookup tables used for display.
Note
Either window_center and window_width should be provided, or voi_luts should be provided, or both. window_explanation should only be provided if window_center is provided.
- apply(array, output_range=(0.0, 1.0), voi_transform_selector=0, dtype=None, invert=False, prefer_lut=False)
Apply the transformation to an array.
- Parameters
array (numpy.ndarray) – Pixel array to which the transformation should be applied. Can be of any shape but must have an integer datatype if the transformation uses a LUT.
output_range (Tuple[float, float], optional) – Range of output values to which the VOI range is mapped.
voi_transform_selector (int | str, optional) – Specification of the VOI transform to select (multiple may be present). May either be an int or a str. If an int, it is interpreted as a (zero-based) index into the list of VOI transforms to apply. A negative integer may be used to index from the end of the list, following standard Python indexing convention. If a str, it is used to match the Window Center Width Explanation or the LUT Explanation when choosing from multiple VOI transforms. Note that such explanations are optional according to the standard and therefore may not be present.
dtype (Union[type, str, numpy.dtype, None], optional) – Data type of the output array. Should be a floating point data type. If not specified, numpy.float64 is used.
invert (bool, optional) – Invert the returned array such that the lowest original value in the LUT or input window is mapped to the upper limit and the highest original value is mapped to the lower limit. This may be used to efficiently combine a VOI LUT transformation with a presentation transform that inverts the range.
prefer_lut (bool, optional) – If True and the transformation contains both a LUT and window parameters, apply the LUT. If False and both are present, apply the window.
- Returns
Array with transformation applied.
- Return type
numpy.ndarray
- has_lut()
Determine whether the transformation contains a lookup table.
- Returns
True if the transformation contains a look-up table. False otherwise, when the mapping is represented by window center and width defining a linear relationship. Note that it is possible for a transformation to contain both a LUT and window parameters.
- Return type
bool
- has_window()
Determine whether the transformation contains window parameters.
- Returns
True if the transformation contains one or more sets of window parameters defining a linear relationship. False otherwise, when the mapping is represented by a lookup table. Note that it is possible for a transformation to contain both a LUT and window parameters.
- Return type
bool
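For reference, the default LINEAR VOI LUT function (defined in DICOM PS3.3 C.11.2.1.2) maps values inside the window linearly onto the output range and clamps values outside it. A minimal numpy sketch of that formula, not highdicom's implementation:

```python
import numpy as np

def apply_window(array, center, width, output_range=(0.0, 1.0)):
    """Apply the LINEAR VOI window function with clamping.

    Uses the formula from DICOM PS3.3 C.11.2.1.2: the window spans
    width - 1 input units centered (after a half-unit shift) on center.
    """
    lower, upper = output_range
    y = ((array - (center - 0.5)) / (width - 1) + 0.5) * (upper - lower) + lower
    return np.clip(y, lower, upper)

# A window with center 50 and width 101 maps [-0.5, 99.5] onto [0, 1]
print(apply_window(np.array([-0.5, 49.5, 99.5]), center=50, width=101))
# [0.  0.5 1. ]
```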
- class highdicom.Volume(array, affine, coordinate_system, frame_of_reference_uid=None, channels=None)
Bases:
_VolumeBase
Class representing an array of regularly-spaced frames in 3D space.
This class combines a NumPy array with an affine matrix describing the location of the voxels in the frame-of-reference coordinate space. A Volume is not a DICOM object itself, but represents a volume that may be extracted from a DICOM image and/or encoded within a DICOM object, potentially following any number of processing steps.
All such volumes have a geometry that exists either within DICOM’s patient coordinate system or its slide coordinate system, both of which clearly define the meaning of the three spatial axes of the frame of reference coordinate system.
All volume arrays have three spatial dimensions. They may optionally have further non-spatial dimensions, known as “channel” dimensions, whose meaning is explicitly specified.
- Parameters
array (numpy.ndarray) – Array of voxel data. Must be at least 3D. The first three dimensions are the three spatial dimensions, and any subsequent dimensions are channel dimensions. Any datatype is permitted.
affine (numpy.ndarray) – 4 x 4 affine matrix representing the transformation from pixel indices (slice index, row index, column index) to the frame-of-reference coordinate system. The top left 3 x 3 matrix should be a scaled orthogonal matrix representing the rotation and scaling. The top right 3 x 1 vector represents the translation component. The last row should have value [0, 0, 0, 1].
coordinate_system (highdicom.CoordinateSystemNames | str) – Coordinate system ("PATIENT" or "SLIDE") in which the volume is defined.
frame_of_reference_uid (Optional[str], optional) – Frame of reference UID for the frame of reference, if known.
channels (dict[int | str | ChannelDescriptor, Sequence[int | str | float | Enum]] | None, optional) – Specification of channels of the array. Channels are additional dimensions of the array beyond the three spatial dimensions. For each such additional dimension (if any), an item in this dictionary is required to specify the meaning. The dictionary key specifies the meaning of the dimension, which must be either an instance of highdicom.ChannelDescriptor, specifying a DICOM tag whose attribute describes the channel, a DICOM keyword describing a DICOM attribute, or an integer representing the tag of a DICOM attribute. The corresponding item of the dictionary is a sequence giving the value of the relevant attribute at each index in the array. The insertion order of the dictionary is significant as it is used to match items to the corresponding dimensions of the array (the first item in the dictionary corresponds to axis 3 of the array and so on).
- property affine: ndarray
4x4 affine transformation matrix
This matrix maps an index of the array into a position in the LPS frame-of-reference coordinate space.
- Type
numpy.ndarray
- Return type
numpy.ndarray
- property array: ndarray
Volume array.
- Type
numpy.ndarray
- Return type
numpy.ndarray
- astype(dtype)
Get new volume with a new datatype.
- Parameters
dtype (type) – A numpy datatype for the new volume.
- Returns
New volume with given datatype, and metadata copied from this volume.
- Return type
- property center_indices: tuple[float, float, float]
Array index of center of the volume, as floats with sub-voxel precision.
Results are continuous zero-based array indices.
- Return type
tuple
[float
,float
,float
]- Returns
x (float) – First array index of the volume center.
y (float) – Second array index of the volume center.
z (float) – Third array index of the volume center.
- property center_position: tuple[float, float, float]
Get frame-of-reference coordinates of the volume’s center.
- Return type
tuple
[float
,float
,float
]- Returns
x (float) – Frame of reference x coordinate of the volume center.
y (float) – Frame of reference y coordinate of the volume center.
z (float) – Frame of reference z coordinate of the volume center.
- property channel_descriptors: tuple[highdicom.volume.ChannelDescriptor, ...]
tuple[highdicom.ChannelDescriptor, ...]: Descriptor of each channel.
- Return type
tuple
[highdicom.volume.ChannelDescriptor
,...
]
- property channel_shape: tuple[int, ...]
Channel shape of the array.
Does not include the spatial dimensions.
- Type
Tuple[int, …]
- Return type
tuple
[int
,...
]
- clip(a_min, a_max)
Clip voxel intensities to lie within a given range.
- Parameters
a_min (Union[float, None]) – Lower value to clip. May be None if no lower clipping is to be applied. Voxel intensities below this value are set to this value.
a_max (Union[float, None]) – Upper value to clip. May be None if no upper clipping is to be applied. Voxel intensities above this value are set to this value.
- Returns
Volume with clipped intensities.
- Return type
- property coordinate_system: CoordinateSystemNames
Coordinate system ("PATIENT" or "SLIDE") in which the volume is defined.
- Return type
- copy()
Get an unaltered copy of the volume.
- Returns
Copy of the original volume.
- Return type
- crop_to_spatial_shape(spatial_shape)
Center-crop volume to a given spatial shape.
- Parameters
spatial_shape (Sequence[int]) – Sequence of three integers specifying the spatial shape to crop to. This shape must be no larger than the existing shape along any of the three spatial dimensions.
- Returns
Volume with cropping applied.
- Return type
Self
- property direction: ndarray
numpy.ndarray:
Direction matrix for the volume. The columns of the direction matrix are orthogonal unit vectors that give the direction in the frame of reference space of the increasing direction of each axis of the array.
- Return type
numpy.ndarray
- property direction_cosines: tuple[float, float, float, float, float, float]
Tuple[float, float, float, float, float, float]:
Tuple of 6 floats giving the direction cosines of the vector along the rows and the vector along the columns, matching the format of the DICOM Image Orientation Patient and Image Orientation Slide attributes.
Assumes that frames are stacked down axis 0, rows down axis 1, and columns down axis 2 (the convention used to create volumes from images).
- Return type
tuple
[float
,float
,float
,float
,float
,float
]
- property dtype: type
Datatype of the array.
- Type
type
- Return type
type
- ensure_handedness(handedness, *, flip_axis=None, swap_axes=None)
Manipulate the volume if necessary to ensure a given handedness.
If the volume already has the specified handedness, it is returned unaltered.
If the volume does not meet the requirement, the volume is manipulated using a user specified operation to meet the requirement. The two options are reversing the direction of a single axis (“flipping”) or swapping the position of two axes.
- Parameters
handedness (highdicom.AxisHandedness) – Handedness to ensure.
flip_axis (Union[int, None], optional) – Specification of a spatial axis index (0, 1, or 2) to flip if required to meet the given handedness requirement.
swap_axes (Union[Sequence[int], None], optional) – Specification of a sequence of two spatial axis indices (each being 0, 1, or 2) to swap if required to meet the given handedness requirement.
- Returns
New volume with corrected handedness.
- Return type
Self
Note
Either
flip_axis
orswap_axes
must be provided (and not both) to specify the operation to perform to correct the handedness (if required).
- flip_spatial(axes)
Flip the spatial axes of the array.
Note that this flips the array and updates the affine to reflect the flip.
- Parameters
axes (Union[int, Sequence[int]]) – Axis or list of axis indices that should be flipped. These should include only the spatial axes (0, 1, and/or 2).
- Returns
New volume with spatial axes flipped as requested.
- Return type
- property frame_of_reference_uid: highdicom.uid.UID | None
Frame of reference UID.
- Type
Union[highdicom.UID, None]
- Return type
types.UnionType
[highdicom.uid.UID
,None
]
- classmethod from_attributes(*, array, image_position, image_orientation, pixel_spacing, spacing_between_slices, coordinate_system, frame_of_reference_uid=None, channels=None)
Create a volume from DICOM attributes.
The resulting geometry assumes that the frames of the image whose attributes are used are stacked down axis 0, the rows down axis 1, and the columns down axis 2. Furthermore, frames will be stacked such that the resulting geometry forms a right-handed coordinate system in the frame-of-reference coordinate system.
- Parameters
array (numpy.ndarray) – Three dimensional array of voxel data. The first dimension indexes slices, the second dimension indexes rows, and the final dimension indexes columns.
image_position (Sequence[float]) – Position in the frame of reference space of the center of the top left pixel of the image. Corresponds to the DICOM attribute “ImagePositionPatient”. Should be a sequence of length 3.
image_orientation (Sequence[float]) – Cosines of the row direction (first triplet: horizontal, left to right, increasing column index) and the column direction (second triplet: vertical, top to bottom, increasing row index) direction expressed in the three-dimensional patient or slide coordinate system defined by the frame of reference. Corresponds to the DICOM attribute “ImageOrientationPatient”.
pixel_spacing (Sequence[float]) – Spacing between pixels in millimeter unit along the column direction (first value: spacing between rows, vertical, top to bottom, increasing row index) and the row direction (second value: spacing between columns: horizontal, left to right, increasing column index). Corresponds to DICOM attribute “PixelSpacing”.
spacing_between_slices (float) – Spacing between slices in millimeter units in the frame of reference coordinate system space. Corresponds to the DICOM attribute “SpacingBetweenSlices” (however, this may not be present in many images and may need to be inferred from “ImagePositionPatient” attributes of consecutive slices).
coordinate_system (highdicom.CoordinateSystemNames | str) – Coordinate system ("PATIENT" or "SLIDE") in which the volume is defined.
frame_of_reference_uid (Union[str, None], optional) – Frame of reference UID, if known. Corresponds to DICOM attribute FrameOfReferenceUID.
channels (dict[int | str | ChannelDescriptor, Sequence[int | str | float | Enum]] | None, optional) – Specification of channels of the array. Channels are additional dimensions of the array beyond the three spatial dimensions. For each such additional dimension (if any), an item in this dictionary is required to specify the meaning. The dictionary key specifies the meaning of the dimension, which must be either an instance of highdicom.ChannelDescriptor, specifying a DICOM tag whose attribute describes the channel, a DICOM keyword describing a DICOM attribute, or an integer representing the tag of a DICOM attribute. The corresponding item of the dictionary is a sequence giving the value of the relevant attribute at each index in the array. The insertion order of the dictionary is significant as it is used to match items to the corresponding dimensions of the array (the first item in the dictionary corresponds to axis 3 of the array and so on).
- Returns
New Volume using the given array and DICOM attributes.
- Return type
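The geometry described above can be assembled directly from the attributes: the direction cosines scaled by the spacings fill the first three columns of the affine, and the image position fills the translation column. A hedged numpy sketch, assuming frames stack along the cross product of the row and column cosines (the right-handed convention described above); this is an illustration, not highdicom's exact implementation:

```python
import numpy as np

def affine_from_attributes(image_position, image_orientation,
                           pixel_spacing, spacing_between_slices):
    """Build a 4x4 affine mapping (slice, row, col) indices to LPS coords."""
    row_cosine = np.array(image_orientation[:3], dtype=float)  # along increasing column index
    col_cosine = np.array(image_orientation[3:], dtype=float)  # along increasing row index
    slice_dir = np.cross(row_cosine, col_cosine)               # right-handed stacking direction
    affine = np.eye(4)
    affine[:3, 0] = slice_dir * spacing_between_slices  # slice index axis
    affine[:3, 1] = col_cosine * pixel_spacing[0]       # row index axis (spacing between rows)
    affine[:3, 2] = row_cosine * pixel_spacing[1]       # column index axis (spacing between columns)
    affine[:3, 3] = image_position                      # translation to first voxel center
    return affine

aff = affine_from_attributes(
    image_position=[10.0, 20.0, 30.0],
    image_orientation=[1.0, 0.0, 0.0, 0.0, 1.0, 0.0],
    pixel_spacing=[0.5, 0.5],
    spacing_between_slices=2.0,
)
# Voxel index (1, 0, 0) is one slice (2 mm) along z from the origin
print(aff @ np.array([1.0, 0.0, 0.0, 1.0]))  # [10. 20. 32.  1.]
```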
- classmethod from_components(array, *, spacing, coordinate_system, position=None, center_position=None, direction=None, patient_orientation=None, frame_of_reference_uid=None, channels=None)
Construct a Volume from components of the affine matrix.
- Parameters
array (numpy.ndarray) – Three dimensional array of voxel data.
spacing (Sequence[float]) – Spacing between pixel centers in the frame of reference coordinate system along each of the dimensions of the array. Should be either a sequence of length 3 to give the values along the three spatial dimensions, or a single float value to be shared by all spatial dimensions.
coordinate_system (highdicom.CoordinateSystemNames | str) – Coordinate system ("PATIENT" or "SLIDE") in which the volume is defined.
position (Sequence[float]) – Sequence of three floats giving the position in the frame of reference coordinate system of the center of the voxel at location (0, 0, 0).
center_position (Sequence[float]) – Sequence of three floats giving the position in the frame of reference coordinate system of the center of the volume. Note that the center of the volume will not lie at the center of any particular voxel unless the shape of the array is odd along all three spatial dimensions. Incompatible with position.
direction (Sequence[float]) – Direction matrix for the volume. The columns of the direction matrix are orthogonal unit vectors that give the direction in the frame of reference space of the increasing direction of each axis of the array. This matrix may be passed either as a 3x3 matrix or a flattened 9 element array (first row, second row, third row).
patient_orientation (Union[str, Sequence[Union[str, highdicom.PatientOrientationValuesBiped]]]) – Patient orientation used to define an axis-aligned direction matrix, as either a sequence of three highdicom.PatientOrientationValuesBiped values, or a string such as "FPL" using the same characters. Incompatible with direction.
frame_of_reference_uid (Union[str, None], optional) – Frame of reference UID for the frame of reference, if known.
channels (dict[int | str | ChannelDescriptor, Sequence[int | str | float | Enum]] | None, optional) – Specification of channels of the array. Channels are additional dimensions of the array beyond the three spatial dimensions. For each such additional dimension (if any), an item in this dictionary is required to specify the meaning. The dictionary key specifies the meaning of the dimension, which must be either an instance of highdicom.ChannelDescriptor, specifying a DICOM tag whose attribute describes the channel, a DICOM keyword describing a DICOM attribute, or an integer representing the tag of a DICOM attribute. The corresponding item of the dictionary is a sequence giving the value of the relevant attribute at each index in the array. The insertion order of the dictionary is significant as it is used to match items to the corresponding dimensions of the array (the first item in the dictionary corresponds to axis 3 of the array and so on).
- Returns
Volume constructed from the provided components.
- Return type
- geometry_equal(other, tol=1e-05)
Determine whether two volumes have the same geometry.
- Parameters
other (Union[highdicom.Volume, highdicom.VolumeGeometry]) – Volume or volume geometry to which this volume should be compared.
tol (Union[float, None], optional) – Absolute tolerance used to determine equality of affine matrices. If None, affine matrices must match exactly.
- Returns
True if the geometries match (up to the specified tolerance). False otherwise.
- Return type
bool
- get_affine(output_convention)
Get affine matrix in a particular convention.
Note that DICOM uses the left-posterior-superior (“LPS”) convention relative to the patient, in which the increasing direction of the first axis moves from the patient’s right to left, the increasing direction of the second axis moves from the patient’s anterior to posterior, and the increasing direction of the third axis moves from the patient’s inferior (foot) to superior (head). In highdicom, this is represented by the string "LPH" (left-posterior-head). Since highdicom volumes follow this convention, the affine matrix is stored internally as a matrix that maps array indices into coordinates along these three axes.
This method allows you to get the affine matrix that maps the same array indices into coordinates in a frame-of-reference that uses a different convention. Another convention in widespread use is the "RAH" (aka “RAS”) convention used by the NIfTI file format and many neuro-image analysis tools.
- Parameters
output_convention (str | Sequence[str | highdicom.PatientOrientationValuesBiped] | None) – Description of a convention for defining a patient-relative frame-of-reference, consisting of three directions, either L or R, either A or P, and either F or H, in any order. May be passed either as a tuple of highdicom.PatientOrientationValuesBiped values or the single-letter codes representing them, or the same characters as a single three-character string, such as "RAH".
- Returns
4x4 affine transformation matrix mapping augmented voxel indices to frame-of-reference coordinates defined by the chosen convention.
- Return type
numpy.ndarray
- get_channel(*, keepdims=False, **kwargs)
Get a volume corresponding to a particular channel along one or more dimensions.
- Parameters
keepdims (bool) – Whether to keep a singleton dimension in the output volume.
kwargs (dict[str, str | int | float | Enum]) – Keyword arguments where each keyword is the identifier of a channel dimension present in the volume and the value is the channel value to select along that dimension.
- Returns
Volume representing a single channel of the original volume.
- Return type
- get_channel_values(channel_identifier)
Get channel values along a particular dimension.
- Parameters
channel_identifier (highdicom.ChannelDescriptor | int | str) – Identifier of a channel within the image.
- Returns
Copy of channel values along the selected dimension.
- Return type
list[str | int | float | Enum]
- get_closest_patient_orientation()
Get patient orientation codes that best represent the affine.
Note that this is not valid if the volume is not defined within the patient coordinate system.
- Returns
Tuple giving the closest patient orientation.
- Return type
Tuple[highdicom.enum.PatientOrientationValuesBiped, highdicom.enum.PatientOrientationValuesBiped, highdicom.enum.PatientOrientationValuesBiped]
- get_geometry()
Get geometry for this volume.
- Returns
Geometry object matching this volume.
- Return type
- get_pixel_measures()
Get pixel measures sequence for the volume.
This assumes that the volume is encoded in a DICOM file with frames down axis 0, rows stacked down axis 1, and columns stacked down axis 2.
- Returns
Pixel measures sequence for the volume.
- Return type
- get_plane_orientation()
Get plane orientation sequence for the volume.
This assumes that the volume is encoded in a DICOM file with frames down axis 0, rows stacked down axis 1, and columns stacked down axis 2.
- Returns
Plane orientation sequence.
- Return type
- get_plane_position(plane_index)
Get plane position of a given plane.
- Parameters
plane_index (int) – Zero-based plane index (down the first dimension of the array).
- Returns
Plane position of the plane.
- Return type
- get_plane_positions()
Get plane positions of all planes in the volume.
This assumes that the volume is encoded in a DICOM file with frames down axis 0, rows stacked down axis 1, and columns stacked down axis 2.
- Returns
Plane positions of all planes (stacked down axis 0 of the volume).
- Return type
- property handedness: AxisHandedness
Axis handedness of the volume.
This indicates whether the volume’s three spatial axes form a right-handed or left-handed coordinate system in the frame-of-reference space.
- Type
- Return type
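Handedness can be read off the sign of the determinant of the volume's direction matrix: positive for a right-handed system, negative for a left-handed one. A small numpy illustration (the helper name is illustrative, not part of highdicom):

```python
import numpy as np

def handedness(direction):
    """Classify a 3x3 direction matrix by the sign of its determinant.

    For a matrix of orthogonal unit column vectors the determinant is
    +1 (right-handed) or -1 (left-handed).
    """
    return 'RIGHT_HANDED' if np.linalg.det(direction) > 0 else 'LEFT_HANDED'

print(handedness(np.eye(3)))                  # RIGHT_HANDED
print(handedness(np.diag([1.0, 1.0, -1.0])))  # LEFT_HANDED: one axis flipped
```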
- property inverse_affine: ndarray
4x4 inverse affine transformation matrix
Inverse of the affine matrix. This matrix maps a position in the LPS frame of reference coordinate space into an index into the array.
- Type
numpy.ndarray
- Return type
numpy.ndarray
- map_indices_to_reference(indices)
Transform image pixel indices to frame-of-reference coordinates.
- Parameters
indices (numpy.ndarray) – Array of zero-based array indices. Array of integer values with shape (n, 3), where n is the number of indices and the three columns contain the indices down axes 0, 1, and 2 of the array, respectively.
- Returns
Array of (x, y, z) coordinates in the coordinate system defined by the frame of reference. Array has shape (n, 3), where n is the number of coordinates; the first column represents the x offsets, the second the y offsets, and the third the z offsets.
- Return type
numpy.ndarray
- Raises
ValueError – When indices has incorrect shape.
- map_reference_to_indices(coordinates, round_output=False, check_bounds=False)
Transform frame of reference coordinates into array indices.
- Parameters
coordinates (numpy.ndarray) – Array of (x, y, z) coordinates in the coordinate system defined by the frame of reference. Array has shape (n, 3), where n is the number of coordinates; the first column represents the X offsets, the second the Y offsets, and the third the Z offsets.
round_output (bool, optional) – Whether to round the output to the nearest voxel. If True, the output will have integer datatype. If False (the default), the returned array will have floating point data type and sub-voxel precision.
check_bounds (bool, optional) – Whether to check that the returned indices lie within the bounds of the array. If True, a RuntimeError will be raised if the resulting array indices (before rounding) lie outside the bounds of the array.
- Returns
Array of zero-based array indices at pixel resolution. Array of integer or floating point values with shape (n, 3), where n is the number of indices. The datatype of the array will be integer if round_output is True, or float if round_output is False (the default).
- Return type
numpy.ndarray
Note
The returned pixel indices may be negative if coordinates fall outside of the array.
- Raises
ValueError – When indices has incorrect shape.
RuntimeError – If check_bounds is True and any map coordinate lies outside the bounds of the array.
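Conceptually, both mapping methods apply the volume's 4x4 affine matrix (or its inverse) to homogeneous coordinates. The following is a minimal numpy sketch of that relationship using a hypothetical axis-aligned volume; the helper functions are illustrative and are not highdicom's internal code:

```python
import numpy as np

# Hypothetical volume: axis-aligned, spacings (2.0, 0.5, 0.5) mm, with the
# center of voxel (0, 0, 0) at position (10.0, 20.0, 30.0).
spacing = np.array([2.0, 0.5, 0.5])
affine = np.eye(4)
affine[:3, :3] = np.diag(spacing)
affine[:3, 3] = [10.0, 20.0, 30.0]

def indices_to_reference(affine, indices):
    # Append a column of ones (homogeneous coordinates) and apply the affine.
    n = indices.shape[0]
    augmented = np.hstack([indices, np.ones((n, 1))])
    return (affine @ augmented.T).T[:, :3]

def reference_to_indices(affine, coordinates):
    # The reverse mapping applies the inverse affine.
    n = coordinates.shape[0]
    augmented = np.hstack([coordinates, np.ones((n, 1))])
    return (np.linalg.inv(affine) @ augmented.T).T[:, :3]

coords = indices_to_reference(affine, np.array([[0, 0, 0], [1, 2, 3]]))
# voxel (0, 0, 0) maps to the volume's position, (10.0, 20.0, 30.0)
```

Rounding and bounds checking, as controlled by round_output and check_bounds, would be applied on top of this inverse mapping.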
- match_geometry(other, *, mode=PadModes.CONSTANT, constant_value=0.0, per_channel=False, tol=1e-05)
Match the geometry of this volume to another.
This performs a combination of permuting, padding and cropping, and flipping (in that order) such that the geometry of this volume matches that of other. Notably, the voxels are not resampled. If the geometry cannot be matched using these operations, a RuntimeError is raised.
- Parameters
other (Union[highdicom.Volume, highdicom.VolumeGeometry]) – Volume or volume geometry to which this volume should be matched.
- Returns
New volume formed by matching the geometry of this volume to that of
other
.- Return type
Self
- Raises
RuntimeError – If the geometries cannot be matched without resampling the array.
- property nearest_center_indices: tuple[int, int, int]
Array index of center of the volume, rounded down to the nearest integer value.
Results are discrete zero-based array indices.
- Return type
Tuple[int, int, int]
- Returns
x (int) – First array index of the volume center.
y (int) – Second array index of the volume center.
z (int) – Third array index of the volume center.
- normalize_mean_std(per_channel=True, output_mean=0.0, output_std=1.0)
Normalize the intensities using the mean and variance.
By default, the resulting volume has zero mean and unit variance; other values may be specified via the output_mean and output_std parameters.
- Parameters
per_channel (bool, optional) – If True (the default), each channel along each channel dimension is normalized by its own mean and variance. If False, all channels are normalized together using the overall mean and variance.
output_mean (float, optional) – The mean value of the output array (or channel), after scaling.
output_std (float, optional) – The standard deviation of the output array (or channel), after scaling.
- Returns
Volume with normalized intensities. Note that the dtype will be promoted to floating point.
- Return type
- normalize_min_max(output_min=0.0, output_max=1.0, per_channel=False)
Normalize the intensities by mapping the minimum and maximum values to a fixed output range.
Other pixel values are scaled linearly within this range.
- Parameters
output_min (float, optional) – The value to which the minimum intensity is mapped.
output_max (float, optional) – The value to which the maximum intensity is mapped.
per_channel (bool, optional) – If True, each channel along each channel dimension is normalized by its own min and max. If False (the default), all channels are normalized together using the overall min and max.
- Returns
Volume with normalized intensities. Note that the dtype will be promoted to floating point.
- Return type
- property number_of_channel_dimensions: int
Number of channel dimensions.
- Type
int
- Return type
int
- pad(pad_width, *, mode=PadModes.CONSTANT, constant_value=0.0, per_channel=False)
Pad volume along the three spatial dimensions.
- Parameters
pad_width (Union[int, Sequence[int], Sequence[Sequence[int]]]) –
Values to pad the array. Takes the same form as numpy.pad(). May be:
A single integer value, which results in that many voxels being added to the beginning and end of all three spatial dimensions, or
A sequence of two values in the form [before, after], which results in 'before' voxels being added to the beginning of each of the three spatial dimensions, and 'after' voxels being added to the end of each of the three spatial dimensions, or
A nested sequence of integers of the form [[pad1], [pad2], [pad3]], in which separate padding values are supplied for each of the three spatial axes and used to pad before and after along those axes, or
A nested sequence of integers in the form [[before1, after1], [before2, after2], [before3, after3]], in which separate values are supplied for the before and after padding of each of the three spatial dimensions.
In all cases, all integer values must be non-negative.
mode (Union[highdicom.PadModes, str], optional) – Mode to use to pad the array. See highdicom.PadModes for options.
constant_value (Union[float, Sequence[float]], optional) – Value used to pad when mode is "CONSTANT". With other pad modes, this argument is ignored.
per_channel (bool, optional) – For padding modes that involve calculation of image statistics to determine the padding value (i.e. MINIMUM, MAXIMUM, MEAN, MEDIAN), pad each channel separately using the value calculated from that channel alone (rather than the statistics of the entire array). For other padding modes, this argument makes no difference. This should not be True if the image does not have a channel dimension.
- Returns
Volume with padding applied.
- Return type
- pad_or_crop_to_spatial_shape(spatial_shape, *, mode=PadModes.CONSTANT, constant_value=0.0, per_channel=False)
Pad and/or crop volume to given spatial shape.
For each dimension where padding is required, the volume is padded symmetrically, placing the original array at the center of the output array, to achieve the given shape. If this requires an odd number of elements to be added along a certain dimension, one more element is placed at the end of the array than at the start.
For each dimension where cropping is required, center cropping is used.
- Parameters
spatial_shape (Sequence[int]) – Sequence of three integers specifying the spatial shape to pad or crop to.
mode (highdicom.PadModes, optional) – Mode to use to pad the array, if padding is required. See highdicom.PadModes for options.
constant_value (Union[float, Sequence[float]], optional) – Value used to pad when mode is "CONSTANT". If per_channel is True, a sequence whose length is equal to the number of channels may be passed, and each value will be used for the corresponding channel. With other pad modes, this argument is ignored.
per_channel (bool, optional) – For padding modes that involve calculation of image statistics to determine the padding value (i.e. MINIMUM, MAXIMUM, MEAN, MEDIAN), pad each channel separately using the value calculated from that channel alone (rather than the statistics of the entire array). For other padding modes, this argument makes no difference. This should not be True if the image does not have a channel dimension.
- Returns
Volume with padding and/or cropping applied.
- Return type
Self
- pad_to_spatial_shape(spatial_shape, *, mode=PadModes.CONSTANT, constant_value=0.0, per_channel=False)
Pad volume to given spatial shape.
The volume is padded symmetrically, placing the original array at the center of the output array, to achieve the given shape. If this requires an odd number of elements to be added along a certain dimension, one more element is placed at the end of the array than at the start.
- Parameters
spatial_shape (Sequence[int]) – Sequence of three integers specifying the spatial shape to pad to. This shape must be no smaller than the existing shape along any of the three spatial dimensions.
mode (highdicom.PadModes, optional) – Mode to use to pad the array. See highdicom.PadModes for options.
constant_value (Union[float, Sequence[float]], optional) – Value used to pad when mode is "CONSTANT". If per_channel is True, a sequence whose length is equal to the number of channels may be passed, and each value will be used for the corresponding channel. With other pad modes, this argument is ignored.
per_channel (bool, optional) – For padding modes that involve calculation of image statistics to determine the padding value (i.e. MINIMUM, MAXIMUM, MEAN, MEDIAN), pad each channel separately using the value calculated from that channel alone (rather than the statistics of the entire array). For other padding modes, this argument makes no difference. This should not be True if the image does not have a channel dimension.
- Returns
Volume with padding applied.
- Return type
Self
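The symmetric padding rule described above (extra voxel at the end for odd growth) can be sketched with numpy.pad. The helper below is illustrative only and is not highdicom's implementation:

```python
import numpy as np

def symmetric_pad_widths(current_shape, target_shape):
    # Split the required growth as evenly as possible along each dimension,
    # placing the extra voxel at the end when the difference is odd.
    widths = []
    for cur, tgt in zip(current_shape, target_shape):
        if tgt < cur:
            raise ValueError('Target shape must be no smaller than current shape.')
        diff = tgt - cur
        widths.append((diff // 2, diff - diff // 2))
    return widths

arr = np.zeros((3, 4, 5))
widths = symmetric_pad_widths(arr.shape, (5, 4, 8))
# widths == [(1, 1), (0, 0), (1, 2)]: the odd growth of 3 puts 1 voxel
# before and 2 voxels after along the last dimension
padded = np.pad(arr, widths, mode='constant', constant_values=0.0)
```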
- permute_channel_axes(channel_identifiers)
Create a new volume by permuting the channel axes.
- Parameters
channel_identifiers (Sequence[pydicom.BaseTag | int | str | highdicom.ChannelDescriptor]) – List of channel identifiers matching those in the volume but in an arbitrary order.
- Returns
New volume with channel axes permuted in the provided order.
- Return type
- permute_channel_axes_by_index(indices)
Create a new volume by permuting the channel axes.
- Parameters
indices (Sequence[int]) – List of integers containing values in the range 0 (inclusive) to the number of channel dimensions (exclusive) in some order, used to permute the channels. A value of i corresponds to the channel given by volume.channel_identifiers[i].
- Returns
New volume with channel axes permuted in the provided order.
- Return type
- permute_spatial_axes(indices)
Create a new volume by permuting the spatial axes.
- Parameters
indices (Sequence[int]) – List of three integers containing the values 0, 1 and 2 in some order. Note that you may not change the position of the channel axis (if present).
- Returns
New volume with spatial axes permuted in the provided order.
- Return type
- property physical_extent: tuple[float, float, float]
Side lengths of the volume in millimeters.
- Type
tuple[float, float, float]
- Return type
Tuple[float, float, float]
- property physical_volume: float
Total volume in cubic millimeter.
- Type
float
- Return type
float
- property pixel_spacing: tuple[float, float]
Tuple[float, float]:
Within-plane pixel spacing in millimeter units. Two values (spacing between rows, spacing between columns), matching the format of the DICOM PixelSpacing attribute.
Assumes that frames are stacked down axis 0, rows down axis 1, and columns down axis 2 (the convention used to create volumes from images).
- Return type
Tuple[float, float]
- property position: tuple[float, float, float]
Tuple[float, float, float]:
Position in the frame of reference space of the center of voxel at indices (0, 0, 0).
- Return type
Tuple[float, float, float]
- random_flip_spatial(axes=(0, 1, 2))
Randomly flip the spatial axes of the array.
Note that this flips the array and updates the affine to reflect the flip.
- Parameters
axes (Union[int, Sequence[int]]) – Axis or list of axis indices that may be flipped. These should include only the spatial axes (0, 1, and/or 2). Each axis in this list is flipped in the output volume with probability 0.5.
- Returns
New volume with selected spatial axes randomly flipped.
- Return type
Self
- random_permute_spatial_axes(axes=(0, 1, 2))
Create a new geometry by randomly permuting the spatial axes.
- Parameters
axes (Optional[Sequence[int]]) – Sequence of 2 or 3 integers drawn from the values 0, 1 and 2 without repetition. Only axes in this subset will be included when generating random permutations; any axis not in the sequence will remain in its original position.
- Returns
New geometry with spatial axes permuted randomly.
- Return type
Self
- random_spatial_crop(spatial_shape)
Create a random crop of a certain shape from the volume.
- Parameters
spatial_shape (Sequence[int]) – Sequence of three integers specifying the spatial shape to crop to.
- Returns
New volume formed by cropping the volume.
- Return type
Self
- property shape: tuple[int, ...]
Shape of the underlying array.
Includes any channel dimensions.
- Type
Tuple[int, …]
- Return type
Tuple[int, ...]
- property spacing: tuple[float, float, float]
Tuple[float, float, float]:
Pixel spacing in millimeter units for the three spatial directions. Three values, one for each spatial dimension.
- Return type
Tuple[float, float, float]
- property spacing_between_slices: float
float:
Spacing between consecutive slices in millimeter units.
Assumes that frames are stacked down axis 0, rows down axis 1, and columns down axis 2 (the convention used to create volumes from images).
- Return type
float
- spacing_vectors()
Get the vectors along the three array dimensions.
Note that these vectors are not normalized; each has length equal to the spacing along the corresponding dimension.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray]
- Returns
numpy.ndarray – Vector between voxel centers along the increasing first axis. 1D NumPy array.
numpy.ndarray – Vector between voxel centers along the increasing second axis. 1D NumPy array.
numpy.ndarray – Vector between voxel centers along the increasing third axis. 1D NumPy array.
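As a sketch of how these quantities relate to the affine (using a hypothetical affine, not highdicom's internals): the spacing vectors are the columns of the top-left 3x3 submatrix, the per-axis spacings are the norms of those columns, and the unit vectors are the normalized columns:

```python
import numpy as np

# Hypothetical affine: axes permuted relative to the frame of reference,
# with per-axis spacings of 2.0, 0.5, and 0.5 millimeters.
affine = np.array([
    [0.0, 0.5, 0.0, -10.0],
    [0.0, 0.0, 0.5,  20.0],
    [2.0, 0.0, 0.0,  30.0],
    [0.0, 0.0, 0.0,   1.0],
])

rotation = affine[:3, :3]
spacing_vectors = [rotation[:, i] for i in range(3)]  # length equals spacing
spacing = np.linalg.norm(rotation, axis=0)            # column norms
unit_vectors = [v / np.linalg.norm(v) for v in spacing_vectors]
```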
- property spatial_shape: tuple[int, int, int]
Spatial shape of the array.
Does not include the channel dimensions.
- Type
Tuple[int, int, int]
- Return type
Tuple[int, int, int]
- squeeze_channel(channel_descriptors=None)
Remove any singleton channel axes.
- Parameters
channel_descriptors (Sequence[str | int | highdicom.ChannelDescriptor] | None) – Identifiers of channels to squeeze. If None, squeeze all singleton channels. Otherwise squeeze only the specified channels and raise an error if any cannot be squeezed.
- Returns
Volume with channel axis removed.
- Return type
- swap_spatial_axes(axis_1, axis_2)
Swap two spatial axes of the array.
- Parameters
axis_1 (int) – Spatial axis index (0, 1 or 2) to swap with axis_2.
axis_2 (int) – Spatial axis index (0, 1 or 2) to swap with axis_1.
- Returns
New volume with spatial axes swapped as requested.
- Return type
Self
- to_patient_orientation(patient_orientation)
Rearrange the array to a given orientation.
The resulting volume is formed from this volume through a combination of axis permutations and flips of the spatial axes. Its patient orientation will be as close to the desired orientation as can be achieved with these operations alone (and in particular without resampling the array).
Note that this is not valid if the volume is not defined within the patient coordinate system.
- Parameters
patient_orientation (Union[str, Sequence[Union[str, highdicom.PatientOrientationValuesBiped]]]) – Desired patient orientation, as either a sequence of three highdicom.PatientOrientationValuesBiped values, or a string such as "FPL" using the same characters.
- Returns
New volume with the requested patient orientation.
- Return type
Self
- unit_vectors()
Get the normalized vectors along the three array dimensions.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray]
- Returns
numpy.ndarray – Unit vector along the increasing first axis. 1D NumPy array.
numpy.ndarray – Unit vector along the increasing second axis. 1D NumPy array.
numpy.ndarray – Unit vector along the increasing third axis. 1D NumPy array.
- property voxel_volume: float
The volume of a single voxel in cubic millimeters.
- Type
float
- Return type
float
- with_array(array, channels=None)
Get a new volume using a different array.
The spatial and other metadata will be copied from this volume. The original volume will be unaltered.
By default, the new volume will have the same channels (if any) as the existing volume. Different channels may be specified by passing the ‘channels’ parameter.
- Parameters
array (np.ndarray) – New 3D or 4D array of voxel data. The spatial shape must match the existing array, but the presence and number of channels and/or the voxel datatype may differ.
channels (dict[int | str | ChannelDescriptor, Sequence[int | str | float | Enum]] | None, optional) – Specification of channels as used by the constructor. If not specified, the channels are assumed to match those in the original volume and therefore the array must have the same shape as the array of the original volume.
- Returns
New volume using the given array and the metadata of this volume.
- Return type
- class highdicom.VolumeGeometry(affine, spatial_shape, coordinate_system, frame_of_reference_uid=None)
Bases:
_VolumeBase
Class encapsulating the geometry of a volume.
Unlike the similar highdicom.Volume, items of this class do not contain voxel data for the underlying volume, just a description of the geometry.
- Parameters
affine (numpy.ndarray) – 4 x 4 affine matrix representing the transformation from pixel indices (slice index, row index, column index) to the frame-of-reference coordinate system. The top left 3 x 3 matrix should be a scaled orthogonal matrix representing the rotation and scaling. The top right 3 x 1 vector represents the translation component. The last row should have value [0, 0, 0, 1].
spatial_shape (Sequence[int]) – Number of voxels in the (implied) volume along the three spatial dimensions.
coordinate_system (highdicom.CoordinateSystemNames | str) – Coordinate system ("PATIENT" or "SLIDE") in which the volume is defined.
frame_of_reference_uid (Optional[str], optional) – Frame of reference UID for the frame of reference, if known.
- property affine: ndarray
4x4 affine transformation matrix
This matrix maps an index of the array into a position in the LPS frame-of-reference coordinate space.
- Type
numpy.ndarray
- Return type
numpy.ndarray
- property center_indices: tuple[float, float, float]
Array index of center of the volume, as floats with sub-voxel precision.
Results are continuous zero-based array indices.
- Return type
Tuple[float, float, float]
- Returns
x (float) – First array index of the volume center.
y (float) – Second array index of the volume center.
z (float) – Third array index of the volume center.
- property center_position: tuple[float, float, float]
Get frame-of-reference coordinates of the volume’s center.
- Return type
Tuple[float, float, float]
- Returns
x (float) – Frame of reference x coordinate of the volume center.
y (float) – Frame of reference y coordinate of the volume center.
z (float) – Frame of reference z coordinate of the volume center.
- property coordinate_system: CoordinateSystemNames
highdicom.CoordinateSystemNames | str: Coordinate system ("PATIENT" or "SLIDE") in which the volume is defined.
- Return type
- copy()
Get an unaltered copy of the geometry.
- Returns
Copy of the original geometry.
- Return type
- crop_to_spatial_shape(spatial_shape)
Center-crop volume to a given spatial shape.
- Parameters
spatial_shape (Sequence[int]) – Sequence of three integers specifying the spatial shape to crop to. This shape must be no larger than the existing shape along any of the three spatial dimensions.
- Returns
Volume with cropping applied.
- Return type
Self
- property direction: ndarray
numpy.ndarray:
Direction matrix for the volume. The columns of the direction matrix are orthogonal unit vectors that give the direction in the frame of reference space of the increasing direction of each axis of the array.
- Return type
numpy.ndarray
- property direction_cosines: tuple[float, float, float, float, float, float]
Tuple[float, float, float, float, float, float]:
Tuple of 6 floats giving the direction cosines of the vector along the rows and the vector along the columns, matching the format of the DICOM Image Orientation Patient and Image Orientation Slide attributes.
Assumes that frames are stacked down axis 0, rows down axis 1, and columns down axis 2 (the convention used to create volumes from images).
- Return type
Tuple[float, float, float, float, float, float]
- ensure_handedness(handedness, *, flip_axis=None, swap_axes=None)
Manipulate the volume if necessary to ensure a given handedness.
If the volume already has the specified handedness, it is returned unaltered.
If the volume does not meet the requirement, the volume is manipulated using a user specified operation to meet the requirement. The two options are reversing the direction of a single axis (“flipping”) or swapping the position of two axes.
- Parameters
handedness (highdicom.AxisHandedness) – Handedness to ensure.
flip_axis (Union[int, None], optional) – Specification of a spatial axis index (0, 1, or 2) to flip if required to meet the given handedness requirement.
swap_axes (Union[Sequence[int], None], optional) – Specification of a sequence of two spatial axis indices (each being 0, 1, or 2) to swap if required to meet the given handedness requirement.
- Returns
New volume with corrected handedness.
- Return type
Self
Note
Either flip_axis or swap_axes must be provided (and not both) to specify the operation to perform to correct the handedness (if required).
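The handedness property of a direction matrix reduces to the sign of its determinant. A minimal numpy sketch (the helper function and string values are illustrative, not highdicom's API):

```python
import numpy as np

def handedness(direction):
    # The columns of a direction matrix are orthogonal unit vectors, so the
    # determinant is +1 for a right-handed system and -1 for a left-handed one.
    return 'RIGHT_HANDED' if np.linalg.det(direction) > 0 else 'LEFT_HANDED'

right = np.eye(3)
left = np.diag([1.0, 1.0, -1.0])  # flipping one axis reverses handedness
```

This is why a single flip (or a single swap of two axes) always suffices to correct the handedness: each such operation negates the determinant.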
- flip_spatial(axes)
Flip the spatial axes of the array.
Note that this flips the array and updates the affine to reflect the flip.
- Parameters
axes (Union[int, Sequence[int]]) – Axis or list of axis indices that should be flipped. These should include only the spatial axes (0, 1, and/or 2).
- Returns
New volume with spatial axes flipped as requested.
- Return type
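The affine update that accompanies a flip can be sketched as follows: flipping array axis k negates the corresponding affine column and moves the origin to the old last voxel along that axis, so every voxel keeps its frame-of-reference position. The helper below is a hypothetical sketch, not highdicom's internal code:

```python
import numpy as np

def flip_axis_affine(affine, axis, spatial_shape):
    # Shift the origin to the center of the last voxel along the flipped axis,
    # then negate the corresponding column of the affine.
    new_affine = affine.copy()
    new_affine[:3, 3] += affine[:3, axis] * (spatial_shape[axis] - 1)
    new_affine[:3, axis] *= -1.0
    return new_affine

affine = np.eye(4)
affine[:3, 3] = [5.0, 0.0, 0.0]
flipped = flip_axis_affine(affine, axis=0, spatial_shape=(4, 4, 4))

# Index 3 along axis 0 of the original volume maps to the same
# frame-of-reference point as index 0 of the flipped volume.
```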
- property frame_of_reference_uid: highdicom.uid.UID | None
Frame of reference UID.
- Type
Union[highdicom.UID, None]
- Return type
Union[highdicom.UID, None]
- classmethod from_attributes(*, image_position, image_orientation, rows, columns, pixel_spacing, spacing_between_slices, number_of_frames, coordinate_system, frame_of_reference_uid=None)
Create a volume from DICOM attributes.
The resulting geometry assumes that the frames of the image whose attributes are used are stacked down axis 0, the rows down axis 1, and the columns down axis 2. Furthermore, frames will be stacked such that the resulting geometry forms a right-handed coordinate system in the frame-of-reference coordinate system.
- Parameters
image_position (Sequence[float]) – Position in the frame of reference space of the center of the top left pixel of the image. Corresponds to the DICOM attribute “ImagePositionPatient”. Should be a sequence of length 3.
image_orientation (Sequence[float]) – Cosines of the row direction (first triplet: horizontal, left to right, increasing column index) and the column direction (second triplet: vertical, top to bottom, increasing row index) expressed in the three-dimensional patient or slide coordinate system defined by the frame of reference. Corresponds to the DICOM attribute “ImageOrientationPatient”.
rows (int) – Number of rows in each frame.
columns (int) – Number of columns in each frame.
pixel_spacing (Sequence[float]) – Spacing between pixels in millimeter unit along the column direction (first value: spacing between rows, vertical, top to bottom, increasing row index) and the row direction (second value: spacing between columns: horizontal, left to right, increasing column index). Corresponds to DICOM attribute “PixelSpacing”.
spacing_between_slices (float) – Spacing between slices in millimeter units in the frame of reference coordinate system space. Corresponds to the DICOM attribute “SpacingBetweenSlices” (however, this may not be present in many images and may need to be inferred from “ImagePositionPatient” attributes of consecutive slices).
number_of_frames (int) – Number of frames in the volume.
coordinate_system (highdicom.CoordinateSystemNames | str) – Coordinate system ("PATIENT" or "SLIDE") in which the volume is defined.
frame_of_reference_uid (Union[str, None], optional) – Frame of reference UID, if known. Corresponds to DICOM attribute FrameOfReferenceUID.
- Returns
New Volume using the given array and DICOM attributes.
- Return type
- classmethod from_components(spatial_shape, *, spacing, coordinate_system, position=None, center_position=None, direction=None, patient_orientation=None, frame_of_reference_uid=None)
Construct a VolumeGeometry from components of the affine matrix.
- Parameters
spatial_shape (Sequence[int]) – Number of voxels in the volume along the three spatial dimensions.
spacing (Sequence[float]) – Spacing between pixel centers in the frame of reference coordinate system along each of the dimensions of the array. Should be either a sequence of length 3 to give the values along the three spatial dimensions, or a single float value to be shared by all spatial dimensions.
coordinate_system (highdicom.CoordinateSystemNames | str) – Coordinate system ("PATIENT" or "SLIDE") in which the volume is defined.
position (Sequence[float]) – Sequence of three floats giving the position in the frame of reference coordinate system of the center of the voxel at location (0, 0, 0).
center_position (Sequence[float]) – Sequence of three floats giving the position in the frame of reference coordinate system of the center of the volume. Note that the center of the volume will not lie at the center of any particular voxel unless the shape of the array is odd along all three spatial dimensions. Incompatible with position.
direction (Sequence[float]) – Direction matrix for the volume. The columns of the direction matrix are orthogonal unit vectors that give the direction in the frame of reference space of the increasing direction of each axis of the array. This matrix may be passed either as a 3x3 matrix or a flattened 9 element array (first row, second row, third row).
patient_orientation (Union[str, Sequence[Union[str, highdicom.PatientOrientationValuesBiped]]]) – Patient orientation used to define an axis-aligned direction matrix, as either a sequence of three highdicom.PatientOrientationValuesBiped values, or a string such as "FPL" using the same characters. Incompatible with direction.
frame_of_reference_uid (Union[str, None], optional) – Frame of reference UID for the frame of reference, if known.
- Returns
Volume constructed from the provided components.
- Return type
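The assembly of an affine from these components can be sketched with numpy (hypothetical values; this is an illustration of the geometry, not highdicom's internal code): the direction columns are scaled by the spacing along each axis, and the position becomes the translation column.

```python
import numpy as np

position = np.array([-10.0, 20.0, 30.0])
spacing = np.array([2.0, 0.5, 0.5])
direction = np.eye(3)  # axis-aligned volume

affine = np.eye(4)
affine[:3, :3] = direction * spacing  # scales column j by spacing[j]
affine[:3, 3] = position
```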
- geometry_equal(other, tol=1e-05)
Determine whether two volumes have the same geometry.
- Parameters
other (Union[highdicom.Volume, highdicom.VolumeGeometry]) – Volume or volume geometry to which this volume should be compared.
tol (Union[float, None], optional) – Absolute tolerance used to determine equality of affine matrices. If None, affine matrices must match exactly.
- Returns
True if the geometries match (up to the specified tolerance). False otherwise.
- Return type
bool
- get_affine(output_convention)
Get affine matrix in a particular convention.
Note that DICOM uses the left-posterior-superior ("LPS") convention relative to the patient, in which the increasing direction of the first axis moves from the patient's right to left, the increasing direction of the second axis moves from the patient's anterior to posterior, and the increasing direction of the third axis moves from the patient's inferior (foot) to superior (head). In highdicom, this is represented by the string "LPH" (left-posterior-head). Since highdicom volumes follow this convention, the affine matrix is stored internally as a matrix that maps array indices into coordinates along these three axes.
This method allows you to get the affine matrix that maps the same array indices into coordinates in a frame of reference that uses a different convention. Another convention in widespread use is the "RAH" (aka "RAS") convention used by the NIfTI file format and many neuroimage analysis tools.
- Parameters
output_convention (str | Sequence[str | highdicom.PatientOrientationValuesBiped] | None) – Description of a convention for defining a patient-relative frame of reference consisting of three directions, either L or R, either A or P, and either F or H, in any order. May be passed either as a tuple of highdicom.PatientOrientationValuesBiped values or the single-letter codes representing them, or as the same characters in a single three-character string, such as "RAH".
- Returns
4x4 affine transformation matrix mapping augmented voxel indices to frame-of-reference coordinates defined by the chosen convention.
- Return type
numpy.ndarray
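For example, the convention change for output_convention="RAH" can be sketched with numpy (an illustration of the coordinate transform, not highdicom's internal code): since R = -L and A = -P while H is shared, converting an LPS ("LPH") affine to RAS ("RAH") negates the first two rows.

```python
import numpy as np

# Hypothetical LPS affine with identity orientation and a translation.
lps_affine = np.eye(4)
lps_affine[:3, 3] = [10.0, 20.0, 30.0]

# Negate the x (L -> R) and y (P -> A) rows; z (H) is unchanged.
flip = np.diag([-1.0, -1.0, 1.0, 1.0])
ras_affine = flip @ lps_affine
# The origin becomes (-10.0, -20.0, 30.0) in RAS coordinates.
```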
- get_closest_patient_orientation()
Get patient orientation codes that best represent the affine.
Note that this is not valid if the volume is not defined within the patient coordinate system.
- Returns
Tuple giving the closest patient orientation.
- Return type
Tuple[highdicom.enum.PatientOrientationValuesBiped, highdicom.enum.PatientOrientationValuesBiped, highdicom.enum.PatientOrientationValuesBiped]
- get_pixel_measures()
Get pixel measures sequence for the volume.
This assumes that the volume is encoded in a DICOM file with frames down axis 0, rows stacked down axis 1, and columns stacked down axis 2.
- Returns
Pixel measures sequence for the volume.
- Return type
- get_plane_orientation()
Get plane orientation sequence for the volume.
This assumes that the volume is encoded in a DICOM file with frames down axis 0, rows stacked down axis 1, and columns stacked down axis 2.
- Returns
Plane orientation sequence.
- Return type
- get_plane_position(plane_index)
Get plane position of a given plane.
- Parameters
plane_index (int) – Zero-based plane index (down the first dimension of the array).
- Returns
Plane position of the plane.
- Return type
- get_plane_positions()
Get plane positions of all planes in the volume.
This assumes that the volume is encoded in a DICOM file with frames down axis 0, rows stacked down axis 1, and columns stacked down axis 2.
- Returns
Plane positions of all planes (stacked down axis 0 of the volume).
- Return type
- property handedness: AxisHandedness
Axis handedness of the volume.
This indicates whether the volume’s three spatial axes form a right-handed or left-handed coordinate system in the frame-of-reference space.
- Type
- Return type
- property inverse_affine: ndarray
4x4 inverse affine transformation matrix
Inverse of the affine matrix. This matrix maps a position in the LPS frame of reference coordinate space into an index into the array.
- Type
numpy.ndarray
- Return type
numpy.ndarray
- map_indices_to_reference(indices)
Transform image pixel indices to frame-of-reference coordinates.
- Parameters
indices (numpy.ndarray) – Array of zero-based array indices. Array of integer values with shape
(n, 3)
, where n is the number of indices and the three columns correspond to the three spatial dimensions of the volume, in order.- Returns
Array of (x, y, z) coordinates in the coordinate system defined by the frame of reference. Array has shape
(n, 3)
, where n is the number of coordinates, the first column represents the x offsets, the second column represents the y offsets and the third column represents the z offsets- Return type
numpy.ndarray
- Raises
ValueError – When indices has incorrect shape.
- map_reference_to_indices(coordinates, round_output=False, check_bounds=False)
Transform frame of reference coordinates into array indices.
- Parameters
coordinates (numpy.ndarray) – Array of (x, y, z) coordinates in the coordinate system defined by the frame of reference. Array has shape
(n, 3)
, where n is the number of coordinates, the first column represents the X offsets, the second column represents the Y offsets and the third column represents the Z offsets- Return type
numpy.ndarray
- Returns
numpy.ndarray – Array of zero-based array indices at pixel resolution. Array of integer or floating point values with shape
(n, 3)
, where n is the number of indices. The datatype of the array will be integer ifround_output
is True, or float ifround_output
is False (the default).round_output (bool, optional) – Whether to round the output to the nearest voxel. If True, the output will have integer datatype. If False, the returned array will have floating point data type and sub-voxel precision.
check_bounds (bool, optional) – Whether to check that the returned indices lie within the bounds of the array. If True, a
RuntimeError
will be raised if the resulting array indices (before rounding) lie out of the bounds of the array.
Note
The returned pixel indices may be negative if coordinates fall outside of the array.
- Raises
ValueError – When indices has incorrect shape.
RuntimeError – If check_bounds is True and any map coordinate lies outside the bounds of the array.
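Both mappings reduce to multiplication by the volume's 4x4 affine (or its inverse) in augmented coordinates. The following is a minimal numpy sketch of that arithmetic, using a hypothetical affine rather than any specific highdicom volume:

```python
import numpy as np

# Hypothetical 4x4 affine for a volume with unit spacing whose first voxel
# center sits at (10.0, 20.0, 30.0) in the frame-of-reference space
affine = np.array([
    [0.0, 0.0, 1.0, 10.0],
    [0.0, 1.0, 0.0, 20.0],
    [1.0, 0.0, 0.0, 30.0],
    [0.0, 0.0, 0.0, 1.0],
])

def map_indices(affine, indices):
    """Apply an affine to (n, 3) zero-based indices in augmented coordinates."""
    augmented = np.hstack([indices, np.ones((indices.shape[0], 1))])
    return (affine @ augmented.T).T[:, :3]

indices = np.array([[0, 0, 0], [1, 2, 3]], dtype=float)
coords = map_indices(affine, indices)            # indices -> (x, y, z) coordinates
recovered = map_indices(np.linalg.inv(affine), coords)  # round trip back to indices
```

The inverse mapping through `np.linalg.inv(affine)` recovers the original indices exactly, which is the relationship `map_reference_to_indices` relies on.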
- match_geometry(other, *, mode=PadModes.CONSTANT, constant_value=0.0, per_channel=False, tol=1e-05)
Match the geometry of this volume to another.
This performs a combination of permuting, padding and cropping, and flipping (in that order) such that the geometry of this volume matches that of
other
. Notably, the voxels are not resampled. If the geometry cannot be matched using these operations, then aRuntimeError
is raised.- Parameters
other (Union[highdicom.Volume, highdicom.VolumeGeometry]) – Volume or volume geometry to which this volume should be matched.
- Returns
New volume formed by matching the geometry of this volume to that of
other
.- Return type
Self
- Raises
RuntimeError – If the geometries cannot be matched without resampling the array.
- property nearest_center_indices: tuple[int, int, int]
Array index of center of the volume, rounded down to the nearest integer value.
Results are discrete zero-based array indices.
- Return type
tuple
[int
,int
,int
]- Returns
x (int) – First array index of the volume center.
y (int) – Second array index of the volume center.
z (int) – Third array index of the volume center.
- pad(pad_width, *, mode=PadModes.CONSTANT, constant_value=0.0, per_channel=False)
Pad volume along the three spatial dimensions.
- Parameters
pad_width (Union[int, Sequence[int], Sequence[Sequence[int]]]) –
Values to pad the array. Takes the same form as
numpy.pad()
. May be:A single integer value, which results in that many voxels being added to the beginning and end of all three spatial dimensions, or
A sequence of two values in the form
[before, after]
, which results in ‘before’ voxels being added to the beginning of each of the three spatial dimensions, and ‘after’ voxels being added to the end of each of the three spatial dimensions, orA nested sequence of integers of the form
[[pad1], [pad2], [pad3]]
, in which separate padding values are supplied for each of the three spatial axes and used to pad before and after along those axes, orA nested sequence of integers in the form
[[before1, after1], [before2, after2], [before3, after3]]
, in which separate values are supplied for the before and after padding of each of the three spatial dimensions.
In all cases, all integer values must be non-negative.
mode (Union[highdicom.PadModes, str], optional) – Ignored for
highdicom.VolumeGeometry
.constant_value (Union[float, Sequence[float]], optional) – Ignored for
highdicom.VolumeGeometry
.per_channel (bool, optional) – Ignored for
highdicom.VolumeGeometry
.
- Returns
Volume with padding applied.
- Return type
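Since pad_width takes the same form as numpy.pad(), the four accepted forms can be illustrated directly with plain numpy (this sketch uses numpy only, not the Volume API itself):

```python
import numpy as np

arr = np.zeros((4, 5, 6))

# The four accepted pad_width forms, expressed via numpy.pad (names illustrative):
forms = {
    "scalar":   2,                          # 2 voxels before and after every axis
    "pair":     [1, 3],                     # 1 before / 3 after, on all three axes
    "per_axis": [[1], [2], [3]],            # symmetric amounts, different per axis
    "full":     [[1, 2], [0, 1], [3, 0]],   # explicit before/after per axis
}
shapes = {name: np.pad(arr, width).shape for name, width in forms.items()}
```

For example, the "full" form grows axis 0 by 1 + 2 voxels, axis 1 by 0 + 1, and axis 2 by 3 + 0.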
- pad_or_crop_to_spatial_shape(spatial_shape, *, mode=PadModes.CONSTANT, constant_value=0.0, per_channel=False)
Pad and/or crop volume to given spatial shape.
For each dimension where padding is required, the volume is padded symmetrically, placing the original array at the center of the output array, to achieve the given shape. If this requires an odd number of elements to be added along a certain dimension, one more element is placed at the end of the array than at the start.
For each dimension where cropping is required, center cropping is used.
- Parameters
spatial_shape (Sequence[int]) – Sequence of three integers specifying the spatial shape to pad or crop to.
mode (highdicom.PadModes, optional) – Mode to use to pad the array, if padding is required. See
highdicom.PadModes
for options.constant_value (Union[float, Sequence[float]], optional) – Value used to pad when mode is
"CONSTANT"
. Ifper_channel
is True, a sequence whose length is equal to the number of channels may be passed, and each value will be used for the corresponding channel. With other pad modes, this argument is ignored.per_channel (bool, optional) – For padding modes that involve calculation of image statistics to determine the padding value (i.e.
MINIMUM
,MAXIMUM
,MEAN
,MEDIAN
), pad each channel separately using the value calculated using that channel alone (rather than the statistics of the entire array). For other padding modes, this argument makes no difference. This should not be True if the image does not have a channel dimension.
- Returns
Volume with padding and/or cropping applied.
- Return type
Self
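The symmetric padding rule described above (extra element at the end when the difference is odd) can be sketched as a small helper (illustrative only, not the library's internal function):

```python
def symmetric_pad_amounts(current: int, target: int) -> tuple[int, int]:
    """Split the required padding between start and end of one dimension.

    When an odd number of elements must be added, the extra element is
    placed at the end of the array, matching the documented behavior.
    """
    diff = target - current
    before = diff // 2
    return before, diff - before

# Growing a dimension from 4 to 7 pads 1 element before and 2 after
print(symmetric_pad_amounts(4, 7))  # -> (1, 2)
```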
- pad_to_spatial_shape(spatial_shape, *, mode=PadModes.CONSTANT, constant_value=0.0, per_channel=False)
Pad volume to given spatial shape.
The volume is padded symmetrically, placing the original array at the center of the output array, to achieve the given shape. If this requires an odd number of elements to be added along a certain dimension, one more element is placed at the end of the array than at the start.
- Parameters
spatial_shape (Sequence[int]) – Sequence of three integers specifying the spatial shape to pad to. This shape must be no smaller than the existing shape along any of the three spatial dimensions.
mode (highdicom.PadModes, optional) – Mode to use to pad the array. See
highdicom.PadModes
for options.constant_value (Union[float, Sequence[float]], optional) – Value used to pad when mode is
"CONSTANT"
. Ifper_channel
is True, a sequence whose length is equal to the number of channels may be passed, and each value will be used for the corresponding channel. With other pad modes, this argument is ignored.per_channel (bool, optional) – For padding modes that involve calculation of image statistics to determine the padding value (i.e.
MINIMUM
,MAXIMUM
,MEAN
,MEDIAN
), pad each channel separately using the value calculated using that channel alone (rather than the statistics of the entire array). For other padding modes, this argument makes no difference. This should not be True if the image does not have a channel dimension.
- Returns
Volume with padding applied.
- Return type
Self
- permute_spatial_axes(indices)
Create a new geometry by permuting the spatial axes.
- Parameters
indices (Sequence[int]) – List of three integers containing the values 0, 1 and 2 in some order. Note that you may not change the position of the channel axis (if present).
- Returns
New geometry with spatial axes permuted in the provided order.
- Return type
- property physical_extent: tuple[float, float, float]
Side lengths of the volume in millimeters.
- Type
tuple[float, float, float]
- Return type
tuple
[float
,float
,float
]
- property physical_volume: float
Total volume in cubic millimeter.
- Type
float
- Return type
float
- property pixel_spacing: tuple[float, float]
Tuple[float, float]:
Within-plane pixel spacing in millimeter units. Two values (spacing between rows, spacing between columns), matching the format of the DICOM PixelSpacing attribute.
Assumes that frames are stacked down axis 0, rows down axis 1, and columns down axis 2 (the convention used to create volumes from images).
- Return type
tuple
[float
,float
]
- property position: tuple[float, float, float]
Tuple[float, float, float]:
Position in the frame of reference space of the center of voxel at indices (0, 0, 0).
- Return type
tuple
[float
,float
,float
]
- random_flip_spatial(axes=(0, 1, 2))
Randomly flip the spatial axes of the array.
Note that this flips the array and updates the affine to reflect the flip.
- Parameters
axes (Union[int, Sequence[int]]) – Axis or list of axis indices that may be flipped. These should include only the spatial axes (0, 1, and/or 2). Each axis in this list is flipped in the output volume with probability 0.5.
- Returns
New volume with selected spatial axes randomly flipped.
- Return type
Self
- random_permute_spatial_axes(axes=(0, 1, 2))
Create a new geometry by randomly permuting the spatial axes.
- Parameters
axes (Optional[Sequence[int]]) – Sequence of two or three distinct integers drawn from 0, 1 and 2. This subset of axes will be included when generating the random permutation. Any axis not in this sequence will remain in its original position.
- Returns
New geometry with spatial axes permuted randomly.
- Return type
Self
- random_spatial_crop(spatial_shape)
Create a random crop of a certain shape from the volume.
- Parameters
spatial_shape (Sequence[int]) – Sequence of three integers specifying the spatial shape of the cropped output.
- Returns
New volume formed by cropping the volume.
- Return type
Self
- property shape: tuple[int, ...]
Shape of the underlying array.
For objects of type
highdicom.VolumeGeometry
, this is equivalent to spatial_shape.- Type
Tuple[int, …]
- Return type
tuple
[int
,...
]
- property spacing: tuple[float, float, float]
Tuple[float, float, float]:
Pixel spacing in millimeter units for the three spatial directions. Three values, one for each spatial dimension.
- Return type
tuple
[float
,float
,float
]
- property spacing_between_slices: float
float:
Spacing between consecutive slices in millimeter units.
Assumes that frames are stacked down axis 0, rows down axis 1, and columns down axis 2 (the convention used to create volumes from images).
- Return type
float
- spacing_vectors()
Get the vectors along the three array dimensions.
Note that these vectors are not normalized; they have length equal to the spacing along the relevant dimension.
- Return type
tuple
[numpy.ndarray
,numpy.ndarray
,numpy.ndarray
]- Returns
numpy.ndarray – Vector between voxel centers along the increasing first axis. 1D NumPy array.
numpy.ndarray – Vector between voxel centers along the increasing second axis. 1D NumPy array.
numpy.ndarray – Vector between voxel centers along the increasing third axis. 1D NumPy array.
- property spatial_shape: tuple[int, int, int]
Spatial shape of the array.
Does not include the channel dimension.
- Type
Tuple[int, int, int]
- Return type
tuple
[int
,int
,int
]
- swap_spatial_axes(axis_1, axis_2)
Swap two spatial axes of the array.
- Parameters
axis_1 (int) – Spatial axis index (0, 1 or 2) to swap with
axis_2
.axis_2 (int) – Spatial axis index (0, 1 or 2) to swap with
axis_1
.
- Returns
New volume with spatial axes swapped as requested.
- Return type
Self
- to_patient_orientation(patient_orientation)
Rearrange the array to a given orientation.
The resulting volume is formed from this volume through a combination of axis permutations and flips of the spatial axes. Its patient orientation will be as close to the desired orientation as can be achieved with these operations alone (and in particular without resampling the array).
Note that this is not valid if the volume is not defined within the patient coordinate system.
- Parameters
patient_orientation (Union[str, Sequence[Union[str, highdicom.PatientOrientationValuesBiped]]]) – Desired patient orientation, as either a sequence of three highdicom.PatientOrientationValuesBiped values, or a string such as
"FPL"
using the same characters.- Returns
New volume with the requested patient orientation.
- Return type
Self
- unit_vectors()
Get the normalized vectors along the three array dimensions.
- Return type
tuple
[numpy.ndarray
,numpy.ndarray
,numpy.ndarray
]- Returns
numpy.ndarray – Unit vector along the increasing first axis. 1D NumPy array.
numpy.ndarray – Unit vector along the increasing second axis. 1D NumPy array.
numpy.ndarray – Unit vector along the increasing third axis. 1D NumPy array.
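The spacing vectors are the first three columns of the 4x4 affine, and the unit vectors are those columns normalized. A small numpy sketch with a hypothetical affine (not any particular highdicom volume):

```python
import numpy as np

# Hypothetical affine: spacings of 4, 3 and 2 mm along the three array axes
affine = np.array([
    [0.0, 0.0, 2.0, 10.0],
    [0.0, 3.0, 0.0, 20.0],
    [4.0, 0.0, 0.0, 30.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Spacing vectors: first three columns of the affine (mirrors spacing_vectors())
spacing_vectors = [affine[:3, i] for i in range(3)]
# Unit vectors: the same columns normalized (mirrors unit_vectors())
unit_vectors = [v / np.linalg.norm(v) for v in spacing_vectors]
# The norms of the spacing vectors are the per-axis spacings in millimeters
spacings = [float(np.linalg.norm(v)) for v in spacing_vectors]
```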
- property voxel_volume: float
The volume of a single voxel in cubic millimeters.
- Type
float
- Return type
float
- with_array(array, channels=None)
Create a volume using this geometry and an array.
- Parameters
array (numpy.ndarray) – Array of voxel data. Must have the same spatial shape as the existing volume (i.e. first three elements of the shape match). Must additionally have the same shape along the channel dimensions, unless the channels parameter is provided.
channels (dict[int | str | ChannelDescriptor, Sequence[int | str | float | Enum]] | None, optional) – Specification of channels of the array. Channels are additional dimensions of the array beyond the three spatial dimensions. For each such additional dimension (if any), an item in this dictionary is required to specify the meaning. The dictionary key specifies the meaning of the dimension, which must be either an instance of highdicom.ChannelDescriptor, a DICOM keyword describing a DICOM attribute, or an integer representing the tag of a DICOM attribute. The corresponding item of the dictionary is a sequence giving the value of the relevant attribute at each index in the array. The insertion order of the dictionary is significant as it is used to match items to the corresponding dimensions of the array (the first item in the dictionary corresponds to axis 3 of the array and so on).
- Returns
Volume object using this geometry and the given array.
- Return type
- class highdicom.VolumeToVolumeTransformer(volume_from, volume_to, round_output=False, check_bounds=False)
Bases:
object
Class for transforming voxel indices between two volumes.
Construct transformation object.
The resulting object will map volume indices of the “from” volume to volume indices of the “to” volume.
- Parameters
volume_from (Union[highdicom.Volume, highdicom.VolumeGeometry]) – Volume to which input volume indices refer.
volume_to (Union[highdicom.Volume, highdicom.VolumeGeometry]) – Volume to which output volume indices refer.
round_output (bool, optional) – Whether to round the output to the nearest integer (if
True
) or return with sub-voxel accuracy as floats (ifFalse
).check_bounds (bool, optional) – Whether to perform a bounds check before returning the output indices. Note there is no bounds check on the input indices.
- __call__(indices)
Transform volume indices between two volumes.
- Parameters
indices (numpy.ndarray) – Array of voxel indices in the “from” volume. Array of integer or floating-point values with shape
(n, 3)
, where n is the number of coordinates. The order of the three indices corresponds to the three spatial dimensions of the volume, in that order. Point
refers to the center of the voxel at index(0, 0, 0)
in the array.- Returns
Array of indices in the output volume that spatially correspond to the indices in the input array. This will have an integer datatype if
round_output
isTrue
and a floating point datatype otherwise. The output datatype will be matched to the input datatype if possible, otherwise eithernp.int64
ornp.float64
is used.- Return type
numpy.ndarray
- Raises
ValueError – If
check_bounds
isTrue
and the output indices would otherwise contain invalid indices for the “to” volume.
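Conceptually, the transformer composes the "from" volume's affine with the inverse of the "to" volume's affine. This numpy sketch shows the composition with two hypothetical affines sharing a frame of reference (illustrative, not the class's internals):

```python
import numpy as np

# Hypothetical affines for a "from" and "to" volume in the same frame of reference
affine_from = np.diag([2.0, 2.0, 2.0, 1.0])  # 2 mm isotropic spacing, origin at (0, 0, 0)
affine_to = np.diag([1.0, 1.0, 1.0, 1.0])    # 1 mm isotropic spacing, same origin

# "to" indices = inv(affine_to) @ affine_from @ "from" indices (augmented)
combined = np.linalg.inv(affine_to) @ affine_from

indices_from = np.array([[1.0, 2.0, 3.0]])
augmented = np.hstack([indices_from, np.ones((1, 1))])
indices_to = (combined @ augmented.T).T[:, :3]
# Voxel (1, 2, 3) in the coarse volume corresponds to voxel (2, 4, 6) in the fine one
```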
- property affine: ndarray
4x4 affine transformation matrix
- Type
numpy.ndarray
- Return type
numpy.ndarray
- highdicom.get_volume_from_series(series_datasets, *, dtype=<class 'numpy.float64'>, apply_real_world_transform=None, real_world_value_map_selector=0, apply_modality_transform=None, apply_voi_transform=False, voi_transform_selector=0, voi_output_range=(0.0, 1.0), apply_presentation_lut=True, apply_palette_color_lut=None, apply_icc_profile=None, atol=None, rtol=None)
Create volume from a series of single frame images.
- Parameters
series_datasets (Sequence[pydicom.Dataset]) – Series of single frame datasets. There is no requirement on the sorting of the datasets.
dtype (Union[type, str, numpy.dtype], optional) – Data type of the returned array.
apply_real_world_transform (bool | None, optional) –
Whether to apply a real-world value map to the frame. A real-world value map converts stored pixel values to output values with a real-world meaning, either using a LUT or a linear slope and intercept.
If True, the transform is applied if present, and if not present an error will be raised. If False, the transform will not be applied, regardless of whether it is present. If
None
, the transform will be applied if present but no error will be raised if it is not present.Note that if the dataset contains both a modality LUT and a real world value map, the real world value map will be applied preferentially. This also implies that specifying both
apply_real_world_transform
andapply_modality_transform
to True is not permitted.real_world_value_map_selector (int | str | pydicom.sr.coding.Code | highdicom.sr.coding.CodedConcept, optional) – Specification of the real world value map to use (multiple may be present in the dataset). If an int, it is used to index the list of available maps. A negative integer may be used to index from the end of the list following standard Python indexing convention. If a str, the string will be used to match the
"LUTLabel"
attribute to select the map. If apydicom.sr.coding.Code
orhighdicom.sr.coding.CodedConcept
, this will be used to match the units (contained in the"MeasurementUnitsCodeSequence"
attribute).apply_modality_transform (bool | None, optional) –
Whether to apply the modality transform (if present in the dataset) to the frame. The modality transform maps stored pixel values to output values, either using a LUT or rescale slope and intercept.
If True, the transform is applied if present, and if not present an error will be raised. If False, the transform will not be applied, regardless of whether it is present. If
None
, the transform will be applied if it is present and no real world value map takes precedence, but no error will be raised if it is not present.apply_voi_transform (bool | None, optional) –
Apply the value-of-interest (VOI) transform (if present in the dataset), which limits the range of pixel values to a particular range of interest using either a windowing operation or a LUT.
If True, the transform is applied if present, and if not present an error will be raised. If False, the transform will not be applied, regardless of whether it is present. If
None
, the transform will be applied if it is present and no real world value map takes precedence, but no error will be raised if it is not present.voi_transform_selector (int | str | highdicom.content.VOILUTTransformation, optional) – Specification of the VOI transform to select (multiple may be present). May either be an int or a str. If an int, it is interpreted as a (zero-based) index of the list of VOI transforms to apply. A negative integer may be used to index from the end of the list following standard Python indexing convention. If a str, the string that will be used to match the
"WindowCenterWidthExplanation"
or the"LUTExplanation"
attributes to choose from multiple VOI transforms. Note that such explanations are optional according to the standard and therefore may not be present. Ignored ifapply_voi_transform
isFalse
or no VOI transform is included in the datasets.voi_output_range (Tuple[float, float], optional) – Range of output values to which the VOI range is mapped. Only relevant if
apply_voi_transform
is True and a VOI transform is present.apply_palette_color_lut (bool | None, optional) – Apply the palette color LUT, if present in the dataset. The palette color LUT maps a single sample for each pixel stored in the dataset to a 3 sample-per-pixel color image.
apply_presentation_lut (bool, optional) – Apply the presentation LUT transform to invert the pixel values. If the PresentationLUTShape is present with the value
'INVERSE'
, or the PresentationLUTShape is not present but the Photometric Interpretation is MONOCHROME1, the pixel values are inverted so that the output range corresponds to MONOCHROME2 (in which high values represent white and low values represent black). Ignored if PhotometricInterpretation is not MONOCHROME1 and the PresentationLUTShape is not present, or if a real world value transform is applied.apply_icc_profile (bool | None, optional) –
Whether colors should be corrected by applying an ICC transform. Will only be performed if metadata contain an ICC Profile.
If True, the transform is applied if present, and if not present an error will be raised. If False, the transform will not be applied, regardless of whether it is present. If
None
, the transform will be applied if it is present, but no error will be raised if it is not present.rtol (float | None, optional) – Relative tolerance for determining spacing regularity. If slice spacings vary by less that this proportion of the average spacing, they are considered to be regular. If neither
rtol
oratol
are provided, a default relative tolerance of 0.01 is used.atol (float | None, optional) – Absolute tolerance for determining spacing regularity. If slice spacings vary by less that this value (in mm), they are considered to be regular. Incompatible with
rtol
.
- Returns
Volume created from the series.
- Return type
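The rtol parameter implies a regularity test of roughly the following form, sketched here with plain numpy (the function name and exact formulation are illustrative, not the library's internal implementation):

```python
import numpy as np

def spacings_are_regular(slice_positions: np.ndarray, rtol: float = 0.01) -> bool:
    """Illustrative check that slice spacings are regular within a relative tolerance.

    Spacings are regular if no spacing deviates from the mean spacing by
    more than rtol times the mean spacing.
    """
    spacings = np.diff(np.sort(slice_positions))
    mean_spacing = spacings.mean()
    return bool(np.all(np.abs(spacings - mean_spacing) <= rtol * mean_spacing))

print(spacings_are_regular(np.array([0.0, 2.0, 4.0, 6.0])))  # regular series
print(spacings_are_regular(np.array([0.0, 2.0, 4.0, 7.0])))  # irregular series
```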
- highdicom.imread(fp, lazy_frame_retrieval=False)
Read an image stored in DICOM File Format.
- Parameters
fp (Union[str, bytes, os.PathLike]) – Any file-like object representing a DICOM file containing an image.
lazy_frame_retrieval (bool) – If True, the returned image will retrieve frames from the file as requested, rather than loading in the entire object to memory initially. This may be a good idea if file reading is slow and you are likely to need only a subset of the frames in the image.
- Returns
Image read from the file.
- Return type
highdicom.color module
- class highdicom.color.CIELabColor(l_star, a_star, b_star)
Bases:
object
Class to represent a color value in CIELab color space.
- Parameters
l_star (float) – Lightness value in the range 0.0 (black) to 100.0 (white).
a_star (float) – Red-green value from -128.0 (green) to 127.0 (red).
b_star (float) – Blue-yellow value from -128.0 (blue) to 127.0 (yellow).
- property value: tuple[int, int, int]
Tuple[int]: Value formatted as a triplet of 16 bit unsigned integers.
- Return type
tuple
[int
,int
,int
]
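The 16-bit triplet corresponds to the DICOM encoding of CIELab values (PS3.3 Section C.10.7.1.1), in which L* is scaled from [0, 100] and a*/b* from [-128, 127] onto [0, 65535]. A sketch of that scaling (illustrative; not the class's actual implementation):

```python
def cielab_to_dicom_triplet(l_star: float, a_star: float, b_star: float) -> tuple[int, int, int]:
    """Scale CIELab components to 16-bit unsigned integers per the DICOM convention."""
    l_val = int(round(l_star * 65535 / 100))            # 0.0-100.0 -> 0-65535
    a_val = int(round((a_star + 128.0) * 65535 / 255))  # -128.0-127.0 -> 0-65535
    b_val = int(round((b_star + 128.0) * 65535 / 255))
    return (l_val, a_val, b_val)

print(cielab_to_dicom_triplet(100.0, 0.0, 0.0))  # white -> (65535, 32896, 32896)
```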
- class highdicom.color.ColorManager(icc_profile)
Bases:
object
Class for color management using ICC profiles.
- Parameters
icc_profile (bytes) – ICC profile
- Raises
ValueError – When ICC Profile cannot be read.
- transform_frame(array)
Transforms a frame by applying the ICC profile.
- Parameters
array (numpy.ndarray) – Pixel data of a color image frame in form of an array with dimensions (Rows x Columns x SamplesPerPixel)
- Returns
Color corrected pixel data of an image frame in form of an array with dimensions (Rows x Columns x SamplesPerPixel)
- Return type
numpy.ndarray
- Raises
ValueError – When array does not have 3 dimensions and thus does not represent a color image frame.
highdicom.frame module
- highdicom.frame.decode_frame(value, transfer_syntax_uid, rows, columns, samples_per_pixel, bits_allocated, bits_stored, photometric_interpretation, pixel_representation=0, planar_configuration=None, index=0)
Decode pixel data of an individual frame.
- Parameters
value (bytes) – Pixel data of a frame (potentially compressed in case of encapsulated format encoding, depending on the transfer syntax)
transfer_syntax_uid (str) – Transfer Syntax UID
rows (int) – Number of pixel rows in the frame
columns (int) – Number of pixel columns in the frame
samples_per_pixel (int) – Number of (color) samples per pixel
bits_allocated (int) – Number of bits that need to be allocated per pixel sample
bits_stored (int) – Number of bits that are required to store a pixel sample
photometric_interpretation (Union[str, highdicom.PhotometricInterpretationValues]) – Photometric interpretation
pixel_representation (Union[highdicom.PixelRepresentationValues, int, None], optional) – Whether pixel samples are represented as unsigned integers or 2’s complements
planar_configuration (Union[highdicom.PlanarConfigurationValues, int, None], optional) – Whether color samples are encoded by pixel (
R1G1B1R2G2B2...
) or by plane (R1R2...G1G2...B1B2...
index (int, optional) – The (zero-based) index of the frame in the original dataset. This is only required in one situation: when the bits allocated is 1, the transfer syntax is not encapsulated (i.e. is native) and the number of pixels per frame is not a multiple of 8. In this case, the index is required to know how many bits need to be stripped from the start and/or end of the byte array. In all other situations, this parameter is not required and will have no effect (since decoding a frame does not depend on the index of the frame).
- Returns
Decoded pixel data
- Return type
numpy.ndarray
- Raises
ValueError – When transfer syntax is not supported.
Note
In case of color image frames, the photometric_interpretation parameter describes the color space of the encoded pixel data and data may be converted from the specified color space into RGB color space upon decoding. For example, the JPEG codec generally converts pixels from RGB into YBR color space prior to compression to take advantage of the correlation between RGB color bands and improve compression efficiency. In case of an image data set with an encapsulated Pixel Data element containing JPEG compressed image frames, the value of the Photometric Interpretation element specifies the color space in which image frames were compressed. If photometric_interpretation specifies a YBR color space, then this function assumes that pixels were converted from RGB to YBR color space during encoding prior to JPEG compression and need to be converted back into RGB color space after JPEG decompression during decoding. If photometric_interpretation specifies an RGB color space, then the function assumes that no color space conversion was performed during encoding and therefore no conversion needs to be performed during decoding either. In both cases, the function is supposed to return decoded pixel data of color image frames in RGB color space.
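For a native (non-encapsulated) transfer syntax with 16 bits allocated, decoding a monochrome frame essentially amounts to reinterpreting the frame bytes and reshaping. This numpy sketch is a deliberate simplification of what decode_frame handles (real decoding also involves pixel representation, photometric interpretation, and compressed transfer syntaxes):

```python
import numpy as np

# Synthetic pixel data for a 2 x 3 monochrome frame, 16 bits allocated,
# little-endian, as it would appear in a native Pixel Data element
rows, columns = 2, 3
frame_bytes = np.arange(rows * columns, dtype=np.uint16).tobytes()

# "Decoding" the native frame: reinterpret the bytes and reshape
frame = np.frombuffer(frame_bytes, dtype=np.uint16).reshape(rows, columns)
print(frame.shape)  # -> (2, 3)
```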
- highdicom.frame.encode_frame(array, transfer_syntax_uid, bits_allocated, bits_stored, photometric_interpretation, pixel_representation=0, planar_configuration=None)
Encode pixel data of an individual frame.
- Parameters
array (numpy.ndarray) – Pixel data in form of an array with dimensions (Rows x Columns x SamplesPerPixel) in case of a color image and (Rows x Columns) in case of a monochrome image
transfer_syntax_uid (int) – Transfer Syntax UID
bits_allocated (int) – Number of bits that need to be allocated per pixel sample
bits_stored (int) – Number of bits that are required to store a pixel sample
photometric_interpretation (Union[PhotometricInterpretationValues, str]) – Photometric interpretation that will be used to store data. Usually, this will match the photometric interpretation of the input pixel array, however for
"JPEGBaseline8Bit"
,"JPEG2000"
, and"JPEG2000Lossless"
transfer syntaxes with color images, the pixel data must be passed in in RGB format and will be converted and stored as"YBR_FULL_422"
("JPEGBaseline8Bit"
),"YBR_ICT"
("JPEG2000"
), or"YBR_RCT"("JPEG2000Lossless"
). In these cases the value of photometric_interpretation passed must match those given above.pixel_representation (Union[highdicom.PixelRepresentationValues, int, None], optional) – Whether pixel samples are represented as unsigned integers or 2’s complements
planar_configuration (Union[highdicom.PlanarConfigurationValues, int, None], optional) – Whether color samples are encoded by pixel (
R1G1B1R2G2B2...
) or by plane (R1R2...G1G2...B1B2...
).
- Returns
Encoded pixel data (potentially compressed in case of encapsulated format encoding, depending on the transfer syntax)
- Return type
bytes
- Raises
ValueError – When transfer_syntax_uid is not supported or when planar_configuration is missing in case of a color image frame.
Note
In case of color image frames, the photometric_interpretation parameter describes the color space of the encoded pixel data and data may be converted from RGB color space into the specified color space upon encoding. For example, the JPEG codec converts pixels from RGB into YBR color space prior to compression to take advantage of the correlation between RGB color bands and improve compression efficiency. Therefore, pixels are supposed to be provided via array in RGB color space, but photometric_interpretation needs to specify a YBR color space.
highdicom.io module
Input/Output of datasets based on DICOM Part10 files.
- class highdicom.io.ImageFileReader(filename)
Bases:
object
Reader for DICOM datasets representing Image Information Entities.
It provides efficient, “lazy”, access to individual frame items contained in the Pixel Data element without loading the entire element into memory.
Note
As of highdicom 0.24.0, users should prefer the
highdicom.Image
class with lazy frame retrieval (e.g. as output by thehighdicom.imread()
function whenlazy_frame_retrieval=True
) to this class in most situations. Thehighdicom.Image
class offers the same lazy frame-level access, but additionally has several higher-level features, including the ability to apply pixel transformations to loaded frames, construct total pixel matrices, and construct volumes.Examples
>>> from pydicom.data import get_testdata_file
>>> from highdicom.io import ImageFileReader
>>> test_filepath = get_testdata_file('eCT_Supplemental.dcm')
>>>
>>> with ImageFileReader(test_filepath) as image:
...     print(image.metadata.SOPInstanceUID)
...     for i in range(image.number_of_frames):
...         frame = image.read_frame(i)
...         print(frame.shape)
1.3.6.1.4.1.5962.1.1.10.3.1.1166562673.14401
(512, 512)
(512, 512)
- Parameters
filename (Union[str, pathlib.Path, pydicom.filebase.DicomFileLike]) – DICOM Part10 file containing a dataset of an image SOP Instance
- close()
Closes file.
- Return type
None
- property filename: str
Path to the image file
- Type
str
- Return type
str
- property metadata: Dataset
Metadata
- Type
pydicom.dataset.Dataset
- Return type
pydicom.dataset.Dataset
- property number_of_frames: int
Number of frames
- Type
int
- Return type
int
- open()
Open file for reading.
- Raises
FileNotFoundError – When file cannot be found
OSError – When file cannot be opened
OSError – When DICOM metadata cannot be read from file
ValueError – When DICOM dataset contained in file does not represent an image
Note
Builds a Basic Offset Table to speed up subsequent frame-level access.
- Return type
None
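The Basic Offset Table mentioned in the note above is an index of where each frame item starts within the encapsulated Pixel Data element, which is what makes single-seek frame access possible. The following is a simplified, self-contained sketch of how such a table can be built by scanning the encapsulated byte stream. It is not highdicom's implementation; it assumes explicit little-endian item encoding, one fragment per frame, and no trailing sequence delimiter.

```python
import struct

ITEM_TAG = b"\xfe\xff\x00\xe0"  # item tag (FFFE,E000) in little-endian byte order


def build_basic_offset_table(encapsulated: bytes) -> list:
    """Scan encapsulated Pixel Data item by item, recording each frame
    item's byte offset so that individual frames can later be read with a
    single seek instead of re-parsing the whole element.

    The first item is the (possibly empty) Basic Offset Table itself and
    is skipped; every subsequent item is assumed to hold one frame.
    """
    offsets = []
    pos = 0
    first = True
    while pos + 8 <= len(encapsulated):
        if encapsulated[pos:pos + 4] != ITEM_TAG:
            break  # sequence delimiter or malformed data: stop scanning
        (length,) = struct.unpack("<I", encapsulated[pos + 4:pos + 8])
        if first:
            first = False  # skip the Basic Offset Table item
        else:
            offsets.append(pos)
        pos += 8 + length  # 8-byte item header plus item payload
    return offsets


# Two fake "frames" preceded by an empty Basic Offset Table item:
data = (
    ITEM_TAG + struct.pack("<I", 0)              # empty offset table
    + ITEM_TAG + struct.pack("<I", 4) + b"AAAA"  # frame 0 at offset 8
    + ITEM_TAG + struct.pack("<I", 2) + b"BB"    # frame 1 at offset 20
)
print(build_basic_offset_table(data))  # [8, 20]
```

With the table in hand, read_frame(i) only needs to seek to offsets[i] and decode that single item.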
- read_frame(index, correct_color=True)
Reads and decodes the pixel data of an individual frame item.
- Parameters
index (int) – Zero-based frame index
correct_color (bool, optional) – Whether colors should be corrected by applying an ICC transformation. Will only be performed if metadata contain an ICC Profile. Default = True.
- Returns
Array of decoded pixels of the frame with shape (Rows x Columns) in case of a monochrome image or (Rows x Columns x SamplesPerPixel) in case of a color image.
- Return type
numpy.ndarray
- Raises
OSError – When frame could not be read
- read_frame_raw(index)
Reads the raw pixel data of an individual frame item.
- Parameters
index (int) – Zero-based frame index
- Returns
Pixel data of a given frame item encoded in the transfer syntax.
- Return type
bytes
- Raises
OSError – When frame could not be read
highdicom.spatial module
- class highdicom.spatial.ImageToImageTransformer(image_position_from, image_orientation_from, pixel_spacing_from, image_position_to, image_orientation_to, pixel_spacing_to)
Bases:
object
Class for transforming image coordinates between two images.
This class facilitates the mapping of image coordinates of an image or an image frame (tile or plane) into those of another image or image frame in the same frame of reference. This can include (but is not limited to) mapping between different frames of the same image, or different images within the same series (e.g. two levels of a spatial pyramid). However, it is required that the two images be coplanar within the frame-of-reference coordinate system.
Image coordinates are (column, row) pairs of floating-point values, where the (0.0, 0.0) point is located at the top left corner of the top left hand corner pixel of the pixel matrix. Image coordinates have pixel units at sub-pixel resolution.
Examples
Create a transformer for two images, where the second image has an axis flipped relative to the first.
>>> transformer = ImageToImageTransformer(
...     image_position_from=[0.0, 0.0, 0.0],
...     image_orientation_from=[1.0, 0.0, 0.0, 0.0, 1.0, 0.0],
...     pixel_spacing_from=[1.0, 1.0],
...     image_position_to=[0.0, 100.0, 0.0],
...     image_orientation_to=[1.0, 0.0, 0.0, 0.0, -1.0, 0.0],
...     pixel_spacing_to=[1.0, 1.0],
... )
>>> coords_in = np.array([[0, 0], [50, 50]])
>>> coords_out = transformer(coords_in)
>>> print(coords_out)
[[  0. 101.]
 [ 50.  51.]]
Warning
This class shall not be used to map pixel indices between images. Use the highdicom.spatial.PixelToPixelTransformer class instead.
Construct transformation object.
The resulting object will map image coordinates of the “from” image to image coordinates of the “to” image.
- Parameters
image_position_from (Sequence[float]) – Position of the “from” image in the frame of reference, i.e., the offset of the top left hand corner pixel in the pixel matrix from the origin of the reference coordinate system along the X, Y, and Z axis
image_orientation_from (Sequence[float]) – Cosines of the row direction (first triplet: horizontal, left to right, increasing column index) and the column direction (second triplet: vertical, top to bottom, increasing row index) direction of the “from” image expressed in the three-dimensional patient or slide coordinate system defined by the frame of reference
pixel_spacing_from (Sequence[float]) – Spacing between pixels of the “from” image in millimeter units along the column direction (first value: spacing between rows, vertical, top to bottom, increasing row index) and the row direction (second value: spacing between columns: horizontal, left to right, increasing column index)
image_position_to (Sequence[float]) – Position of the “to” image using the same definition as the “from” image.
image_orientation_to (Sequence[float]) – Orientation cosines of the “to” image using the same definition as the “from” image.
pixel_spacing_to (Sequence[float]) – Pixel spacing of the “to” image using the same definition as the “from” image.
- Raises
TypeError – When any of the arguments is not a sequence.
ValueError – When any of the arguments has an incorrect length, or if the two images are not coplanar in the frame of reference coordinate system.
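The transformation this class performs can be understood as two steps: map coordinates of the “from” image into the frame of reference, then map those reference coordinates into the “to” image. Below is a pure-Python sketch of this composition, reproducing the example above. It assumes orthonormal direction cosines (so the inverse mapping reduces to a projection), whereas the class itself composes 4x4 affine matrices; the abbreviated parameter names (pos_f, ori_f, etc.) are illustrative only.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def image_to_image(coords, pos_f, ori_f, spc_f, pos_t, ori_t, spc_t):
    """Map (column, row) coordinates of a "from" image to those of a
    coplanar "to" image by going through the frame of reference.

    The 0.5 shifts account for ImagePositionPatient referring to the
    *center* of the top left pixel, while image coordinate (0.0, 0.0)
    is that pixel's top left corner.
    """
    row_f, col_f = ori_f[:3], ori_f[3:]  # row/column direction cosines
    row_t, col_t = ori_t[:3], ori_t[3:]
    out = []
    for c, r in coords:
        # "from" image coordinates -> frame of reference (x, y, z)
        p = [pos_f[i] + (c - 0.5) * spc_f[1] * row_f[i]
                      + (r - 0.5) * spc_f[0] * col_f[i] for i in range(3)]
        # frame of reference -> "to" image coordinates (projection onto
        # the "to" image's row and column directions)
        d = [p[i] - pos_t[i] for i in range(3)]
        out.append((dot(d, row_t) / spc_t[1] + 0.5,
                    dot(d, col_t) / spc_t[0] + 0.5))
    return out


# Reproduces the ImageToImageTransformer example above:
print(image_to_image(
    [(0.0, 0.0), (50.0, 50.0)],
    pos_f=[0.0, 0.0, 0.0], ori_f=[1, 0, 0, 0, 1, 0], spc_f=[1.0, 1.0],
    pos_t=[0.0, 100.0, 0.0], ori_t=[1, 0, 0, 0, -1, 0], spc_t=[1.0, 1.0],
))  # [(0.0, 101.0), (50.0, 51.0)]
```

Because the “to” image has its column direction flipped, row 0 of the “from” image lands near row 101 of the “to” image, exactly as in the doctest above.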
- __call__(coordinates)
Transform image coordinates between two images.
- Parameters
coordinates (numpy.ndarray) – Array of (column, row) coordinates at sub-pixel resolution in the range [0, Columns] and [0, Rows], respectively. Array of floating-point values with shape (n, 2), where n is the number of coordinates, the first column represents the column values and the second column represents the row values. The (0.0, 0.0) coordinate is located at the top left corner of the top left hand corner pixel in the total pixel matrix.
- Returns
Array of (column, row) image coordinates in the “to” image.
- Return type
numpy.ndarray
- Raises
ValueError – When coordinates has incorrect shape.
- property affine: ndarray
4x4 affine transformation matrix
- Type
numpy.ndarray
- Return type
numpy.ndarray
- classmethod for_images(dataset_from, dataset_to, frame_number_from=None, frame_number_to=None, for_total_pixel_matrix_from=False, for_total_pixel_matrix_to=False)
Construct a transformer for two given images or image frames.
- Parameters
dataset_from (pydicom.Dataset) – Dataset representing the image to map coordinates from.
dataset_to (pydicom.Dataset) – Dataset representing the image to map coordinates to.
frame_number_from (Union[int, None], optional) – Frame number (using 1-based indexing) of the frame of the “from” image for which to get the transformer. This should be provided if and only if dataset_from is a multi-frame image.
frame_number_to (Union[int, None], optional) – Frame number (using 1-based indexing) of the frame of the “to” image for which to get the transformer. This should be provided if and only if dataset_to is a multi-frame image.
for_total_pixel_matrix_from (bool, optional) – If True, use the spatial information for the total pixel matrix of the “from” image rather than that of an individual frame. This should only be True if the image is a tiled image and is incompatible with specifying a frame number.
for_total_pixel_matrix_to (bool, optional) – If True, use the spatial information for the total pixel matrix of the “to” image rather than that of an individual frame. This should only be True if the image is a tiled image and is incompatible with specifying a frame number.
- Returns
Transformer object for the given pair of images or image frames.
- Return type
- class highdicom.spatial.ImageToReferenceTransformer(image_position, image_orientation, pixel_spacing)
Bases:
object
Class for transforming coordinates from image to reference space.
This class facilitates the mapping of image coordinates in the pixel matrix of an image or an image frame (tile or plane) into the patient or slide coordinate system defined by the frame of reference. For example, this class may be used to map spatial coordinates (SCOORD) to 3D spatial coordinates (SCOORD3D).
Image coordinates are (column, row) pairs of floating-point values, where the (0.0, 0.0) point is located at the top left corner of the top left hand corner pixel of the pixel matrix. Image coordinates have pixel units at sub-pixel resolution.
Reference coordinates are (x, y, z) triplets of floating-point values, where the (0.0, 0.0, 0.0) point is located at the origin of the frame of reference. Reference coordinates have millimeter units.
Examples
>>> transformer = ImageToReferenceTransformer(
...     image_position=[56.0, 34.2, 1.0],
...     image_orientation=[1.0, 0.0, 0.0, 0.0, 1.0, 0.0],
...     pixel_spacing=[0.5, 0.5]
... )
>>>
>>> image_coords = np.array([[0.0, 10.0], [5.0, 5.0]])
>>> ref_coords = transformer(image_coords)
>>> print(ref_coords)
[[55.75 38.95 1. ]
 [58.25 36.45 1. ]]
Warning
This class shall not be used for pixel indices. Use the highdicom.spatial.PixelToReferenceTransformer class instead.
Construct transformation object.
- Parameters
image_position (Sequence[float]) – Position of the slice (image or frame) in the frame of reference, i.e., the offset of the top left hand corner pixel in the pixel matrix from the origin of the reference coordinate system along the X, Y, and Z axis
image_orientation (Sequence[float]) – Cosines of the row direction (first triplet: horizontal, left to right, increasing column index) and the column direction (second triplet: vertical, top to bottom, increasing row index) direction expressed in the three-dimensional patient or slide coordinate system defined by the frame of reference
pixel_spacing (Sequence[float]) – Spacing between pixels in millimeter units along the column direction (first value: spacing between rows, vertical, top to bottom, increasing row index) and the row direction (second value: spacing between columns: horizontal, left to right, increasing column index)
- Raises
TypeError – When any of the arguments is not a sequence.
ValueError – When any of the arguments has an incorrect length.
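The mapping encoded by this transformer follows the DICOM convention (PS3.3 C.7.6.2.1.1): a reference point is the image position plus the offsets along the row and column direction cosines, scaled by the pixel spacing, with a 0.5 pixel shift because ImagePositionPatient refers to the center of the top left pixel while image coordinate (0.0, 0.0) is its corner. The following is a pure-Python sketch of what the transformer's affine matrix encodes, reproducing the example above; it is not the highdicom implementation itself.

```python
def image_to_reference(coords, image_position, image_orientation, pixel_spacing):
    """Map (column, row) image coordinates to (x, y, z) frame-of-reference
    coordinates using the DICOM plane attributes.
    """
    sx, sy, sz = image_position
    rx, ry, rz, cx, cy, cz = image_orientation  # row cosines, column cosines
    dr, dc = pixel_spacing  # spacing between rows, spacing between columns
    out = []
    for col, row in coords:
        u = (col - 0.5) * dc  # millimeter offset along the row direction
        v = (row - 0.5) * dr  # millimeter offset along the column direction
        out.append((sx + u * rx + v * cx,
                    sy + u * ry + v * cy,
                    sz + u * rz + v * cz))
    return out


# Reproduces the ImageToReferenceTransformer example above
# (approximately (55.75, 38.95, 1.0) and (58.25, 36.45, 1.0),
# up to floating-point rounding):
print(image_to_reference(
    [(0.0, 10.0), (5.0, 5.0)],
    image_position=[56.0, 34.2, 1.0],
    image_orientation=[1.0, 0.0, 0.0, 0.0, 1.0, 0.0],
    pixel_spacing=[0.5, 0.5],
))
```

The class's affine property packs exactly these row/column cosine and spacing products into a 4x4 matrix so the same mapping can be applied to many points at once.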
- __call__(coordinates)
Transform image coordinates to frame of reference coordinates.
- Parameters
coordinates (numpy.ndarray) – Array of (column, row) coordinates at sub-pixel resolution in the range [0, Columns] and [0, Rows], respectively. Array of floating-point values with shape (n, 2), where n is the number of coordinates, the first column represents the column values and the second column represents the row values. The (0.0, 0.0) coordinate is located at the top left corner of the top left hand corner pixel in the total pixel matrix.
- Returns
Array of (x, y, z) coordinates in the coordinate system defined by the frame of reference. Array has shape (n, 3), where n is the number of coordinates, the first column represents the X offsets, the second column represents the Y offsets and the third column represents the Z offsets.
- Return type
numpy.ndarray
- Raises
ValueError – When coordinates has incorrect shape.
- property affine: ndarray
4x4 affine transformation matrix
- Type
numpy.ndarray
- Return type
numpy.ndarray
- classmethod for_image(dataset, frame_number=None, for_total_pixel_matrix=False)
Construct a transformer for a given image or image frame.
- Parameters
dataset (pydicom.Dataset) – Dataset representing an image.
frame_number (Union[int, None], optional) – Frame number (using 1-based indexing) of the frame for which to get the transformer. This should be provided if and only if the dataset is a multi-frame image.
for_total_pixel_matrix (bool, optional) – If True, use the spatial information for the total pixel matrix of a tiled image. The result will be a transformer that maps image coordinates of the total pixel matrix to frame of reference coordinates. This should only be True if the image is a tiled image and is incompatible with specifying a frame number.
- Returns
Transformer object for the given image, or image frame.
- Return type
- highdicom.spatial.PATIENT_ORIENTATION_OPPOSITES = {<PatientOrientationValuesBiped.L: 'L'>: <PatientOrientationValuesBiped.R: 'R'>, <PatientOrientationValuesBiped.R: 'R'>: <PatientOrientationValuesBiped.L: 'L'>, <PatientOrientationValuesBiped.A: 'A'>: <PatientOrientationValuesBiped.P: 'P'>, <PatientOrientationValuesBiped.P: 'P'>: <PatientOrientationValuesBiped.A: 'A'>, <PatientOrientationValuesBiped.F: 'F'>: <PatientOrientationValuesBiped.H: 'H'>, <PatientOrientationValuesBiped.H: 'H'>: <PatientOrientationValuesBiped.F: 'F'>}
Mapping of each patient orientation value to its opposite.