radiant_mlhub.models package
Submodules
radiant_mlhub.models.collection module
Extensions of the PySTAC classes that provide convenience methods for interacting with the Radiant MLHub API.
- class radiant_mlhub.models.collection.Collection(id: str, description: str, extent: pystac.collection.Extent, title: Optional[str] = None, stac_extensions: Optional[List[str]] = None, href: Optional[str] = None, extra_fields: Optional[Dict[str, Any]] = None, catalog_type: Optional[pystac.catalog.CatalogType] = None, license: str = 'proprietary', keywords: Optional[List[str]] = None, providers: Optional[List[pystac.provider.Provider]] = None, summaries: Optional[pystac.summaries.Summaries] = None, *, api_key: Optional[str] = None, profile: Optional[str] = None)[source]
Bases: pystac.collection.Collection
Class inheriting from pystac.Collection that adds some convenience methods for listing and fetching from the Radiant MLHub API.
- property archive_size: Optional[int]
The size of the tarball archive for this collection in bytes (or None if the archive does not exist).
- download(output_dir: Union[str, pathlib.Path], *, if_exists: str = 'resume', api_key: Optional[str] = None, profile: Optional[str] = None) pathlib.Path [source]
Downloads the archive for this collection to an output location (current working directory by default). If the parent directories for output_path do not exist, they will be created.
The if_exists argument determines how to handle an existing archive file in the output directory. See the documentation for the download_archive() function for details. The default behavior is to resume downloading if the existing file is incomplete and skip the download if it is complete.
Note
Some collections may be very large and take a significant amount of time to download, depending on your connection speed.
- Parameters
output_dir (Path) – Path to a local directory to which the file will be downloaded. File name will be generated automatically based on the download URL.
if_exists (str, optional) – How to handle an existing archive at the same location. If "skip", the download will be skipped. If "overwrite", the existing file will be overwritten and the entire file will be re-downloaded. If "resume" (the default), the existing file size will be compared to the size of the download (using the Content-Length header). If the existing file is smaller, then only the remaining portion will be downloaded. Otherwise, the download will be skipped.
api_key (str) – An API key to use for this request. This will override an API key set in a profile or using an environment variable.
profile (str) – A profile to use when making this request.
- Returns
output_path – The path to the downloaded archive file.
- Return type
pathlib.Path
- Raises
FileExistsError – If the file at output_path already exists and both exist_okay and overwrite are False.
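The "resume" comparison described above can be sketched in plain Python. This is a hypothetical helper for illustration only, not part of the radiant_mlhub API; the real client reads the Content-Length header from the HTTP response:

```python
def resume_decision(existing_size: int, content_length: int) -> str:
    """Illustrative sketch of the documented if_exists="resume" behavior.

    existing_size: size in bytes of the partially downloaded file on disk.
    content_length: value of the Content-Length header for the full archive.
    """
    if existing_size < content_length:
        # Existing file is incomplete: download only the remaining portion
        # (e.g. via an HTTP Range request).
        return "resume"
    # Existing file is at least as large as the advertised download: skip it.
    return "skip"
```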
- classmethod fetch(collection_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Collection [source]
Creates a Collection instance by fetching the collection with the given ID from the Radiant MLHub API.
- Parameters
collection_id (str) – The ID of the collection to fetch.
- Returns
collection
- Return type
Collection
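For example, fetching a collection and downloading its archive might look like the following (a sketch, assuming a valid API key is configured and that the bigearthnet_v1_labels collection exists):

```python
>>> from radiant_mlhub import Collection
>>> collection = Collection.fetch('bigearthnet_v1_labels')
>>> collection.archive_size  # size in bytes, or None if no archive exists
>>> archive_path = collection.download('data', if_exists='resume')
```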
- fetch_item(item_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) pystac.item.Item [source]
- classmethod from_dict(d: Dict[str, Any], href: Optional[str] = None, root: Optional[pystac.catalog.Catalog] = None, migrate: bool = False, preserve_dict: bool = True, *, api_key: Optional[str] = None, profile: Optional[str] = None) Collection [source]
Patches the pystac.Collection.from_dict() method so that it returns the calling class instead of always returning a pystac.Collection instance.
- get_items(*, api_key: Optional[str] = None, profile: Optional[str] = None) Iterator[pystac.item.Item] [source]
Note
The get_items method is not implemented for Radiant MLHub Collection instances for performance reasons. Please use the Collection.download() method to download Collection assets.
- Raises
NotImplementedError
- classmethod list(*, api_key: Optional[str] = None, profile: Optional[str] = None) List[Collection] [source]
Returns a list of Collection instances for all collections hosted by MLHub.
See the Authentication documentation for details on how authentication is handled for this request.
- Parameters
- Returns
collections
- Return type
List[Collection]
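A minimal listing sketch (assumes MLHub credentials are configured; the printed IDs depend on the collections currently hosted):

```python
>>> from radiant_mlhub import Collection
>>> collections = Collection.list()
>>> for collection in collections:
...     print(collection.id)
```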
radiant_mlhub.models.dataset module
Extensions of the PySTAC classes that provide convenience methods for interacting with the Radiant MLHub API.
- class radiant_mlhub.models.dataset.CollectionType(value)[source]
Bases: enum.Enum
Valid values for the type of a collection associated with a Radiant MLHub dataset.
- LABELS = 'labels'
- SOURCE = 'source_imagery'
- class radiant_mlhub.models.dataset.Dataset(id: str, collections: List[Dict[str, Any]], title: Optional[str] = None, registry: Optional[str] = None, doi: Optional[str] = None, citation: Optional[str] = None, *, api_key: Optional[str] = None, profile: Optional[str] = None, **_: Any)[source]
Bases:
object
Class that brings together multiple Radiant MLHub “collections” that are all considered part of a single “dataset”. For instance, the bigearthnet_v1 dataset is composed of both a source imagery collection (bigearthnet_v1_source) and a labels collection (bigearthnet_v1_labels).
- registry_url
The URL to the registry page for this dataset, or None if no registry page exists.
- Type
str or None
- doi
The DOI identifier for this dataset, or None if there is no DOI for this dataset.
- Type
str or None
- citation
The citation information for this dataset, or None if there is no citation information.
- Type
str or None
- property collections: radiant_mlhub.models.dataset._CollectionList
List of collections associated with this dataset. The list that is returned has 2 additional attributes (source_imagery and labels) that represent the list of collections corresponding to each type.
Note
This is a cached property, so updating self.collection_descriptions after calling self.collections the first time will have no effect on the results. See functools.cached_property() for details on clearing the cached value.
Examples
>>> from radiant_mlhub import Dataset
>>> dataset = Dataset.fetch('bigearthnet_v1')
>>> len(dataset.collections)
2
>>> len(dataset.collections.source_imagery)
1
>>> len(dataset.collections.labels)
1
To loop through all collections:
>>> for collection in dataset.collections:
...     # Do something here
To loop through only the source imagery collections:
>>> for collection in dataset.collections.source_imagery:
...     # Do something here
To loop through only the label collections:
>>> for collection in dataset.collections.labels:
...     # Do something here
- download(output_dir: Union[pathlib.Path, str], *, if_exists: str = 'resume', api_key: Optional[str] = None, profile: Optional[str] = None) List[pathlib.Path] [source]
Downloads archives for all collections associated with this dataset to the given directory. Each archive will be named using the collection ID (e.g. some_collection.tar.gz). If output_dir does not exist, it will be created.
Note
Some collections may be very large and take a significant amount of time to download, depending on your connection speed.
- Parameters
output_dir (str or pathlib.Path) – The directory into which the archives will be written.
if_exists (str, optional) – How to handle an existing archive at the same location. If "skip", the download will be skipped. If "overwrite", the existing file will be overwritten and the entire file will be re-downloaded. If "resume" (the default), the existing file size will be compared to the size of the download (using the Content-Length header). If the existing file is smaller, then only the remaining portion will be downloaded. Otherwise, the download will be skipped.
api_key (str) – An API key to use for this request. This will override an API key set in a profile or using an environment variable.
profile (str) – A profile to use when making this request.
- Returns
output_paths – List of paths to the downloaded archives
- Return type
List[pathlib.Path]
- Raises
IOError – If output_dir exists and is not a directory.
FileExistsError – If one of the archive files already exists in the output_dir and both exist_okay and overwrite are False.
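A usage sketch for downloading every archive in a dataset (assumes credentials are configured and the bigearthnet_v1 dataset is available):

```python
>>> from radiant_mlhub import Dataset
>>> dataset = Dataset.fetch('bigearthnet_v1')
>>> archive_paths = dataset.download('data', if_exists='resume')
>>> len(archive_paths)  # one archive per collection in the dataset
```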
- classmethod fetch(dataset_id_or_doi: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Dataset [source]
Creates a Dataset instance by first trying to fetch the dataset by ID, then falling back to fetching by DOI.
- Parameters
- Returns
dataset
- Return type
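Both lookup styles can be sketched as follows (the DOI below is a placeholder, not a real identifier):

```python
>>> from radiant_mlhub import Dataset
>>> dataset = Dataset.fetch('bigearthnet_v1')   # resolved as a dataset ID
>>> dataset = Dataset.fetch('10.1234/example')  # hypothetical DOI; falls back to fetching by DOI
```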
- classmethod fetch_by_doi(dataset_doi: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Dataset [source]
Creates a Dataset instance by fetching the dataset with the given DOI from the Radiant MLHub API.
- Parameters
- Returns
dataset
- Return type
- classmethod fetch_by_id(dataset_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Dataset [source]
Creates a Dataset instance by fetching the dataset with the given ID from the Radiant MLHub API.
- Parameters
- Returns
dataset
- Return type
- classmethod list(*, tags: Optional[Union[str, Iterable[str]]] = None, text: Optional[Union[str, Iterable[str]]] = None, api_key: Optional[str] = None, profile: Optional[str] = None) List[Dataset] [source]
Returns a list of Dataset instances for all datasets hosted by MLHub.
See the Authentication documentation for details on how authentication is handled for this request.
- Parameters
tags (str or Iterable[str], optional) – A list of tags to filter datasets by. If not None, only datasets containing all provided tags will be returned.
text (str or Iterable[str], optional) – A list of text phrases to filter datasets by. If not None, only datasets containing all phrases will be returned.
api_key (str) – An API key to use for this request. This will override an API key set in a profile or using an environment variable.
profile (str) – A profile to use when making this request.
- Returns
datasets – List of Dataset instances.
- Return type
List[Dataset]
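A filtered listing sketch (the tag and text values below are hypothetical examples, not guaranteed to match any hosted dataset):

```python
>>> from radiant_mlhub import Dataset
>>> datasets = Dataset.list(tags=['segmentation'], text=['crop'])
>>> for dataset in datasets:
...     print(dataset.id, dataset.title)
```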
radiant_mlhub.models.ml_model module
Extensions of the PySTAC classes that provide convenience methods for interacting with the Radiant MLHub API.
- class radiant_mlhub.models.ml_model.MLModel(id: str, geometry: Optional[Dict[str, Any]], bbox: Optional[List[float]], datetime: Optional[datetime.datetime], properties: Dict[str, Any], stac_extensions: Optional[List[str]] = None, href: Optional[str] = None, collection: Optional[Union[str, pystac.collection.Collection]] = None, extra_fields: Optional[Dict[str, Any]] = None, *, api_key: Optional[str] = None, profile: Optional[str] = None)[source]
Bases: pystac.item.Item
- bbox: Optional[List[float]]
Bounding Box of the asset represented by this item using either 2D or 3D geometries. The length of the array is 2*n where n is the number of dimensions. Could also be None in the case of a null geometry.
- collection: Optional[Collection]
Collection to which this Item belongs, if any.
- datetime: Optional[Datetime]
Datetime associated with this item. If None, then start_datetime and end_datetime in common_metadata will supply the datetime range of the Item.
- classmethod fetch(model_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) radiant_mlhub.models.ml_model.MLModel [source]
Fetches an MLModel instance by ID.
- Parameters
model_id (str) – The ID of the model to fetch.
- Returns
model
- Return type
- classmethod from_dict(d: Dict[str, Any], href: Optional[str] = None, root: Optional[pystac.catalog.Catalog] = None, migrate: bool = False, preserve_dict: bool = True, *, api_key: Optional[str] = None, profile: Optional[str] = None) radiant_mlhub.models.ml_model.MLModel [source]
Patches the pystac.Item.from_dict() method so that it returns the calling class instead of always returning a pystac.Item instance.
- geometry: Optional[Dict[str, Any]]
Defines the full footprint of the asset represented by this item, formatted according to RFC 7946, section 3.1 (GeoJSON).
- classmethod list(*, api_key: Optional[str] = None, profile: Optional[str] = None) List[radiant_mlhub.models.ml_model.MLModel] [source]
Returns a list of MLModel instances for all models hosted by MLHub.
See the Authentication documentation for details on how authentication is handled for this request.
- session_kwargs: Dict[str, Any] = {}
Class inheriting from pystac.Item that adds some convenience methods for listing and fetching from the Radiant MLHub API.
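A listing-and-fetch sketch (assumes credentials are configured and at least one model is hosted):

```python
>>> from radiant_mlhub import MLModel
>>> models = MLModel.list()
>>> model = MLModel.fetch(models[0].id)
>>> model.id
```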
Module contents
Extensions of the PySTAC classes that provide convenience methods for interacting with the Radiant MLHub API.
The classes documented in the submodules above are re-exported at the package level as radiant_mlhub.models.Collection, radiant_mlhub.models.Dataset, and radiant_mlhub.models.MLModel.