radiant_mlhub.models package

Submodules

radiant_mlhub.models.collection module

Extensions of the PySTAC classes that provide convenience methods for interacting with the Radiant MLHub API.

class radiant_mlhub.models.collection.Collection(id: str, description: str, extent: pystac.collection.Extent, title: Optional[str] = None, stac_extensions: Optional[List[str]] = None, href: Optional[str] = None, extra_fields: Optional[Dict[str, Any]] = None, catalog_type: Optional[pystac.catalog.CatalogType] = None, license: str = 'proprietary', keywords: Optional[List[str]] = None, providers: Optional[List[pystac.provider.Provider]] = None, summaries: Optional[pystac.summaries.Summaries] = None, *, api_key: Optional[str] = None, profile: Optional[str] = None)[source]

Bases: pystac.collection.Collection

Class inheriting from pystac.Collection that adds some convenience methods for listing and fetching from the Radiant MLHub API.

property archive_size: Optional[int]

The size of the tarball archive for this collection in bytes (or None if the archive does not exist).

download(output_dir: Union[str, pathlib.Path], *, if_exists: str = 'resume', api_key: Optional[str] = None, profile: Optional[str] = None) pathlib.Path[source]

Downloads the archive for this collection to the given output directory. If output_dir or its parent directories do not exist, they will be created.

The if_exists argument determines how to handle an existing archive file in the output directory. See the documentation for the download_archive() function for details. The default behavior is to resume downloading if the existing file is incomplete and skip the download if it is complete.

Note

Some collections may be very large and take a significant amount of time to download, depending on your connection speed.

Parameters
  • output_dir (str or pathlib.Path) – Path to a local directory to which the file will be downloaded. The file name will be generated automatically based on the download URL.

  • if_exists (str, optional) – How to handle an existing archive at the same location. If "skip", the download will be skipped. If "overwrite", the existing file will be overwritten and the entire file will be re-downloaded. If "resume" (the default), the existing file size will be compared to the size of the download (using the Content-Length header). If the existing file is smaller, then only the remaining portion will be downloaded. Otherwise, the download will be skipped.

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

output_path – The path to the downloaded archive file.

Return type

pathlib.Path

Raises

FileExistsError – If a file already exists at output_path and cannot be overwritten or resumed (see the if_exists argument).
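
A minimal usage sketch (assuming the bigearthnet_v1_source collection is available, an API key has been configured, and ./data is an arbitrary output directory):

>>> from radiant_mlhub.models import Collection
>>> collection = Collection.fetch('bigearthnet_v1_source')
>>> archive_path = collection.download('./data', if_exists='resume')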

classmethod fetch(collection_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Collection[source]

Creates a Collection instance by fetching the collection with the given ID from the Radiant MLHub API.

Parameters
  • collection_id (str) – The ID of the collection to fetch (e.g. bigearthnet_v1_source).

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

collection

Return type

Collection
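
For example (assuming an API key has been configured):

>>> from radiant_mlhub.models import Collection
>>> collection = Collection.fetch('bigearthnet_v1_source')
>>> collection.id
'bigearthnet_v1_source'
>>> size_in_bytes = collection.archive_size  # may be None if no archive exists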

fetch_item(item_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) pystac.item.Item[source]
classmethod from_dict(d: Dict[str, Any], href: Optional[str] = None, root: Optional[pystac.catalog.Catalog] = None, migrate: bool = False, preserve_dict: bool = True, *, api_key: Optional[str] = None, profile: Optional[str] = None) Collection[source]

Patches the pystac.Collection.from_dict() method so that it returns the calling class instead of always returning a pystac.Collection instance.

get_items(*, api_key: Optional[str] = None, profile: Optional[str] = None) Iterator[pystac.item.Item][source]

Note

The get_items method is not implemented for Radiant MLHub Collection instances for performance reasons. Please use the Collection.download() method to download Collection assets.

Raises

NotImplementedError

classmethod list(*, api_key: Optional[str] = None, profile: Optional[str] = None) List[Collection][source]

Returns a list of Collection instances for all collections hosted by MLHub.

See the Authentication documentation for details on how authentication is handled for this request.

Parameters
  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

collections

Return type

List[Collection]
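
A quick sketch of listing collections (assuming an API key has been configured; the number and IDs of collections depend on what MLHub currently hosts):

>>> from radiant_mlhub.models import Collection
>>> collections = Collection.list()
>>> for collection in collections:
...     print(collection.id)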

property registry_url: Optional[str]

The URL of the registry page for this Collection. The URL is based on the DOI identifier for the collection. If the Collection does not have a "sci:doi" property then registry_url will be None.

radiant_mlhub.models.dataset module

Extensions of the PySTAC classes that provide convenience methods for interacting with the Radiant MLHub API.

class radiant_mlhub.models.dataset.CollectionType(value)[source]

Bases: enum.Enum

Valid values for the type of a collection associated with a Radiant MLHub dataset.

LABELS = 'labels'
SOURCE = 'source_imagery'
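
For example, the members map to the string values used by the API:

>>> from radiant_mlhub.models.dataset import CollectionType
>>> CollectionType.SOURCE.value
'source_imagery'
>>> CollectionType('labels')
<CollectionType.LABELS: 'labels'>
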
class radiant_mlhub.models.dataset.Dataset(id: str, collections: List[Dict[str, Any]], title: Optional[str] = None, registry: Optional[str] = None, doi: Optional[str] = None, citation: Optional[str] = None, *, api_key: Optional[str] = None, profile: Optional[str] = None, **_: Any)[source]

Bases: object

Class that brings together multiple Radiant MLHub “collections” that are all considered part of a single “dataset”. For instance, the bigearthnet_v1 dataset is composed of both a source imagery collection (bigearthnet_v1_source) and a labels collection (bigearthnet_v1_labels).

id

The dataset ID.

Type

str

title

The title of the dataset (or None if the dataset has no title).

Type

str or None

registry_url

The URL to the registry page for this dataset, or None if no registry page exists.

Type

str or None

doi

The DOI identifier for this dataset, or None if there is no DOI for this dataset.

Type

str or None

citation

The citation information for this dataset, or None if there is no citation information.

Type

str or None

property collections: radiant_mlhub.models.dataset._CollectionList

List of collections associated with this dataset. The returned list has two additional attributes (source_imagery and labels) that contain the collections of each corresponding type.

Note

This is a cached property, so updating self.collection_descriptions after calling self.collections the first time will have no effect on the results. See functools.cached_property() for details on clearing the cached value.

Examples

>>> from radiant_mlhub import Dataset
>>> dataset = Dataset.fetch('bigearthnet_v1')
>>> len(dataset.collections)
2
>>> len(dataset.collections.source_imagery)
1
>>> len(dataset.collections.labels)
1

To loop through all collections:

>>> for collection in dataset.collections:
...     # Do something here

To loop through only the source imagery collections:

>>> for collection in dataset.collections.source_imagery:
...     # Do something here

To loop through only the label collections:

>>> for collection in dataset.collections.labels:
...     # Do something here
download(output_dir: Union[pathlib.Path, str], *, if_exists: str = 'resume', api_key: Optional[str] = None, profile: Optional[str] = None) List[pathlib.Path][source]

Downloads archives for all collections associated with this dataset to the given directory. Each archive will be named using the collection ID (e.g. some_collection.tar.gz). If output_dir does not exist, it will be created.

Note

Some collections may be very large and take a significant amount of time to download, depending on your connection speed.

Parameters
  • output_dir (str or pathlib.Path) – The directory into which the archives will be written.

  • if_exists (str, optional) – How to handle an existing archive at the same location. If "skip", the download will be skipped. If "overwrite", the existing file will be overwritten and the entire file will be re-downloaded. If "resume" (the default), the existing file size will be compared to the size of the download (using the Content-Length header). If the existing file is smaller, then only the remaining portion will be downloaded. Otherwise, the download will be skipped.

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

output_paths – List of paths to the downloaded archives

Return type

List[pathlib.Path]

Raises
  • IOError – If output_dir exists and is not a directory.

  • FileExistsError – If one of the archive files already exists in output_dir and cannot be overwritten or resumed (see the if_exists argument).
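
A minimal usage sketch (assuming the bigearthnet_v1 dataset, which has two collections, an API key has been configured, and ./data is an arbitrary output directory):

>>> from radiant_mlhub import Dataset
>>> dataset = Dataset.fetch('bigearthnet_v1')
>>> archive_paths = dataset.download('./data', if_exists='resume')
>>> len(archive_paths)
2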

classmethod fetch(dataset_id_or_doi: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Dataset[source]

Creates a Dataset instance by first trying to fetch the dataset by ID, then falling back to fetching by DOI.

Parameters
  • dataset_id_or_doi (str) – The ID or DOI of the dataset to fetch (e.g. bigearthnet_v1).

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

dataset

Return type

Dataset
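
For example, both styles of identifier used elsewhere in these docs are accepted (assuming an API key has been configured):

>>> from radiant_mlhub import Dataset
>>> dataset = Dataset.fetch('bigearthnet_v1')  # fetch by ID
>>> dataset = Dataset.fetch('10.6084/m9.figshare.12047478.v2')  # fetch by DOI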

classmethod fetch_by_doi(dataset_doi: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Dataset[source]

Creates a Dataset instance by fetching the dataset with the given DOI from the Radiant MLHub API.

Parameters
  • dataset_doi (str) – The DOI of the dataset to fetch (e.g. 10.6084/m9.figshare.12047478.v2).

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

dataset

Return type

Dataset

classmethod fetch_by_id(dataset_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Dataset[source]

Creates a Dataset instance by fetching the dataset with the given ID from the Radiant MLHub API.

Parameters
  • dataset_id (str) – The ID of the dataset to fetch (e.g. bigearthnet_v1).

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

dataset

Return type

Dataset

classmethod list(*, tags: Optional[Union[str, Iterable[str]]] = None, text: Optional[Union[str, Iterable[str]]] = None, api_key: Optional[str] = None, profile: Optional[str] = None) List[Dataset][source]

Returns a list of Dataset instances for all datasets hosted by MLHub.

See the Authentication documentation for details on how authentication is handled for this request.

Parameters
  • tags (str or Iterable[str], optional) – A list of tags to filter datasets by. If not None, only datasets containing all provided tags will be returned.

  • text (str or Iterable[str], optional) – A list of text phrases to filter datasets by. If not None, only datasets containing all provided phrases will be returned.

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile on using an environment variable

  • profile (str) – A profile to use when making this request.

Returns

datasets

Return type

List[Dataset]
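
A sketch of listing and filtering datasets (assuming an API key has been configured; the tag and text values below are hypothetical filters and may not match any hosted dataset):

>>> from radiant_mlhub import Dataset
>>> all_datasets = Dataset.list()
>>> filtered = Dataset.list(tags=['segmentation'], text='agriculture')
>>> for dataset in filtered:
...     print(dataset.id)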

property total_archive_size: Optional[int]

Gets the total size (in bytes) of the archives for all collections associated with this dataset. If no archives exist, returns None.

radiant_mlhub.models.ml_model module

Extensions of the PySTAC classes that provide convenience methods for interacting with the Radiant MLHub API.

class radiant_mlhub.models.ml_model.MLModel(id: str, geometry: Optional[Dict[str, Any]], bbox: Optional[List[float]], datetime: Optional[datetime.datetime], properties: Dict[str, Any], stac_extensions: Optional[List[str]] = None, href: Optional[str] = None, collection: Optional[Union[str, pystac.collection.Collection]] = None, extra_fields: Optional[Dict[str, Any]] = None, *, api_key: Optional[str] = None, profile: Optional[str] = None)[source]

Bases: pystac.item.Item

Class inheriting from pystac.Item that adds some convenience methods for listing and fetching from the Radiant MLHub API.

assets: Dict[str, Asset]

Dictionary of Asset objects, each with a unique key.

bbox: Optional[List[float]]

Bounding Box of the asset represented by this item using either 2D or 3D geometries. The length of the array is 2*n where n is the number of dimensions. Could also be None in the case of a null geometry.

collection: Optional[Collection]

Collection to which this Item belongs, if any.

collection_id: Optional[str]

The Collection ID that this item belongs to, if any.

datetime: Optional[Datetime]

Datetime associated with this item. If None, then start_datetime and end_datetime in common_metadata will supply the datetime range of the Item.

extra_fields: Dict[str, Any]

Extra fields that are part of the top-level JSON fields of the Item.

classmethod fetch(model_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) radiant_mlhub.models.ml_model.MLModel[source]

Fetches an MLModel instance by ID.

Parameters
  • model_id (str) – The ID of the ML Model to fetch (e.g. model-cyclone-wind-estimation-torchgeo-v1).

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

model

Return type

MLModel
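
For example (assuming an API key has been configured):

>>> from radiant_mlhub.models import MLModel
>>> model = MLModel.fetch('model-cyclone-wind-estimation-torchgeo-v1')
>>> model.id
'model-cyclone-wind-estimation-torchgeo-v1'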

classmethod from_dict(d: Dict[str, Any], href: Optional[str] = None, root: Optional[pystac.catalog.Catalog] = None, migrate: bool = False, preserve_dict: bool = True, *, api_key: Optional[str] = None, profile: Optional[str] = None) radiant_mlhub.models.ml_model.MLModel[source]

Patches the pystac.Item.from_dict() method so that it returns the calling class instead of always returning a pystac.Item instance.

geometry: Optional[Dict[str, Any]]

Defines the full footprint of the asset represented by this item, formatted according to RFC 7946, section 3.1 (GeoJSON).

id: str

Provider identifier. Unique within the STAC.

links: List[Link]

A list of Link objects representing all links associated with this Item.

classmethod list(*, api_key: Optional[str] = None, profile: Optional[str] = None) List[radiant_mlhub.models.ml_model.MLModel][source]

Returns a list of MLModel instances for all models hosted by MLHub.

See the Authentication documentation for details on how authentication is handled for this request.

Parameters
  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

models

Return type

List[MLModel]
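
A quick sketch of listing models (assuming an API key has been configured):

>>> from radiant_mlhub.models import MLModel
>>> models = MLModel.list()
>>> for model in models:
...     print(model.id)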

properties: Dict[str, Any]

A dictionary of additional metadata for the Item.

session_kwargs: Dict[str, Any] = {}

stac_extensions: List[str]

List of extensions the Item implements.

Module contents

Extensions of the PySTAC classes that provide convenience methods for interacting with the Radiant MLHub API.
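
The classes documented below can be imported directly from this package:

>>> from radiant_mlhub.models import Collection, Dataset, MLModel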

class radiant_mlhub.models.Collection(id: str, description: str, extent: pystac.collection.Extent, title: Optional[str] = None, stac_extensions: Optional[List[str]] = None, href: Optional[str] = None, extra_fields: Optional[Dict[str, Any]] = None, catalog_type: Optional[pystac.catalog.CatalogType] = None, license: str = 'proprietary', keywords: Optional[List[str]] = None, providers: Optional[List[pystac.provider.Provider]] = None, summaries: Optional[pystac.summaries.Summaries] = None, *, api_key: Optional[str] = None, profile: Optional[str] = None)[source]

Bases: pystac.collection.Collection

Class inheriting from pystac.Collection that adds some convenience methods for listing and fetching from the Radiant MLHub API.

property archive_size: Optional[int]

The size of the tarball archive for this collection in bytes (or None if the archive does not exist).

download(output_dir: Union[str, pathlib.Path], *, if_exists: str = 'resume', api_key: Optional[str] = None, profile: Optional[str] = None) pathlib.Path[source]

Downloads the archive for this collection to the given output directory. If output_dir or its parent directories do not exist, they will be created.

The if_exists argument determines how to handle an existing archive file in the output directory. See the documentation for the download_archive() function for details. The default behavior is to resume downloading if the existing file is incomplete and skip the download if it is complete.

Note

Some collections may be very large and take a significant amount of time to download, depending on your connection speed.

Parameters
  • output_dir (str or pathlib.Path) – Path to a local directory to which the file will be downloaded. The file name will be generated automatically based on the download URL.

  • if_exists (str, optional) – How to handle an existing archive at the same location. If "skip", the download will be skipped. If "overwrite", the existing file will be overwritten and the entire file will be re-downloaded. If "resume" (the default), the existing file size will be compared to the size of the download (using the Content-Length header). If the existing file is smaller, then only the remaining portion will be downloaded. Otherwise, the download will be skipped.

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

output_path – The path to the downloaded archive file.

Return type

pathlib.Path

Raises

FileExistsError – If a file already exists at output_path and cannot be overwritten or resumed (see the if_exists argument).

classmethod fetch(collection_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Collection[source]

Creates a Collection instance by fetching the collection with the given ID from the Radiant MLHub API.

Parameters
  • collection_id (str) – The ID of the collection to fetch (e.g. bigearthnet_v1_source).

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

collection

Return type

Collection

fetch_item(item_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) pystac.item.Item[source]
classmethod from_dict(d: Dict[str, Any], href: Optional[str] = None, root: Optional[pystac.catalog.Catalog] = None, migrate: bool = False, preserve_dict: bool = True, *, api_key: Optional[str] = None, profile: Optional[str] = None) Collection[source]

Patches the pystac.Collection.from_dict() method so that it returns the calling class instead of always returning a pystac.Collection instance.

get_items(*, api_key: Optional[str] = None, profile: Optional[str] = None) Iterator[pystac.item.Item][source]

Note

The get_items method is not implemented for Radiant MLHub Collection instances for performance reasons. Please use the Collection.download() method to download Collection assets.

Raises

NotImplementedError

classmethod list(*, api_key: Optional[str] = None, profile: Optional[str] = None) List[Collection][source]

Returns a list of Collection instances for all collections hosted by MLHub.

See the Authentication documentation for details on how authentication is handled for this request.

Parameters
  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

collections

Return type

List[Collection]

property registry_url: Optional[str]

The URL of the registry page for this Collection. The URL is based on the DOI identifier for the collection. If the Collection does not have a "sci:doi" property then registry_url will be None.

class radiant_mlhub.models.Dataset(id: str, collections: List[Dict[str, Any]], title: Optional[str] = None, registry: Optional[str] = None, doi: Optional[str] = None, citation: Optional[str] = None, *, api_key: Optional[str] = None, profile: Optional[str] = None, **_: Any)[source]

Bases: object

Class that brings together multiple Radiant MLHub “collections” that are all considered part of a single “dataset”. For instance, the bigearthnet_v1 dataset is composed of both a source imagery collection (bigearthnet_v1_source) and a labels collection (bigearthnet_v1_labels).

id

The dataset ID.

Type

str

title

The title of the dataset (or None if the dataset has no title).

Type

str or None

registry_url

The URL to the registry page for this dataset, or None if no registry page exists.

Type

str or None

doi

The DOI identifier for this dataset, or None if there is no DOI for this dataset.

Type

str or None

citation

The citation information for this dataset, or None if there is no citation information.

Type

str or None

property collections: radiant_mlhub.models.dataset._CollectionList

List of collections associated with this dataset. The returned list has two additional attributes (source_imagery and labels) that contain the collections of each corresponding type.

Note

This is a cached property, so updating self.collection_descriptions after calling self.collections the first time will have no effect on the results. See functools.cached_property() for details on clearing the cached value.

Examples

>>> from radiant_mlhub import Dataset
>>> dataset = Dataset.fetch('bigearthnet_v1')
>>> len(dataset.collections)
2
>>> len(dataset.collections.source_imagery)
1
>>> len(dataset.collections.labels)
1

To loop through all collections:

>>> for collection in dataset.collections:
...     # Do something here

To loop through only the source imagery collections:

>>> for collection in dataset.collections.source_imagery:
...     # Do something here

To loop through only the label collections:

>>> for collection in dataset.collections.labels:
...     # Do something here
download(output_dir: Union[pathlib.Path, str], *, if_exists: str = 'resume', api_key: Optional[str] = None, profile: Optional[str] = None) List[pathlib.Path][source]

Downloads archives for all collections associated with this dataset to the given directory. Each archive will be named using the collection ID (e.g. some_collection.tar.gz). If output_dir does not exist, it will be created.

Note

Some collections may be very large and take a significant amount of time to download, depending on your connection speed.

Parameters
  • output_dir (str or pathlib.Path) – The directory into which the archives will be written.

  • if_exists (str, optional) – How to handle an existing archive at the same location. If "skip", the download will be skipped. If "overwrite", the existing file will be overwritten and the entire file will be re-downloaded. If "resume" (the default), the existing file size will be compared to the size of the download (using the Content-Length header). If the existing file is smaller, then only the remaining portion will be downloaded. Otherwise, the download will be skipped.

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

output_paths – List of paths to the downloaded archives

Return type

List[pathlib.Path]

Raises
  • IOError – If output_dir exists and is not a directory.

  • FileExistsError – If one of the archive files already exists in output_dir and cannot be overwritten or resumed (see the if_exists argument).

classmethod fetch(dataset_id_or_doi: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Dataset[source]

Creates a Dataset instance by first trying to fetch the dataset by ID, then falling back to fetching by DOI.

Parameters
  • dataset_id_or_doi (str) – The ID or DOI of the dataset to fetch (e.g. bigearthnet_v1).

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

dataset

Return type

Dataset

classmethod fetch_by_doi(dataset_doi: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Dataset[source]

Creates a Dataset instance by fetching the dataset with the given DOI from the Radiant MLHub API.

Parameters
  • dataset_doi (str) – The DOI of the dataset to fetch (e.g. 10.6084/m9.figshare.12047478.v2).

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

dataset

Return type

Dataset

classmethod fetch_by_id(dataset_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) Dataset[source]

Creates a Dataset instance by fetching the dataset with the given ID from the Radiant MLHub API.

Parameters
  • dataset_id (str) – The ID of the dataset to fetch (e.g. bigearthnet_v1).

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

dataset

Return type

Dataset

classmethod list(*, tags: Optional[Union[str, Iterable[str]]] = None, text: Optional[Union[str, Iterable[str]]] = None, api_key: Optional[str] = None, profile: Optional[str] = None) List[Dataset][source]

Returns a list of Dataset instances for all datasets hosted by MLHub.

See the Authentication documentation for details on how authentication is handled for this request.

Parameters
  • tags (str or Iterable[str], optional) – A list of tags to filter datasets by. If not None, only datasets containing all provided tags will be returned.

  • text (str or Iterable[str], optional) – A list of text phrases to filter datasets by. If not None, only datasets containing all provided phrases will be returned.

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile on using an environment variable

  • profile (str) – A profile to use when making this request.

Returns

datasets

Return type

List[Dataset]

property total_archive_size: Optional[int]

Gets the total size (in bytes) of the archives for all collections associated with this dataset. If no archives exist, returns None.

class radiant_mlhub.models.MLModel(id: str, geometry: Optional[Dict[str, Any]], bbox: Optional[List[float]], datetime: Optional[datetime.datetime], properties: Dict[str, Any], stac_extensions: Optional[List[str]] = None, href: Optional[str] = None, collection: Optional[Union[str, pystac.collection.Collection]] = None, extra_fields: Optional[Dict[str, Any]] = None, *, api_key: Optional[str] = None, profile: Optional[str] = None)[source]

Bases: pystac.item.Item

Class inheriting from pystac.Item that adds some convenience methods for listing and fetching from the Radiant MLHub API.

assets: Dict[str, Asset]

Dictionary of Asset objects, each with a unique key.

bbox: Optional[List[float]]

Bounding Box of the asset represented by this item using either 2D or 3D geometries. The length of the array is 2*n where n is the number of dimensions. Could also be None in the case of a null geometry.

collection: Optional[Collection]

Collection to which this Item belongs, if any.

collection_id: Optional[str]

The Collection ID that this item belongs to, if any.

datetime: Optional[Datetime]

Datetime associated with this item. If None, then start_datetime and end_datetime in common_metadata will supply the datetime range of the Item.

extra_fields: Dict[str, Any]

Extra fields that are part of the top-level JSON fields of the Item.

classmethod fetch(model_id: str, *, api_key: Optional[str] = None, profile: Optional[str] = None) radiant_mlhub.models.ml_model.MLModel[source]

Fetches an MLModel instance by ID.

Parameters
  • model_id (str) – The ID of the ML Model to fetch (e.g. model-cyclone-wind-estimation-torchgeo-v1).

  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

model

Return type

MLModel

classmethod from_dict(d: Dict[str, Any], href: Optional[str] = None, root: Optional[pystac.catalog.Catalog] = None, migrate: bool = False, preserve_dict: bool = True, *, api_key: Optional[str] = None, profile: Optional[str] = None) radiant_mlhub.models.ml_model.MLModel[source]

Patches the pystac.Item.from_dict() method so that it returns the calling class instead of always returning a pystac.Item instance.

geometry: Optional[Dict[str, Any]]

Defines the full footprint of the asset represented by this item, formatted according to RFC 7946, section 3.1 (GeoJSON).

id: str

Provider identifier. Unique within the STAC.

links: List[Link]

A list of Link objects representing all links associated with this Item.

classmethod list(*, api_key: Optional[str] = None, profile: Optional[str] = None) List[radiant_mlhub.models.ml_model.MLModel][source]

Returns a list of MLModel instances for all models hosted by MLHub.

See the Authentication documentation for details on how authentication is handled for this request.

Parameters
  • api_key (str) – An API key to use for this request. This will override an API key set in a profile or environment variable.

  • profile (str) – A profile to use when making this request.

Returns

models

Return type

List[MLModel]

properties: Dict[str, Any]

A dictionary of additional metadata for the Item.

session_kwargs: Dict[str, Any] = {}

stac_extensions: List[str]

List of extensions the Item implements.