libcloud.storage.drivers package

Submodules

libcloud.storage.drivers.atmos module

class libcloud.storage.drivers.atmos.AtmosConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, backoff=None, retry_delay=None)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

add_default_headers(headers)[source]

Adds default headers (such as Authorization, X-Foo-Bar) to the passed headers

Should return a dictionary.

pre_connect_hook(params, headers)[source]

A hook which is called before connecting to the remote server. This hook can perform a final manipulation on the params, headers and url parameters.

Parameters:
  • params (dict) – Request parameters.
  • headers (dict) – Request headers.
responseCls

alias of AtmosResponse

class libcloud.storage.drivers.atmos.AtmosDriver(key, secret=None, secure=True, host=None, port=None)[source]

Bases: libcloud.storage.base.StorageDriver

DEFAULT_CDN_TTL = 604800
api_name = 'atmos'
connectionCls

alias of AtmosConnection

create_container(container_name)[source]

Create a new container.

Parameters:container_name (str) – Container name.
Returns:Container instance on success.
Return type:Container
delete_container(container)[source]

Delete a container.

Parameters:container (Container) – Container instance
Returns:True on success, False otherwise.
Return type:bool
delete_object(obj)[source]

Delete an object.

Parameters:obj (Object) – Object instance.
Returns:bool True on success.
Return type:bool
download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]

Download an object to the specified destination path.

Parameters:
  • obj (Object) – Object instance.
  • destination_path (str) – Full path to a file or a directory where the incoming file will be saved.
  • overwrite_existing (bool) – True to overwrite an existing file, defaults to False.
  • delete_on_failure (bool) – True to delete a partially downloaded file if the download was not successful (hash mismatch / file size).
Returns:

True if an object has been successfully downloaded, False otherwise.

Return type:

bool
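
The overwrite_existing and delete_on_failure semantics above can be sketched as a stand-alone helper. This is illustrative only; download_to_path and its arguments are hypothetical, not the driver’s internals:

```python
import hashlib
import os

def download_to_path(chunks, destination_path, expected_md5=None,
                     overwrite_existing=False, delete_on_failure=True):
    # Refuse to clobber an existing file unless overwrite_existing is set.
    if os.path.exists(destination_path) and not overwrite_existing:
        return False
    digest = hashlib.md5()
    try:
        with open(destination_path, "wb") as fh:
            for chunk in chunks:
                digest.update(chunk)
                fh.write(chunk)
        # Treat a hash mismatch as a failed download.
        if expected_md5 is not None and digest.hexdigest() != expected_md5:
            raise ValueError("hash mismatch")
    except Exception:
        # Mirror delete_on_failure: remove the partial or corrupt file.
        if delete_on_failure and os.path.exists(destination_path):
            os.remove(destination_path)
        return False
    return True
```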

download_object_as_stream(obj, chunk_size=None)[source]

Return a generator which yields object data.

Parameters:
  • obj (Object) – Object instance
  • chunk_size (int) – Optional chunk size (in bytes).
enable_object_cdn(obj)[source]

Enable object CDN.

Parameters:obj (Object) – Object instance
Return type:bool
get_container(container_name)[source]

Return a container instance.

Parameters:container_name (str) – Container name.
Returns:Container instance.
Return type:Container
get_object(container_name, object_name)[source]

Return an object instance.

Parameters:
  • container_name (str) – Container name.
  • object_name (str) – Object name.
Returns:

Object instance.

Return type:

Object

get_object_cdn_url(obj, expiry=None, use_object=False)[source]

Return an object CDN URL.

Parameters:
  • obj (Object) – Object instance
  • expiry (str) – Expiry
  • use_object (bool) – Use object
Return type:

str

host = None
iterate_container_objects(container)[source]

Return a generator of objects for the given container.

Parameters:container (Container) – Container instance
Returns:A generator of Object instances.
Return type:generator of Object
iterate_containers()[source]

Return a generator of containers for the given account

Returns:A generator of Container instances.
Return type:generator of Container
name = 'atmos'
path = None
supports_chunked_encoding = True
upload_object(file_path, container, object_name, extra=None, verify_hash=True)[source]

Upload an object currently located on a disk.

Parameters:
  • file_path (str) – Path to the object on disk.
  • container (Container) – Destination container.
  • object_name (str) – Object name.
  • verify_hash (bool) – Verify hash
  • extra (dict) – Extra attributes (driver specific). (optional)
  • headers (dict) – (optional) Additional request headers, such as CORS headers. For example: headers = {‘Access-Control-Allow-Origin’: ‘http://mozilla.com’}
Return type:

Object

upload_object_via_stream(iterator, container, object_name, extra=None)[source]

Upload an object using an iterator.

If a provider supports it, chunked transfer encoding is used and you don’t need to know in advance the amount of data to be uploaded.

If a provider doesn’t support it, the iterator is exhausted first so that the total size of the data to be uploaded can be determined.

Note: Exhausting the iterator means that all of the data must be buffered in memory, which may result in memory exhaustion when uploading a very large object.

If the data is in a file on disk, you are advised to use the upload_object method instead; it uses os.stat to determine the file size and doesn’t need to buffer the whole object in memory.

Parameters:
  • iterator (object) – An object which implements the iterator interface.
  • container (Container) – Destination container.
  • object_name (str) – Object name.
  • extra (dict) – (optional) Extra attributes (driver specific). Note: This dictionary must contain a ‘content_type’ key which represents a content type of the stored object.
  • headers (dict) – (optional) Additional request headers, such as CORS headers. For example: headers = {‘Access-Control-Allow-Origin’: ‘http://mozilla.com’}
Return type:

object
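
The trade-off above can be sketched in a few lines. Both helpers are hypothetical, not driver code:

```python
def total_size_by_exhausting(iterator):
    # Without chunked transfer encoding the driver must know the total
    # size up front, so the whole payload ends up buffered in memory.
    chunks = [bytes(c) for c in iterator]
    return sum(len(c) for c in chunks), b"".join(chunks)

def chunks_from_file(path, chunk_size=8192):
    # For on-disk data upload_object is preferable: os.stat() gives the
    # size without reading, and chunks can be streamed one at a time.
    with open(path, "rb") as fh:
        while True:
            chunk = fh.read(chunk_size)
            if not chunk:
                break
            yield chunk
```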

website = 'http://atmosonline.com/'
exception libcloud.storage.drivers.atmos.AtmosError(code, message, driver=None)[source]

Bases: libcloud.common.types.LibcloudError

class libcloud.storage.drivers.atmos.AtmosResponse(response, connection)[source]

Bases: libcloud.common.base.XmlResponse

Parameters:
  • response (httplib.HTTPResponse) – HTTP response object. (optional)
  • connection (Connection) – Parent connection object.
parse_error()[source]

Parse the error messages.

Override in a provider’s subclass.

Returns:Parsed error.
Return type:str
success()[source]

Determine if our request was successful.

The meaning of this can be arbitrary; did we receive OK status? Did the node get created? Were we authenticated?

Return type:bool
Returns:True or False
libcloud.storage.drivers.atmos.collapse(s)[source]

libcloud.storage.drivers.auroraobjects module

class libcloud.storage.drivers.auroraobjects.AuroraObjectsStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, **kwargs)[source]

Bases: libcloud.storage.drivers.auroraobjects.BaseAuroraObjectsStorageDriver

Parameters:
  • key (str) – API key or username to be used (required)
  • secret (str) – Secret password to be used (required)
  • secure (bool) – Whether to use HTTPS or HTTP. Note: Some providers only support HTTPS, and it is on by default.
  • host (str) – Override hostname used for connections.
  • port (int) – Override port used for connections.
  • api_version (str) – Optional API version. Only used by drivers which support multiple API versions.
  • region (str) – Optional driver region. Only used by drivers which support multiple regions.
Return type:

None

connectionCls

alias of BaseAuroraObjectsConnection

enable_container_cdn(*argv)[source]

Enable container CDN.

Parameters:container (Container) – Container instance
Return type:bool
enable_object_cdn(*argv)[source]

Enable object CDN.

Parameters:obj (Object) – Object instance
Return type:bool
get_container_cdn_url(*argv)[source]

Return a container CDN URL.

Parameters:container (Container) – Container instance
Returns:A CDN URL for this container.
Return type:str
get_object_cdn_url(*argv)[source]

Return an object CDN URL.

Parameters:obj (Object) – Object instance
Returns:A CDN URL for this object.
Return type:str

libcloud.storage.drivers.azure_blobs module

class libcloud.storage.drivers.azure_blobs.AzureBlobLease(driver, object_path, use_lease)[source]

Bases: object

A class to help lease an Azure blob and renew the lease

Parameters:
  • driver (AzureStorageDriver) – The Azure storage driver that is being used
  • object_path (str) – The path of the object we need to lease
  • use_lease (bool) – Indicates if we must take a lease or not
renew()[source]

Renew the lease if it is older than a predefined time period

update_headers(headers)[source]

Update the lease id in the headers

class libcloud.storage.drivers.azure_blobs.AzureBlobsConnection(*args, **kwargs)[source]

Bases: libcloud.common.azure.AzureConnection

Represents a single connection to Azure Blobs.

The main Azure Blob Storage service uses a prefix in the hostname to distinguish between accounts, e.g. theaccount.blob.core.windows.net. However, some custom deployments of the service, such as the Azurite emulator, instead use a URL prefix such as /theaccount. To support these deployments, the parameter account_prefix must be set on the connection. This is done by instantiating the driver with arguments such as host='somewhere.tld' and key='theaccount'. To specify a custom host without an account prefix, e.g. for use-cases where the custom host implements an auditing proxy or similar, the driver can be instantiated with host='theaccount.somewhere.tld' and key=''.

Parameters:account_prefix (str) – Optional prefix identifying the storage account. Used when connecting to a custom deployment of the storage service such as Azurite or IoT Edge Storage.
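
The two addressing styles can be illustrated with a hypothetical helper. blob_url and its parameters are illustrative only; the real URL construction happens inside AzureBlobsConnection:

```python
def blob_url(account, blob_path, host=None, account_prefix=None):
    if account_prefix is not None:
        # Path-style, e.g. Azurite: custom host first, then the account
        # as a URL prefix.
        return "https://%s/%s/%s" % (host, account_prefix, blob_path)
    # Default hostname-style: the account is a subdomain prefix.
    host = host or "%s.blob.core.windows.net" % account
    return "https://%s/%s" % (host, blob_path)
```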
API_VERSION = '2016-05-31'
morph_action_hook(action)[source]
class libcloud.storage.drivers.azure_blobs.AzureBlobsStorageDriver(key, secret=None, secure=True, host=None, port=None, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver

connectionCls

alias of AzureBlobsConnection

create_container(container_name)[source]

@inherits: StorageDriver.create_container

delete_container(container)[source]

@inherits: StorageDriver.delete_container

delete_object(obj)[source]

@inherits: StorageDriver.delete_object

download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]

@inherits: StorageDriver.download_object

download_object_as_stream(obj, chunk_size=None)[source]

@inherits: StorageDriver.download_object_as_stream

ex_blob_type = 'BlockBlob'
ex_set_object_metadata(obj, meta_data)[source]

Set metadata for an object

Parameters:
  • obj (Object) – The blob object
  • meta_data (dict) – Metadata key value pairs
get_container(container_name)[source]

@inherits: StorageDriver.get_container

get_object(container_name, object_name)[source]

@inherits: StorageDriver.get_object

hash_type = 'md5'
iterate_container_objects(container, ex_prefix=None)[source]

@inherits: StorageDriver.iterate_container_objects

iterate_containers()[source]

@inherits: StorageDriver.iterate_containers

list_container_objects(container, ex_prefix=None)[source]

Return a list of objects for the given container.

Parameters:
  • container (Container) – Container instance.
  • ex_prefix (str) – Only return objects starting with ex_prefix
Returns:

A list of Object instances.

Return type:

list of Object

name = 'Microsoft Azure (blobs)'
supports_chunked_encoding = False
upload_object(file_path, container, object_name, extra=None, verify_hash=True, ex_blob_type=None, ex_use_lease=False)[source]

Upload an object currently located on a disk.

@inherits: StorageDriver.upload_object

Parameters:
  • ex_blob_type (str) – Storage class
  • ex_use_lease (bool) – Indicates if we must take a lease before upload
upload_object_via_stream(iterator, container, object_name, verify_hash=False, extra=None, ex_use_lease=False, ex_blob_type=None, ex_page_blob_size=None)[source]

@inherits: StorageDriver.upload_object_via_stream

Note that if the iterator does not support seek, the entire stream will be buffered in memory.

Parameters:
  • ex_blob_type (str) – Storage class
  • ex_page_blob_size (int) – The maximum size to which the page blob can grow
  • ex_use_lease (bool) – Indicates if we must take a lease before upload
website = 'http://windows.azure.com/'

libcloud.storage.drivers.backblaze_b2 module

Driver for Backblaze B2 service.

class libcloud.storage.drivers.backblaze_b2.BackblazeB2StorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver

Parameters:
  • key (str) – API key or username to be used (required)
  • secret (str) – Secret password to be used (required)
  • secure (bool) – Whether to use HTTPS or HTTP. Note: Some providers only support HTTPS, and it is on by default.
  • host (str) – Override hostname used for connections.
  • port (int) – Override port used for connections.
  • api_version (str) – Optional API version. Only used by drivers which support multiple API versions.
  • region (str) – Optional driver region. Only used by drivers which support multiple regions.
Return type:

None

connectionCls

alias of BackblazeB2Connection

create_container(container_name, ex_type='allPrivate')[source]

Create a new container.

Parameters:container_name (str) – Container name.
Returns:Container instance on success.
Return type:Container
delete_container(container)[source]

Delete a container.

Parameters:container (Container) – Container instance
Returns:True on success, False otherwise.
Return type:bool
delete_object(obj)[source]

Delete an object.

Parameters:obj (Object) – Object instance.
Returns:bool True on success.
Return type:bool
download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]

Download an object to the specified destination path.

Parameters:
  • obj (Object) – Object instance.
  • destination_path (str) – Full path to a file or a directory where the incoming file will be saved.
  • overwrite_existing (bool) – True to overwrite an existing file, defaults to False.
  • delete_on_failure (bool) – True to delete a partially downloaded file if the download was not successful (hash mismatch / file size).
Returns:

True if an object has been successfully downloaded, False otherwise.

Return type:

bool

download_object_as_stream(obj, chunk_size=None)[source]

Return a generator which yields object data.

Parameters:
  • obj (Object) – Object instance
  • chunk_size (int) – Optional chunk size (in bytes).
ex_get_object(object_id)[source]
ex_get_upload_data(container_id)[source]

Retrieve information used for uploading files (upload url, auth token, etc).

Return type:dict
ex_get_upload_url(container_id)[source]

Retrieve URL used for file uploads.

Return type:str
ex_hide_object(container_id, object_name)[source]
ex_list_object_versions(container_id, ex_start_file_name=None, ex_start_file_id=None, ex_max_file_count=None)[source]
get_container(container_name)[source]

Return a container instance.

Parameters:container_name (str) – Container name.
Returns:Container instance.
Return type:Container
get_object(container_name, object_name)[source]

Return an object instance.

Parameters:
  • container_name (str) – Container name.
  • object_name (str) – Object name.
Returns:

Object instance.

Return type:

Object

hash_type = 'sha1'
iterate_container_objects(container)[source]

Return a generator of objects for the given container.

Parameters:container (Container) – Container instance
Returns:A generator of Object instances.
Return type:generator of Object
iterate_containers()[source]

Return a generator of containers for the given account

Returns:A generator of Container instances.
Return type:generator of Container
name = 'Backblaze B2'
supports_chunked_encoding = False
type = 'backblaze_b2'
upload_object(file_path, container, object_name, extra=None, verify_hash=True, headers=None)[source]

Upload an object.

Note: This will overwrite a file with the same name if it already exists.

upload_object_via_stream(iterator, container, object_name, extra=None, headers=None)[source]

Upload an object.

Note: Backblaze does not yet support uploading via a stream, so this calls upload_object internally, which requires the object data to be loaded into memory at once.

website = 'https://www.backblaze.com/b2/'
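
Since hash_type is 'sha1', the checksum the driver compares when verify_hash=True can be sketched as follows. b2_sha1_of_stream is a hypothetical helper, not driver code:

```python
import hashlib

def b2_sha1_of_stream(chunks):
    # B2 identifies file content by SHA-1; accumulate the digest chunk
    # by chunk, as a driver would when verifying an upload.
    digest = hashlib.sha1()
    for chunk in chunks:
        digest.update(chunk)
    return digest.hexdigest()
```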
class libcloud.storage.drivers.backblaze_b2.BackblazeB2Connection(*args, **kwargs)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

authCls

alias of BackblazeB2AuthConnection

download_request(action, params=None)[source]
host = None
request(action, params=None, data=None, headers=None, method='GET', raw=False, include_account_id=False)[source]

Request a given action.

Basically a wrapper around the connection object’s request that does some helpful pre-processing.

Parameters:
  • action (str) – A path. This can include arguments. If included, any extra parameters are appended to the existing ones.
  • params (dict) – Optional mapping of additional parameters to send. If None, leave as an empty dict.
  • data (unicode) – A body of data to send with the request.
  • headers (dict) – Extra headers to add to the request. If None, an empty dict is used.
  • method (str) – An HTTP method such as “GET” or “POST”.
  • raw (bool) – True to perform a “raw” request, i.e. only send the headers and use the rawResponseCls class. This is used with the storage API when uploading a file.
  • stream (bool) – True to return an iterator in Response.iter_content and allow streaming of the response data (for downloading large files)
Returns:

A Response instance.

Return type:

Response instance

responseCls

alias of BackblazeB2Response

secure = True
upload_request(action, headers, upload_host, auth_token, data)[source]
class libcloud.storage.drivers.backblaze_b2.BackblazeB2AuthConnection(*args, **kwargs)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

authenticate(force=False)[source]
Parameters:force (bool) – Force authentication even if we have already obtained a token.
host = 'api.backblaze.com'
responseCls

alias of BackblazeB2Response

secure = True

libcloud.storage.drivers.cloudfiles module

class libcloud.storage.drivers.cloudfiles.ChunkStreamReader(file_path, start_block, end_block, chunk_size)[source]

Bases: object

next()[source]
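
As a hedged sketch, a reader like this yields fixed-size chunks from the byte range [start_block, end_block) of a file. read_block_range is a hypothetical stand-in, not the class’s implementation:

```python
def read_block_range(path, start_block, end_block, chunk_size):
    # Seek to the start of the block range, then emit chunk_size pieces
    # until the range is exhausted (the last chunk may be shorter).
    with open(path, "rb") as fh:
        fh.seek(start_block)
        remaining = end_block - start_block
        while remaining > 0:
            chunk = fh.read(min(chunk_size, remaining))
            if not chunk:
                break
            remaining -= len(chunk)
            yield chunk
```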
class libcloud.storage.drivers.cloudfiles.CloudFilesConnection(user_id, key, secure=True, use_internal_url=False, **kwargs)[source]

Bases: libcloud.storage.drivers.cloudfiles.OpenStackSwiftConnection

Base connection class for the Cloudfiles driver.

auth_url = 'https://identity.api.rackspacecloud.com'
get_endpoint()[source]

Selects the endpoint to use based on provider specific values, or overrides passed in by the user when setting up the driver.

Returns:URL of the relevant endpoint for the driver
rawResponseCls

alias of CloudFilesRawResponse

request(action, params=None, data='', headers=None, method='GET', raw=False, cdn_request=False)[source]

Request a given action.

Basically a wrapper around the connection object’s request that does some helpful pre-processing.

Parameters:
  • action (str) – A path. This can include arguments. If included, any extra parameters are appended to the existing ones.
  • params (dict) – Optional mapping of additional parameters to send. If None, leave as an empty dict.
  • data (unicode) – A body of data to send with the request.
  • headers (dict) – Extra headers to add to the request. If None, an empty dict is used.
  • method (str) – An HTTP method such as “GET” or “POST”.
  • raw (bool) – True to perform a “raw” request, i.e. only send the headers and use the rawResponseCls class. This is used with the storage API when uploading a file.
  • stream (bool) – True to return an iterator in Response.iter_content and allow streaming of the response data (for downloading large files)
Returns:

A Response instance.

Return type:

Response instance

responseCls

alias of CloudFilesResponse

class libcloud.storage.drivers.cloudfiles.CloudFilesRawResponse(connection, response=None)[source]

Bases: libcloud.storage.drivers.cloudfiles.CloudFilesResponse, libcloud.common.base.RawResponse

Parameters:connection (Connection) – Parent connection object.
class libcloud.storage.drivers.cloudfiles.CloudFilesResponse(response, connection)[source]

Bases: libcloud.common.base.Response

Parameters:
  • response (httplib.HTTPResponse) – HTTP response object. (optional)
  • connection (Connection) – Parent connection object.
parse_body()[source]

Parse response body.

Override in a provider’s subclass.

Returns:Parsed body.
Return type:str
success()[source]

Determine if our request was successful.

The meaning of this can be arbitrary; did we receive OK status? Did the node get created? Were we authenticated?

Return type:bool
Returns:True or False
valid_response_codes = [<HTTPStatus.NOT_FOUND: 404>, <HTTPStatus.CONFLICT: 409>]
class libcloud.storage.drivers.cloudfiles.CloudFilesStorageDriver(key, secret=None, secure=True, host=None, port=None, region='ord', use_internal_url=False, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver, libcloud.common.openstack.OpenStackDriverMixin

CloudFiles driver.

@inherits: StorageDriver.__init__

Parameters:region (str) – ID of the region which should be used.
connectionCls

alias of CloudFilesConnection

create_container(container_name)[source]

Create a new container.

Parameters:container_name (str) – Container name.
Returns:Container instance on success.
Return type:Container
delete_container(container)[source]

Delete a container.

Parameters:container (Container) – Container instance
Returns:True on success, False otherwise.
Return type:bool
delete_object(obj)[source]

Delete an object.

Parameters:obj (Object) – Object instance.
Returns:bool True on success.
Return type:bool
download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]

Download an object to the specified destination path.

Parameters:
  • obj (Object) – Object instance.
  • destination_path (str) – Full path to a file or a directory where the incoming file will be saved.
  • overwrite_existing (bool) – True to overwrite an existing file, defaults to False.
  • delete_on_failure (bool) – True to delete a partially downloaded file if the download was not successful (hash mismatch / file size).
Returns:

True if an object has been successfully downloaded, False otherwise.

Return type:

bool

download_object_as_stream(obj, chunk_size=None)[source]

Return a generator which yields object data.

Parameters:
  • obj (Object) – Object instance
  • chunk_size (int) – Optional chunk size (in bytes).
enable_container_cdn(container, ex_ttl=None)[source]

@inherits: StorageDriver.enable_container_cdn

Parameters:ex_ttl (int) – cache time to live
ex_enable_static_website(container, index_file='index.html')[source]

Enable serving a static website.

Parameters:
  • container (Container) – Container instance
  • index_file (str) – Name of the object which becomes an index page for every sub-directory in this container.

Return type:bool
ex_get_meta_data()[source]

Get meta data

Return type:dict
ex_get_object_temp_url(obj, method='GET', timeout=60)[source]

Create a temporary URL to allow others to retrieve or put objects in your Cloud Files account for as long or as short a time as you wish. This method is specifically for allowing users to retrieve or update an object.

Parameters:
  • obj (Object) – The object that you wish to make temporarily public
  • method (str) – Which method you would like to allow, ‘PUT’ or ‘GET’
  • timeout (int) – Time (in seconds) after which you want the TempURL to expire.

Return type:bool
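
Cloud Files TempURLs follow the OpenStack Swift scheme: an HMAC-SHA1 over the method, expiry timestamp and object path, signed with the X-Account-Meta-Temp-URL-Key. A minimal sketch assuming that scheme; swift_temp_url and its arguments are illustrative:

```python
import hmac
import time
from hashlib import sha1

def swift_temp_url(key, method, path, timeout=60, _now=None):
    # Expiry is an absolute Unix timestamp; _now is injectable for tests.
    expires = int((_now if _now is not None else time.time()) + timeout)
    # Swift signs "METHOD\nexpires\npath" with the account's temp URL key.
    body = "%s\n%d\n%s" % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%d" % (path, sig, expires)
```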
ex_multipart_upload_object(file_path, container, object_name, chunk_size=33554432, extra=None, verify_hash=True)[source]
ex_purge_object_from_cdn(obj, email=None)[source]

Purge edge cache for the specified object.

Parameters:email (str) – Email where a notification will be sent when the job completes. (optional)

ex_set_account_metadata_temp_url_key(key)[source]

Set the metadata header X-Account-Meta-Temp-URL-Key on your Cloud Files account.

Parameters:key (str) – X-Account-Meta-Temp-URL-Key
Return type:bool
ex_set_error_page(container, file_name='error.html')[source]

Set a custom error page which is displayed if a file is not found and serving of a static website is enabled.

Parameters:
  • container (Container) – Container instance
  • file_name (str) – Name of the object which becomes the error page.
Return type:

bool

get_container(container_name)[source]

Return a container instance.

Parameters:container_name (str) – Container name.
Returns:Container instance.
Return type:Container
get_container_cdn_url(container, ex_ssl_uri=False)[source]

Return a container CDN URL.

Parameters:container (Container) – Container instance
Returns:A CDN URL for this container.
Return type:str
get_object(container_name, object_name)[source]

Return an object instance.

Parameters:
  • container_name (str) – Container name.
  • object_name (str) – Object name.
Returns:

Object instance.

Return type:

Object

get_object_cdn_url(obj)[source]

Return an object CDN URL.

Parameters:obj (Object) – Object instance
Returns:A CDN URL for this object.
Return type:str
hash_type = 'md5'
iterate_container_objects(container, ex_prefix=None)[source]

Return a generator of objects for the given container.

Parameters:
  • container (Container) – Container instance
  • ex_prefix (str) – Only get objects with names starting with ex_prefix
Returns:

A generator of Object instances.

Return type:

generator of Object

iterate_containers()[source]

Return a generator of containers for the given account

Returns:A generator of Container instances.
Return type:generator of Container
list_container_objects(container, ex_prefix=None)[source]

Return a list of objects for the given container.

Parameters:
  • container (Container) – Container instance.
  • ex_prefix (str) – Only get objects with names starting with ex_prefix
Returns:

A list of Object instances.

Return type:

list of Object

classmethod list_regions()[source]
name = 'CloudFiles'
supports_chunked_encoding = True
upload_object(file_path, container, object_name, extra=None, verify_hash=True, headers=None)[source]

Upload an object.

Note: This will overwrite a file with the same name if it already exists.

upload_object_via_stream(iterator, container, object_name, extra=None, headers=None)[source]

Upload an object using an iterator.

If a provider supports it, chunked transfer encoding is used and you don’t need to know in advance the amount of data to be uploaded.

If a provider doesn’t support it, the iterator is exhausted first so that the total size of the data to be uploaded can be determined.

Note: Exhausting the iterator means that all of the data must be buffered in memory, which may result in memory exhaustion when uploading a very large object.

If the data is in a file on disk, you are advised to use the upload_object method instead; it uses os.stat to determine the file size and doesn’t need to buffer the whole object in memory.

Parameters:
  • iterator (object) – An object which implements the iterator interface.
  • container (Container) – Destination container.
  • object_name (str) – Object name.
  • extra (dict) – (optional) Extra attributes (driver specific). Note: This dictionary must contain a ‘content_type’ key which represents a content type of the stored object.
  • headers (dict) – (optional) Additional request headers, such as CORS headers. For example: headers = {‘Access-Control-Allow-Origin’: ‘http://mozilla.com’}
Return type:

object

website = 'http://www.rackspace.com/'
class libcloud.storage.drivers.cloudfiles.FileChunkReader(file_path, chunk_size)[source]

Bases: object

next()[source]
class libcloud.storage.drivers.cloudfiles.OpenStackSwiftConnection(user_id, key, secure=True, **kwargs)[source]

Bases: libcloud.common.openstack.OpenStackBaseConnection

Connection class for the OpenStack Swift endpoint.

auth_url = 'https://identity.api.rackspacecloud.com'
get_endpoint(*args, **kwargs)[source]

Selects the endpoint to use based on provider specific values, or overrides passed in by the user when setting up the driver.

Returns:URL of the relevant endpoint for the driver
rawResponseCls

alias of CloudFilesRawResponse

request(action, params=None, data='', headers=None, method='GET', raw=False, cdn_request=False)[source]

Request a given action.

Basically a wrapper around the connection object’s request that does some helpful pre-processing.

Parameters:
  • action (str) – A path. This can include arguments. If included, any extra parameters are appended to the existing ones.
  • params (dict) – Optional mapping of additional parameters to send. If None, leave as an empty dict.
  • data (unicode) – A body of data to send with the request.
  • headers (dict) – Extra headers to add to the request. If None, an empty dict is used.
  • method (str) – An HTTP method such as “GET” or “POST”.
  • raw (bool) – True to perform a “raw” request, i.e. only send the headers and use the rawResponseCls class. This is used with the storage API when uploading a file.
  • stream (bool) – True to return an iterator in Response.iter_content and allow streaming of the response data (for downloading large files)
Returns:

A Response instance.

Return type:

Response instance

responseCls

alias of CloudFilesResponse

class libcloud.storage.drivers.cloudfiles.OpenStackSwiftStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, **kwargs)[source]

Bases: libcloud.storage.drivers.cloudfiles.CloudFilesStorageDriver

Storage driver for the OpenStack Swift.

connectionCls

alias of OpenStackSwiftConnection

name = 'OpenStack Swift'
type = 'cloudfiles_swift'

libcloud.storage.drivers.digitalocean_spaces module

class libcloud.storage.drivers.digitalocean_spaces.DigitalOceanSpacesStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region='nyc3', **kwargs)[source]

Bases: libcloud.storage.drivers.s3.BaseS3StorageDriver

name = 'DigitalOcean Spaces'
supports_chunked_encoding = False
supports_s3_multipart_upload = True
website = 'https://www.digitalocean.com/products/object-storage/'

libcloud.storage.drivers.dummy module

class libcloud.storage.drivers.dummy.DummyFileObject(yield_count=5, chunk_len=10)[source]

Bases: _io.FileIO

read(size)[source]

Read at most size bytes, returned as bytes.

Only makes one system call, so less data may be returned than requested. In non-blocking mode, returns None if no data is available. Return an empty bytes object at EOF.

class libcloud.storage.drivers.dummy.DummyIterator(data=None)[source]

Bases: object

get_md5_hash()[source]
next()[source]
class libcloud.storage.drivers.dummy.DummyStorageDriver(api_key, api_secret)[source]

Bases: libcloud.storage.base.StorageDriver

Dummy Storage driver.

>>> from libcloud.storage.drivers.dummy import DummyStorageDriver
>>> driver = DummyStorageDriver('key', 'secret')
>>> container = driver.create_container(container_name='test container')
>>> container
<Container: name=test container, provider=Dummy Storage Provider>
>>> container.name
'test container'
>>> container.extra['object_count']
0
Parameters:
  • api_key (str) – API key or username to be used (required)
  • api_secret (str) – Secret password to be used (required)
Return type:

None

create_container(container_name)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> container = driver.create_container(
...    container_name='test container 1')
... #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
ContainerAlreadyExistsError:

@inherits: StorageDriver.create_container

delete_container(container)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container = Container(name = 'test container',
...    extra={'object_count': 0}, driver=driver)
>>> driver.delete_container(container=container)
... #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
ContainerDoesNotExistError:
>>> container = driver.create_container(
...      container_name='test container 1')
... #doctest: +IGNORE_EXCEPTION_DETAIL
>>> len(driver._containers)
1
>>> driver.delete_container(container=container)
True
>>> len(driver._containers)
0
>>> container = driver.create_container(
...    container_name='test container 1')
... #doctest: +IGNORE_EXCEPTION_DETAIL
>>> obj = container.upload_object_via_stream(
...   object_name='test object', iterator=DummyFileObject(5, 10),
...   extra={})
>>> driver.delete_container(container=container)
... #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
ContainerIsNotEmptyError:

@inherits: StorageDriver.delete_container

delete_object(obj)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container = driver.create_container(
...   container_name='test container 1')
... #doctest: +IGNORE_EXCEPTION_DETAIL
>>> obj = container.upload_object_via_stream(object_name='test object',
...   iterator=DummyFileObject(5, 10), extra={})
>>> obj #doctest: +ELLIPSIS
<Object: name=test object, size=50, ...>
>>> container.delete_object(obj=obj)
True
>>> obj = Object(name='test object 2',
...    size=1000, hash=None, extra=None,
...    meta_data=None, container=container,driver=None)
>>> container.delete_object(obj=obj) #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
ObjectDoesNotExistError:

@inherits: StorageDriver.delete_object

download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]

Download an object to the specified destination path.

Parameters:
  • obj (Object) – Object instance.
  • destination_path (str) – Full path to a file or a directory where the incoming file will be saved.
  • overwrite_existing (bool) – True to overwrite an existing file, defaults to False.
  • delete_on_failure (bool) – True to delete a partially downloaded file if the download was not successful (hash mismatch / file size).
Returns:

True if an object has been successfully downloaded, False otherwise.

Return type:

bool
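The overwrite_existing and delete_on_failure flags imply a write-then-verify flow: write the incoming data, check its hash, and remove the partial file on mismatch. A minimal stdlib sketch of those semantics (the function name and the MD5-based verify step are assumptions, not the libcloud implementation):

```python
import hashlib
import os


def download_with_cleanup(chunks, destination_path, expected_md5,
                          overwrite_existing=False, delete_on_failure=True):
    """Write chunks to destination_path, verifying an MD5 hash.

    Illustrative sketch of the documented flag semantics.
    """
    if os.path.exists(destination_path) and not overwrite_existing:
        raise FileExistsError(destination_path)

    digest = hashlib.md5()
    with open(destination_path, "wb") as fp:
        for chunk in chunks:
            digest.update(chunk)
            fp.write(chunk)

    if digest.hexdigest() != expected_md5:
        # Hash mismatch: treat as a failed download
        if delete_on_failure:
            os.remove(destination_path)
        return False
    return True
```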

download_object_as_stream(obj, chunk_size=None)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container = driver.create_container(
...   container_name='test container 1')
... #doctest: +IGNORE_EXCEPTION_DETAIL
>>> obj = container.upload_object_via_stream(object_name='test object',
...    iterator=DummyFileObject(5, 10), extra={})
>>> stream = container.download_object_as_stream(obj)
>>> stream #doctest: +ELLIPSIS
<...closed...>

@inherits: StorageDriver.download_object_as_stream

get_container(container_name)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> driver.get_container('unknown') #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
ContainerDoesNotExistError:
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> container.name
'test container 1'
>>> driver.get_container('test container 1')
<Container: name=test container 1, provider=Dummy Storage Provider>

@inherits: StorageDriver.get_container

get_container_cdn_url(container)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> driver.get_container('unknown') #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
ContainerDoesNotExistError:
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> container.name
'test container 1'
>>> container.get_cdn_url()
'http://www.test.com/container/test_container_1'

@inherits: StorageDriver.get_container_cdn_url

get_meta_data()[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> driver.get_meta_data()['object_count']
0
>>> driver.get_meta_data()['container_count']
0
>>> driver.get_meta_data()['bytes_used']
0
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container_name = 'test container 2'
>>> container = driver.create_container(container_name=container_name)
>>> obj = container.upload_object_via_stream(
...  object_name='test object', iterator=DummyFileObject(5, 10),
...  extra={})
>>> driver.get_meta_data()['object_count']
1
>>> driver.get_meta_data()['container_count']
2
>>> driver.get_meta_data()['bytes_used']
50
Return type:dict
get_object(container_name, object_name)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> driver.get_object('unknown', 'unknown')
... #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
ContainerDoesNotExistError:
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> driver.get_object(
...  'test container 1', 'unknown') #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
ObjectDoesNotExistError:
>>> obj = container.upload_object_via_stream(object_name='test object',
...      iterator=DummyFileObject(5, 10), extra={})
>>> obj.name
'test object'
>>> obj.size
50

@inherits: StorageDriver.get_object

get_object_cdn_url(obj)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> obj = container.upload_object_via_stream(
...      object_name='test object 5',
...      iterator=DummyFileObject(5, 10), extra={})
>>> obj.name
'test object 5'
>>> obj.get_cdn_url()
'http://www.test.com/object/test_object_5'

@inherits: StorageDriver.get_object_cdn_url

iterate_containers()[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> list(driver.iterate_containers())
[]
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> container.name
'test container 1'
>>> container_name = 'test container 2'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 2, provider=Dummy Storage Provider>
>>> container = driver.create_container(
...  container_name='test container 2')
... #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
ContainerAlreadyExistsError:
>>> container_list=list(driver.iterate_containers())
>>> sorted([c.name for c in container_list])
['test container 1', 'test container 2']

@inherits: StorageDriver.iterate_containers

list_container_objects(container, ex_prefix=None)[source]

Return a list of objects for the given container.

Parameters:
  • container (Container) – Container instance.
  • ex_prefix (str) – Filter objects starting with a prefix.
Returns:

A list of Object instances.

Return type:

list of Object

name = 'Dummy Storage Provider'
upload_object(file_path, container, object_name, extra=None, file_hash=None)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container.upload_object(file_path='/tmp/inexistent.file',
...     object_name='test') #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
LibcloudError:
>>> file_path = path = os.path.abspath(__file__)
>>> file_size = os.path.getsize(file_path)
>>> obj = container.upload_object(file_path=file_path,
...                               object_name='test')
>>> obj #doctest: +ELLIPSIS
<Object: name=test, size=...>
>>> obj.size == file_size
True

@inherits: StorageDriver.upload_object :param file_hash: File hash :type file_hash: str

upload_object_via_stream(iterator, container, object_name, extra=None)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container = driver.create_container(
...    container_name='test container 1')
... #doctest: +IGNORE_EXCEPTION_DETAIL
>>> obj = container.upload_object_via_stream(
...   object_name='test object', iterator=DummyFileObject(5, 10),
...   extra={})
>>> obj #doctest: +ELLIPSIS
<Object: name=test object, size=50, ...>

@inherits: StorageDriver.upload_object_via_stream

website = 'http://example.com'

libcloud.storage.drivers.google_storage module

class libcloud.storage.drivers.google_storage.ContainerPermissions[source]

Bases: object

NONE = 0
OWNER = 3
READER = 1
WRITER = 2
values = ['NONE', 'READER', 'WRITER', 'OWNER']
class libcloud.storage.drivers.google_storage.GCSResponse(response, connection)[source]

Bases: libcloud.common.google.GoogleResponse

Parameters:
  • response (httplib.HTTPResponse) – HTTP response object. (optional)
  • connection (Connection) – Parent connection object.
class libcloud.storage.drivers.google_storage.GoogleStorageConnection(user_id, key, secure=True, auth_type=None, credential_file=None, **kwargs)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

Represents a single connection to the Google storage API endpoint.

This can either authenticate via the Google OAuth2 methods or via the S3 HMAC interoperability method.

PROJECT_ID_HEADER = 'x-goog-project-id'
add_default_headers(headers)[source]

Adds default headers (such as Authorization, X-Foo-Bar) to the passed headers

Should return a dictionary.

get_project()[source]
host = 'storage.googleapis.com'
pre_connect_hook(params, headers)[source]

A hook which is called before connecting to the remote server. This hook can perform a final manipulation on the params, headers and url parameters.

Parameters:
  • params (dict) – Request parameters.
  • headers (dict) – Request headers.
rawResponseCls

alias of libcloud.storage.drivers.s3.S3RawResponse

responseCls

alias of libcloud.storage.drivers.s3.S3Response

class libcloud.storage.drivers.google_storage.GoogleStorageDriver(key, secret=None, project=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.BaseS3StorageDriver

Driver for Google Cloud Storage.

Can authenticate via standard Google Cloud methods (Service Accounts, Installed App credentials, and GCE instance service accounts)

Examples:

Service Accounts:

driver = GoogleStorageDriver(key=client_email, secret=private_key, ...)

Installed Application:

driver = GoogleStorageDriver(key=client_id, secret=client_secret, ...)

From GCE instance:

driver = GoogleStorageDriver(key=foo, secret=bar, ...)

Can also authenticate via Google Cloud Storage’s S3 HMAC interoperability API. S3 user keys are 20 alphanumeric characters, starting with GOOG.

Example:

driver = GoogleStorageDriver(key='GOOG0123456789ABCXYZ',
                             secret=key_secret)
connectionCls

alias of GoogleStorageConnection

ex_delete_permissions(container_name, object_name=None, entity=None)[source]

Delete permissions for an ACL entity on a container or object.

Parameters:
  • container_name (str) – The container name.
  • object_name (str) – The object name. Optional. Not providing an object will delete a container permission.
  • entity (str or None) – The entity whose permission will be deleted. Optional. If not provided, the permission of the authenticated user will be deleted, when using an OAuth2 authentication scheme.
ex_get_permissions(container_name, object_name=None)[source]

Return the permissions for the currently authenticated user.

Parameters:
  • container_name (str) – The container name.
  • object_name (str or None) – The object name. Optional. Not providing an object will return only container permissions.
Returns:

A tuple of container and object permissions.

Return type:

tuple of (int, int or None) from ContainerPermissions and ObjectPermissions, respectively.
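The returned tuple can be mapped back to names using the integer constants defined by ContainerPermissions and ObjectPermissions on this page (the helper function itself is illustrative, not part of the driver):

```python
# Integer permission levels, as defined by the classes documented here.
CONTAINER_VALUES = ['NONE', 'READER', 'WRITER', 'OWNER']  # indices 0..3
OBJECT_VALUES = ['NONE', 'READER', 'OWNER']               # indices 0..2


def describe_permissions(perms):
    """Turn an (int, int or None) tuple from ex_get_permissions into names."""
    container_perm, object_perm = perms
    container_name = CONTAINER_VALUES[container_perm]
    object_name = (OBJECT_VALUES[object_perm]
                   if object_perm is not None else None)
    return container_name, object_name
```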

ex_set_permissions(container_name, object_name=None, entity=None, role=None)[source]

Set the permissions for an ACL entity on a container or an object.

Parameters:
  • container_name (str) – The container name.
  • object_name (str) – The object name. Optional. Not providing an object will apply the acl to the container.
  • entity (str) – The entity to which the role will be applied. Optional. If not provided, the role will be applied to the authenticated user, if using an OAuth2 authentication scheme.
  • role (int from ContainerPermissions or ObjectPermissions or str.) – The permission/role to set on the entity.
Raises:

ValueError – If no entity was given, but was required. Or if the role isn’t valid for the bucket or object.

hash_type = 'md5'
http_vendor_prefix = 'x-goog'
jsonConnectionCls

alias of GoogleStorageJSONConnection

name = 'Google Cloud Storage'
namespace = 'http://doc.s3.amazonaws.com/2006-03-01'
supports_chunked_encoding = False
supports_s3_multipart_upload = False
website = 'http://cloud.google.com/storage'
class libcloud.storage.drivers.google_storage.GoogleStorageJSONConnection(user_id, key, secure=True, auth_type=None, credential_file=None, **kwargs)[source]

Bases: libcloud.storage.drivers.google_storage.GoogleStorageConnection

Represents a single connection to the Google storage JSON API endpoint.

This can either authenticate via the Google OAuth2 methods or via the S3 HMAC interoperability method.

add_default_headers(headers)[source]

Adds default headers (such as Authorization, X-Foo-Bar) to the passed headers

Should return a dictionary.

host = 'www.googleapis.com'
rawResponseCls = None
responseCls

alias of GCSResponse

class libcloud.storage.drivers.google_storage.ObjectPermissions[source]

Bases: object

NONE = 0
OWNER = 2
READER = 1
values = ['NONE', 'READER', 'OWNER']

libcloud.storage.drivers.ktucloud module

class libcloud.storage.drivers.ktucloud.KTUCloudStorageConnection(user_id, key, secure=True, use_internal_url=False, **kwargs)[source]

Bases: libcloud.storage.drivers.cloudfiles.CloudFilesConnection

Connection class for the KT UCloud Storage endpoint.

auth_url = 'https://ssproxy.ucloudbiz.olleh.com/auth/v1.0'
get_endpoint()[source]

Selects the endpoint to use based on provider specific values, or overrides passed in by the user when setting up the driver.

Returns:url of the relevant endpoint for the driver
class libcloud.storage.drivers.ktucloud.KTUCloudStorageDriver(key, secret=None, secure=True, host=None, port=None, region='ord', use_internal_url=False, **kwargs)[source]

Bases: libcloud.storage.drivers.cloudfiles.CloudFilesStorageDriver

CloudFiles storage driver for the KT UCloud endpoint.

@inherits: StorageDriver.__init__

Parameters:region (str) – ID of the region which should be used.
connectionCls

alias of KTUCloudStorageConnection

name = 'KTUCloud Storage'
type = 'ktucloud'

libcloud.storage.drivers.local module

libcloud.storage.drivers.nimbus module

class libcloud.storage.drivers.nimbus.NimbusConnection(*args, **kwargs)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

host = 'nimbus.io'
pre_connect_hook(params, headers)[source]

A hook which is called before connecting to the remote server. This hook can perform a final manipulation on the params, headers and url parameters.

Parameters:
  • params (dict) – Request parameters.
  • headers (dict) – Request headers.
responseCls

alias of NimbusResponse

class libcloud.storage.drivers.nimbus.NimbusResponse(response, connection)[source]

Bases: libcloud.common.base.JsonResponse

Parameters:
  • response (httplib.HTTPResponse) – HTTP response object. (optional)
  • connection (Connection) – Parent connection object.
parse_error()[source]

Parse the error messages.

Override in a provider’s subclass.

Returns:Parsed error.
Return type:str
success()[source]

Determine if our request was successful.

The meaning of this can be arbitrary; did we receive OK status? Did the node get created? Were we authenticated?

Return type:bool
Returns:True or False
valid_response_codes = [<HTTPStatus.OK: 200>, <HTTPStatus.NOT_FOUND: 404>, <HTTPStatus.CONFLICT: 409>, <HTTPStatus.BAD_REQUEST: 400>]
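The valid_response_codes list above suggests that success() accepts any status the driver knows how to parse, even error statuses like 404 or 409. A hedged sketch of that check (the class is a stand-in, not NimbusResponse itself):

```python
from http import HTTPStatus

# Mirrors the valid_response_codes list documented above.
VALID_RESPONSE_CODES = [HTTPStatus.OK, HTTPStatus.NOT_FOUND,
                        HTTPStatus.CONFLICT, HTTPStatus.BAD_REQUEST]


class ResponseSketch:
    """Illustrative stand-in for NimbusResponse.success()."""

    def __init__(self, status):
        self.status = status

    def success(self):
        # The request is considered handled if the status is one the
        # driver can parse, even when it signals an application error.
        return self.status in VALID_RESPONSE_CODES
```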
class libcloud.storage.drivers.nimbus.NimbusStorageDriver(*args, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver

connectionCls

alias of NimbusConnection

create_container(container_name)[source]

Create a new container.

Parameters:container_name (str) – Container name.
Returns:Container instance on success.
Return type:Container
iterate_containers()[source]

Return a generator of containers for the given account

Returns:A generator of Container instances.
Return type:generator of Container
name = 'Nimbus.io'
website = 'https://nimbus.io/'

libcloud.storage.drivers.ninefold module

class libcloud.storage.drivers.ninefold.NinefoldStorageDriver(key, secret=None, secure=True, host=None, port=None)[source]

Bases: libcloud.storage.drivers.atmos.AtmosDriver

host = 'api.ninefold.com'
name = 'Ninefold'
path = '/storage/v1.0'
type = 'ninefold'
website = 'http://ninefold.com/'

libcloud.storage.drivers.oss module

class libcloud.storage.drivers.oss.OSSStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver

Parameters:
  • key (str) – API key or username to be used (required)
  • secret (str) – Secret password to be used (required)
  • secure (bool) – Whether to use HTTPS or HTTP. Note: Some providers only support HTTPS, and it is on by default.
  • host (str) – Override hostname used for connections.
  • port (int) – Override port used for connections.
  • api_version (str) – Optional API version. Only used by drivers which support multiple API versions.
  • region (str) – Optional driver region. Only used by drivers which support multiple regions.
Return type:

None

connectionCls

alias of OSSConnection

create_container(container_name, ex_location=None)[source]

@inherits StorageDriver.create_container

Parameters:ex_location – The location in which to create the container
delete_container(container)[source]

Delete a container.

Parameters:container (Container) – Container instance
Returns:True on success, False otherwise.
Return type:bool
delete_object(obj)[source]

Delete an object.

Parameters:obj (Object) – Object instance.
Returns:bool True on success.
Return type:bool
download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]

Download an object to the specified destination path.

Parameters:
  • obj (Object) – Object instance.
  • destination_path (str) – Full path to a file or a directory where the incoming file will be saved.
  • overwrite_existing (bool) – True to overwrite an existing file, defaults to False.
  • delete_on_failure (bool) – True to delete a partially downloaded file if the download was not successful (hash mismatch / file size).
Returns:

True if an object has been successfully downloaded, False otherwise.

Return type:

bool

download_object_as_stream(obj, chunk_size=None)[source]

Return a generator which yields object data.

Parameters:
  • obj (Object) – Object instance
  • chunk_size (int) – Optional chunk size (in bytes).
ex_abort_all_multipart_uploads(container, prefix=None)[source]

Extension method for removing all partially completed OSS multipart uploads.

Parameters:
  • container (Container) – The container holding the uploads
  • prefix (str) – Delete only uploads of objects with this prefix
ex_iterate_multipart_uploads(container, prefix=None, delimiter=None, max_uploads=1000)[source]

Extension method for listing all in-progress OSS multipart uploads.

Each multipart upload which has not been committed or aborted is considered in-progress.

Parameters:
  • container (Container) – The container holding the uploads
  • prefix (str) – Print only uploads of objects with this prefix
  • delimiter (str) – The object/key names are grouped based on being split by this delimiter
  • max_uploads (int) – The maximum number of upload items returned in one request
Returns:

A generator of OSSMultipartUpload instances.

Return type:

generator of OSSMultipartUpload

get_container(container_name)[source]

Return a container instance.

Parameters:container_name (str) – Container name.
Returns:Container instance.
Return type:Container
get_object(container_name, object_name)[source]

Return an object instance.

Parameters:
  • container_name (str) – Container name.
  • object_name (str) – Object name.
Returns:

Object instance.

Return type:

Object

hash_type = 'md5'
http_vendor_prefix = 'x-oss-'
iterate_container_objects(container, ex_prefix=None)[source]

Return a generator of objects for the given container.

Parameters:
  • container (Container) – Container instance
  • ex_prefix (str) – Only return objects starting with ex_prefix
Returns:

A generator of Object instances.

Return type:

generator of Object
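The ex_prefix filter can be pictured as a generator yielding only keys that start with the prefix. In the real driver this filtering is pushed down to the storage API as a query parameter; the local sketch below only illustrates the semantics:

```python
def iterate_with_prefix(object_names, ex_prefix=None):
    """Yield object names, optionally restricted to a key prefix.

    Illustrative local version of the ex_prefix behaviour; drivers
    perform this filtering server-side.
    """
    for name in object_names:
        if ex_prefix is None or name.startswith(ex_prefix):
            yield name
```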

iterate_containers()[source]

Return a generator of containers for the given account

Returns:A generator of Container instances.
Return type:generator of Container
list_container_objects(container, ex_prefix=None)[source]

Return a list of objects for the given container.

Parameters:
  • container (Container) – Container instance.
  • ex_prefix (str) – Only return objects starting with ex_prefix
Returns:

A list of Object instances.

Return type:

list of Object

name = 'Aliyun OSS'
namespace = None
supports_chunked_encoding = False
supports_multipart_upload = True
upload_object(file_path, container, object_name, extra=None, verify_hash=True, headers=None)[source]

Upload an object currently located on a disk.

Parameters:
  • file_path (str) – Path to the object on disk.
  • container (Container) – Destination container.
  • object_name (str) – Object name.
  • verify_hash (bool) – Verify hash
  • extra (dict) – Extra attributes (driver specific). (optional)
  • headers (dict) – (optional) Additional request headers, such as CORS headers. For example: headers = {'Access-Control-Allow-Origin': 'http://mozilla.com'}
Return type:

Object

upload_object_via_stream(iterator, container, object_name, extra=None, headers=None)[source]

Upload an object using an iterator.

If a provider supports it, chunked transfer encoding is used and you don’t need to know in advance the amount of data to be uploaded.

If a provider doesn't support it, the iterator is exhausted first so that the total size of the data to be uploaded can be determined.

Note: Exhausting the iterator means the whole payload must be buffered in memory, which may lead to memory exhaustion when uploading a very large object.

If the file is located on disk, you are advised to use the upload_object method instead: it determines the file size with a stat call and does not need to buffer the whole object in memory.

Parameters:
  • iterator (object) – An object which implements the iterator interface.
  • container (Container) – Destination container.
  • object_name (str) – Object name.
  • extra (dict) – (optional) Extra attributes (driver specific). Note: This dictionary must contain a 'content_type' key which represents the content type of the stored object.
  • headers (dict) – (optional) Additional request headers, such as CORS headers. For example: headers = {'Access-Control-Allow-Origin': 'http://mozilla.com'}
Return type:

object
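The fallback described above, exhausting the iterator to learn the total size, can be sketched in a few lines of stdlib Python (illustrative only; this is the step that makes very large uploads memory-hungry):

```python
import io


def buffer_iterator(iterator):
    """Exhaust an iterator of byte chunks into memory.

    Returns (total_size, readable_stream). Mirrors the documented
    fallback: the whole payload is held in memory, so it is
    unsuitable for very large objects.
    """
    data = b"".join(iterator)
    return len(data), io.BytesIO(data)
```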

website = 'http://www.aliyun.com/product/oss'
class libcloud.storage.drivers.oss.OSSMultipartUpload(key, id, initiated)[source]

Bases: object

Class representing an Aliyun OSS multipart upload

Parameters:
  • key (str) – The object/key that was being uploaded
  • id (str) – The upload id assigned by Aliyun
  • initiated – The date/time at which the upload was started

libcloud.storage.drivers.rgw module

class libcloud.storage.drivers.rgw.S3RGWStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region='default', **kwargs)[source]

Bases: libcloud.storage.drivers.s3.BaseS3StorageDriver

name = 'Ceph RGW'
website = 'http://ceph.com/'
class libcloud.storage.drivers.rgw.S3RGWOutscaleStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region='eu-west-2', **kwargs)[source]

Bases: libcloud.storage.drivers.rgw.S3RGWStorageDriver

name = 'RGW Outscale'
website = 'https://en.outscale.com/'

libcloud.storage.drivers.s3 module

class libcloud.storage.drivers.s3.BaseS3Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, backoff=None, retry_delay=None)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

Represents a single connection to the S3 Endpoint

add_default_params(params)[source]

Adds default parameters (such as API key, version, etc.) to the passed params

Should return a dictionary.

static get_auth_signature(method, headers, params, expires, secret_key, path, vendor_prefix)[source]
Signature = URL-Encode( Base64( HMAC-SHA1( YourSecretAccessKeyID,
                                UTF-8-Encoding-Of( StringToSign ) ) ) );

StringToSign = HTTP-VERB + "\n" +
               Content-MD5 + "\n" +
               Content-Type + "\n" +
               Expires + "\n" +
               CanonicalizedVendorHeaders + CanonicalizedResource;
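This signing scheme (AWS Signature Version 2) can be reproduced with the standard library. The sketch below simplifies the real method: vendor headers are assumed to be pre-canonicalized and request params are omitted.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def sign_request_v2(method, content_md5, content_type, expires,
                    secret_key, path, vendor_headers=""):
    """Compute an S3 v2-style request signature.

    Simplified sketch of the scheme shown above; vendor_headers is
    assumed to already be in canonical form.
    """
    string_to_sign = "\n".join([method, content_md5, content_type,
                                str(expires)]) + "\n" + vendor_headers + path
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    # Base64-encode the HMAC-SHA1 digest, then URL-encode the result
    return quote(base64.b64encode(digest))
```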
host = 's3.amazonaws.com'
pre_connect_hook(params, headers)[source]

A hook which is called before connecting to the remote server. This hook can perform a final manipulation on the params, headers and url parameters.

Parameters:
  • params (dict) – Request parameters.
  • headers (dict) – Request headers.
rawResponseCls

alias of S3RawResponse

responseCls

alias of S3Response

class libcloud.storage.drivers.s3.BaseS3StorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver

Parameters:
  • key (str) – API key or username to be used (required)
  • secret (str) – Secret password to be used (required)
  • secure (bool) – Whether to use HTTPS or HTTP. Note: Some providers only support HTTPS, and it is on by default.
  • host (str) – Override hostname used for connections.
  • port (int) – Override port used for connections.
  • api_version (str) – Optional API version. Only used by drivers which support multiple API versions.
  • region (str) – Optional driver region. Only used by drivers which support multiple regions.
Return type:

None

connectionCls

alias of BaseS3Connection

create_container(container_name)[source]

Create a new container.

Parameters:container_name (str) – Container name.
Returns:Container instance on success.
Return type:Container
delete_container(container)[source]

Delete a container.

Parameters:container (Container) – Container instance
Returns:True on success, False otherwise.
Return type:bool
delete_object(obj)[source]

Delete an object.

Parameters:obj (Object) – Object instance.
Returns:bool True on success.
Return type:bool
download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]

Download an object to the specified destination path.

Parameters:
  • obj (Object) – Object instance.
  • destination_path (str) – Full path to a file or a directory where the incoming file will be saved.
  • overwrite_existing (bool) – True to overwrite an existing file, defaults to False.
  • delete_on_failure (bool) – True to delete a partially downloaded file if the download was not successful (hash mismatch / file size).
Returns:

True if an object has been successfully downloaded, False otherwise.

Return type:

bool

download_object_as_stream(obj, chunk_size=None)[source]

Return a generator which yields object data.

Parameters:
  • obj (Object) – Object instance
  • chunk_size (int) – Optional chunk size (in bytes).
ex_cleanup_all_multipart_uploads(container, prefix=None)[source]

Extension method for removing all partially completed S3 multipart uploads.

Parameters:
  • container (Container) – The container holding the uploads
  • prefix (str) – Delete only uploads of objects with this prefix
ex_iterate_multipart_uploads(container, prefix=None, delimiter=None)[source]

Extension method for listing all in-progress S3 multipart uploads.

Each multipart upload which has not been committed or aborted is considered in-progress.

Parameters:
  • container (Container) – The container holding the uploads
  • prefix (str) – Print only uploads of objects with this prefix
  • delimiter (str) – The object/key names are grouped based on being split by this delimiter
Returns:

A generator of S3MultipartUpload instances.

Return type:

generator of S3MultipartUpload
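ex_cleanup_all_multipart_uploads can be pictured as iterating the in-progress uploads and aborting each one whose key matches the prefix. A hedged sketch, using a stand-in record and an injected abort callable in place of the driver's API calls (both are assumptions for illustration):

```python
from collections import namedtuple

# Stand-in for S3MultipartUpload: just the fields used here.
Upload = namedtuple("Upload", ["key", "id"])


def cleanup_multipart_uploads(uploads, abort, prefix=None):
    """Abort every in-progress upload, optionally filtered by key prefix.

    `uploads` is an iterable of Upload records and `abort` a callable
    standing in for the driver's abort request; illustrative only.
    """
    aborted = 0
    for upload in uploads:
        if prefix is not None and not upload.key.startswith(prefix):
            continue
        abort(upload)
        aborted += 1
    return aborted
```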

ex_location_name = ''
get_container(container_name)[source]

Return a container instance.

Parameters:container_name (str) – Container name.
Returns:Container instance.
Return type:Container
get_object(container_name, object_name)[source]

Return an object instance.

Parameters:
  • container_name (str) – Container name.
  • object_name (str) – Object name.
Returns:

Object instance.

Return type:

Object

hash_type = 'md5'
http_vendor_prefix = 'x-amz'
iterate_container_objects(container, ex_prefix=None)[source]

Return a generator of objects for the given container.

Parameters:
  • container (Container) – Container instance
  • ex_prefix (str) – Only return objects starting with ex_prefix
Returns:

A generator of Object instances.

Return type:

generator of Object

iterate_containers()[source]

Return a generator of containers for the given account

Returns:A generator of Container instances.
Return type:generator of Container
list_container_objects(container, ex_prefix=None)[source]

Return a list of objects for the given container.

Parameters:
  • container (Container) – Container instance.
  • ex_prefix (str) – Only return objects starting with ex_prefix
Returns:

A list of Object instances.

Return type:

list of Object

name = 'Amazon S3 (standard)'
namespace = 'http://s3.amazonaws.com/doc/2006-03-01/'
supports_chunked_encoding = False
supports_s3_multipart_upload = True
upload_object(file_path, container, object_name, extra=None, verify_hash=True, ex_storage_class=None)[source]

@inherits: StorageDriver.upload_object

Parameters:ex_storage_class (str) – Storage class
upload_object_via_stream(iterator, container, object_name, extra=None, ex_storage_class=None)[source]

@inherits: StorageDriver.upload_object_via_stream

Parameters:ex_storage_class (str) – Storage class
website = 'http://aws.amazon.com/s3/'
class libcloud.storage.drivers.s3.S3APNE1Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-ap-northeast-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3APNE1StorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3APNE1Connection

ex_location_name = 'ap-northeast-1'
name = 'Amazon S3 (ap-northeast-1)'
region_name = 'ap-northeast-1'
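Each regional driver pairs a region_name with a matching connection host, following the legacy "s3-&lt;region&gt;.amazonaws.com" pattern visible in the classes on this page. A sketch of that mapping (the us-east-1 special case is an assumption based on the classic 's3.amazonaws.com' endpoint above):

```python
def s3_region_host(region_name):
    """Derive the legacy regional S3 endpoint from a region name.

    Follows the 's3-<region>.amazonaws.com' pattern used by the
    regional connection classes documented here; the classic
    endpoint for us-east-1 is special-cased.
    """
    if region_name == "us-east-1":
        return "s3.amazonaws.com"
    return "s3-%s.amazonaws.com" % region_name
```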
class libcloud.storage.drivers.s3.S3APNE2Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-ap-northeast-2.amazonaws.com'
class libcloud.storage.drivers.s3.S3APNE2StorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3APNE2Connection

ex_location_name = 'ap-northeast-2'
name = 'Amazon S3 (ap-northeast-2)'
region_name = 'ap-northeast-2'
libcloud.storage.drivers.s3.S3APNEConnection

alias of libcloud.storage.drivers.s3.S3APNE1Connection

libcloud.storage.drivers.s3.S3APNEStorageDriver

alias of libcloud.storage.drivers.s3.S3APNE1StorageDriver

class libcloud.storage.drivers.s3.S3APSE2Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-ap-southeast-2.amazonaws.com'
class libcloud.storage.drivers.s3.S3APSE2StorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3APSE2Connection

ex_location_name = 'ap-southeast-2'
name = 'Amazon S3 (ap-southeast-2)'
region_name = 'ap-southeast-2'
class libcloud.storage.drivers.s3.S3APSEConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-ap-southeast-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3APSEStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3APSEConnection

ex_location_name = 'ap-southeast-1'
name = 'Amazon S3 (ap-southeast-1)'
region_name = 'ap-southeast-1'
class libcloud.storage.drivers.s3.S3APSouthConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-ap-south-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3APSouthStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3APSouthConnection

ex_location_name = 'ap-south-1'
name = 'Amazon S3 (ap-south-1)'
region_name = 'ap-south-1'
class libcloud.storage.drivers.s3.S3CACentralConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-ca-central-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3CACentralStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3CACentralConnection

ex_location_name = 'ca-central-1'
name = 'Amazon S3 (ca-central-1)'
region_name = 'ca-central-1'
class libcloud.storage.drivers.s3.S3CNNorthConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3.cn-north-1.amazonaws.com.cn'
class libcloud.storage.drivers.s3.S3CNNorthStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3CNNorthConnection

ex_location_name = 'cn-north-1'
name = 'Amazon S3 (cn-north-1)'
region_name = 'cn-north-1'
class libcloud.storage.drivers.s3.S3CNNorthWestConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3.cn-northwest-1.amazonaws.com.cn'
class libcloud.storage.drivers.s3.S3CNNorthWestStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3CNNorthWestConnection

ex_location_name = 'cn-northwest-1'
name = 'Amazon S3 (cn-northwest-1)'
region_name = 'cn-northwest-1'
class libcloud.storage.drivers.s3.S3Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.common.aws.AWSTokenConnection, libcloud.storage.drivers.s3.BaseS3Connection

Represents a single connection to the S3 endpoint, with AWS-specific features.

class libcloud.storage.drivers.s3.S3EUCentralConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-eu-central-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3EUCentralStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3EUCentralConnection

ex_location_name = 'eu-central-1'
name = 'Amazon S3 (eu-central-1)'
region_name = 'eu-central-1'
class libcloud.storage.drivers.s3.S3EUNorth1Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-eu-north-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3EUNorth1StorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3EUNorth1Connection

ex_location_name = 'eu-north-1'
name = 'Amazon S3 (eu-north-1)'
region_name = 'eu-north-1'
class libcloud.storage.drivers.s3.S3EUWest2Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-eu-west-2.amazonaws.com'
class libcloud.storage.drivers.s3.S3EUWest2StorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3EUWest2Connection

ex_location_name = 'eu-west-2'
name = 'Amazon S3 (eu-west-2)'
region_name = 'eu-west-2'
class libcloud.storage.drivers.s3.S3EUWestConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-eu-west-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3EUWestStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3EUWestConnection

ex_location_name = 'EU'
name = 'Amazon S3 (eu-west-1)'
region_name = 'eu-west-1'
class libcloud.storage.drivers.s3.S3MultipartUpload(key, id, created_at, initiator, owner)[source]

Bases: object

Class representing an Amazon S3 multipart upload.

Parameters:
  • key (str) – The object/key that was being uploaded
  • id (str) – The upload id assigned by amazon
  • created_at (str) – The date/time at which the upload was started
  • initiator (str) – The AWS owner/IAM user who initiated this upload
  • owner (str) – The AWS owner/IAM user who will own this object
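The fields above can be summarized as a plain data holder. The sketch below is a hypothetical mirror of the documented attributes, not the libcloud class itself:

```python
from dataclasses import dataclass

# Hedged sketch: a plain data holder mirroring the documented fields of
# S3MultipartUpload (illustrative only; libcloud defines its own class).
@dataclass
class MultipartUploadInfo:
    key: str         # the object/key being uploaded
    id: str          # the upload id assigned by Amazon
    created_at: str  # date/time at which the upload was started
    initiator: str   # AWS owner/IAM user who initiated the upload
    owner: str       # AWS owner/IAM user who will own the object

info = MultipartUploadInfo("backups/db.tgz", "example-upload-id",
                           "2023-01-01T00:00:00Z", "alice", "alice")
print(info.key)
```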
class libcloud.storage.drivers.s3.S3RawResponse(connection, response=None)[source]

Bases: libcloud.storage.drivers.s3.S3Response, libcloud.common.base.RawResponse

Parameters:connection (Connection) – Parent connection object.
class libcloud.storage.drivers.s3.S3Response(response, connection)[source]

Bases: libcloud.common.aws.AWSBaseResponse

Parameters:
  • response (httplib.HTTPResponse) – HTTP response object. (optional)
  • connection (Connection) – Parent connection object.
namespace = None
parse_error()[source]

Parse the error messages.

Override in a provider’s subclass.

Returns:Parsed error.
Return type:str
success()[source]

Determine if our request was successful.

The meaning of this can be arbitrary; did we receive OK status? Did the node get created? Were we authenticated?

Return type:bool
Returns:True or False
valid_response_codes = [<HTTPStatus.NOT_FOUND: 404>, <HTTPStatus.CONFLICT: 409>, <HTTPStatus.BAD_REQUEST: 400>]
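One plausible reading of how success() interacts with valid_response_codes: a 2xx status is a success, and the listed non-2xx codes are additionally accepted at the HTTP layer so the driver can turn them into typed errors (e.g. ContainerDoesNotExistError) rather than failing on the raw response. A sketch of that logic, not the library's exact code:

```python
from http import HTTPStatus

# Codes accepted alongside 2xx so the driver can map them to typed errors.
VALID_RESPONSE_CODES = [HTTPStatus.NOT_FOUND, HTTPStatus.CONFLICT,
                        HTTPStatus.BAD_REQUEST]

def is_success(status: int) -> bool:
    # 2xx is a plain success; the extra codes are "valid" responses that
    # the driver layer interprets itself (HTTPStatus members compare
    # equal to plain ints, so membership works either way).
    return 200 <= status <= 299 or status in VALID_RESPONSE_CODES

print(is_success(200), is_success(404), is_success(500))
```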
class libcloud.storage.drivers.s3.S3SAEastConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-sa-east-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3SAEastStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3SAEastConnection

ex_location_name = 'sa-east-1'
name = 'Amazon S3 (sa-east-1)'
region_name = 'sa-east-1'
class libcloud.storage.drivers.s3.S3SignatureV4Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.common.aws.SignedAWSConnection, libcloud.storage.drivers.s3.BaseS3Connection

service_name = 's3'
version = '2006-03-01'
class libcloud.storage.drivers.s3.S3StorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.common.aws.AWSDriver, libcloud.storage.drivers.s3.BaseS3StorageDriver

connectionCls

alias of S3SignatureV4Connection

classmethod list_regions()[source]
name = 'Amazon S3'
region_name = 'us-east-1'
class libcloud.storage.drivers.s3.S3USEast2Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-us-east-2.amazonaws.com'
class libcloud.storage.drivers.s3.S3USEast2StorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3USEast2Connection

ex_location_name = 'us-east-2'
name = 'Amazon S3 (us-east-2)'
region_name = 'us-east-2'
class libcloud.storage.drivers.s3.S3USGovWestConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-us-gov-west-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3USGovWestStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3USGovWestConnection

ex_location_name = 'us-gov-west-1'
name = 'Amazon S3 (us-gov-west-1)'
region_name = 'us-gov-west-1'
class libcloud.storage.drivers.s3.S3USWestConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-us-west-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3USWestOregonConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3SignatureV4Connection

host = 's3-us-west-2.amazonaws.com'
class libcloud.storage.drivers.s3.S3USWestOregonStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3USWestOregonConnection

ex_location_name = 'us-west-2'
name = 'Amazon S3 (us-west-2)'
region_name = 'us-west-2'
class libcloud.storage.drivers.s3.S3USWestStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3USWestConnection

ex_location_name = 'us-west-1'
name = 'Amazon S3 (us-west-1)'
region_name = 'us-west-1'
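The per-region classes above differ only in their host, name, and region_name constants. As a self-contained summary of that pattern (host values copied from the entries above; the China regions use the .com.cn domain and a dot rather than a dash after "s3"):

```python
# region_name -> host pairs copied from the per-region classes above.
REGION_HOSTS = {
    "ap-northeast-1": "s3-ap-northeast-1.amazonaws.com",
    "ap-northeast-2": "s3-ap-northeast-2.amazonaws.com",
    "ap-south-1": "s3-ap-south-1.amazonaws.com",
    "ap-southeast-1": "s3-ap-southeast-1.amazonaws.com",
    "ap-southeast-2": "s3-ap-southeast-2.amazonaws.com",
    "ca-central-1": "s3-ca-central-1.amazonaws.com",
    "cn-north-1": "s3.cn-north-1.amazonaws.com.cn",
    "cn-northwest-1": "s3.cn-northwest-1.amazonaws.com.cn",
    "eu-central-1": "s3-eu-central-1.amazonaws.com",
    "eu-north-1": "s3-eu-north-1.amazonaws.com",
    "eu-west-1": "s3-eu-west-1.amazonaws.com",
    "eu-west-2": "s3-eu-west-2.amazonaws.com",
    "sa-east-1": "s3-sa-east-1.amazonaws.com",
    "us-east-2": "s3-us-east-2.amazonaws.com",
    "us-gov-west-1": "s3-us-gov-west-1.amazonaws.com",
    "us-west-1": "s3-us-west-1.amazonaws.com",
    "us-west-2": "s3-us-west-2.amazonaws.com",
}

# Every non-China endpoint follows the "s3-<region>.amazonaws.com" pattern.
for region, host in REGION_HOSTS.items():
    if not region.startswith("cn-"):
        assert host == "s3-%s.amazonaws.com" % region
```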

Module contents

Drivers for working with different providers