libcloud.storage.drivers package

Submodules

libcloud.storage.drivers.atmos module

class libcloud.storage.drivers.atmos.AtmosConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, backoff=None, retry_delay=None)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

add_default_headers(headers)[source]
pre_connect_hook(params, headers)[source]
responseCls

alias of AtmosResponse

class libcloud.storage.drivers.atmos.AtmosDriver(key, secret=None, secure=True, host=None, port=None)[source]

Bases: libcloud.storage.base.StorageDriver

DEFAULT_CDN_TTL = 604800
api_name = 'atmos'
connectionCls

alias of AtmosConnection

create_container(container_name)[source]
delete_container(container)[source]
delete_object(obj)[source]
download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]
download_object_as_stream(obj, chunk_size=None)[source]
enable_object_cdn(obj)[source]
get_container(container_name)[source]
get_object(container_name, object_name)[source]
get_object_cdn_url(obj, expiry=None, use_object=False)[source]

Return an object CDN URL.

Parameters:
  • obj (Object) – Object instance
  • expiry (str) – Expiry
  • use_object (bool) – Use object
Return type:

str

host = None
iterate_container_objects(container)[source]
iterate_containers()[source]
name = 'atmos'
path = None
supports_chunked_encoding = True
upload_object(file_path, container, object_name, extra=None, verify_hash=True)[source]
upload_object_via_stream(iterator, container, object_name, extra=None)[source]
website = 'http://atmosonline.com/'
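DEFAULT_CDN_TTL is given in seconds; a quick check shows that 604800 seconds is exactly one week:

```python
# AtmosDriver.DEFAULT_CDN_TTL is expressed in seconds.
DEFAULT_CDN_TTL = 604800

# days * hours * minutes * seconds
seconds_per_week = 7 * 24 * 60 * 60
```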
exception libcloud.storage.drivers.atmos.AtmosError(code, message, driver=None)[source]

Bases: libcloud.common.types.LibcloudError

class libcloud.storage.drivers.atmos.AtmosResponse(response, connection)[source]

Bases: libcloud.common.base.XmlResponse


parse_error()[source]
success()[source]
libcloud.storage.drivers.atmos.collapse(s)[source]

libcloud.storage.drivers.auroraobjects module

class libcloud.storage.drivers.auroraobjects.AuroraObjectsStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, **kwargs)[source]

Bases: libcloud.storage.drivers.auroraobjects.BaseAuroraObjectsStorageDriver

Parameters:
  • key (str) – API key or username to be used (required)
  • secret (str) – Secret password to be used (required)
  • secure (bool) – Whether to use HTTPS or HTTP. Note: Some providers only support HTTPS, and it is on by default.
  • host (str) – Override hostname used for connections.
  • port (int) – Override port used for connections.
  • api_version (str) – Optional API version. Only used by drivers which support multiple API versions.
  • region (str) – Optional driver region. Only used by drivers which support multiple regions.
Return type:

None

connectionCls

alias of BaseAuroraObjectsConnection

enable_container_cdn(*argv)[source]
enable_object_cdn(*argv)[source]
get_container_cdn_url(*argv)[source]
get_object_cdn_url(*argv)[source]

libcloud.storage.drivers.azure_blobs module

class libcloud.storage.drivers.azure_blobs.AzureBlobLease(driver, object_path, use_lease)[source]

Bases: object

A helper class for leasing an Azure blob and renewing the lease

Parameters:
  • driver (AzureStorageDriver) – The Azure storage driver that is being used
  • object_path (str) – The path of the object we need to lease
  • use_lease (bool) – Indicates if we must take a lease or not
renew()[source]

Renew the lease if it is older than a predefined time period

update_headers(headers)[source]

Update the lease id in the headers

class libcloud.storage.drivers.azure_blobs.AzureBlobsConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, backoff=None, retry_delay=None)[source]

Bases: libcloud.common.azure.AzureConnection

Represents a single connection to Azure Blobs

class libcloud.storage.drivers.azure_blobs.AzureBlobsStorageDriver(key, secret=None, secure=True, host=None, port=None, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver

connectionCls

alias of AzureBlobsConnection

create_container(container_name)[source]

@inherits: StorageDriver.create_container

delete_container(container)[source]

@inherits: StorageDriver.delete_container

delete_object(obj)[source]

@inherits: StorageDriver.delete_object

download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]

@inherits: StorageDriver.download_object

download_object_as_stream(obj, chunk_size=None)[source]

@inherits: StorageDriver.download_object_as_stream

ex_blob_type = 'BlockBlob'
ex_set_object_metadata(obj, meta_data)[source]

Set metadata for an object

Parameters:
  • obj (Object) – The blob object
  • meta_data (dict) – Metadata key value pairs
get_container(container_name)[source]

@inherits: StorageDriver.get_container

get_object(container_name, object_name)[source]

@inherits: StorageDriver.get_object

hash_type = 'md5'
iterate_container_objects(container)[source]

@inherits: StorageDriver.iterate_container_objects

iterate_containers()[source]

@inherits: StorageDriver.iterate_containers

name = 'Microsoft Azure (blobs)'
supports_chunked_encoding = False
upload_object(file_path, container, object_name, extra=None, verify_hash=True, ex_blob_type=None, ex_use_lease=False)[source]

Upload an object currently located on a disk.

@inherits: StorageDriver.upload_object

Parameters:
  • ex_blob_type (str) – Storage class
  • ex_use_lease (bool) – Indicates if we must take a lease before upload
upload_object_via_stream(iterator, container, object_name, verify_hash=False, extra=None, ex_use_lease=False, ex_blob_type=None, ex_page_blob_size=None)[source]

@inherits: StorageDriver.upload_object_via_stream

Parameters:
  • ex_blob_type (str) – Storage class
  • ex_page_blob_size (int) – The maximum size to which the page blob can grow
  • ex_use_lease (bool) – Indicates if we must take a lease before upload
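Azure page blobs are written in 512-byte pages, so a value passed as ex_page_blob_size needs to be a multiple of 512. The helper below is an illustration of how a caller might prepare that value; it is not part of libcloud:

```python
# Azure page blobs are sized in fixed 512-byte pages.
AZURE_PAGE_SIZE = 512

def aligned_page_blob_size(requested_bytes):
    """Round requested_bytes up to the next 512-byte page boundary."""
    pages = (requested_bytes + AZURE_PAGE_SIZE - 1) // AZURE_PAGE_SIZE
    return pages * AZURE_PAGE_SIZE
```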
website = 'http://windows.azure.com/'

libcloud.storage.drivers.backblaze_b2 module

Driver for Backblaze B2 service.

class libcloud.storage.drivers.backblaze_b2.BackblazeB2StorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver

Parameters:
  • key (str) – API key or username to be used (required)
  • secret (str) – Secret password to be used (required)
  • secure (bool) – Whether to use HTTPS or HTTP. Note: Some providers only support HTTPS, and it is on by default.
  • host (str) – Override hostname used for connections.
  • port (int) – Override port used for connections.
  • api_version (str) – Optional API version. Only used by drivers which support multiple API versions.
  • region (str) – Optional driver region. Only used by drivers which support multiple regions.
Return type:

None

connectionCls

alias of BackblazeB2Connection

create_container(container_name, ex_type='allPrivate')[source]
delete_container(container)[source]
delete_object(obj)[source]
download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]
download_object_as_stream(obj, chunk_size=None)[source]
ex_get_object(object_id)[source]
ex_get_upload_data(container_id)[source]

Retrieve information used for uploading files (upload URL, auth token, etc.).

Return type:dict
ex_get_upload_url(container_id)[source]

Retrieve URL used for file uploads.

Return type:str
ex_hide_object(container_id, object_name)[source]
ex_list_object_versions(container_id, ex_start_file_name=None, ex_start_file_id=None, ex_max_file_count=None)[source]
get_container(container_name)[source]
get_object(container_name, object_name)[source]
hash_type = 'sha1'
iterate_container_objects(container)[source]
iterate_containers()[source]
name = 'Backblaze B2'
supports_chunked_encoding = False
type = 'backblaze_b2'
upload_object(file_path, container, object_name, extra=None, verify_hash=True, headers=None)[source]

Upload an object.

Note: This will overwrite a file with the same name if it already exists.

upload_object_via_stream(iterator, container, object_name, extra=None, headers=None)[source]

Upload an object.

Note: Backblaze does not yet support uploading via stream, so this calls upload_object internally, which requires the entire object to be loaded into memory at once.

website = 'https://www.backblaze.com/b2/'
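The driver's hash_type is 'sha1': B2 validates each upload against a SHA-1 digest of the content. A minimal sketch of computing the hex digest the service compares against:

```python
import hashlib

def b2_content_sha1(data):
    """Return the hex SHA-1 digest of a payload, as B2 expects it."""
    return hashlib.sha1(data).hexdigest()

digest = b2_content_sha1(b'hello world')
```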
class libcloud.storage.drivers.backblaze_b2.BackblazeB2Connection(*args, **kwargs)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

authCls

alias of BackblazeB2AuthConnection

download_request(action, params=None)[source]
host = None
request(action, params=None, data=None, headers=None, method='GET', raw=False, include_account_id=False)[source]
responseCls

alias of BackblazeB2Response

secure = True
upload_request(action, headers, upload_host, auth_token, data)[source]
class libcloud.storage.drivers.backblaze_b2.BackblazeB2AuthConnection(*args, **kwargs)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

authenticate(force=False)[source]
Parameters:force (bool) – Force authentication even if we have already obtained a token.
host = 'api.backblaze.com'
responseCls

alias of BackblazeB2Response

secure = True

libcloud.storage.drivers.cloudfiles module

class libcloud.storage.drivers.cloudfiles.ChunkStreamReader(file_path, start_block, end_block, chunk_size)[source]

Bases: object

next()[source]
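ChunkStreamReader yields a byte range of a file in fixed-size chunks, which is the access pattern multipart uploads need. The generator below is a self-contained stdlib sketch of the same idea (the names are illustrative, not libcloud's implementation):

```python
def read_block_in_chunks(file_path, start_block, end_block, chunk_size):
    """Yield chunks of at most chunk_size bytes covering
    the byte range [start_block, end_block) of file_path."""
    with open(file_path, 'rb') as fp:
        fp.seek(start_block)
        remaining = end_block - start_block
        while remaining > 0:
            data = fp.read(min(chunk_size, remaining))
            if not data:  # file ended before the requested range did
                break
            remaining -= len(data)
            yield data
```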
class libcloud.storage.drivers.cloudfiles.CloudFilesConnection(user_id, key, secure=True, use_internal_url=False, **kwargs)[source]

Bases: libcloud.storage.drivers.cloudfiles.OpenStackSwiftConnection

Base connection class for the Cloudfiles driver.

auth_url = 'https://identity.api.rackspacecloud.com'
get_endpoint()[source]
rawResponseCls

alias of CloudFilesRawResponse

request(action, params=None, data='', headers=None, method='GET', raw=False, cdn_request=False)[source]
responseCls

alias of CloudFilesResponse

class libcloud.storage.drivers.cloudfiles.CloudFilesRawResponse(connection, response=None)[source]

Bases: libcloud.storage.drivers.cloudfiles.CloudFilesResponse, libcloud.common.base.RawResponse

Parameters:connection (Connection) – Parent connection object.
class libcloud.storage.drivers.cloudfiles.CloudFilesResponse(response, connection)[source]

Bases: libcloud.common.base.Response

parse_body()[source]
success()[source]
valid_response_codes = [404, 409]
class libcloud.storage.drivers.cloudfiles.CloudFilesStorageDriver(key, secret=None, secure=True, host=None, port=None, region='ord', use_internal_url=False, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver, libcloud.common.openstack.OpenStackDriverMixin

CloudFiles driver.

@inherits: StorageDriver.__init__

Parameters:region (str) – ID of the region which should be used.
connectionCls

alias of CloudFilesConnection

create_container(container_name)[source]
delete_container(container)[source]
delete_object(obj)[source]
download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]
download_object_as_stream(obj, chunk_size=None)[source]
enable_container_cdn(container, ex_ttl=None)[source]

@inherits: StorageDriver.enable_container_cdn

Parameters:ex_ttl (int) – cache time to live
ex_enable_static_website(container, index_file='index.html')[source]

Enable serving a static website.

Parameters:
  • container (Container) – Container instance
  • index_file (str) – Name of the object which becomes an index page for every sub-directory in this container.

Return type:bool
ex_get_meta_data()[source]

Get meta data

Return type:dict
ex_get_object_temp_url(obj, method='GET', timeout=60)[source]

Create a temporary URL to allow others to retrieve or put objects in your Cloud Files account for as long or as short a time as you wish. This method is specifically for allowing users to retrieve or update an object.

Parameters:
  • obj (Object) – The object that you wish to make temporarily public
  • method (str) – Which method you would like to allow, ‘PUT’ or ‘GET’
  • timeout (int) – Time (in seconds) after which you want the TempURL to expire.

Return type:bool
ex_multipart_upload_object(file_path, container, object_name, chunk_size=33554432, extra=None, verify_hash=True)[source]
ex_purge_object_from_cdn(obj, email=None)[source]

Purge edge cache for the specified object.

Parameters:email (str) – Email address to which a notification will be sent when the job completes. (optional)

ex_set_account_metadata_temp_url_key(key)[source]

Set the metadata header X-Account-Meta-Temp-URL-Key on your Cloud Files account.

Parameters:key (str) – X-Account-Meta-Temp-URL-Key
Return type:bool
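The TempURL returned by ex_get_object_temp_url is signed with the X-Account-Meta-Temp-URL-Key set above, using HMAC-SHA1 over "METHOD\nexpires\npath" (the OpenStack Swift TempURL scheme). A sketch of the signature step, with a placeholder key and object path:

```python
import hmac
import time
from hashlib import sha1

def temp_url_signature(key, method, expires, object_path):
    """Compute the Swift TempURL HMAC-SHA1 hex signature."""
    message = '{}\n{}\n{}'.format(method, expires, object_path)
    return hmac.new(key.encode(), message.encode(), sha1).hexdigest()

# Sign a GET URL valid for the next 60 seconds.
expires = int(time.time()) + 60
sig = temp_url_signature('my-temp-url-key', 'GET', expires,
                         '/v1/ACCOUNT/container/object')
```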
ex_set_error_page(container, file_name='error.html')[source]

Set a custom error page which is displayed if a file is not found and serving of a static website is enabled.

Parameters:
  • container (Container) – Container instance
  • file_name (str) – Name of the object which becomes the error page.
Return type:

bool

get_container(container_name)[source]
get_container_cdn_url(container)[source]
get_object(container_name, object_name)[source]
get_object_cdn_url(obj)[source]
hash_type = 'md5'
iterate_container_objects(container, ex_prefix=None)[source]

Return a generator of objects for the given container.

Parameters:
  • container (Container) – Container instance
  • ex_prefix (str) – Only get objects with names starting with ex_prefix
Returns:

A generator of Object instances.

Return type:

generator of Object

iterate_containers()[source]
list_container_objects(container, ex_prefix=None)[source]

Return a list of objects for the given container.

Parameters:
  • container (Container) – Container instance.
  • ex_prefix (str) – Only get objects with names starting with ex_prefix
Returns:

A list of Object instances.

Return type:

list of Object
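ex_prefix restricts listing to objects whose names start with the given string. The server performs the filtering, but the effect is equivalent to this client-side sketch over plain names:

```python
def filter_by_prefix(names, ex_prefix=None):
    """Yield only the names that start with ex_prefix (all names if None)."""
    for name in names:
        if ex_prefix is None or name.startswith(ex_prefix):
            yield name

backups = list(filter_by_prefix(['logs/a.log', 'backups/db', 'backups/img'],
                                ex_prefix='backups/'))
```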

classmethod list_regions()[source]
name = 'CloudFiles'
supports_chunked_encoding = True
upload_object(file_path, container, object_name, extra=None, verify_hash=True, headers=None)[source]

Upload an object.

Note: This will overwrite a file with the same name if it already exists.

upload_object_via_stream(iterator, container, object_name, extra=None, headers=None)[source]
website = 'http://www.rackspace.com/'
class libcloud.storage.drivers.cloudfiles.FileChunkReader(file_path, chunk_size)[source]

Bases: object

next()[source]
class libcloud.storage.drivers.cloudfiles.OpenStackSwiftConnection(user_id, key, secure=True, **kwargs)[source]

Bases: libcloud.common.openstack.OpenStackBaseConnection

Connection class for the OpenStack Swift endpoint.

auth_url = 'https://identity.api.rackspacecloud.com'
get_endpoint(*args, **kwargs)[source]
rawResponseCls

alias of CloudFilesRawResponse

request(action, params=None, data='', headers=None, method='GET', raw=False, cdn_request=False)[source]
responseCls

alias of CloudFilesResponse

class libcloud.storage.drivers.cloudfiles.OpenStackSwiftStorageDriver(key, secret=None, secure=True, host=None, port=None, region=None, **kwargs)[source]

Bases: libcloud.storage.drivers.cloudfiles.CloudFilesStorageDriver

Storage driver for the OpenStack Swift.

connectionCls

alias of OpenStackSwiftConnection

name = 'OpenStack Swift'
type = 'cloudfiles_swift'

libcloud.storage.drivers.dummy module

class libcloud.storage.drivers.dummy.DummyFileObject(yield_count=5, chunk_len=10)[source]

Bases: file

read(size)[source]
class libcloud.storage.drivers.dummy.DummyIterator(data=None)[source]

Bases: object

get_md5_hash()[source]
next()[source]
class libcloud.storage.drivers.dummy.DummyStorageDriver(api_key, api_secret)[source]

Bases: libcloud.storage.base.StorageDriver

Dummy Storage driver.

>>> from libcloud.storage.drivers.dummy import DummyStorageDriver
>>> driver = DummyStorageDriver('key', 'secret')
>>> container = driver.create_container(container_name='test container')
>>> container
<Container: name=test container, provider=Dummy Storage Provider>
>>> container.name
'test container'
>>> container.extra['object_count']
0
Parameters:
  • api_key (str) – API key or username to be used (required)
  • api_secret (str) – Secret password to be used (required)
Return type:

None

create_container(container_name)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> container = driver.create_container(
...    container_name='test container 1')
... 
Traceback (most recent call last):
ContainerAlreadyExistsError:

@inherits: StorageDriver.create_container

delete_container(container)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container = Container(name = 'test container',
...    extra={'object_count': 0}, driver=driver)
>>> driver.delete_container(container=container)
... 
Traceback (most recent call last):
ContainerDoesNotExistError:
>>> container = driver.create_container(
...      container_name='test container 1')
... 
>>> len(driver._containers)
1
>>> driver.delete_container(container=container)
True
>>> len(driver._containers)
0
>>> container = driver.create_container(
...    container_name='test container 1')
... 
>>> obj = container.upload_object_via_stream(
...   object_name='test object', iterator=DummyFileObject(5, 10),
...   extra={})
>>> driver.delete_container(container=container)
... 
Traceback (most recent call last):
ContainerIsNotEmptyError:

@inherits: StorageDriver.delete_container

delete_object(obj)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container = driver.create_container(
...   container_name='test container 1')
... 
>>> obj = container.upload_object_via_stream(object_name='test object',
...   iterator=DummyFileObject(5, 10), extra={})
>>> obj 
<Object: name=test object, size=50, ...>
>>> container.delete_object(obj=obj)
True
>>> obj = Object(name='test object 2',
...    size=1000, hash=None, extra=None,
...    meta_data=None, container=container,driver=None)
>>> container.delete_object(obj=obj) 
Traceback (most recent call last):
ObjectDoesNotExistError:

@inherits: StorageDriver.delete_object

download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]
download_object_as_stream(obj, chunk_size=None)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container = driver.create_container(
...   container_name='test container 1')
... 
>>> obj = container.upload_object_via_stream(object_name='test object',
...    iterator=DummyFileObject(5, 10), extra={})
>>> stream = container.download_object_as_stream(obj)
>>> stream 
<...closed...>

@inherits: StorageDriver.download_object_as_stream

get_container(container_name)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> driver.get_container('unknown') 
Traceback (most recent call last):
ContainerDoesNotExistError:
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> container.name
'test container 1'
>>> driver.get_container('test container 1')
<Container: name=test container 1, provider=Dummy Storage Provider>

@inherits: StorageDriver.get_container

get_container_cdn_url(container)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> driver.get_container('unknown') 
Traceback (most recent call last):
ContainerDoesNotExistError:
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> container.name
'test container 1'
>>> container.get_cdn_url()
'http://www.test.com/container/test_container_1'

@inherits: StorageDriver.get_container_cdn_url

get_meta_data()[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> driver.get_meta_data()['object_count']
0
>>> driver.get_meta_data()['container_count']
0
>>> driver.get_meta_data()['bytes_used']
0
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container_name = 'test container 2'
>>> container = driver.create_container(container_name=container_name)
>>> obj = container.upload_object_via_stream(
...  object_name='test object', iterator=DummyFileObject(5, 10),
...  extra={})
>>> driver.get_meta_data()['object_count']
1
>>> driver.get_meta_data()['container_count']
2
>>> driver.get_meta_data()['bytes_used']
50
Return type:dict
get_object(container_name, object_name)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> driver.get_object('unknown', 'unknown')
... 
Traceback (most recent call last):
ContainerDoesNotExistError:
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> driver.get_object(
...  'test container 1', 'unknown') 
Traceback (most recent call last):
ObjectDoesNotExistError:
>>> obj = container.upload_object_via_stream(object_name='test object',
...      iterator=DummyFileObject(5, 10), extra={})
>>> obj.name
'test object'
>>> obj.size
50

@inherits: StorageDriver.get_object

get_object_cdn_url(obj)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> obj = container.upload_object_via_stream(
...      object_name='test object 5',
...      iterator=DummyFileObject(5, 10), extra={})
>>> obj.name
'test object 5'
>>> obj.get_cdn_url()
'http://www.test.com/object/test_object_5'

@inherits: StorageDriver.get_object_cdn_url

iterate_containers()[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> list(driver.iterate_containers())
[]
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 1, provider=Dummy Storage Provider>
>>> container.name
'test container 1'
>>> container_name = 'test container 2'
>>> container = driver.create_container(container_name=container_name)
>>> container
<Container: name=test container 2, provider=Dummy Storage Provider>
>>> container = driver.create_container(
...  container_name='test container 2')
... 
Traceback (most recent call last):
ContainerAlreadyExistsError:
>>> container_list=list(driver.iterate_containers())
>>> sorted([c.name for c in container_list])
['test container 1', 'test container 2']

@inherits: StorageDriver.iterate_containers

list_container_objects(container)[source]
name = 'Dummy Storage Provider'
upload_object(file_path, container, object_name, extra=None, file_hash=None)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container_name = 'test container 1'
>>> container = driver.create_container(container_name=container_name)
>>> container.upload_object(file_path='/tmp/inexistent.file',
...     object_name='test') 
Traceback (most recent call last):
LibcloudError:
>>> file_path = path = os.path.abspath(__file__)
>>> file_size = os.path.getsize(file_path)
>>> obj = container.upload_object(file_path=file_path,
...                               object_name='test')
>>> obj 
<Object: name=test, size=...>
>>> obj.size == file_size
True

@inherits: StorageDriver.upload_object

Parameters:file_hash (str) – File hash

upload_object_via_stream(iterator, container, object_name, extra=None)[source]
>>> driver = DummyStorageDriver('key', 'secret')
>>> container = driver.create_container(
...    container_name='test container 1')
... 
>>> obj = container.upload_object_via_stream(
...   object_name='test object', iterator=DummyFileObject(5, 10),
...   extra={})
>>> obj 
<Object: name=test object, size=50, ...>

@inherits: StorageDriver.upload_object_via_stream

website = 'http://example.com'

libcloud.storage.drivers.google_storage module

class libcloud.storage.drivers.google_storage.ContainerPermissions[source]

Bases: object

NONE = 0
OWNER = 3
READER = 1
WRITER = 2
values = ['NONE', 'READER', 'WRITER', 'OWNER']
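The permission classes encode roles as ordered integers, with `values` indexed by the integer, so names and numbers round-trip. A sketch of that mapping using the ContainerPermissions constants listed above:

```python
# Mirrors ContainerPermissions: NONE = 0, READER = 1, WRITER = 2, OWNER = 3.
CONTAINER_VALUES = ['NONE', 'READER', 'WRITER', 'OWNER']

def container_role_name(role):
    """Map a ContainerPermissions integer to its role name."""
    return CONTAINER_VALUES[role]

def container_role_value(name):
    """Map a role name back to its ContainerPermissions integer."""
    return CONTAINER_VALUES.index(name)
```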
class libcloud.storage.drivers.google_storage.GCSResponse(response, connection)[source]

Bases: libcloud.common.google.GoogleResponse

class libcloud.storage.drivers.google_storage.GoogleStorageConnection(user_id, key, secure=True, auth_type=None, credential_file=None, **kwargs)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

Represents a single connection to the Google storage API endpoint.

This can either authenticate via the Google OAuth2 methods or via the S3 HMAC interoperability method.

PROJECT_ID_HEADER = 'x-goog-project-id'
add_default_headers(headers)[source]
get_project()[source]
host = 'storage.googleapis.com'
pre_connect_hook(params, headers)[source]
rawResponseCls

alias of S3RawResponse

responseCls

alias of S3Response

class libcloud.storage.drivers.google_storage.GoogleStorageDriver(key, secret=None, project=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.BaseS3StorageDriver

Driver for Google Cloud Storage.

Can authenticate via standard Google Cloud methods (Service Accounts, Installed App credentials, and GCE instance service accounts)

Examples:

Service Accounts:

driver = GoogleStorageDriver(key=client_email, secret=private_key, ...)

Installed Application:

driver = GoogleStorageDriver(key=client_id, secret=client_secret, ...)

From GCE instance:

driver = GoogleStorageDriver(key=foo, secret=bar, ...)

Can also authenticate via Google Cloud Storage’s S3 HMAC interoperability API. S3 user keys are 20 alphanumeric characters, starting with GOOG.

Example:

driver = GoogleStorageDriver(key='GOOG0123456789ABCXYZ',
                             secret=key_secret)
connectionCls

alias of GoogleStorageConnection

ex_delete_permissions(container_name, object_name=None, entity=None)[source]

Delete permissions for an ACL entity on a container or object.

Parameters:
  • container_name (str) – The container name.
  • object_name (str) – The object name. Optional. Not providing an object will delete a container permission.
  • entity (str or None) – The entity whose permission will be deleted. Optional. If not provided, the permission of the authenticated user will be deleted, if using an OAuth2 authentication scheme.
ex_get_permissions(container_name, object_name=None)[source]

Return the permissions for the currently authenticated user.

Parameters:
  • container_name (str) – The container name.
  • object_name (str or None) – The object name. Optional. Not providing an object will return only container permissions.
Returns:

A tuple of container and object permissions.

Return type:

tuple of (int, int or None) from ContainerPermissions and ObjectPermissions, respectively.

ex_set_permissions(container_name, object_name=None, entity=None, role=None)[source]

Set the permissions for an ACL entity on a container or an object.

Parameters:
  • container_name (str) – The container name.
  • object_name (str) – The object name. Optional. Not providing an object will apply the acl to the container.
  • entity (str) – The entity to which the role will be applied. Optional. If not provided, the role will be applied to the authenticated user, if using an OAuth2 authentication scheme.
  • role (int from ContainerPermissions or ObjectPermissions or str.) – The permission/role to set on the entity.
Raises:

ValueError – If no entity was given, but was required. Or if the role isn’t valid for the bucket or object.

hash_type = 'md5'
http_vendor_prefix = 'x-goog'
jsonConnectionCls

alias of GoogleStorageJSONConnection

name = 'Google Cloud Storage'
namespace = 'http://doc.s3.amazonaws.com/2006-03-01'
supports_chunked_encoding = False
supports_s3_multipart_upload = False
website = 'http://cloud.google.com/storage'
class libcloud.storage.drivers.google_storage.GoogleStorageJSONConnection(user_id, key, secure=True, auth_type=None, credential_file=None, **kwargs)[source]

Bases: libcloud.storage.drivers.google_storage.GoogleStorageConnection

Represents a single connection to the Google storage JSON API endpoint.

This can either authenticate via the Google OAuth2 methods or via the S3 HMAC interoperability method.

add_default_headers(headers)[source]
host = 'www.googleapis.com'
rawResponseCls = None
responseCls

alias of GCSResponse

class libcloud.storage.drivers.google_storage.ObjectPermissions[source]

Bases: object

NONE = 0
OWNER = 2
READER = 1
values = ['NONE', 'READER', 'OWNER']

libcloud.storage.drivers.ktucloud module

class libcloud.storage.drivers.ktucloud.KTUCloudStorageConnection(user_id, key, secure=True, use_internal_url=False, **kwargs)[source]

Bases: libcloud.storage.drivers.cloudfiles.CloudFilesConnection

Connection class for the KT UCloud Storage endpoint.

auth_url = 'https://ssproxy.ucloudbiz.olleh.com/auth/v1.0'
get_endpoint()[source]
class libcloud.storage.drivers.ktucloud.KTUCloudStorageDriver(key, secret=None, secure=True, host=None, port=None, region='ord', use_internal_url=False, **kwargs)[source]

Bases: libcloud.storage.drivers.cloudfiles.CloudFilesStorageDriver

Cloudfiles-compatible storage driver for the KT UCloud endpoint.

@inherits: StorageDriver.__init__

Parameters:region (str) – ID of the region which should be used.
connectionCls

alias of KTUCloudStorageConnection

name = 'KTUCloud Storage'
type = 'ktucloud'

libcloud.storage.drivers.local module

libcloud.storage.drivers.nimbus module

class libcloud.storage.drivers.nimbus.NimbusConnection(*args, **kwargs)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

host = 'nimbus.io'
pre_connect_hook(params, headers)[source]
responseCls

alias of NimbusResponse

class libcloud.storage.drivers.nimbus.NimbusResponse(response, connection)[source]

Bases: libcloud.common.base.JsonResponse

parse_error()[source]
success()[source]
valid_response_codes = [200, 404, 409, 400]
class libcloud.storage.drivers.nimbus.NimbusStorageDriver(*args, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver

connectionCls

alias of NimbusConnection

create_container(container_name)[source]
iterate_containers()[source]
name = 'Nimbus.io'
website = 'https://nimbus.io/'

libcloud.storage.drivers.ninefold module

class libcloud.storage.drivers.ninefold.NinefoldStorageDriver(key, secret=None, secure=True, host=None, port=None)[source]

Bases: libcloud.storage.drivers.atmos.AtmosDriver

host = 'api.ninefold.com'
name = 'Ninefold'
path = '/storage/v1.0'
type = 'ninefold'
website = 'http://ninefold.com/'

libcloud.storage.drivers.oss module

class libcloud.storage.drivers.oss.OSSStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver

Parameters:
  • key (str) – API key or username to be used (required)
  • secret (str) – Secret password to be used (required)
  • secure (bool) – Whether to use HTTPS or HTTP. Note: Some providers only support HTTPS, and it is on by default.
  • host (str) – Override hostname used for connections.
  • port (int) – Override port used for connections.
  • api_version (str) – Optional API version. Only used by drivers which support multiple API versions.
  • region (str) – Optional driver region. Only used by drivers which support multiple regions.
Return type:

None

connectionCls

alias of OSSConnection

create_container(container_name, ex_location=None)[source]

@inherits: StorageDriver.create_container

Parameters:ex_location – The desired location in which to create the container
delete_container(container)[source]
delete_object(obj)[source]
download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]
download_object_as_stream(obj, chunk_size=None)[source]
ex_abort_all_multipart_uploads(container, prefix=None)[source]

Extension method for removing all partially completed OSS multipart uploads.

Parameters:
  • container (Container) – The container holding the uploads
  • prefix (str) – Delete only uploads of objects with this prefix
ex_iterate_multipart_uploads(container, prefix=None, delimiter=None, max_uploads=1000)[source]

Extension method for listing all in-progress OSS multipart uploads.

Each multipart upload which has not been committed or aborted is considered in-progress.

Parameters:
  • container (Container) – The container holding the uploads
  • prefix (str) – List only uploads of objects with this prefix
  • delimiter (str) – The object/key names are grouped based on being split by this delimiter
  • max_uploads (int) – The maximum number of upload items returned per request
Returns:

A generator of OSSMultipartUpload instances.

Return type:

generator of OSSMultipartUpload

get_container(container_name)[source]
get_object(container_name, object_name)[source]
hash_type = 'md5'
http_vendor_prefix = 'x-oss-'
iterate_container_objects(container, ex_prefix=None)[source]

Return a generator of objects for the given container.

Parameters:
  • container (Container) – Container instance
  • ex_prefix (str) – Only return objects starting with ex_prefix
Returns:

A generator of Object instances.

Return type:

generator of Object

iterate_containers()[source]
list_container_objects(container, ex_prefix=None)[source]

Return a list of objects for the given container.

Parameters:
  • container (Container) – Container instance.
  • ex_prefix (str) – Only return objects starting with ex_prefix
Returns:

A list of Object instances.

Return type:

list of Object

name = 'Aliyun OSS'
namespace = None
supports_chunked_encoding = False
supports_multipart_upload = True
upload_object(file_path, container, object_name, extra=None, verify_hash=True, headers=None)[source]
upload_object_via_stream(iterator, container, object_name, extra=None, headers=None)[source]
website = 'http://www.aliyun.com/product/oss'
class libcloud.storage.drivers.oss.OSSMultipartUpload(key, id, initiated)[source]

Bases: object

Class representing an Aliyun OSS multipart upload

Parameters:
  • key (str) – The object/key that was being uploaded
  • id (str) – The upload id assigned by Aliyun
  • initiated (str) – The date/time at which the upload was started

libcloud.storage.drivers.rgw module

class libcloud.storage.drivers.rgw.S3RGWStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region='default', **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

name = 'Ceph RGW'
website = 'http://ceph.com/'
class libcloud.storage.drivers.rgw.S3RGWOutscaleStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region='eu-west-2', **kwargs)[source]

Bases: libcloud.storage.drivers.rgw.S3RGWStorageDriver

name = 'RGW Outscale'
website = 'https://en.outscale.com/'

libcloud.storage.drivers.s3 module

class libcloud.storage.drivers.s3.BaseS3Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, backoff=None, retry_delay=None)[source]

Bases: libcloud.common.base.ConnectionUserAndKey

Represents a single connection to the S3 endpoint.

add_default_params(params)[source]
static get_auth_signature(method, headers, params, expires, secret_key, path, vendor_prefix)[source]
Signature = URL-Encode( Base64( HMAC-SHA1( YourSecretAccessKeyID,
                                           UTF-8-Encoding-Of( StringToSign ) ) ) );

StringToSign = HTTP-VERB + "\n" +
               Content-MD5 + "\n" +
               Content-Type + "\n" +
               Expires + "\n" +
               CanonicalizedVendorHeaders + CanonicalizedResource;
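
For illustration, the string-to-sign assembly and HMAC-SHA1 step can be reproduced with the standard library alone. This is a sketch of the scheme shown above, not libcloud's implementation, and every input value (secret key, expiry, resource path) is made up:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign(secret_key, method, content_md5, content_type, expires,
         vendor_headers, resource):
    """Build StringToSign per the scheme above and return the
    URL-encoded, Base64-encoded HMAC-SHA1 signature."""
    string_to_sign = ('\n'.join([method, content_md5, content_type,
                                 str(expires)])
                      + '\n' + vendor_headers + resource)
    digest = hmac.new(secret_key.encode('utf-8'),
                      string_to_sign.encode('utf-8'),
                      hashlib.sha1).digest()
    # safe='' also percent-encodes '+', '/' and '=' from the Base64 alphabet
    return quote(base64.b64encode(digest).decode('utf-8'), safe='')

# Hypothetical request: GET an object, expiring at a fixed Unix time,
# with no Content-MD5, Content-Type or vendor headers.
sig = sign('my-secret-key', 'GET', '', '', 1419114000, '', '/bucket/object')
```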
host = 's3.amazonaws.com'
pre_connect_hook(params, headers)[source]
rawResponseCls

alias of S3RawResponse

responseCls

alias of S3Response

class libcloud.storage.drivers.s3.BaseS3StorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, **kwargs)[source]

Bases: libcloud.storage.base.StorageDriver

Parameters:
  • key (str) – API key or username to be used (required)
  • secret (str) – Secret password to be used (required)
  • secure (bool) – Whether to use HTTPS or HTTP. Note: Some providers only support HTTPS, and it is on by default.
  • host (str) – Override hostname used for connections.
  • port (int) – Override port used for connections.
  • api_version (str) – Optional API version. Only used by drivers which support multiple API versions.
  • region (str) – Optional driver region. Only used by drivers which support multiple regions.
Return type:

None

connectionCls

alias of BaseS3Connection

create_container(container_name)[source]
delete_container(container)[source]
delete_object(obj)[source]
download_object(obj, destination_path, overwrite_existing=False, delete_on_failure=True)[source]
download_object_as_stream(obj, chunk_size=None)[source]
ex_cleanup_all_multipart_uploads(container, prefix=None)[source]

Extension method for removing all partially completed S3 multipart uploads.

Parameters:
  • container (Container) – The container holding the uploads
  • prefix (str) – Delete only uploads of objects with this prefix
ex_iterate_multipart_uploads(container, prefix=None, delimiter=None)[source]

Extension method for listing all in-progress S3 multipart uploads.

Each multipart upload which has not been committed or aborted is considered in-progress.

Parameters:
  • container (Container) – The container holding the uploads
  • prefix (str) – List only uploads of objects with this prefix
  • delimiter (str) – The object/key names are grouped based on being split by this delimiter
Returns:

A generator of S3MultipartUpload instances.

Return type:

generator of S3MultipartUpload

ex_location_name = ''
get_container(container_name)[source]
get_object(container_name, object_name)[source]
hash_type = 'md5'
http_vendor_prefix = 'x-amz'
iterate_container_objects(container, ex_prefix=None)[source]

Return a generator of objects for the given container.

Parameters:
  • container (Container) – Container instance
  • ex_prefix (str) – Only return objects starting with ex_prefix
Returns:

A generator of Object instances.

Return type:

generator of Object

iterate_containers()[source]
list_container_objects(container, ex_prefix=None)[source]

Return a list of objects for the given container.

Parameters:
  • container (Container) – Container instance.
  • ex_prefix (str) – Only return objects starting with ex_prefix
Returns:

A list of Object instances.

Return type:

list of Object
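
The ex_prefix argument scopes a listing to one key prefix, which keeps the iteration server-side rather than filtering client-side. As a sketch, totaling the size of every object under a prefix — `driver` is assumed to be a connected S3 driver and `container` an existing Container; the helper name is hypothetical:

```python
def total_size_under_prefix(driver, container, prefix):
    """Sum the .size attribute of every object whose name starts with
    prefix, streaming lazily via iterate_container_objects()."""
    return sum(obj.size
               for obj in driver.iterate_container_objects(
                   container, ex_prefix=prefix))
```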

name = 'Amazon S3 (standard)'
namespace = 'http://s3.amazonaws.com/doc/2006-03-01/'
supports_chunked_encoding = False
supports_s3_multipart_upload = True
upload_object(file_path, container, object_name, extra=None, verify_hash=True, ex_storage_class=None)[source]

@inherits: StorageDriver.upload_object

Parameters: ex_storage_class (str) – The Amazon S3 storage class to store the object under
upload_object_via_stream(iterator, container, object_name, extra=None, ex_storage_class=None)[source]

@inherits: StorageDriver.upload_object_via_stream

Parameters: ex_storage_class (str) – The Amazon S3 storage class to store the object under
website = 'http://aws.amazon.com/s3/'
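
Both upload methods accept ex_storage_class to request a non-default S3 storage class at write time. A minimal sketch, assuming `driver` and `container` are connected and existing; the wrapper name and the exact class-name spelling (`'reduced_redundancy'`) are assumptions, not confirmed by this reference:

```python
def upload_reduced_redundancy(driver, file_path, container, object_name):
    """Upload a local file, requesting a cheaper, lower-durability
    storage class via the ex_storage_class extension argument."""
    return driver.upload_object(file_path, container, object_name,
                                ex_storage_class='reduced_redundancy')
```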
class libcloud.storage.drivers.s3.S3APNE1Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3Connection

host = 's3-ap-northeast-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3APNE1StorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3APNE1Connection

ex_location_name = 'ap-northeast-1'
name = 'Amazon S3 (ap-northeast-1)'
class libcloud.storage.drivers.s3.S3APNE2Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.common.aws.SignedAWSConnection, libcloud.storage.drivers.s3.BaseS3Connection

host = 's3-ap-northeast-2.amazonaws.com'
service_name = 's3'
version = '2006-03-01'
class libcloud.storage.drivers.s3.S3APNE2StorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3APNE2Connection

ex_location_name = 'ap-northeast-2'
name = 'Amazon S3 (ap-northeast-2)'
region_name = 'ap-northeast-2'
supports_s3_multipart_upload = False
libcloud.storage.drivers.s3.S3APNEConnection

alias of S3APNE1Connection

libcloud.storage.drivers.s3.S3APNEStorageDriver

alias of S3APNE1StorageDriver

class libcloud.storage.drivers.s3.S3APSE2Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3Connection

host = 's3-ap-southeast-2.amazonaws.com'
class libcloud.storage.drivers.s3.S3APSE2StorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3APSE2Connection

ex_location_name = 'ap-southeast-2'
name = 'Amazon S3 (ap-southeast-2)'
class libcloud.storage.drivers.s3.S3APSEConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3Connection

host = 's3-ap-southeast-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3APSEStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3APSEConnection

ex_location_name = 'ap-southeast-1'
name = 'Amazon S3 (ap-southeast-1)'
class libcloud.storage.drivers.s3.S3CNNorthConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.common.aws.SignedAWSConnection, libcloud.storage.drivers.s3.BaseS3Connection

host = 's3.cn-north-1.amazonaws.com.cn'
service_name = 's3'
version = '2006-03-01'
class libcloud.storage.drivers.s3.S3CNNorthStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3CNNorthConnection

ex_location_name = 'cn-north-1'
name = 'Amazon S3 (cn-north-1)'
region_name = 'cn-north-1'
supports_s3_multipart_upload = False
class libcloud.storage.drivers.s3.S3Connection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.common.aws.AWSTokenConnection, libcloud.storage.drivers.s3.BaseS3Connection

Represents a single connection to the S3 endpoint, with AWS-specific features.

class libcloud.storage.drivers.s3.S3EUWestConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3Connection

host = 's3-eu-west-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3EUWestStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3EUWestConnection

ex_location_name = 'EU'
name = 'Amazon S3 (eu-west-1)'
class libcloud.storage.drivers.s3.S3MultipartUpload(key, id, created_at, initiator, owner)[source]

Bases: object

Class representing an Amazon S3 multipart upload

Parameters:
  • key (str) – The object/key that was being uploaded
  • id (str) – The upload id assigned by Amazon
  • created_at (str) – The date/time at which the upload was started
  • initiator (str) – The AWS owner/IAM user who initiated the upload
  • owner (str) – The AWS owner/IAM user who will own the object
class libcloud.storage.drivers.s3.S3RawResponse(connection, response=None)[source]

Bases: libcloud.storage.drivers.s3.S3Response, libcloud.common.base.RawResponse

Parameters: connection (Connection) – Parent connection object.
class libcloud.storage.drivers.s3.S3Response(response, connection)[source]

Bases: libcloud.common.aws.AWSBaseResponse

namespace = None
parse_error()[source]
success()[source]
valid_response_codes = [404, 409, 400]
class libcloud.storage.drivers.s3.S3SAEastConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3Connection

host = 's3-sa-east-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3SAEastStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3SAEastConnection

ex_location_name = 'sa-east-1'
name = 'Amazon S3 (sa-east-1)'
class libcloud.storage.drivers.s3.S3StorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.common.aws.AWSDriver, libcloud.storage.drivers.s3.BaseS3StorageDriver

connectionCls

alias of S3Connection

class libcloud.storage.drivers.s3.S3USWestConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3Connection

host = 's3-us-west-1.amazonaws.com'
class libcloud.storage.drivers.s3.S3USWestOregonConnection(user_id, key, secure=True, host=None, port=None, url=None, timeout=None, proxy_url=None, token=None, retry_delay=None, backoff=None)[source]

Bases: libcloud.storage.drivers.s3.S3Connection

host = 's3-us-west-2.amazonaws.com'
class libcloud.storage.drivers.s3.S3USWestOregonStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3USWestOregonConnection

ex_location_name = 'us-west-2'
name = 'Amazon S3 (us-west-2)'
class libcloud.storage.drivers.s3.S3USWestStorageDriver(key, secret=None, secure=True, host=None, port=None, api_version=None, region=None, token=None, **kwargs)[source]

Bases: libcloud.storage.drivers.s3.S3StorageDriver

connectionCls

alias of S3USWestConnection

ex_location_name = 'us-west-1'
name = 'Amazon S3 (us-west-1)'

Module contents

Drivers for working with different providers