API

S3FileSystem(*args, **kwargs)

Access S3 as if it were a file system.

S3FileSystem.cat(path[, recursive, on_error])

Fetch (potentially multiple) paths' contents

S3FileSystem.du(path[, total, maxdepth, ...])

Space used by files and optionally directories within a path

S3FileSystem.exists(path)

Is there a file at the given path

S3FileSystem.find(path[, maxdepth, ...])

List all files below path.

S3FileSystem.get(rpath, lpath[, recursive, ...])

Copy file(s) to local.

S3FileSystem.glob(path[, maxdepth])

Find files by glob-matching.

S3FileSystem.info(path, **kwargs)

Give details of entry at path

S3FileSystem.ls(path[, detail])

List objects at path.

S3FileSystem.mkdir(path[, acl, create_parents])

Create directory entry at path

S3FileSystem.mv(path1, path2[, recursive, ...])

Move file(s) from one location to another

S3FileSystem.open(path[, mode, block_size, ...])

Return a file-like object from the filesystem

S3FileSystem.put(lpath, rpath[, recursive, ...])

Copy file(s) from local.

S3FileSystem.read_block(fn, offset, length)

Read a block of bytes from a file

S3FileSystem.rm(path[, recursive, maxdepth])

Delete files.

S3FileSystem.tail(path[, size])

Get the last size bytes from file

S3FileSystem.touch(path[, truncate, data])

Create empty file or truncate

S3File(s3, path[, mode, block_size, acl, ...])

Open S3 key as a file.

S3File.close()

Close file

S3File.flush([force])

Write buffered data to backend store.

S3File.info()

File information about this path

S3File.read([length])

Return data from cache, or fetch pieces as necessary

S3File.seek(loc[, whence])

Set current file location

S3File.tell()

Current file location

S3File.write(data)

Write data to buffer.

S3Map(root, s3[, check, create])

Mirror previous class, not implemented in fsspec

class s3fs.core.S3FileSystem(*args, **kwargs)[source]

Access S3 as if it were a file system.

This exposes a filesystem-like API (ls, cp, open, etc.) on top of S3 storage.

Provide credentials either explicitly (key=, secret=) or depend on boto’s credential methods. See botocore documentation for more information. If no credentials are available, use anon=True.

Parameters
  • anon (bool (False)) – Whether to use anonymous connection (public buckets only). If False, uses the key/secret given, or boto’s credential resolver (client_kwargs, environment variables, config files, EC2 IAM server, in that order)

  • endpoint_url (string (None)) – Use this endpoint_url, if specified. Needed for connecting to non-AWS S3 buckets. Takes precedence over endpoint_url in client_kwargs.

  • key (string (None)) – If not anonymous, use this access key ID, if specified. Takes precedence over aws_access_key_id in client_kwargs.

  • secret (string (None)) – If not anonymous, use this secret access key, if specified. Takes precedence over aws_secret_access_key in client_kwargs.

  • token (string (None)) – If not anonymous, use this security token, if specified

  • use_ssl (bool (True)) – Whether to use SSL in connections to S3; may be faster without, but insecure. If use_ssl is also set in client_kwargs, the value set in client_kwargs will take priority.

  • s3_additional_kwargs (dict of parameters that are used when calling s3 api methods) – Typically used for things like “ServerSideEncryption”.

  • client_kwargs (dict of parameters for the botocore client) –

  • requester_pays (bool (False)) – If RequesterPays buckets are supported.

  • default_block_size (int (None)) – If given, the default block size value used for open(), if no specific value is given at call time. The built-in default is 5MB.

  • default_fill_cache (Bool (True)) – Whether to use cache filling with open by default. Refer to S3File.open.

  • default_cache_type (string ("readahead")) – If given, the default cache_type value used for open(). Set to “none” if no caching is desired. See fsspec’s documentation for other available cache_type values. Default cache_type is “readahead”.

  • version_aware (bool (False)) – Whether to support bucket versioning. If enabled, this will require the user to have the necessary IAM permissions for dealing with versioned objects. Note that in the event that you only need to work with the latest version of objects in a versioned bucket, and do not need the VersionId for those objects, you should set version_aware to False for performance reasons. When set to True, filesystem instances will use the S3 ListObjectVersions API call to list directory contents, which requires listing all historical object versions.

  • cache_regions (bool (False)) – Whether to cache bucket regions. Whenever a new bucket is used, it will first find out which region it belongs to and then use the client for that region.

  • asynchronous (bool (False)) – Whether this instance is to be used from inside coroutines.

  • config_kwargs (dict of parameters passed to botocore.client.Config) –

  • kwargs (other parameters for core session.) –

  • session (aiobotocore AioSession object to be used for all connections.) – This session will be used in place of creating a new session inside S3FileSystem. For example: aiobotocore.session.AioSession(profile=’test_user’)

  • max_concurrency (int (1)) – The maximum number of concurrent transfers to use per file for multipart upload (put()) operations. Defaults to 1 (sequential). When used in conjunction with S3FileSystem.put(batch_size=...) the maximum number of simultaneous connections is max_concurrency * batch_size. We may extend this parameter to affect pipe(), cat() and get(). Increasing this value will result in higher memory usage during multipart upload operations (by max_concurrency * chunksize bytes per file).

The following parameters are passed on to fsspec:

  • skip_instance_cache – to control reuse of instances

  • use_listings_cache – to control reuse of directory listings

  • listings_expiry_time – to control reuse of directory listings

  • max_paths – to control reuse of directory listings

Examples

>>> s3 = S3FileSystem(anon=False)  
>>> s3.ls('my-bucket/')  
['my-file.txt']
>>> with s3.open('my-bucket/my-file.txt', mode='rb') as f:  
...     print(f.read())  
b'Hello, world!'
cat(path, recursive=False, on_error='raise', **kwargs)

Fetch (potentially multiple) paths’ contents

Parameters
  • recursive (bool) – If True, assume the path(s) are directories, and get all the contained files

  • on_error ("raise", "omit", "return") – If raise, an underlying exception will be raised (converted to KeyError if the type is in self.missing_exceptions); if omit, keys with exception will simply not be included in the output; if “return”, all keys are included in the output, but the value will be bytes or an exception instance.

  • kwargs (passed to cat_file) –

Returns

dict of {path: contents} if there are multiple paths or the path has been otherwise expanded
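
For illustration, a minimal sketch against a hypothetical bucket (my-bucket and its keys are assumptions, not part of the library docs):

>>> s3.cat('my-bucket/my-file.txt')
b'Hello, world!'
>>> s3.cat('my-bucket/data/', recursive=True)
{'my-bucket/data/a.txt': b'...', 'my-bucket/data/b.txt': b'...'}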

cat_file(path, start=None, end=None, **kwargs)

Get the content of a file

Parameters
  • path (URL of file on this filesystems) –

  • start (int) – Bytes limits of the read. If negative, backwards from end, like usual python slices. Either can be None for start or end of file, respectively

  • end (int) – Bytes limits of the read. If negative, backwards from end, like usual python slices. Either can be None for start or end of file, respectively

  • kwargs (passed to open().) –
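
As a sketch (assuming my-bucket/my-file.txt holds b'Hello, world!'), the byte limits behave like Python slices:

>>> s3.cat_file('my-bucket/my-file.txt', start=0, end=5)
b'Hello'
>>> s3.cat_file('my-bucket/my-file.txt', start=-6)
b'world!'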

cat_ranges(paths, starts, ends, max_gap=None, on_error='return', **kwargs)

Get the contents of byte ranges from one or more files

Parameters
  • paths (list) – A list of filepaths on this filesystem

  • starts (int or list) – Bytes limits of the read. If using a single int, the same value will be used to read all the specified files.

  • ends (int or list) – Bytes limits of the read. If using a single int, the same value will be used to read all the specified files.
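
A minimal sketch, assuming two hypothetical objects; results come back as a list of bytes, in the same order as paths:

>>> s3.cat_ranges(['my-bucket/a.txt', 'my-bucket/b.txt'], starts=0, ends=4)
[b'AAAA', b'BBBB']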

checksum(path, refresh=False)

Unique value for current version of file

If the checksum is the same from one moment to another, the contents are guaranteed to be the same. If the checksum changes, the contents might have changed.

Parameters
  • path (string/bytes) – path of file to get checksum for

  • refresh (bool (=False)) – if False, look in local cache for file details first

chmod(path, acl, recursive=False, **kwargs)

Set Access Control on a bucket/key

See http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Parameters
  • path (string) – the object to set

  • acl (string) – the value of ACL to apply

  • recursive (bool) – whether to apply the ACL to all keys below the given path too
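
For example, applying canned ACLs (bucket and key names are illustrative):

>>> s3.chmod('my-bucket/my-file.txt', acl='public-read')
>>> s3.chmod('my-bucket/data', acl='private', recursive=True)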

classmethod clear_instance_cache()

Clear the cache of filesystem instances.

Notes

Unless overridden by setting the cachable class attribute to False, the filesystem class stores a reference to newly created instances. This prevents Python’s normal rules around garbage collection from working, since the instances refcount will not drop to zero until clear_instance_cache is called.

clear_multipart_uploads(bucket)

Remove any partial uploads in the bucket

connect(refresh=False, kwargs={})

Establish S3 connection object.

Return type

Session to be closed later with await .close()

copy(path1, path2, recursive=False, maxdepth=None, on_error=None, **kwargs)

Copy within two locations in the filesystem

Parameters

on_error (“raise”, “ignore”) – If raise, any not-found exceptions will be raised; if ignore, any not-found exceptions will cause the path to be skipped; defaults to raise unless recursive is true, where the default is ignore

cp(path1, path2, **kwargs)

Alias of AbstractFileSystem.copy.

created(path)

Return the created timestamp of a file as a datetime.datetime

classmethod current()

Return the most recently instantiated FileSystem

If no instance has been created, then create one with defaults

delete(path, recursive=False, maxdepth=None)

Alias of AbstractFileSystem.rm.

disk_usage(path, total=True, maxdepth=None, **kwargs)

Alias of AbstractFileSystem.du.

download(rpath, lpath, recursive=False, **kwargs)

Alias of AbstractFileSystem.get.

du(path, total=True, maxdepth=None, withdirs=False, **kwargs)

Space used by files and optionally directories within a path

Directory size does not include the size of its contents.

Parameters
  • path (str) –

  • total (bool) – Whether to sum all the file sizes

  • maxdepth (int or None) – Maximum number of directory levels to descend, None for unlimited.

  • withdirs (bool) – Whether to include directory paths in the output.

  • kwargs (passed to find) –

Returns

Dict of {path: size} if total=False, or int otherwise, where numbers refer to bytes used.
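
A sketch of both return shapes, with hypothetical paths and sizes:

>>> s3.du('my-bucket/data')
1048576
>>> s3.du('my-bucket/data', total=False)
{'my-bucket/data/a.csv': 524288, 'my-bucket/data/b.csv': 524288}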

end_transaction()

Finish write transaction, non-context version

exists(path)

Is there a file at the given path

expand_path(path, recursive=False, maxdepth=None, **kwargs)

Turn one or more globs or directories into a list of all matching paths to files or directories.

kwargs are passed to glob or find, which may in turn call ls

find(path, maxdepth=None, withdirs=None, detail=False, prefix='', **kwargs)

List all files below path. Like posix find command without conditions

Parameters
  • path (str) –

  • maxdepth (int or None) – If not None, the maximum number of levels to descend

  • withdirs (bool) – Whether to include directory paths in the output. This is True when used by glob, but users usually only want files.

  • prefix (str) – Only return files that match ^{path}/{prefix} (if there is an exact match filename == {path}/{prefix}, it also will be included)
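
For instance (paths are illustrative; prefix narrows the server-side listing):

>>> s3.find('my-bucket/logs')
['my-bucket/logs/2024/01.log', 'my-bucket/logs/2024/02.log']
>>> s3.find('my-bucket/logs', prefix='2024/01')
['my-bucket/logs/2024/01.log']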

static from_dict(dct: Dict[str, Any]) → AbstractFileSystem

Recreate a filesystem instance from dictionary representation.

See .to_dict() for the expected structure of the input.

Parameters

dct (Dict[str, Any]) –

Return type

file system instance, not necessarily of this particular class.

Warning

This can import arbitrary modules (as determined by the cls key). Make sure you haven’t installed any modules that may execute malicious code at import time.

static from_json(blob: str) → AbstractFileSystem

Recreate a filesystem instance from JSON representation.

See .to_json() for the expected structure of the input.

Parameters

blob (str) –

Return type

file system instance, not necessarily of this particular class.

Warning

This can import arbitrary modules (as determined by the cls key). Make sure you haven’t installed any modules that may execute malicious code at import time.

property fsid

Persistent filesystem id that can be used to compare filesystems across sessions.

get(rpath, lpath, recursive=False, callback=<fsspec.callbacks.NoOpCallback object>, maxdepth=None, **kwargs)

Copy file(s) to local.

Copies a specific file or tree of files (if recursive=True). If lpath ends with a “/”, it will be assumed to be a directory, and target files will go within. Can submit a list of paths, which may be glob-patterns and will be expanded.

Calls get_file for each source.

get_delegated_s3pars(exp=3600)

Get temporary credentials from STS, appropriate for sending across a network. Only relevant where the key/secret were explicitly provided.

Parameters

exp (int) – Time in seconds that credentials are good for

Return type

dict of parameters

get_file(rpath, lpath, callback=<fsspec.callbacks.NoOpCallback object>, outfile=None, **kwargs)

Copy single remote file to local

get_mapper(root='', check=False, create=False, missing_exceptions=None)

Create key/value store based on this file-system

Makes a MutableMapping interface to the FS at the given root path. See fsspec.mapping.FSMap for further details.

get_tags(path)[source]

Retrieve tag key/values for the given path

Returns

dict of {str: str}

getxattr(path, attr_name, **kwargs)

Get an attribute from the metadata.

Examples

>>> mys3fs.getxattr('mykey', 'attribute_1')  
'value_1'
glob(path, maxdepth=None, **kwargs)

Find files by glob-matching.

If the path ends with ‘/’, only folders are returned.

We support "**", "?" and "[..]". We do not support ^ for pattern negation.

The maxdepth option is applied on the first ** found in the path.

kwargs are passed to ls.
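
A short sketch (key names assumed):

>>> s3.glob('my-bucket/data/*.csv')
['my-bucket/data/a.csv', 'my-bucket/data/b.csv']
>>> s3.glob('my-bucket/**/*.csv')
['my-bucket/data/a.csv', 'my-bucket/data/b.csv']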

head(path, size=1024)

Get the first size bytes from file

info(path, **kwargs)

Give details of entry at path

Returns a single dictionary, with exactly the same information as ls would with detail=True.

The default implementation calls ls and could be overridden by a shortcut. kwargs are passed on to ls().

Some file systems might not be able to measure the file’s size, in which case, the returned dict will include 'size': None.

Returns

dict with keys: name (full path in the FS), size (in bytes), type (file, directory, or something else) and other FS-specific keys.

invalidate_cache(path=None)[source]

Discard any cached directory information

Parameters

path (string or None) – If None, clear all cached listings; otherwise, clear listings at or under the given path.

invalidate_region_cache()

Invalidate the region cache (associated with buckets) if cache_regions is turned on.

isdir(path)

Is this entry directory-like?

isfile(path)

Is this entry file-like?

lexists(path, **kwargs)

If there is a file at the given path (including broken links)

listdir(path, detail=True, **kwargs)

Alias of AbstractFileSystem.ls.

ls(path, detail=True, **kwargs)

List objects at path.

This should include subdirectories and files at that location. The difference between a file and a directory must be clear when details are requested.

The specific keys, or perhaps a FileInfo class, or similar, is TBD, but must be consistent across implementations. Must include:

  • full path to the entry (without protocol)

  • size of the entry, in bytes. If the value cannot be determined, will be None.

  • type of entry, “file”, “directory” or other

Additional information may be present, appropriate to the file-system, e.g., generation, checksum, etc.

May use refresh=True|False to allow use of self._ls_from_cache to check for a saved listing and avoid calling the backend. This would be common where listing may be expensive.

Parameters
  • path (str) –

  • detail (bool) – if True, gives a list of dictionaries, where each is the same as the result of info(path). If False, gives a list of paths (str).

  • kwargs – may have additional backend-specific options, such as version information

Returns

List of strings if detail is False, or list of directory information dicts if detail is True.
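
A sketch of the two output modes (entries are illustrative; real detail dicts carry additional S3-specific keys):

>>> s3.ls('my-bucket', detail=False)
['my-bucket/data', 'my-bucket/my-file.txt']
>>> s3.ls('my-bucket/my-file.txt', detail=True)
[{'name': 'my-bucket/my-file.txt', 'size': 13, 'type': 'file'}]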

make_bucket_versioned(bucket, versioned: bool = True)

Set bucket versioning status

makedir(path, create_parents=True, **kwargs)

Alias of AbstractFileSystem.mkdir.

makedirs(path, exist_ok=False)

Recursively make directories

Creates directory at path and any intervening required directories. Raises exception if, for instance, the path already exists but is a file.

Parameters
  • path (str) – leaf directory name

  • exist_ok (bool (False)) – If False, will error if the target already exists

merge(path, filelist, **kwargs)

Create single S3 file from list of S3 files

Uses multi-part, no data is downloaded. The original files are not deleted.

Parameters
  • path (str) – The final file to produce

  • filelist (list of str) – The paths, in order, to assemble into the final file.
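
A hedged sketch (part names assumed). Note that S3 multipart copy generally requires every part except the last to be at least 5 MB:

>>> s3.merge('my-bucket/combined.bin',
...          ['my-bucket/part-0.bin', 'my-bucket/part-1.bin'])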

metadata(path, refresh=False, **kwargs)

Return metadata of path.

Parameters
  • path (string/bytes) – filename to get metadata for

  • refresh (bool (=False)) – (ignored)

mkdir(path, acl=False, create_parents=True, **kwargs)

Create directory entry at path

For systems that don’t have true directories, this may create a directory entry for this instance only and not touch the real filesystem

Parameters
  • path (str) – location

  • create_parents (bool) – if True, this is equivalent to makedirs

  • kwargs – may be permissions, etc.

mkdirs(path, exist_ok=False)

Alias of AbstractFileSystem.makedirs.

modified(path, version_id=None, refresh=False)[source]

Return the last modified timestamp of file at path as a datetime

move(path1, path2, **kwargs)

Alias of AbstractFileSystem.mv.

mv(path1, path2, recursive=False, maxdepth=None, **kwargs)

Move file(s) from one location to another

open(path, mode='rb', block_size=None, cache_options=None, compression=None, **kwargs)

Return a file-like object from the filesystem

The resultant instance must function correctly in a with block (as a context manager).

Parameters
  • path (str) – Target file

  • mode (str like 'rb', 'w') – See builtin open()

  • block_size (int) – Some indication of buffering - this is a value in bytes

  • cache_options (dict, optional) – Extra arguments to pass through to the cache.

  • compression (string or None) – If given, open file using compression codec. Can either be a compression name (a key in fsspec.compression.compr) or “infer” to guess the compression from the filename suffix.

  • encoding (passed on to TextIOWrapper for text mode) –

  • errors (passed on to TextIOWrapper for text mode) –

  • newline (passed on to TextIOWrapper for text mode) –
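
For example, reading compressed text with the codec inferred from the filename suffix (the key is illustrative):

>>> with s3.open('my-bucket/table.csv.gz', 'rt', compression='infer') as f:
...     header = f.readline()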

pipe(path, value=None, **kwargs)

Put value into path

(counterpart to cat)

Parameters
  • path (string or dict(str, bytes)) – If a string, a single remote location to put value bytes; if a dict, a mapping of {path: bytesvalue}.

  • value (bytes, optional) – If using a single path, these are the bytes to put there. Ignored if path is a dict

pipe_file(path, value, **kwargs)

Set the bytes of given file
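
A sketch of both forms (keys and payloads assumed):

>>> s3.pipe_file('my-bucket/single-key', b'payload')
>>> s3.pipe({'my-bucket/a': b'A', 'my-bucket/b': b'B'})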

put(lpath, rpath, recursive=False, callback=<fsspec.callbacks.NoOpCallback object>, maxdepth=None, **kwargs)

Copy file(s) from local.

Copies a specific file or tree of files (if recursive=True). If rpath ends with a “/”, it will be assumed to be a directory, and target files will go within.

Calls put_file for each source.

put_file(lpath, rpath, callback=<fsspec.callbacks.NoOpCallback object>, **kwargs)

Copy single file to remote

put_tags(path, tags, mode='o')[source]

Set tags for given existing key

Tags are a str:str mapping that can be attached to any key, see https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/allocation-tag-restrictions.html

This is similar to, but distinct from, key metadata, which is usually set at key creation time.

Parameters
  • path (str) – Existing key to attach tags to

  • tags (dict str, str) – Tags to apply.

  • mode – One of ‘o’ or ‘m’. ‘o’: will over-write any existing tags. ‘m’: will merge in new tags with existing tags; incurs two remote calls.
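
For instance (key and tags are illustrative):

>>> s3.put_tags('my-bucket/my-file.txt', {'project': 'alpha'}, mode='o')
>>> s3.put_tags('my-bucket/my-file.txt', {'stage': 'raw'}, mode='m')
>>> s3.get_tags('my-bucket/my-file.txt')
{'project': 'alpha', 'stage': 'raw'}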

read_block(fn, offset, length, delimiter=None)

Read a block of bytes from a file

Starting at offset of the file, read length bytes. If delimiter is set then we ensure that the read starts and stops at delimiter boundaries that follow the locations offset and offset + length. If offset is zero then we start at zero. The bytestring returned WILL include the end delimiter string.

If offset+length is beyond the eof, reads to eof.

Parameters
  • fn (string) – Path to filename

  • offset (int) – Byte offset to start read

  • length (int) – Number of bytes to read. If None, read to end.

  • delimiter (bytes (optional)) – Ensure reading starts and stops at delimiter bytestring

Examples

>>> fs.read_block('data/file.csv', 0, 13)  
b'Alice, 100\nBo'
>>> fs.read_block('data/file.csv', 0, 13, delimiter=b'\n')  
b'Alice, 100\nBob, 200\n'

Use length=None to read to the end of the file.

>>> fs.read_block('data/file.csv', 0, None, delimiter=b'\n')  
b'Alice, 100\nBob, 200\nCharlie, 300'

See also

fsspec.utils.read_block()

read_bytes(path, start=None, end=None, **kwargs)

Alias of AbstractFileSystem.cat_file.

read_text(path, encoding=None, errors=None, newline=None, **kwargs)

Get the contents of the file as a string.

Parameters
  • path (str) – URL of file on this filesystems

  • encoding (same as open.) –

  • errors (same as open.) –

  • newline (same as open.) –

rename(path1, path2, **kwargs)

Alias of AbstractFileSystem.mv.

rm(path, recursive=False, maxdepth=None)

Delete files.

Parameters
  • path (str or list of str) – File(s) to delete.

  • recursive (bool) – If file(s) are directories, recursively delete contents and then also remove the directory

  • maxdepth (int or None) – Depth to pass to walk for finding files to delete, if recursive. If None, there will be no limit and infinite recursion may be possible.
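
For example (paths assumed):

>>> s3.rm('my-bucket/my-file.txt')
>>> s3.rm('my-bucket/data', recursive=True)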

rm_file(path)

Delete a file

rmdir(path)

Remove a directory, if empty

async set_session(refresh=False, kwargs={})[source]

Establish S3 connection object.

Return type

Session to be closed later with await .close()

setxattr(path, copy_kwargs=None, **kw_args)

Set metadata.

Attributes have to be of the form documented in the Metadata Reference.

Parameters
  • kw_args (key-value pairs like field="value", where the values must be strings) – Does not alter existing fields, unless the field appears here; if the value is None, delete the field.

  • copy_kwargs (dict, optional) – dictionary of additional params to use for the underlying s3.copy_object.

Examples

>>> mys3file.setxattr(attribute_1='value1', attribute_2='value2')  
# Example for use with copy_kwargs
>>> mys3file.setxattr(copy_kwargs={'ContentType': 'application/pdf'},
...     attribute_1='value1')  
sign(path, expiration=100, **kwargs)[source]

Create a signed URL representing the given path

Some implementations allow temporary URLs to be generated, as a way of delegating credentials.

Parameters
  • path (str) – The path on the filesystem

  • expiration (int) – Number of seconds to enable the URL for (if supported)

Returns

URL – The signed URL

Return type

str

Raises

NotImplementedError – if the method is not implemented for a filesystem
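
A sketch; the exact query string of the returned URL depends on the credentials and signature version in use:

>>> s3.sign('my-bucket/my-file.txt', expiration=3600)
'https://my-bucket.s3.amazonaws.com/my-file.txt?...'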

size(path)

Size in bytes of file

sizes(paths)

Size in bytes of each file in a list of paths

split_path(path) → Tuple[str, str, Optional[str]][source]

Normalise S3 path string into bucket and key.

Parameters

path (string) – Input path, like s3://mybucket/path/to/file

Examples

>>> split_path("s3://mybucket/path/to/file")
['mybucket', 'path/to/file', None]
>>> split_path("s3://mybucket/path/to/versioned_file?versionId=some_version_id")
['mybucket', 'path/to/versioned_file', 'some_version_id']
start_transaction()

Begin write transaction for deferring files, non-context version

stat(path, **kwargs)

Alias of AbstractFileSystem.info.

tail(path, size=1024)

Get the last size bytes from file

to_dict(*, include_password: bool = True) → Dict[str, Any]

JSON-serializable dictionary representation of this filesystem instance.

Parameters

include_password (bool, default True) – Whether to include the password (if any) in the output.

Returns

Dictionary with keys cls (the python location of this class), protocol (text name of this class’s protocol, first one in case of multiple), args (positional args, usually empty), and all other keyword arguments as their own keys.

Warning

Serialized filesystems may contain sensitive information which have been passed to the constructor, such as passwords and tokens. Make sure you store and send them in a secure environment!

to_json(*, include_password: bool = True) → str

JSON representation of this filesystem instance.

Parameters

include_password (bool, default True) – Whether to include the password (if any) in the output.

Returns

JSON string with keys cls (the python location of this class), protocol (text name of this class’s protocol, first one in case of multiple), args (positional args, usually empty), and all other keyword arguments as their own keys.

Warning

Serialized filesystems may contain sensitive information which have been passed to the constructor, such as passwords and tokens. Make sure you store and send them in a secure environment!

touch(path, truncate=True, data=None, **kwargs)

Create empty file or truncate

property transaction

A context within which files are committed together upon exit

Requires the file class to implement .commit() and .discard() for the normal and exception cases.

transaction_type

alias of Transaction

ukey(path)

Hash of file properties, to tell if it has changed

unstrip_protocol(name: str) → str

Format FS-specific path to generic, including protocol

upload(lpath, rpath, recursive=False, **kwargs)

Alias of AbstractFileSystem.put.

url(path, expires=3600, client_method='get_object', **kwargs)

Generate presigned URL to access path by HTTP

Parameters
  • path (string) – the key path we are interested in

  • expires (int) – the number of seconds this signature will be good for.
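
For example, a presigned download URL, or an upload URL via client_method (values illustrative):

>>> s3.url('my-bucket/my-file.txt', expires=600)
'https://my-bucket.s3.amazonaws.com/my-file.txt?...'
>>> s3.url('my-bucket/new-key', expires=600, client_method='put_object')
'https://...'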

walk(path, maxdepth=None, topdown=True, on_error='omit', **kwargs)

Return all files below path

List all files, recursing into subdirectories; output is iterator-style, like os.walk(). For a simple list of files, find() is available.

When topdown is True, the caller can modify the dirnames list in-place (perhaps using del or slice assignment), and walk() will only recurse into the subdirectories whose names remain in dirnames; this can be used to prune the search, impose a specific order of visiting, or even to inform walk() about directories the caller creates or renames before it resumes walk() again. Modifying dirnames when topdown is False has no effect. (see os.walk)

Note that the “files” output will include anything that is not a directory, such as links.

Parameters
  • path (str) – Root to recurse into

  • maxdepth (int) – Maximum recursion depth. None means limitless, but not recommended on link-based file-systems.

  • topdown (bool (True)) – Whether to walk the directory tree from the top downwards or from the bottom upwards.

  • on_error (“omit”, “raise”, a callable) – if omit (default), paths with exceptions will simply be empty; if raise, an underlying exception will be raised; if callable, it will be called with a single OSError instance as an argument

  • kwargs (passed to ls) –
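
A sketch of the os.walk-style triples for an assumed two-level bucket; dirs and files hold base names, not full paths:

>>> for root, dirs, files in s3.walk('my-bucket'):
...     print(root, dirs, files)
my-bucket ['data'] ['my-file.txt']
my-bucket/data [] ['a.csv', 'b.csv']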

write_bytes(path, value, **kwargs)

Alias of AbstractFileSystem.pipe_file.

write_text(path, value, encoding=None, errors=None, newline=None, **kwargs)

Write the text to the given file.

An existing file will be overwritten.

Parameters
  • path (str) – URL of file on this filesystems

  • value (str) – Text to write.

  • encoding (same as open.) –

  • errors (same as open.) –

  • newline (same as open.) –

class s3fs.core.S3File(s3, path, mode='rb', block_size=5242880, acl=False, version_id=None, fill_cache=True, s3_additional_kwargs=None, autocommit=True, cache_type='readahead', requester_pays=False, cache_options=None, size=None)[source]

Open S3 key as a file. Data is only loaded and cached on demand.

Parameters
  • s3 (S3FileSystem) – botocore connection

  • path (string) – S3 bucket/key to access

  • mode (str) – One of ‘rb’, ‘wb’, ‘ab’. These have the same meaning as they do for the built-in open function.

  • block_size (int) – read-ahead size for finding delimiters

  • fill_cache (bool) – If seeking to a new part of the file beyond the current buffer, with this True, the buffer will be filled between the sections to best support random access. When reading only a few specific chunks out of a file, performance may be better if False.

  • acl (str) – Canned ACL to apply

  • version_id (str) – Optional version to read the file at. If not specified this will default to the current version of the object. This is only used for reading.

  • requester_pays (bool (False)) – If RequesterPays buckets are supported.

Examples

>>> s3 = S3FileSystem()  
>>> with s3.open('my-bucket/my-file.txt', mode='rb') as f:  
...     ...  

See also

S3FileSystem.open

used to create S3File objects

close()

Close file

Finalizes writes, discards cache

commit()[source]

Move from temp to final destination

discard()[source]

Throw away temporary file

fileno()

Returns underlying file descriptor if one exists.

OSError is raised if the IO object does not use a file descriptor.

flush(force=False)

Write buffered data to backend store.

Writes the current buffer, if it is larger than the block-size, or if the file is being closed.

Parameters

force (bool) – When closing, write the last block even if it is smaller than blocks are allowed to be. Disallows further writing to this file.

getxattr(xattr_name, **kwargs)[source]

Get an attribute from the metadata. See getxattr().

Examples

>>> mys3file.getxattr('attribute_1')  
'value_1'
info()

File information about this path

isatty()

Return whether this is an ‘interactive’ stream.

Return False if it can’t be determined.

metadata(refresh=False, **kwargs)[source]

Return metadata of file. See metadata().

Metadata is cached unless refresh=True.

read(length=-1)

Return data from cache, or fetch pieces as necessary

Parameters

length (int (-1)) – Number of bytes to read; if <0, all remaining bytes.

readable()

Whether opened for reading

readinto(b)

mirrors builtin file’s readinto method

https://docs.python.org/3/library/io.html#io.RawIOBase.readinto

readline()

Read until first occurrence of newline character

Note that, because of character encoding, this is not necessarily a true line ending.

readlines()

Return all data, split by the newline character

readuntil(char=b'\n', blocks=None)

Return data between current position and first occurrence of char

char is included in the output, except if the end of the file is encountered first.

Parameters
  • char (bytes) – Thing to find

  • blocks (None or int) – How much to read in each go. Defaults to file blocksize - which may mean a new read on every call.

seek(loc, whence=0)

Set current file location

Parameters
  • loc (int) – byte location

  • whence ({0, 1, 2}) – from start of file, current location or end of file, resp.

seekable()

Whether is seekable (only in read mode)

setxattr(copy_kwargs=None, **kwargs)[source]

Set metadata. See setxattr().

Examples

>>> mys3file.setxattr(attribute_1='value1', attribute_2='value2')  
tell()

Current file location

truncate()

Truncate file to size bytes.

File pointer is left unchanged. Size defaults to the current IO position as reported by tell(). Returns the new size.

url(**kwargs)[source]

HTTP URL to read this file (if it already exists)

writable()

Whether opened for writing

write(data)

Write data to buffer.

Buffer only sent on flush() or if buffer is greater than or equal to blocksize.

Parameters

data (bytes) – Set of bytes to be written.

writelines(lines, /)

Write a list of lines to stream.

Line separators are not added, so it is usual for each of the lines provided to have a line separator at the end.

s3fs.mapping.S3Map(root, s3, check=False, create=False)[source]

Mirror previous class, not implemented in fsspec
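
A minimal usage sketch (the root path is an assumption); the mapping stores and returns raw bytes:

>>> s3 = S3FileSystem()
>>> d = S3Map('my-bucket/mapped', s3=s3)
>>> d['key'] = b'value'
>>> d['key']
b'value'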

class s3fs.utils.ParamKwargsHelper(s3)[source]

Utility class to help extract the subset of keys that an s3 method is actually using

Parameters

s3 (boto S3FileSystem) –

class s3fs.utils.SSEParams(server_side_encryption=None, sse_customer_algorithm=None, sse_customer_key=None, sse_kms_key_id=None)[source]