U C^@sddlZddlmZddlZddlZddlZddlmZddlm Z m Z m Z e dZ ddZGd d d eZzddlZejjZWnek reZYnXGd d d eed ZGdddejZdS)N)md5) Transaction) read_blocktokenizestringify_pathZfsspeccCs |||SNclsargskwargsr r //tmp/pip-install-6_kvzl1k/fsspec/fsspec/spec.py make_instance srcs4eZdZdZdZdZfddZfddZZS)_Cacheda Metaclass for caching file system instances. Notes ----- Instances are cached according to * The values of the class attributes listed in `_extra_tokenize_attributes` * The arguments passed to ``__init__``. This creates an additional reference to the filesystem, which prevents the filesystem from being garbage collected when all *user* references go away. A call to the :meth:`AbstractFileSystem.clear_instance_cache` must *also* be made for a filesystem instance to be garbage collected. Tr cstj||i|_dSr)super__init___cacher  __class__r rr%sz_Cached.__init__cs|tfddjD}tf|||}jrD|jkrDj|Stj||}||_||_||_ jrt|j|<|SdS)Nc3s|]}t|dVqdSr)getattr).0attrr r r -sz#_Cached.__call__..) tuple_extra_tokenize_attributesrcachablerr__call__ _fs_token_ storage_argsstorage_options)r r r Z extra_tokenstokenobjrrrr,s   z_Cached.__call__) __name__ __module__ __qualname____doc__rrrr __classcell__r r rrrs  rc@s@eZdZdZdZdZdZdZdZdZ dZ d d Z e d d Z d dZddZddZeddZeddZeddZe ddZddZddZd~d d!Zdd"d#Zdd$d%Zd&d'Zdd(d)Zd*d+Zdd,d-Zdd.d/Z dd0d1Z!d2d3Z"d4d5Z#d6d7Z$d8d9Z%d:d;Z&dd?Z(d@dAZ)ddBdCZ*ddDdEZ+ddGdHZ,ddIdJZ-dKdLZ.dMdNZ/dOdPZ0ddQdRZ1edSdTZ2ddVdWZ3ddXdYZ4ddZd[Z5d\d]Z6dd^d_Z7d`daZ8dbdcZ9ddddeZ:edfdgZ;ddhdiZdndoZ?dpdqZ@drdsZAddtduZBdvdwZCddxdyZDddzd{ZEdd|d}ZFdS)AbstractFileSystemz An abstract super-class for pythonic file-systems Implementations are expected to be compatible with or, better, subclass from here. TFi@/Zabstractr cOs\|jr dSd|_d|_d|_i|_|ddr:tdt|ddrRtdtd|_dS)aCreate and configure file-system instance Instances may be cachable, so if similar enough arguments are seen a new instance is not required. The token attribute exists to allow implementations to cache instances if they wish. A reasonable default should be provided if there are no arguments. Subclasses should call this method. Magic kwargs that affect functionality here: add_docs: if True, will append docstrings from this spec to the specific implementation NTFZadd_docsz add_docs is no longer supported.Z add_aliaseszadd_aliases has been removed.) _cached_intrans _transactiondircachepopwarningswarn FutureWarningr)selfr r!r r rrYs    zAbstractFileSystem.__init__cCs|jSr)rr4r r r _fs_tokenxszAbstractFileSystem._fs_tokencCs|jSr)r6r5r r r__dask_tokenize__|sz$AbstractFileSystem.__dask_tokenize__cCs t|jdS)N)intr6r5r r r__hash__szAbstractFileSystem.__hash__cCst|t|o|j|jkSr) isinstancetyper6r4otherr r r__eq__szAbstractFileSystem.__eq__cCst|}t|jtr|jfn|j}|D]T}|d}||drX|t|dd}q&||dr&|t|dd}q&|p|jS)z Turn path from fully-qualified to file-system-specific May require FS-specific handling, e.g., for relative paths or links. r*z://N:r)rr;protocolstrrstrip startswithlen root_marker)r pathprotosrBr r r_strip_protocols z"AbstractFileSystem._strip_protocolcCsiS)afIf kwargs can be encoded in the paths, extract them here This should happen before instantiation of the class; incoming paths then should be amended to strip the options in methods. Examples may look like an sftp path "sftp://user@host:/my/path", where the user and host should become kwargs and later get stripped. r )pathsr r r_get_kwargs_from_urlss z(AbstractFileSystem._get_kwargs_from_urlscCs&t|js|St|jdSdS)z} Return the most recently created FileSystem If no instance has been created, then create one with defaults N)rFrlistvaluesrr r rcurrents zAbstractFileSystem.currentcCs|jdkrt||_|jS)zA context within which files are committed together upon exit Requires the file class to implement `.commit()` and `.discard()` for the normal and exception cases. N)r.rr5r r r transactions  zAbstractFileSystem.transactioncCsd|_t||_|jS)z@Begin write transaction for deferring files, non-context versionT)r-rr.rQr5r r rstart_transactions z$AbstractFileSystem.start_transactioncCs|jd|_dS)z-Finish write transaction, non-context versionN)rQZcompleter.r5r r rend_transactions z"AbstractFileSystem.end_transactionNcCsdS)z Discard any cached directory information Parameters ---------- path: string or None If None, clear all listings cached else listings at or under given path. Nr r4rHr r rinvalidate_caches z#AbstractFileSystem.invalidate_cachecKsdS)a Create directory entry at path For systems that don't have true directories, may create an for this instance only and not touch the real filesystem Parameters ---------- path: str location create_parents: bool if True, this is equivalent to ``makedirs`` kwargs: may be permissions, etc. Nr r4rHcreate_parentsr r r rmkdirszAbstractFileSystem.mkdircCsdS)aRecursively make directories Creates directory at path and any intervening required directories. Raises exception if, for instance, the path already exists but is a file. Parameters ---------- path: str leaf directory name exist_ok: bool (False) If True, will error if the target already exists Nr r4rHexist_okr r rmakedirsszAbstractFileSystem.makedirscCsdS)zRemove a directory, if emptyNr rTr r rrmdirszAbstractFileSystem.rmdircKstdS)amList objects at path. This should include subdirectories and files at that location. The difference between a file and a directory must be clear when details are requested. The specific keys, or perhaps a FileInfo class, or similar, is TBD, but must be consistent across implementations. Must include: - full path to the entry (without protocol) - size of the entry, in bytes. If the value cannot be determined, will be ``None``. - type of entry, "file", "directory" or other Additional information may be present, aproriate to the file-system, e.g., generation, checksum, etc. May use refresh=True|False to allow use of self._ls_from_cache to check for a saved listing and avoid calling the backend. This would be common where listing may be expensive. Parameters ---------- path: str detail: bool if True, gives a list of dictionaries, where each is the same as the result of ``info(path)``. If False, gives a list of paths (str). kwargs: may have additional backend-specific options, such as version information Returns ------- List of strings if detail is False, or list of directory information dicts if detail is True. NNotImplementedErrorr4rHdetailr r r rlss&zAbstractFileSystem.lscs\|}|jkr|jS||jkrXfdd|j|D}t|dkrTt|SdS)zCheck cache for listing Returns listing, if found (may me empty list for a directly that exists but contains nothing), None if not in cache. csg|]}|dkr|qS)namer )rfrHr r (s z5AbstractFileSystem._ls_from_cache..rN)_parentr/rFFileNotFoundError)r4rHparentfilesr rdr_ls_from_caches     z!AbstractFileSystem._ls_from_cachec ks,||}g}g}g}z|j|fddi|}Wn"ttfk rRgggfYSX|D]r}|dd} |ddkr| |kr|| || dddqX| |kr|d qX|| dddqX|||fV|D]J} |d ks|dkr|j| fd |d k r |dnd i|D] } | Vqqd S) a@ Return all files belows path List all files, recursing into subdirectories; output is iterator-style, like ``os.walk()``. For a simple list of files, ``find()`` is available. Note that the "files" outputted will include anything that is not a directory, such as links. Parameters ---------- path: str Root to recurse into maxdepth: int Maximum recursion depth. None means limitless, but not recommended on link-based file-systems. kwargs: passed to ``ls`` r`Trbr*r< directoryrrMr+Nmaxdepth)rJrargIOErrorrDappendrsplitwalk) r4rHrlr Z full_dirsdirsrilistinginforbdresr r rrp.s6     zAbstractFileSystem.walkc Kst}|j||f|D]N\}}}|r,||7}|D]2}|r0||kr0||r\d|d|gn|q0q||r||kr||t|S)aList all files below path. Like posix ``find`` command without conditions Parameters ---------- path : str maxdepth: int or None If not None, the maximum number of levels to descend withdirs: bool Whether to include directory paths in the output. This is True when used by glob, but users usually only want files. kwargs are passed to ``ls``. r*)setrpaddjoinrDisfilesorted) r4rHrlwithdirsr outrqrirbr r rfindbs & zAbstractFileSystem.findcKsRi}|j|fd|i|D]}||}|d||d<q|rJt|S|SdS)aSpace used by files within a path Parameters ---------- path: str total: bool whether to sum all the file sizes maxdepth: int or None maximum number of directory levels to descend, None for unlimited. kwargs: passed to ``ls`` Returns ------- Dict of {fn: size} if total=False, or int otherwise, where numbers refer to bytes used. rlsizerbN)r}rssumrO)r4rHtotalrlr sizesrcrsr r rdus  zAbstractFileSystem.duc sddl}ddlm}|d}||}|ddkr@|dnt|}|ddkr`|dnt|}|ddkr|dnt|}t|||} ||s|} d} |r|d 7}n||r|gSgSntd|d| kr&|d|  d} |d| d} d |kr d n|| dd dd} nd } d |kr8d nd} |j| f| d d|} d| dd dd dd dd dd dd dd d ddd| dd| d d!| dd"fd#d$| D}tt|S)%a Find files by glob-matching. If the path ends with '/' and does not contain "*", it is essentially the same as ``ls(path)``, returning only files. We support ``"**"``, ``"?"`` and ``"[..]"``. kwargs are passed to ``ls``. rN) has_magicr**?[rz/*z**r+T)rlr{^\z\\.z\.+z\+//(z\()z\)|z\|$z[*]{2}z =PLACEHOLDER=z[*]z[^/]*z.*cs(h|] }|dddr|qS)rr*)matchreplacerD)rppatternr r sz*AbstractFileSystem.glob..)reglobrendswithrJr}rFminexistsrindexcountrrDsubcompilerNrz)r4rHr rrZendsZindstarZindquesZindbraceindrootdepthZind2Zallpathsr|r rrrsr          *  zAbstractFileSystem.globcCs$z||WdSYdSXdS)z!Is there a file at the given pathTFNrsrTr r rrs  zAbstractFileSystem.existsc s||j|fddi|}fdd|D}|rD|dS|jfddi|}dfdd|D}t|dkrd |dkrd |dd <|dSt|dks|rdd d Std S) agGive details of entry at path Returns a single dictionary, with exactly the same information as ``ls`` would with ``detail=True``. The default implementation should calls ls and could be overridden by a shortcut. kwargs are passed on to ```ls()``. Some file systems might not be able to measure the file's size, in which case, the returned dict will include ``'size': None``. Returns ------- dict with keys: name (full path in the FS), size (in bytes), type (file, directory, or something else) and other FS-specific keys. r`Tcs"g|]}|ddkr|qSrbr*rDrordr rresz+AbstractFileSystem.info..rr*cs"g|]}|ddkr|qSrrrrdr rresrr~Nrk)rbr~r<)rJrarfrDrFrg)r4rHr r|Zout1r rdrrss      zAbstractFileSystem.infocCstt||dS)aUnique value for current version of file If the checksum is the same from one moment to another, the contents are guaranteed to be the same. If the checksum changes, the contents *might* have changed. This should normally be overridden; default will probably capture creation/modification timestamp (which would be good) or maybe access timestamp (which would be bad) r8)r9rrsrTr r rchecksums zAbstractFileSystem.checksumcCs||ddS)zSize in bytes of filer~N)rsgetrTr r rr~szAbstractFileSystem.sizecCs0z||ddkWStk r*YdSXdS)zIs this entry directory-like?r<rkFN)rsrgrTr r risdirszAbstractFileSystem.isdircCs(z||ddkWSYdSXdS)zIs this entry file-like?r<fileFNrrTr r rryszAbstractFileSystem.isfilecCs||dS)z Get the content of a file rbopenreadrTr r rcat szAbstractFileSystem.catc s||rZ|}fdd|D}|D]&tj}tj|s0t|q0n g}g}t||D]Z\|jdf|<}td&} d} | r| |j } | | qW5QRXW5QRXqpdS)zCopy file to local. Possible extension: maybe should be able to copy to any file-system (streaming through local). cs,g|]$}tj|tddqSNr*osrHrxrFlstriprrHlpathrpathr rre-sz*AbstractFileSystem.get..rwbTN) rJr}rrHdirnamerr[ziprr blocksizewrite) r4rr recursiver rpathslpathsrf1f2datar rrr$s&       zAbstractFileSystem.getc s|rzg}tD] \}}|fdd|D7}qtjd}||r`tj||n|fdd|D} n g}|g} t|| D]Z\}tdB} |j|df|&} d} | r| |j } | | qW5QRXW5QRXqdS) z Upload file from local csg|]}tj|qSr )rrHrx)rfilename)rr rreDsz*AbstractFileSystem.put..r*cs,g|]$}tj|tddqSrrr)rrpath2r rreLsrrTN) rrprHbasenamerDrrxrrrrr) r4rrrr rZ subdirlistfilelistrootdirrrrrr )rrrrput?s(    zAbstractFileSystem.putc Cs.||d}||W5QRSQRXdS)z( Get the first ``size`` bytes from file rNrr4rHr~rcr r rheadZszAbstractFileSystem.headc CsD||d.}|t| |j d|W5QRSQRXdS)z' Get the last ``size`` bytes from file rN)rseekmaxr~rrr r rtail_szAbstractFileSystem.tailcKstdS)z, Copy within two locations in the filesystemNr]r4path1path2r r r rcopyeszAbstractFileSystem.copycKs"|j||f||j|dddS)z( Move file from one location to another F)rN)rrmrr r rmviszAbstractFileSystem.mvcCstdS)z Delete a fileNr]rTr r r_rmnszAbstractFileSystem._rmc Cst|ts|g}|D]n}|rx|j||d}tt|D]>\}}}|D]$} |rZd|| gn| } || qD||q6q||qdS)aDelete files. Parameters ---------- path: str or list of str File(s) to delete. recursive: bool If file(s) are directories, recursively delete contents and then also remove the directory maxdepth: int or None Depth to pass to walk for finding files to delete, if recursive. If None, there will be no limit and infinite recursion may be possible. )rlr*N)r;rNrpreversedrxrr\r) r4rHrrlrr|Zpa__rirbfnr r rrrs  zAbstractFileSystem.rmcCs8||d}d|kr.|j|dddS|jSdS)Nr*rr)rJrDrGro)r rHr r rrfszAbstractFileSystem._parentrcKst|||||fd|i|S)z4Return raw bytes-mode file-like from the file-system cache_options)AbstractBufferedFile)r4rHmode block_size autocommitrr r r r_opens zAbstractFileSystem._openc  sddl}||}d|krX|ddd}fdddD}|j|j|||ff|Sd |j }|j|f||||d } |s|jj | | SdS) a: Return a file-like object from the filesystem The resultant instance must function correctly in a context ``with`` block. Parameters ---------- path: str Target file mode: str like 'rb', 'w' See builtin ``open()`` block_size: int Some indication of buffering - this is a value in bytes cache_options : dict, optional Extra arguments to pass through to the cache. encoding, errors, newline: passed on to TextIOWrapper for text mode rNbtr+cs i|]}|kr||qSr )r0)rkr r r sz+AbstractFileSystem.open..)encodingerrorsnewliner)rrrr) iorJr TextIOWrapperrr0r-rrQrirn) r4rHrrrr rZ text_kwargsacrcr rrrs4  zAbstractFileSystem.openc Ks4|s||s,|j|df|W5QRXntdS)a" Create empty file, or update timestamp Parameters ---------- path: str file location truncate: bool If True, always set file size to 0; if False, update timestamp and leave file unchanged, if backend allows this rN)rrr^)r4rHtruncater r r rtouchs  zAbstractFileSystem.touchcCstt||S)z2Hash of file properties, to tell if it has changed)rrCrsencode hexdigestrTr r rukeyszAbstractFileSystem.ukeyc Cs`||dJ}|j}|dkr |}|dk r<|||kr<||}t||||W5QRSQRXdS)a Read a block of bytes from Starting at ``offset`` of the file, read ``length`` bytes. If ``delimiter`` is set then we ensure that the read starts and stops at delimiter boundaries that follow the locations ``offset`` and ``offset + length``. If ``offset`` is zero then we start at zero. The bytestring returned WILL include the end delimiter string. If offset+length is beyond the eof, reads to eof. Parameters ---------- fn: string Path to filename offset: int Byte offset to start read length: int Number of bytes to read delimiter: bytes (optional) Ensure reading starts and stops at delimiter bytestring Examples -------- >>> fs.read_block('data/file.csv', 0, 13) # doctest: +SKIP b'Alice, 100\nBo' >>> fs.read_block('data/file.csv', 0, 13, delimiter=b'\n') # doctest: +SKIP b'Alice, 100\nBob, 200\n' Use ``length=None`` to read to the end of the file. >>> fs.read_block('data/file.csv', 0, None, delimiter=b'\n') # doctest: +SKIP b'Alice, 100\nBob, 200\nCharlie, 300' See Also -------- utils.read_block rN)rr~r)r4roffsetlength delimiterrcr~r r rrs%zAbstractFileSystem.read_blockcCstt||j|jffSr)rr<r r!r5r r r __reduce__szAbstractFileSystem.__reduce__cCs|S)zW Make a version of the FS instance which will be acceptable to pyarrow r r5r r r_get_pyarrow_filesystemsz*AbstractFileSystem._get_pyarrow_filesystemcCsddlm}|||||S)zCreate key/value store based on this file-system Makes a MutibleMapping interface to the FS at the given root path. See ``fsspec.mapping.FSMap`` for further details. r)FSMap)mappingr)r4rcheckcreaterr r r get_mapper&s zAbstractFileSystem.get_mappercCs|jdS)a Clear the cache of filesystem instances. Notes ----- Unless overridden by setting the ``cachable`` class attribute to False, the filesystem class stores a reference to newly created instances. This prevents Python's normal rules around garbage collection from working, since the instances refcount will not drop to zero until ``clear_instance_cache`` is called. N)rclearrr r rclear_instance_cache0s z'AbstractFileSystem.clear_instance_cachecKs|j|fd|i|S)z%Alias of :ref:`FilesystemSpec.mkdir`.rW)rXrVr r rmakedirBszAbstractFileSystem.makedircCs|j||dS)z(Alias of :ref:`FilesystemSpec.makedirs`.)rZ)r[rYr r rmkdirsFszAbstractFileSystem.mkdirscKs|j|fd|i|S)z"Alias of :ref:`FilesystemSpec.ls`.r`)rar_r r rlistdirJszAbstractFileSystem.listdircKs|j||f|S)z$Alias of :ref:`FilesystemSpec.copy`.)rrr r rcpNszAbstractFileSystem.cpcKs|j||f|Sz"Alias of :ref:`FilesystemSpec.mv`.rrr r rmoveRszAbstractFileSystem.movecKs|j|f|S)z$Alias of :ref:`FilesystemSpec.info`.r)r4rHr r r rstatVszAbstractFileSystem.statcKs|j|f||d|S)z"Alias of :ref:`FilesystemSpec.du`.)rrl)r)r4rHrrlr r r r disk_usageZszAbstractFileSystem.disk_usagecKs|j||f|Srrrr r rrename^szAbstractFileSystem.renamecCs|j|||dS)z"Alias of :ref:`FilesystemSpec.rm`.)rrl)r)r4rHrrlr r rdeletebszAbstractFileSystem.deletecKs|j||fd|i|S)z#Alias of :ref:`FilesystemSpec.put`.r)r)r4rrrr r r ruploadfszAbstractFileSystem.uploadcKs|j||fd|i|S)z#Alias of :ref:`FilesystemSpec.get`.r)r)r4rrrr r r rdownloadjszAbstractFileSystem.download)N)T)F)T)N)NF)TN)F)F)r)r)FN)rNTN)rNN)T)N)FF)T)F)T)TN)FN)F)F)Gr$r%r&r'rr,rseprBrGrrpropertyr6r7r:r? classmethodrJ staticmethodrLrPrQrRrSrUrXr[r\rarjrpr}rrrrsrr~rryrrrrrrrrrrfrrrrrrrrrrrrrrrrrrrr r r r rr)Gs        ( 4  = "         /  -        r)) metaclassc@s&eZdZdZdZdIdd Zed d Zejd d Zd dZ ddZ ddZ ddZ ddZ ddZdJddZddZdKdd ZdLd!d"Zd#d$Zd%d&ZdMd(d)Zd*d+ZdNd-d.Zd/d0Zd1d2Zd3d4Zd5d6Zd7d8Zd9d:Zd;d<Zd=d>Zd?d@Z dAdBZ!dCdDZ"e"Z#dEdFZ$dGdHZ%dS)Ora>Convenient class to derive from to provide buffering In the case that the backend does not provide a pythonic file-like object already, this class contains much of the logic to build one. The only methods that need to be overridden are ``_upload_chunk``, ``_initate_upload`` and ``_fetch_range``. iPrdefaultT readaheadNc Ksddlm} ||_||_||_|dkr,|jn||_d|_||_d|_ d|_ d|_ |dkr\i}d|kr~t dt|d|d<||_|d krtd |d krt|d s|||_|jd |_| ||j|j|jf||_nt|_d|_d|_d|_dS)a Template for files with buffered reading and writing Parameters ---------- fs: instance of FileSystem path: str location in file-system mode: str Normal file modes. Currently only 'wb', 'ab' or 'rb'. Some file systems may be read-only, and some may not support append. block_size: int Buffer size for reading or writing, 'default' for class default autocommit: bool Whether to write to final destination; may only impact what happens when file is being closed. cache_type: {"readahead", "none", "mmap", "bytes"}, default "readahead" Caching policy in read mode. See the definitions in ``core``. cache_options : dict Additional options passed to the constructor for the cache specified by `cache_type`. kwargs: Gets stored as self.kwargs r)caches)rNrNFZtrimzyPassing 'trim' to control the cache behavior has been deprecated. Specify it within the 'cache_options' argument instead.>rabrzFile mode not supportedrdetailsr~)corerrHfsrDEFAULT_BLOCK_SIZErlocrendstartclosedr1r2r3r0r r^hasattrrsrr~ _fetch_rangecacherBytesIObufferrforcedlocation) r4rrHrrrZ cache_typerr rr r rrzsJ#     zAbstractBufferedFile.__init__cCs|jSr_closedr5r r rrszAbstractBufferedFile.closedcCs ||_dSrr")r4cr r rrscCs&d|jkrt|Stt|jdSdS)Nwr8)ridr9rrr5r r rr:s zAbstractBufferedFile.__hash__cCs$|jdko"|jdko"t|t|kS)zAFiles are equal if they have the same checksum, only in read moder)rhashr=r r rr?szAbstractBufferedFile.__eq__cCsdS)z#Move from temp to final destinationNr r5r r rcommitszAbstractBufferedFile.commitcCsdS)zThrow away temporary fileNr r5r r rdiscardszAbstractBufferedFile.discardcCsd|jkr|jStddS)z" File information about this path rz Info not available while writingN)rr ValueErrorr5r r rrss zAbstractBufferedFile.infocCs|jS)z Current file location )rr5r r rtellszAbstractBufferedFile.tellrcCsxt|}|jdkstd|dkr(|}n4|dkr<|j|}n |dkrP|j|}n td||dkrltd||_|jS)z Set current file location Parameters ---------- loc: int byte location whence: {0, 1, 2} from start of file, current location or end of file, resp. rz Seek only available in read moderrrz(invalid whence (%s, should be 0, 1 or 2)zSeek before start of file)r9rr+rr~)r4rwhencenlocr r rrs     zAbstractBufferedFile.seekcCsd|jdkrtd|jr td|jr.td|j|}|j|7_|j|jkr`| |S)z Write data to buffer. Buffer only sent on flush() or if buffer is greater than or equal to blocksize. Parameters ---------- data: bytes Set of bytes to be written. >rrzFile not in write modeI/O operation on closed file.z0This file has been force-flushed, can only close) rr+rr rrrr,rflush)r4rr|r r rrs  zAbstractBufferedFile.writeFcCs|jrtd|r |jr td|r*d|_|jdkr8dS|sP|j|jkrPdS|jdkrhd|_||j |ddk r|j|j dd 7_t |_dS) a{ Write buffered data to backend store. Writes the current buffer, if it is larger than the block-size, or if the file is being closed. Parameters ---------- force: bool When closing, write the last block even if it is smaller than blocks are allowed to be. Disallows further writing to this file. zFlush on closed filez+Force flush cannot be called more than onceT>rrNr)finalFr) rr+r rrr,rr_initiate_upload _upload_chunkrrr)r4forcer r rr0s    zAbstractBufferedFile.flushcCsdS)z Write one part of a multi-block file upload Parameters ========== final: bool This is the last block, so should complete file, if self.autocommit is True. Nr )r4r1r r rr3Csz"AbstractBufferedFile._upload_chunkcCsdS)z Create remote file/upload Nr r5r r rr2Nsz%AbstractBufferedFile._initiate_uploadcCstdS)z*Get the specified set of bytes from remoteNr])r4rrr r rrRsz!AbstractBufferedFile._fetch_rangerMcCs|dkr dnt|}|jdkr&td|dkr:|j|j}|jrHtdtd||j|j|f|dkrpdS|j |j|j|}|jt |7_|S) z Return data from cache, or fetch pieces as necessary Parameters ---------- length: int (-1) Number of bytes to read; if <0, all remaining bytes. NrMrzFile not in read moderr/z%s read: %i - %i) r9rr+r~rrloggerdebugrZ_fetchrF)r4rr|r r rrVs   zAbstractBufferedFile.readcCs&|t|}||dt|<t|S)zxmirrors builtin file's readinto method https://docs.python.org/3/library/io.html#io.RawIOBase.readinto N)rrF)r4rrr r rreadintonszAbstractBufferedFile.readinto cCsg}|}||p|j}t|dkr*qz||}|dkrn||d|t||||t|qz||qd|S)aReturn data between current position and first occurrence of char char is included in the output, except if the end of the tile is encountered first. Parameters ---------- char: bytes Thing to find blocks: None or int How much to read in each go. Defaults to file blocksize - which may mean a new read on every call. rrMNr5)r,rrrFr}rnrrx)r4charblocksr|rpartfoundr r r readuntilws   zAbstractBufferedFile.readuntilcCs |dS)zRead until first occurrence of newline character Note that, because of character encoding, this is not necessarily a true line ending. r9)r>r5r r rreadlineszAbstractBufferedFile.readlinecCs|}|r|StdSr)r? StopIteration)r4r|r r r__next__szAbstractBufferedFile.__next__cCs|Srr r5r r r__iter__szAbstractBufferedFile.__iter__cCsH|}|d}dd|ddD}|dr6|S||dgSdS)z/Return all data, split by the newline characterr9cSsg|] }|dqS)r9r )rlr r rresz2AbstractBufferedFile.readlines..NrM)rsplitr)r4rlinesr|r r r readliness   zAbstractBufferedFile.readlinescCs ||Sr)r8)r4rr r r readinto1szAbstractBufferedFile.readinto1cCsf|jr dS|jdkrd|_n@|js.|jdd|jdk r\|j|j|j|j|jd|_dS)z> Close file Finalizes writes, discards cache NrT)r4) rrrr r0rrUrHrfr5r r rcloses   zAbstractBufferedFile.closecCs|jdko|j S)zWhether opened for readingrrrr5r r rreadableszAbstractBufferedFile.readablecCs|S)z'Whether is seekable (only in read mode))rJr5r r rseekableszAbstractBufferedFile.seekablecCs|jdko|j S)zWhether opened for writing>rrrIr5r r rwritableszAbstractBufferedFile.writablecCs |dSrrHr5r r r__del__szAbstractBufferedFile.__del__cCsdt|jj|jfS)Nz)r<rr$rHr5r r r__str__szAbstractBufferedFile.__str__cCs|Srr r5r r r __enter__szAbstractBufferedFile.__enter__cGs |dSrrM)r4r r r r__exit__szAbstractBufferedFile.__exit__)rrTrN)r)F)F)rM)r9N)&r$r%r&r'rrr rsetterr:r?r(r)rsr,rrr0r3r2rrr8r>r?rArBrFrGrHrJrKrLrNrO__repr__rPrQr r r rrosR M    &    r)r1hashlibrrrloggingrQrutilsrrr getLoggerr6rr<rZpyarrowpa filesystemZDaskFileSystemZup ImportErrorobjectr)IOBaserr r r rs,   .  .