S3QL: a full-featured file system for online data storage

S3QL is a file system that stores all its data online using storage services like Google Storage, Amazon S3, or OpenStack. S3QL effectively provides a virtual drive of dynamic, infinite capacity that can be accessed from any computer with internet access.

S3QL is a standards-conforming, full-featured UNIX file system that is conceptually indistinguishable from any local file system. Furthermore, S3QL has additional features like compression, encryption, data de-duplication, immutable trees and snapshotting, which make it especially suitable for online backup and archival.

S3QL is designed to favor simplicity and elegance over performance and feature-creep. Care has been taken to make the source code as readable and serviceable as possible. Solid error detection and error handling have been included from the very first line, and S3QL comes with extensive automated test cases for all its components.

Features

  • Transparency. Conceptually, S3QL is indistinguishable from a local file system. For example, it supports hardlinks, symlinks, standard UNIX permissions, extended attributes and file sizes up to 2 TB.

  • Dynamic Size. The size of an S3QL file system grows and shrinks dynamically as required.

  • Compression. Before storage, all data may be compressed with the LZMA, bzip2 or deflate (gzip) algorithm.
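
    All three algorithms are available in the Python standard library, so the trade-off is easy to explore. A small sketch (the sample payload is made up for illustration; real-world ratios will differ):

```python
import bz2
import lzma
import zlib

# Made-up, highly repetitive payload; real data compresses less well.
data = b"The quick brown fox jumps over the lazy dog. " * 1000

compressed = {
    "lzma": lzma.compress(data),
    "bzip2": bz2.compress(data),
    "deflate": zlib.compress(data),
}

# Print results from smallest to largest compressed size.
for name, blob in sorted(compressed.items(), key=lambda kv: len(kv[1])):
    print(f"{name}: {len(data)} -> {len(blob)} bytes")
```

    LZMA typically compresses best but is slowest; deflate is the fastest of the three.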

  • Encryption. After compression (but before upload), all data can be AES-encrypted with a 256-bit key. An additional SHA-256 HMAC checksum protects the data against manipulation.
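
    The integrity layer can be sketched with the standard library's hmac module. (AES itself needs a third-party library, so the ciphertext below is just an opaque byte string, and the key handling is illustrative, not S3QL's actual key derivation.)

```python
import hashlib
import hmac
import os

TAG_LEN = hashlib.sha256().digest_size  # 32 bytes

def protect(ciphertext: bytes, mac_key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so any manipulation is detectable."""
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify(blob: bytes, mac_key: bytes) -> bytes:
    """Split off the tag and check it in constant time; raise on mismatch."""
    ciphertext, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("HMAC mismatch: data was manipulated")
    return ciphertext

mac_key = os.urandom(32)
blob = protect(b"opaque encrypted block", mac_key)
assert verify(blob, mac_key) == b"opaque encrypted block"
```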

  • Data De-duplication. If several files have identical contents, the redundant data will be stored only once. This works across all files stored in the file system, and also if only some parts of the files are identical while other parts differ.
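
    A hypothetical content-addressed store shows the principle (this mirrors the idea, not S3QL's actual database schema):

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: identical blocks are stored once."""

    def __init__(self):
        self.blocks = {}    # digest -> block data (stored only once)
        self.refcount = {}  # digest -> number of file blocks referencing it

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:
            self.blocks[digest] = data          # first occurrence: store it
        self.refcount[digest] = self.refcount.get(digest, 0) + 1
        return digest                           # files keep only the reference

store = DedupStore()
ref_a = store.put(b"shared contents")
ref_b = store.put(b"shared contents")  # duplicate: no extra storage used
store.put(b"unique contents")
print(len(store.blocks))  # 2 distinct blocks despite 3 puts
```

    Because blocks (not whole files) are hashed, this also de-duplicates files that only partially overlap.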

  • Immutable Trees. Directory trees can be made immutable, so that their contents can no longer be changed in any way whatsoever. This can be used to ensure that backups can not be modified after they have been made.

  • Copy-on-Write/Snapshotting. S3QL can replicate entire directory trees without using any additional storage space. Only when one of the copies is modified does the modified part of the data take up additional storage space. This can be used to create intelligent snapshots that preserve the state of a directory at different points in time using a minimum amount of space.
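
    The reference-count mechanics behind this can be sketched as follows (the block IDs and data layout are invented for illustration):

```python
import itertools

_next_id = itertools.count(2)
blocks = {1: b"original contents"}   # block id -> data
refcount = {1: 1}                    # block id -> references from all trees

def snapshot(tree: dict) -> dict:
    """Replicate a tree by copying references only: no block data is copied."""
    for blk in tree.values():
        refcount[blk] += 1
    return dict(tree)

def write(tree: dict, name: str, data: bytes) -> None:
    """Modifying one copy allocates a new block; other copies are untouched."""
    old = tree.get(name)
    if old is not None:
        refcount[old] -= 1
    blk = next(_next_id)
    blocks[blk] = data
    refcount[blk] = 1
    tree[name] = blk

live = {"report.txt": 1}
snap = snapshot(live)                 # the snapshot costs no block storage
write(live, "report.txt", b"edited")  # only now is extra space used
print(snap["report.txt"], live["report.txt"])  # 1 2
```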

  • High Performance independent of network latency. All operations that do not write or read file contents (like creating directories or moving, renaming, and changing permissions of files and directories) are very fast because they are carried out without any network transactions.

    S3QL achieves this by saving the entire file and directory structure in a database. This database is locally cached and the remote copy updated asynchronously.
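
    The effect can be sketched with SQLite, which S3QL uses for this database (the table layout below is invented for illustration):

```python
import sqlite3

# Local metadata database: directory operations are pure SQL, no network.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contents (name TEXT PRIMARY KEY, parent TEXT, mode INTEGER)")
db.execute("INSERT INTO contents VALUES ('notes.txt', '/', 420)")  # 420 == 0o644

# A rename is a single local UPDATE; the remote copy is synced later,
# asynchronously, so the call returns without waiting on the network.
db.execute("UPDATE contents SET name = 'notes.bak' WHERE name = 'notes.txt'")
print(db.execute("SELECT name, mode FROM contents").fetchone())  # ('notes.bak', 420)
```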

  • Support for low bandwidth connections. S3QL splits file contents into smaller blocks and caches blocks locally. This minimizes both the number of network transactions required for reading and writing data, and the amount of data that has to be transferred when only parts of a file are read or written.
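
Fixed-size splitting can be sketched as follows (the 10 MiB block size is just an example here; the actual size is configurable):

```python
def split_blocks(data: bytes, block_size: int = 10 * 1024 * 1024):
    """Yield fixed-size chunks: only chunks that changed need re-uploading,
    and reading a byte range touches only the chunks covering it."""
    for off in range(0, len(data), block_size):
        yield data[off:off + block_size]

payload = bytes(25 * 1024 * 1024)            # a 25 MiB file of zero bytes
chunks = list(split_blocks(payload))
print([len(c) // (1024 * 1024) for c in chunks])  # [10, 10, 5]
```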

Development Status

S3QL is considered stable and suitable for production use. Starting with version 2.17.1, S3QL uses semantic versioning. This means that backwards-incompatible versions (e.g., versions that require an upgrade of the file system revision) will be reflected in an increase of the major version number.

Supported Platforms

S3QL is developed and tested under Linux. Users have also reported running S3QL successfully on macOS, FreeBSD and NetBSD. We try to maintain compatibility with these systems, but (due to a lack of pre-release testers) we cannot guarantee that every release will run on all non-Linux systems. Please report any bugs you find, and we will try to fix them.

Typical Usage

Before a file system can be mounted, the backend which will hold the data has to be initialized. This is done with the mkfs.s3ql command. Here we are using the Amazon S3 backend, and nikratio-s3ql-bucket is the S3 bucket in which the file system will be stored.

mkfs.s3ql s3://ap-south-1/nikratio-s3ql-bucket

To mount the S3QL file system stored in the S3 bucket nikratio-s3ql-bucket in the directory /mnt/s3ql, enter:

mount.s3ql s3://ap-south-1/nikratio-s3ql-bucket /mnt/s3ql

Now you can instruct your favorite backup program to run a backup into the directory /mnt/s3ql and the data will be stored on Amazon S3. When you are done, the file system has to be unmounted with

umount.s3ql /mnt/s3ql

Need Help?

Please report any bugs you encounter in the GitHub Issue Tracker.

Contributing

The S3QL source code is available on GitHub.

Comments
  • Initial support for BackBlaze B2


    Hello,

    Here's a prototype implementation of a Backblaze B2 backend. It may need more extensive testing, but it responds correctly to the backend's test feature. There are still some things left to do:

    • The backend does not support backslashes in filenames. To pass the unit tests, I replace them with other characters. This does not seem to be a problem, since s3ql does not use backslashes in normal operation, but it remains quite dirty.
    • The backend does not support a copy operation. The code downloads and re-uploads the file being copied.
  • Backblaze B2 Backend


    I tried to implement a backend for Backblaze B2. I tested it a bit and it seems to run, but as I am not (yet) that familiar with Python and this project, I am hoping for comments/corrections/suggestions.

  • mount.s3ql hangs on dugong.HostnameNotResolvable error


    Running 2.8, a number of exceptions like the one below occurred, I believe while data was being written to the mount:

    Apr  4 01:47:03 wolfie mount.s3ql[29386]: Thread-5] root.excepthook: Uncaught top-level exception:
    Traceback (most recent call last):
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 1544, in create_socket
        return socket.create_connection(address)
      File "/usr/lib64/python3.6/socket.py", line 704, in create_connection
        for res in getaddrinfo(host, port, 0, SOCK_STREAM):
      File "/usr/lib64/python3.6/socket.py", line 745, in getaddrinfo
        for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
    socket.gaierror: [Errno -3] Temporary failure in name resolution
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/lib64/python3.6/site-packages/s3ql/mount.py", line 64, in run_with_except_hook
        run_old(*args, **kw)
      File "/usr/lib64/python3.6/threading.py", line 864, in run
        self._target(*self._args, **self._kwargs)
      File "/usr/lib64/python3.6/site-packages/s3ql/block_cache.py", line 409, in _upload_loop
        self._do_upload(*tmp)
      File "/usr/lib64/python3.6/site-packages/s3ql/block_cache.py", line 436, in _do_upload
        % obj_id).get_obj_size()
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/common.py", line 108, in wrapped
        return method(*a, **kw)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/common.py", line 340, in perform_write
        return fn(fh)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/comprenc.py", line 371, in __exit__
        self.close()
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/comprenc.py", line 365, in close
        self.fh.close()
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/comprenc.py", line 530, in close
        self.fh.close()
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/common.py", line 108, in wrapped
        return method(*a, **kw)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/s3c.py", line 948, in close
        headers=self.headers, body=self.fh)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/gs.py", line 188, in _do_request
        query_string=query_string, body=body)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/s3c.py", line 480, in _do_request
        query_string=query_string, body=body)
      File "/usr/lib64/python3.6/site-packages/s3ql/backends/s3c.py", line 718, in _send_request
        headers=headers, body=BodyFollowing(body_len))
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 569, in send_request
        self.timeout)
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 1495, in eval_coroutine
        if not next(crt).poll(timeout=timeout):
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 596, in co_send_request
        self.connect()
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 490, in connect
        self._sock = create_socket((self.hostname, self.port))
      File "/usr/lib64/python3.6/site-packages/dugong/__init__.py", line 1548, in create_socket
        raise HostnameNotResolvable(address[0])
    dugong.HostnameNotResolvable: Host commondatastorage.googleapis.com does not have any ip addresses
    

    When I discovered this later in the morning, I attempted various commands like 'ls' and 's3qlstat', but they would all hang on IO waits. 'fusermount -u' would simply complain that the filesystem was in use. I had to use 'kill -9' on mount.s3ql, then 'fusermount -u', and finally 'fsck.s3ql'. Everything seemed fine, and fsck was uploading dirty blocks, then this...

    Apr  4 09:26:44 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.log_error: Writing dirty block 1 of inode 556620 to backend
    Apr  4 09:26:44 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.log_error: Writing dirty block 0 of inode 556610 to backend
    Apr  4 09:30:06 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.log_error: Writing dirty block 0 of inode 556612 to backend
    Apr  4 09:33:27 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.log_error: Writing dirty block 1 of inode 556616 to backend
    Apr  4 09:36:19 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.log_error: Writing dirty block 1 of inode 556610 to backend
    Apr  4 09:39:40 wolfie fsck.s3ql[32519]: MainThread] s3ql.fsck.check: Dropping temporary indices...
    Apr  4 09:39:40 wolfie fsck.s3ql[32519]: MainThread] root.excepthook: Uncaught top-level exception:
    Traceback (most recent call last):
      File "/usr/lib/python-exec/python3.6/fsck.s3ql", line 11, in <module>
        load_entry_point('s3ql==2.28', 'console_scripts', 'fsck.s3ql')()
      File "/usr/lib64/python3.6/site-packages/s3ql/fsck.py", line 1322, in main
        fsck.check()
      File "/usr/lib64/python3.6/site-packages/s3ql/fsck.py", line 78, in check
        self.check_cache()
      File "/usr/lib64/python3.6/site-packages/s3ql/fsck.py", line 195, in check_cache
        raise RuntimeError('Strange file in cache directory: %s' % filename)
    RuntimeError: Strange file in cache directory: 550328-1.tmp
    
  • WIP: #191: Container friendliness


    Hi,

    Following #191, I finally had time to put together an MR for the "container friendliness" (as I called it) discussed over there.

    This is achieved mostly by implementing a mix of what I did a few weeks back on my own, and recommendations by @d--j in the aforementioned issue.

    While not completely "shell-free", I tried to keep scripting to a minimum for reasonable environment-variables-driven configuration.

    Now there are two things to note:

    • a slightly unfortunate limitation relating to signal handling and the docker entrypoint that I ran into, detailed hereafter
    • there are no tests (or CI setup) for this yet, and I do plan to at least have a go at it. However, this would involve some docker-compose (un)fun, which will be a bit tedious, so I'll wait until the implementation at least seems fine before committing that time :sweat_smile:

    Limitations:

    • you cannot use syslog logging, mostly because setting it up in a docker container would be overly complex, have little value, and considerably complicate and bloat the image
    • you can use the none logging, but you will then not see the logs, which is due to the next limitation
    • I was not able to come up with a solution using the foreground mode of mount.s3ql

    For the latter, it seems to me that mount.s3ql does indeed terminate on receiving a stop signal, and unmounts the FUSE mount, but doesn't cleanly close the file system. This means you should actually never "cancel" that process yourself, but rather run fusermount/umount.s3ql separately, which will eventually stop it.

    In this setup, a stop signal has the following path:

    1. it is handled by docker (either by Ctrl+C-ing an interactively attached container, or via docker stop and the like),
    2. the container's entrypoint, dumb-init, ensures it is proxied to entrypoint.sh rather than swallowed by docker's default init (and never seen by the entrypoint),
    3. it is received by entrypoint.sh,
    4. which executes the shutdown hook.

    However, to make this work with the foreground mode, the signal needs to somehow be read by entrypoint.sh, and invoke the shutdown hook, then wait for the mount.s3ql process to finish, without the latter ever knowing about the signal...

    Unfortunately I couldn't set this up, even starting mount.s3ql as a background subshell, as it would still always see the signal before the shutdown hook was called...

    In the end, I believe this limitation is a reasonable tradeoff, unless some shell signals/concurrency guru is willing to help out.

  • [Invalid Credentials] s3ql.backends.gs.RequestError after a hour


    Hi, I am new to s3ql. When using a Google bucket with ADC, the mount works, but after an hour it fails with

    ls: cannot open directory '.': Transport endpoint is not connected
    

    Reading the log, the error is Unauthorized.

    The relevant log file is:

    2019-02-20 18:51:00.053 1467:MainThread s3ql.mount.determine_threads: Using 4 upload threads.
    2019-02-20 18:51:00.054 1467:MainThread s3ql.mount.main: Autodetected 1048532 file descriptors available for cache entries
    2019-02-20 18:51:00.247 1467:MainThread s3ql.backends.gs._get_access_token: Requesting new access token
    2019-02-20 18:51:06.003 1467:MainThread s3ql.mount.get_metadata: Using cached metadata.
    2019-02-20 18:51:06.007 1467:MainThread s3ql.mount.main: Setting cache size to 55240 MB
    2019-02-20 18:51:06.008 1467:MainThread s3ql.mount.main: Mounting gs://bucket/main at /mnt/bucketmain...
    2019-02-20 18:51:06.015 1473:MainThread s3ql.daemonize.detach_process_context: Daemonizing, new PID is 1474
    2019-02-20 19:55:08.263 1474:Thread-6 root.excepthook: Uncaught top-level exception:
    Traceback (most recent call last):
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/mount.py", line 58, in run_with_except_hook
        run_old(*args, **kw)
      File "/usr/lib/python3.6/threading.py", line 864, in run
        self._target(*self._args, **self._kwargs)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/block_cache.py", line 445, in _upload_loop
        self._do_upload(*tmp)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/block_cache.py", line 472, in _do_upload
        % obj_id).get_obj_size()
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/common.py", line 108, in wrapped
        return method(*a, **kw)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/common.py", line 279, in perform_write
        return fn(fh)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/comprenc.py", line 389, in __exit__
        self.close()
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/comprenc.py", line 383, in close
        self.fh.close()
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/comprenc.py", line 548, in close
        self.fh.close()
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/gs.py", line 933, in close
        self.metadata, size=self.obj_size)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/common.py", line 108, in wrapped
        return method(*a, **kw)
      File "/root/.virtualenvs/s3ql/lib/python3.6/site-packages/s3ql/backends/gs.py", line 485, in write_fh
        raise _map_request_error(exc, key) or exc
    s3ql.backends.gs.RequestError: <RequestError, code=401, reason='Unauthorized', message='Invalid Credentials'>
    2019-02-20 19:55:08.321 1474:Thread-5 s3ql.mount.exchook: Unhandled top-level exception during shutdown (will not be re-raised)
    2019-02-20 19:55:08.322 1474:Thread-5 root.excepthook: Uncaught top-level exception:
    

    Now, I don't understand if this is my fault, but I exported

    export GOOGLE_APPLICATION_CREDENTIALS="[PATH]"

    and set the credentials to ADC with an arbitrary password, as described in the documentation: http://www.rath.org/s3ql-docs/backends.html

    Thanks

  • Gdrive Implementation


    Hi, this is my first attempt to implement a Google Drive backend, using the base code of @mkhon's implementation.

    I modified @mkhon's code to add the following features:

    • [x] S3QL GDrive Full Implementation
    • [x] Gdrive Error Handling
    • [ ] batch requests (better performance when writing/deleting small files; avoids bans for making too many requests)
    • [x] oauth client integrated with s3ql
    • [x] avoid unnecessary requests
    • [x] md5 checksum read/write

    I don't expect you to accept the changes right now; I want to know whether I am on the right track, and I want to refactor some things before merging into main.

    Regarding OAuth, I modified your OAuth utility for Google Storage: I added a parameter --oauth_type that lets you choose whether to generate a token for Google Storage or Google Drive, and I also added the possibility to use your own client ID/secret. You should modify your client ID to accept Google Drive, because right now you must use your own client ID to generate a token.

    The idea is that the OAuth client generates a refresh token, and when you mount s3ql you set the following values: user: your_client_id, password: client_secret:refreshToken

    Let me know what you think about this implementation; I'm not an expert in Python, so I suppose there are a lot of things that could be done better.

  • initial support for Amazon Cloud Drive


    Here's an initial implementation of an Amazon Cloud Drive backend. It stores metadata as a client property, which means we download it together with the other basic file info, but it is pretty much invisible in the web GUI and other clients. That's probably not a big problem (but changing APP_ID would break the fs...).

    Things still left to do:

    • This commit has a temporary app_id and client_id; someone should do a proper app registration, then create a user-friendly-ish webapp to get a refresh token from Amazon (there's already one, thanks to the acd_cli project).
    • ACD assigns a (random?) id to every file uploaded, and all requests need this id instead of the filename. The filename->id translation normally requires an extra API call, and the latency is horrible. The code now caches the replies (node_cache), but it's still an extra call for each file when it is used for the first time, and for each delete. This affects new files too, as there is no plain upload-or-overwrite: only an upload that fails if a file with the same name exists, and an overwrite of an existing file by id. Maybe it would be better to mass-download the file list of the whole s3ql directory, answer all these queries locally, and only upload the changes. Since one fs instance can only be mounted by one s3ql process, this shouldn't cause problems, but we probably shouldn't store all metadata in RAM like the current implementation does.
    • No server-side copying. The code now downloads the source and uploads it again, which breaks the contract in AbstractBackend... But at least the rename method is overridden to not use copy.
  • Feature request: Backblaze B2 support


    It would be nice to see support for B2. It's one of the newer ones but practically unbeatable in price, and we have success with it using other OSS backup tools like hashbackup or Restic. Docs are at https://www.backblaze.com/b2/docs/

  • WIP: Add mock server for OpenStack Swift


    Adds a mock server for OpenStack Swift so that the Swift Backend gets more test coverage in default cases (i.e. without tests on live filesystems).

    For now, the mock server does not handle bulk deletes, but it does handle copy via COPY. Ideally it would be good to have three different kinds/configurations of the mock server (one with no special support for anything, one that supports copy via COPY, and a third that supports bulk delete), but I have no good idea how to do this.

    (Since TravisCI is integrated with GitHub I use GitHub for this pull request.)

  • Support storing multiple files in the same backend object ("fragments")

    [migrated from BitBucket]

    Storing lots of small files is very inefficient, since every file requires its own block.

    We should add support for fragments, so that multiple files can be stored in the same block.

    With the new bucket interface, we should be able to implement this relatively easily:

    • Upload workers get list of cache entries, new blocks may be coalesced into single object
    • CommitThread() and expire() only call to worker threads once they have a reasonably big chunk of data ready
    • We keep objects until reference count of all contained blocks is zero
    • Therefore, blocks may continue to exist with refcount=0 and can possibly be reused
    • s3qladm may need a "cleanup" function to get rid of these blocks
    • When downloading object, db can be used to determine which blocks in the object belong to files (and should be added to cache) and which ones can be discarded
    • Minimum size of cache entries passed to workers could be adjusted dynamically based on upload bandwidth, latency, and compression ratio of previous uploads
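
The coalescing step from the first bullet above could look roughly like this (names and thresholds are invented; this is only a sketch of the idea, not a proposed patch):

```python
def coalesce(blocks, min_obj_size=1024):
    """Pack small blocks into shared backend objects; return the objects
    plus an index mapping block id -> (object number, offset, length)."""
    objects, index = [], {}
    buf = b""
    for block_id, data in blocks:
        index[block_id] = (len(objects), len(buf), len(data))
        buf += data
        if len(buf) >= min_obj_size:     # enough data accumulated: flush
            objects.append(buf)
            buf = b""
    if buf:                              # flush the final partial object
        objects.append(buf)
    return objects, index

blocks = [(i, bytes([65 + i]) * 300) for i in range(5)]  # five 300-byte blocks
objects, index = coalesce(blocks)

def read_block(block_id):
    """The db-style index says which slice of which object holds the block."""
    obj, off, length = index[block_id]
    return objects[obj][off:off + length]

print(len(objects))  # 2 backend objects instead of 5
```
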
  • Keystone v3 fails and OVH Swift


    Using latest version 3.3.2

    This seems similar to #140. I have been using s3ql with Keystone v2 to access OVH object storage for a couple of years without issues. OVH is now moving to Keystone v3 only. Accordingly, I have added --backend-options domain=Default to my mount command, as per the documentation, to force v3, but it results in failure with the following log output (same as #140):

    2020-02-07 11:40:03.194 1457:MainThread s3ql.mount.determine_threads: Using 2 upload threads.
    2020-02-07 11:40:03.194 1457:MainThread s3ql.mount.main: Autodetected 4058 file descriptors available for cache entries
    2020-02-07 11:40:03.281 1457:MainThread s3ql.backends.common.get_ssl_context: Reading default CA certificates.
    2020-02-07 11:40:03.287 1457:MainThread s3ql.backends.swift._do_request: started with 'GET', '/', None, {'limit': 1}, None, None
    2020-02-07 11:40:03.288 1457:MainThread s3ql.backends.swift._do_request: no active connection, calling _get_conn()
    2020-02-07 11:40:03.288 1457:MainThread s3ql.backends.swiftks._get_conn: started
    2020-02-07 11:40:03.386 1457:MainThread root.excepthook: No permission to access backend.

    Note that my backend-login is in the form tenant:user

    OVH has not been helpful so far; their guide for using s3ql does not work for v3.

    It may be irrelevant, but there seems to be a subtle difference between the code in backends/swiftks.py and what OVH suggests: domain: {name vs. domain: {id

    POST /v3/auth/tokens HTTP/1.1
    Host: auth.cloud.ovh.net
    Content-Length:
    Content-Type: application/json

    { "auth": { "identity": { "methods": [ "password" ], "password": { "user": { "name": "", "domain": { "name": "Default" }, "password": "" } } } } }

    Does anyone else have it working with OVH and Keystone v3?

  • fsck.s3ql crashes ERROR: Uncaught top-level exception: "Path b'lost+found' does not exist"

    I have an old s3ql file system. I don't have the VM any more, and (the last time I used it) I closed s3ql with a system shutdown or Ctrl-C, so the file system is not in a clean state. I ran fsck and it failed with "Path b'lost+found' does not exist". This is the second time that I have run fsck without synced metadata; the lost+found folder was created that first time.

    Enter backend login:
    Enter backend password:
    Enter file system encryption passphrase:
    Starting fsck of S3URL_REDACTED
    Backend reports that file system is still mounted elsewhere. Either the file system
    has not been unmounted cleanly or the data has not yet propagated through the backend.
    In the later case, waiting for a while should fix the problem, in the former case you
    should try to run fsck on the computer where the file system has been mounted most
    recently. You may also continue and use whatever metadata is available in the backend.
    However, in that case YOU MAY LOOSE ALL DATA THAT HAS BEEN UPLOADED OR MODIFIED SINCE
    THE LAST SUCCESSFULL METADATA UPLOAD. Moreover, files and directories that you have
    deleted since then MAY REAPPEAR WITH SOME OF THEIR CONTENT LOST.
    Enter "continue, I know what I am doing" to use the outdated data anyway: continue, I know what I am doing

    Downloading and decompressing metadata...
    Reading metadata...
    ..objects.. ..blocks.. ..inodes.. ..inode_blocks.. ..symlink_targets.. ..names.. ..contents.. ..ext_attributes..
    Creating temporary extra indices...
    Checking lost+found...
    Checking for dirty cache objects...
    Checking names (refcounts)...
    Checking contents (names)...
    WARNING: Content entry for inode 3 refers to non-existing name with id 1, moving to /lost+found/-3
    Dropping temporary indices...
    ERROR: Uncaught top-level exception:
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/database.py", line 143, in get_row
        row = next(res)
    StopIteration

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/common.py", line 117, in inode_for_path
        inode = conn.get_val("SELECT inode FROM contents_v WHERE name=? AND parent_inode=?",
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/database.py", line 127, in get_val
        return self.get_row(*a, **kw)[0]
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/database.py", line 145, in get_row
        raise NoSuchRowError()
    s3ql.database.NoSuchRowError: Query produced 0 result rows

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/local/bin/fsck.s3ql", line 11, in <module>
        load_entry_point('s3ql==3.8.1', 'console_scripts', 'fsck.s3ql')()
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 1289, in main
        fsck.check(check_cache)
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 86, in check
        self.check_contents_name()
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 323, in check_contents_name
        (id_p_new, newname) = self.resolve_free(b"/lost+found", newname)
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 1068, in resolve_free
        inode_p = inode_for_path(path, self.conn)
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/common.py", line 120, in inode_for_path
        raise KeyError('Path %s does not exist' % path)
    KeyError: "Path b'lost+found' does not exist"

    A second fsck

    Enter backend login:
    Enter backend password:
    Enter file system encryption passphrase:
    Starting fsck of S3URL_REDACTED
    Using cached metadata.
    WARNING: Remote metadata is outdated.
    Checking DB integrity...
    Creating temporary extra indices...
    Checking lost+found...
    Checking for dirty cache objects...
    Checking names (refcounts)...
    Checking contents (names)...
    Dropping temporary indices...
    ERROR: Uncaught top-level exception:
    Traceback (most recent call last):
      File "/usr/local/bin/fsck.s3ql", line 11, in <module>
        load_entry_point('s3ql==3.8.1', 'console_scripts', 'fsck.s3ql')()
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 1289, in main
        fsck.check(check_cache)
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 86, in check
        self.check_contents_name()
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/fsck.py", line 318, in check_contents_name
        path = get_path(inode_p, self.conn)[1:]
      File "/usr/local/lib/python3.8/dist-packages/s3ql-3.8.1-py3.8-linux-x86_64.egg/s3ql/common.py", line 147, in get_path
        raise RuntimeError('Failed to resolve name "%s" at inode %d to path',
    RuntimeError: ('Failed to resolve name "%s" at inode %d to path', None, 3)

    apt list sqlite3: 3.31.1-4ubuntu0.3 amd64 [installed]

    s3ql version: 3.8.1

    Ubuntu 20.04

    I don't remember the s3ql version on the old VM.

  • rsync changes to defunct state while copying from a s3ql to another


    I'm using rsync to copy files from a server A using s3ql to another server B, also using s3ql. My rsync command is executed from the destination server (B) and looks like this: rsync -avz --progress -H -X --partial --one-file-system A:/mnt/s3ql /mnt/s3ql/test

    After a while, the rsync process changes to a defunct state and the command freezes.

    I've tried s3ql version 3.0.0 as well as version 3.8.1.

    I'm using a cache dir of 1 GB, and here is my mount command: mount.s3ql --allow-other --cachedir=/tmp/cache --cachesize=1024000 --compress=lzma-4 --threads=3 --metadata-upload-interval=72000 local:///mnt/mfsmount /mnt/s3ql

    So far I've tried decreasing the number of threads (previously 8) and the cache size (previously 8 GB).

  • B2 backend does not clean up stale upload connections


    The B2 backend maintains a pool of upload URLs and associated connections, which do not get cleaned up after being established unless an error happens.

    This means that if one uses a high thread count (say, threads=32) with the B2 backend, then after a period of intensive I/O the metadata upload may hang for hours while the backend tries all connections one by one, establishing that each of them does not work and closing them, waiting 5 minutes between attempts in the back-off logic:

    Jun 23 06:23:01 stratofortress mount.s3ql[2089141]: Dumping metadata...
    Jun 23 06:23:01 stratofortress mount.s3ql[2089141]: ..objects..
    Jun 23 06:23:01 stratofortress mount.s3ql[2089141]: ..blocks..
    Jun 23 06:23:02 stratofortress mount.s3ql[2089141]: ..inodes..
    Jun 23 06:23:02 stratofortress mount.s3ql[2089141]: ..inode_blocks..
    Jun 23 06:23:03 stratofortress mount.s3ql[2089141]: ..symlink_targets..
    Jun 23 06:23:03 stratofortress mount.s3ql[2089141]: ..names..
    Jun 23 06:23:03 stratofortress mount.s3ql[2089141]: ..contents..
    Jun 23 06:23:03 stratofortress mount.s3ql[2089141]: ..ext_attributes..
    Jun 23 06:23:04 stratofortress mount.s3ql[2089141]: Compressing and uploading metadata...
    Jun 23 06:23:10 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 3)...
    Jun 23 06:23:10 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 4)...
    Jun 23 06:23:10 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 5)...
    Jun 23 06:23:11 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 6)...
    Jun 23 06:23:12 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 7)...
    Jun 23 06:23:14 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 8)...
    Jun 23 06:23:17 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 9)...
    Jun 23 06:23:22 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 10)...
    Jun 23 06:23:35 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 11)...
    Jun 23 06:24:06 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 12)...
    Jun 23 06:25:05 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 13)...
    Jun 23 06:26:33 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 14)...
    Jun 23 06:29:41 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 15)...
    Jun 23 06:35:06 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 16)...
    Jun 23 06:40:19 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 17)...
    Jun 23 06:45:52 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 18)...
    Jun 23 06:50:59 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 19)...
    Jun 23 06:57:27 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 20)...
    Jun 23 07:04:49 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 21)...
    Jun 23 07:11:37 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 22)...
    Jun 23 07:16:48 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 23)...
    Jun 23 07:23:41 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 24)...
    Jun 23 07:29:06 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 25)...
    Jun 23 07:36:26 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 26)...
    Jun 23 07:42:11 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 27)...
    Jun 23 07:47:40 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 28)...
    Jun 23 07:53:26 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 29)...
    Jun 23 08:00:31 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 30)...
    Jun 23 08:07:38 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 31)...
    Jun 23 08:13:44 stratofortress mount.s3ql[2089141]: Encountered ConnectionClosed (connection was interrupted), retrying ObjectW.close (attempt 32)...
    Jun 23 08:19:17 stratofortress mount.s3ql[2089141]: Wrote 38.9 MiB of compressed metadata.
    Jun 23 08:19:17 stratofortress mount.s3ql[2089141]: Cycling metadata backups...
    Jun 23 08:19:17 stratofortress mount.s3ql[2089141]: Backing up old metadata...
    

    S3QL should either verify the connections before using them, or schedule closure of each established connection after a period of inactivity.
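    The second option could look roughly like the following: a minimal sketch of an upload-connection pool that discards entries that have sat idle longer than a timeout before handing them out. The class, attribute names, and the timeout value are hypothetical illustrations, not S3QL's actual B2 backend code:

    ```python
    import time

    IDLE_TIMEOUT = 60  # seconds; hypothetical value

    class UploadConnPool:
        """Pool of upload connections with idle-based eviction."""

        def __init__(self, idle_timeout=IDLE_TIMEOUT, clock=time.monotonic):
            self.idle_timeout = idle_timeout
            self.clock = clock
            self._pool = []  # list of (last_used, conn) pairs

        def put(self, conn):
            """Return a connection to the pool, stamping it with the current time."""
            self._pool.append((self.clock(), conn))

        def get(self):
            """Return a recently used connection, closing stale ones on the way.

            Returns None if no fresh connection is available; the caller must
            then establish a new one (and request a new upload URL).
            """
            now = self.clock()
            while self._pool:
                last_used, conn = self._pool.pop()
                if now - last_used <= self.idle_timeout:
                    return conn
                # Stale: the server has most likely dropped it already, so
                # close it now instead of discovering that mid-upload.
                conn.close()
            return None
    ```

    With this scheme, a connection that has been unused longer than the timeout is closed immediately instead of being tried (and failing through the retry/back-off logic) during a metadata upload.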

  • Better handle I/O errors in backends

    If the B2 backend encounters an ENOSPC error while writing into the temporary file, write() raises the exception, but a subsequent call to close() results in a checksum error (because the checksum was not updated to reflect the incomplete write to the temporary file), followed by a dugong.StateError (probably because, after checking the checksum, we did not read the rest of the response).

    Either write() should update the checksum to reflect the partial data that was actually written (thus eliminating the checksum error on upload), or it should set a flag so that the object is not uploaded at all on close().

    Other backends may have similar issues.
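    The second variant (a "do not upload" flag) could be sketched like this. ObjectW, its attributes, and the _upload() hook are modeled loosely on the backend's writer object and are assumptions for illustration, not the real API:

    ```python
    import hashlib
    import tempfile

    class ObjectW:
        """Sketch of a backend object writer that refuses to upload after a failed write."""

        def __init__(self):
            self._tmpfile = tempfile.TemporaryFile()
            self._sha = hashlib.sha1()
            self.write_failed = False

        def write(self, buf):
            try:
                self._tmpfile.write(buf)
            except OSError:
                # Writing to the temp file failed (e.g. ENOSPC). Remember this
                # so close() skips the upload instead of producing a checksum
                # error and a half-consumed HTTP response.
                self.write_failed = True
                raise
            # Only hash data that actually reached the temp file. (A partial
            # write of buf itself would still need extra handling.)
            self._sha.update(buf)

        def close(self):
            try:
                if self.write_failed:
                    return  # nothing consistent to upload
                self._upload(self._sha.hexdigest())
            finally:
                self._tmpfile.close()

        def _upload(self, checksum):
            pass  # placeholder for the actual upload request
    ```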

  • Running s3qlrm can generate lots of `FileNotFoundError` entries in `mount.log`


    I have found a large number of errors in my ~/.s3ql/mount.log:

    2020-11-04 18:33:37.380 5454:Thread-25 pyfuse3.run: Failed to submit invalidate_entry request for parent inode 200499000, name b'security'
    Traceback (most recent call last):
      File "src/internal.pxi", line 125, in pyfuse3._notify_loop
      File "src/pyfuse3.pyx", line 849, in pyfuse3.invalidate_entry
    FileNotFoundError: [Errno 2] fuse_lowlevel_notify_inval_entry returned: No such file or directory
    

    I suspect these are generated when running s3qlrm shortly before unmounting, and that they are harmless. Since invalidate requests are processed through a queue, the kernel may issue forget requests on its own before S3QL gets around to sending the invalidate request to the kernel.

    Still, it would be great to (1) confirm that this is indeed what happens and (2) find a way to avoid the errors.
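    If (1) is confirmed, one way to avoid the log noise would be to treat FileNotFoundError from the notify call as expected, since it only means the kernel has already forgotten the entry. A minimal sketch, where invalidate_entry stands in for pyfuse3.invalidate_entry and is passed in so the helper stays self-contained:

    ```python
    import logging

    log = logging.getLogger(__name__)

    def invalidate_entry_quiet(invalidate_entry, inode_p, name):
        """Invalidate a dentry, treating 'kernel already forgot it' as success."""
        try:
            invalidate_entry(inode_p, name)
        except FileNotFoundError:
            # The kernel issued its own forget request before we got here;
            # the entry is gone either way, so this is not worth a traceback.
            log.debug('invalidate_entry(%d, %r): entry already gone',
                      inode_p, name)
    ```

    Usage would be to route the notify-loop's invalidate calls through this wrapper, so only unexpected OSError subclasses still surface in mount.log.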

  • Various non-deterministic test failures


    2. A test run shows the following error:

    tests/t5_cache.py::TestPerstCache::test_cache_flush[True] FAILED                                                                                                                                           [ 87%]
    
    ==================================================================================================== FAILURES ====================================================================================================
    _____________________________________________________________________________________ TestPerstCache.test_cache_flush[True] ______________________________________________________________________________________
    Traceback (most recent call last):
      File "/usr/src/s3ql-3.1/tests/t5_cache.py", line 120, in test_cache_flush
        self.fsck(args=['--keep-cache'])
      File "/usr/src/s3ql-3.1/tests/t4_fuse.py", line 128, in fsck
        assert proc.wait() == expect_retcode
    AssertionError: assert 128 == 0
      -128
      +0
    ---------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
    Please store the following master key in a safe location. It allows 
    decryption of the S3QL file system in case the storage objects holding 
    this information get corrupted:
    ---BEGIN MASTER KEY---
    dQow xSHp ZzBW QHpF bcRj xjo0 yBVP qw36 gxtI Dr9L vZ0=
    ---END MASTER KEY---
    ---------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------
    WARNING: Maximum object sizes less than 1 MiB will degrade performance.
    WARNING: Deleted spurious object 2
    ================================================================================ 1 failed, 297 passed, 5 skipped in 57.66 seconds ================================================================================
    

    3. Successive test runs show different errors:

    tests/t5_cache.py::TestPerstCache::test_cache_flush[True] FAILED                                                                                                                                           [ 87%]
    
    ==================================================================================================== FAILURES ====================================================================================================
    _____________________________________________________________________________________ TestPerstCache.test_cache_flush[True] ______________________________________________________________________________________
    Traceback (most recent call last):
      File "/usr/src/s3ql-3.1/tests/t5_cache.py", line 123, in test_cache_flush
        assert fh.read() == TEST_DATA
    AssertionError: assert b'\n)(tnuomu....ne/nib/rsu/!#' == b'#!/usr/bin/e...lf.umount()\n'
      At index 0 diff: 10 != 35
      Full diff:
      - (b'\n)(tnuomu.fles        \nATAD_TSET == )(daer.hf tressa            \n:hf sa '
      -  b")'br' ,)'eliftset' ,rid_tnm.fles(niojp(nepo htiw        \n)(tnuom.fles   "
      -  b"     \n)]'ehcac-peek--' ,'etomer-ecrof--'[=sgra                  \n,0=edoc"
      -  b'ter_tcepxe(kcsf.fles        \nderongi si ehcac taht erus ekaM #        \n\n'
      -  b'kab = rid_ehcac.fles            \n)rid_ehcac.fles(eertmr.lituhs          '...
      
      ...Full output truncated (154 lines hidden), use '-vv' to show
    ---------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
    Please store the following master key in a safe location. It allows 
    decryption of the S3QL file system in case the storage objects holding 
    this information get corrupted:
    ---BEGIN MASTER KEY---
    kYom jZq7 2fqs M8RY wXe4 QC4Y NJmR Yc2E SHxJ J7Dl 7NI=
    ---END MASTER KEY---
    ---------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------
    WARNING: Maximum object sizes less than 1 MiB will degrade performance.
    ================================================================================ 1 failed, 297 passed, 5 skipped in 58.27 seconds ================================================================================
    

    Next run:

    tests/t5_failsafe.py::TestNewerMetadata::test FAILED                                                                                                                                                       [ 89%]
    tests/t5_failsafe.py::TestNewerMetadata::test ERROR                                                                                                                                                        [ 89%]
    
    ===================================================================================================== ERRORS =====================================================================================================
    __________________________________________________________________________________ ERROR at teardown of TestNewerMetadata.test ___________________________________________________________________________________
    Traceback (most recent call last):
      File "/usr/src/s3ql-3.1/tests/pytest_checklogs.py", line 143, in pytest_runtest_teardown
        check_output(item)
      File "/usr/src/s3ql-3.1/tests/pytest_checklogs.py", line 132, in check_output
        check_test_output(capmethod, item)
      File "/usr/src/s3ql-3.1/tests/pytest_checklogs.py", line 106, in check_test_output
        raise AssertionError('Suspicious output to stderr (matched "%s")' % hit.group(0))
    AssertionError: Suspicious output to stderr (matched "ERROR")
    ---------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
    Please store the following master key in a safe location. It allows 
    decryption of the S3QL file system in case the storage objects holding 
    this information get corrupted:
    ---BEGIN MASTER KEY---
    sp0w ux24 JwJ+ XlRW WHKT lhfW YUH9 /74h hbqg 9tES FeE=
    ---END MASTER KEY---
    ---------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------
    WARNING: Maximum object sizes less than 1 MiB will degrade performance.
    -------------------------------------------------------------------------------------------- Captured stderr teardown --------------------------------------------------------------------------------------------
    ERROR: Remote metadata is newer than local (1555031773 vs 1555031772), refusing to overwrite!
    ERROR: The locally cached metadata will be *lost* the next time the file system is mounted or checked and has therefore been backed up.
    ==================================================================================================== FAILURES ====================================================================================================
    _____________________________________________________________________________________________ TestNewerMetadata.test _____________________________________________________________________________________________
    Traceback (most recent call last):
      File "/usr/src/s3ql-3.1/tests/t5_failsafe.py", line 143, in test
        time.sleep(1)
      File "/usr/local/lib/python3.6/site-packages/_pytest/python_api.py", line 729, in __exit__
        fail(self.message)
      File "/usr/local/lib/python3.6/site-packages/_pytest/outcomes.py", line 117, in fail
        raise Failed(msg=msg, pytrace=pytrace)
    Failed: DID NOT RAISE <class 'PermissionError'>
    ---------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
    Please store the following master key in a safe location. It allows 
    decryption of the S3QL file system in case the storage objects holding 
    this information get corrupted:
    ---BEGIN MASTER KEY---
    sp0w ux24 JwJ+ XlRW WHKT lhfW YUH9 /74h hbqg 9tES FeE=
    ---END MASTER KEY---
    ---------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------
    WARNING: Maximum object sizes less than 1 MiB will degrade performance.
    =========================================================================== 1 failed, 302 passed, 6 skipped, 1 error in 88.13 seconds ============================================================================
    

    Next run:

    tests/t5_cache.py::TestPerstCache::test_cache_flush_unclean FAILED                                                                                                                                         [ 88%]
    
    ==================================================================================================== FAILURES ====================================================================================================
    ____________________________________________________________________________________ TestPerstCache.test_cache_flush_unclean _____________________________________________________________________________________
    Traceback (most recent call last):
      File "/usr/src/s3ql-3.1/tests/t5_cache.py", line 161, in test_cache_flush_unclean
        args=['--force-remote'])
      File "/usr/src/s3ql-3.1/tests/t4_fuse.py", line 128, in fsck
        assert proc.wait() == expect_retcode
    AssertionError: assert 128 == 0
      -128
      +0
    ---------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
    Please store the following master key in a safe location. It allows 
    decryption of the S3QL file system in case the storage objects holding 
    this information get corrupted:
    ---BEGIN MASTER KEY---
    /r2k e8L/ 2SUJ 43O8 wSyw 6A3e QtJH ow3u Myr2 T4eI D40=
    ---END MASTER KEY---
    Backend reports that file system is still mounted elsewhere. Either
    the file system has not been unmounted cleanly or the data has not yet
    propagated through the backend. In the later case, waiting for a while
    should fix the problem, in the former case you should try to run fsck
    on the computer where the file system has been mounted most recently.
    You may also continue and use whatever metadata is available in the
    backend. However, in that case YOU MAY LOOSE ALL DATA THAT HAS BEEN
    UPLOADED OR MODIFIED SINCE THE LAST SUCCESSFULL METADATA UPLOAD.
    Moreover, files and directories that you have deleted since then MAY
    REAPPEAR WITH SOME OF THEIR CONTENT LOST.
    Enter "continue, I know what I am doing" to use the outdated data anyway:
    > (--force-remote specified, continuing anyway)
    ---------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------
    WARNING: Maximum object sizes less than 1 MiB will degrade performance.
    WARNING: Deleted spurious object 1
    ================================================================================ 1 failed, 299 passed, 5 skipped in 68.40 seconds ================================================================================
    