The web frontend of the Seafile server.

Introduction

Seahub is the web frontend for Seafile.

Preparation

Getting it

You can grab the source code from GitHub.

$ git clone git://github.com/haiwen/seahub.git

Set up a virtualenv to install dependencies locally:

$ virtualenv .virtualenv
$ . .virtualenv/bin/activate

Install Python libraries with pip:

$ pip install -r requirements.txt

Configuration

Modify CCNET_CONF_DIR, SEAFILE_CENTRAL_CONF_DIR, SEAFILE_CONF_DIR and PYTHONPATH in setenv.sh.template to fit your paths.

CCNET_CONF_DIR is the directory that contains the ccnet socket (and formerly ccnet.conf).

Since version 5.0, SEAFILE_CENTRAL_CONF_DIR contains most config files.

SEAFILE_CONF_DIR is the seafile-data directory (and formerly contained seafile.conf).
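
For example, the exports in setenv.sh might end up looking like this (the paths below are placeholders for a typical development checkout; adjust them to your setup, and note that the exact PYTHONPATH entries depend on where the Seafile Python libraries live on your machine):

export CCNET_CONF_DIR=/home/user/dev/conf
export SEAFILE_CENTRAL_CONF_DIR=/home/user/dev/conf
export SEAFILE_CONF_DIR=/home/user/dev/seafile-data
export PYTHONPATH=/home/user/dev/seahub/thirdpart:$PYTHONPATH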

Run and Verify

Run as:

$ . .virtualenv/bin/activate
$ ./run-seahub.sh.template

Then open your browser and go to http://localhost:8000/; you should see the login page. You can create an admin account using the seahub-admin.py script in the tools/ directory.
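
For example (a sketch; the script is interactive and prompts for the admin email and password, and the exact invocation may differ between versions):

$ . .virtualenv/bin/activate
$ python tools/seahub-admin.py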

Internationalization (I18n)

Please refer to https://github.com/haiwen/seafile/wiki/Seahub-Translation

Comments
  • Creation date 1970-01-01 pretty silly

    Creation date 1970-01-01 pretty silly

    Hi there, I read somewhere that you optimized the file view in Seahub by not showing the real creation date but '1970-01-01' instead. I think this is pretty wrong - it just looks like a bug. Either remove the date completely or, which I would prefer, just show the real date as before. Maybe make it an option to be set in seahub_settings.py? The current behavior is just a waste of space and nerves. Thanks and regards

  • 500 Internal Server Error when accessing files/history via Seahub

    500 Internal Server Error when accessing files/history via Seahub

    For the last few versions (around 3.1.x), when accessing the history via Seahub on my private Seafile server, I encounter a "500 Internal Server Error". No changes have been made to the Apache configuration since the "old" days of 3.0.

    Sometimes, when trying hard, I can temporarily access the history or view the files, but eventually I go back to the 500 Internal Server Error.

    Library is encrypted if that matters.

  • Log of failed login attempts

    Log of failed login attempts

    If Seahub would log failed login attempts with IP and username, fail2ban could be used to prevent brute-force attacks against Seafile. I know there is already a captcha feature for this. But I think that, with current captcha-solving programs, the fail2ban approach is more secure.
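
    A minimal sketch of what such logging could look like, using Django's built-in user_login_failed signal; the logger name, message format and wiring are assumptions for illustration, not existing Seahub behavior:

        import logging

        from django.contrib.auth.signals import user_login_failed
        from django.dispatch import receiver

        logger = logging.getLogger('seahub.auth')

        @receiver(user_login_failed)
        def log_failed_login(sender, credentials, request=None, **kwargs):
            # Record the username and client IP in a stable, single-line format
            # so a tool such as fail2ban can match it and ban repeat offenders.
            ip = request.META.get('REMOTE_ADDR', 'unknown') if request else 'unknown'
            name = credentials.get('username', 'unknown')
            logger.warning('Failed login attempt for %s from %s', name, ip)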

  • Enable thumbnail generation for image lightbox

    Enable thumbnail generation for image lightbox

    This is a pull request based on the ticket: #594

    This update makes use of thumbnail generation for larger image galleries. While browsing images in the lightbox, the user doesn't have to download the full-size image, which can be several MB in size. Instead a downscaled image is created (default 1280 px).

    The thumbnail_create function is modified to use JPEG instead of PNG for thumbnails larger than 100 pixels to save space (roughly 1 MB down to 200 KB).

    The generation hooks into the Magnific Popup elementParse() callback, but as a blocking request until the thumbnail is generated.

    The following new settings variables have been introduced (a configuration sketch follows at the end of this comment):

    • THUMBNAIL_LARGE = '100'
    • THUMBNAIL_EXTENSION_LARGE = 'jpeg'
    • ENABLE_THUMBNAIL_LARGE = True
    • THUMBNAIL_LARGE_SIZE = '1280'

    A new URL has been added (similar to create) to create thumbnails based on the THUMBNAIL_LARGE_SIZE setting:

    • /thumbnail//large/

    Possible updates/tweaks for the future:

    • switching to an AJAX method instead of an image (right now the thumbnail generation is a blocking AJAX callback)
    • cleaner URLs: /thumbnail//create/(small|medium|large|xlarge)
    • take the device's screen resolution/Retina/HiDPI settings into account, etc.
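
    A hedged sketch of how these options might look in seahub_settings.py, using the values from this pull request; the comments are an interpretation, and whether the sizes are stored as strings or integers in the final implementation is an assumption:

        # seahub_settings.py (sketch)
        ENABLE_THUMBNAIL_LARGE = True        # turn large-thumbnail generation on
        THUMBNAIL_LARGE = '100'              # threshold above which JPEG is used instead of PNG
        THUMBNAIL_EXTENSION_LARGE = 'jpeg'   # format used for the large thumbnails
        THUMBNAIL_LARGE_SIZE = '1280'        # longest edge of the downscaled image, in pixels
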
  • Thumbnail generator creates too many processes

    Thumbnail generator creates too many processes

    Hello,

    When opening a library/folder containing many pictures (several hundred), the Seahub thumbnail generator creates too many processes and makes my tiny server run out of memory. I use an ARM A20 machine with 1 GB RAM running Seafile for Raspberry Pi. When opening a folder with many pictures, I first see lots of processes in top:

    top - 11:50:22 up 24 min, 1 user, load average: 20,49, 6,27, 2,75
    Tasks: 167 total, 47 running, 120 sleeping, 0 stopped, 0 zombie
    %Cpu(s): 85,3 us, 14,1 sy, 0,0 ni, 0,0 id, 0,0 wa, 0,0 hi, 0,6 si, 0,0 st
    KiB Mem: 1008516 total, 957004 used, 51512 free, 100684 buffers
    KiB Swap: 0 total, 0 used, 0 free. 90960 cached Mem

     PID USER      PR NI   VIRT   RES  SHR S %CPU %MEM   TIME+ COMMAND
    1906 seafile+  20  0 502420  7564 2824 S  9,6  0,8 0:28.33 seaf-server
     986 mysql     20  0 328716 60628 5380 S  4,8  6,0 0:34.45 mysqld
    2150 seafile+  20  0  48192 25248 1852 R  4,5  2,5 0:00.99 python2.7
    2157 seafile+  20  0  47936 25036 1852 R  4,5  2,5 0:00.92 python2.7
    2158 seafile+  20  0  47936 25224 1852 R  4,5  2,5 0:00.97 python2.7
    2195 seafile+  20  0  47308 24664 1744 R  4,5  2,4 0:00.81 python2.7
    2107 seafile+  20  0  52168 29348 2000 R  4,1  2,9 0:02.23 python2.7
    2133 seafile+  20  0  47936 25092 1852 R  4,1  2,5 0:00.97 python2.7
    2137 seafile+  20  0  47936 25116 1852 R  4,1  2,5 0:00.98 python2.7
    2138 seafile+  20  0  48192 25364 1852 R  4,1  2,5 0:01.02 python2.7
    ... many more ...
    2171 seafile+  20  0  47576 24888 1792 R  3,7  2,5 0:00.85 python2.7
    2172 seafile+  20  0  47308 24616 1740 R  3,7  2,4 0:00.84 python2.7
    2173 seafile+  20  0  47936 25040 1852 R  3,7  2,5 0:00.89 python2.7
    2176 seafile+  20  0  47308 24552 1692 R  3,7  2,4 0:00.85 python2.7
    2182 seafile+  20  0  47572 24892 1796 R  3,7  2,5 0:00.84 python2.7
    2183 seafile+  20  0  47680 24904 1808 R  3,7  2,5 0:00.83 python2.7
    2186 seafile+  20  0  47576 24896 1792 R  3,7  2,5 0:00.82 python2.7
    2187 root      20  0  84180 15608 8564 R  3,7  1,5 0:00.76 horde-alarms
    2190 seafile+  20  0  47572 24888 1792 R  3,7  2,5 0:00.82 python2.7

    After < 1 minute I see in syslog that my server runs out of memory and starts to kill processes:

    [ 1510.748932] lowmemorykiller: Killing 'mysqld' (1004), adj 0,
    [ 1510.748941]   to free 56168kB on behalf of 'python2.7' (2195) because
    [ 1510.748946]   cache 6084kB is below limit 6144kB for oom_score_adj 0
    [ 1510.748951]   Free memory is -2644kB above reserved
    ...

    The weirdest thing about this is that in grid mode it doesn't happen; there I see at most 5 processes. It runs slowly but doesn't crash. But I cannot prevent my users from using list mode.

    I can reproduce this issue any time and provide more output, just tell me what you need.

    Any help is appreciated!

  • Custom frontend (index/home page before login)

    Custom frontend (index/home page before login)

    I am wondering how the pages are handled when a user tries to access the domain Seafile is installed on. I see that a login page is always shown initially, and if the user is logged in, they are redirected to the library home page.

    I wanted to try to create a few custom HTML/PHP-based frontend pages, such as an actual "home page", a help page, an FAQ, etc., which would open by default on the domain root, with the Seafile login page linked by its URL (adding a link in the nav bar and such). Basically, have a proper website-like structure.

    I know one of the solutions is to host the Seafile instance on a sub-domain like cloud.xxx.com or on a non-root path like xxx.com/seafile, but are there any alternatives?

  •  Massive Web UI performance issues on Raspberry Pi

    Massive Web UI performance issues on Raspberry Pi

    From https://github.com/haiwen/seafile/issues/736

    On Raspberry Pi, it takes up to 30 seconds (clarification: most of the time it's around 4s) to load a simple repository overview page.

    I did a little bit of profiling on this but couldn't find one single source of slowness. It's hard to profile the performance, though, because 90% of the logic is in templates rather than Python code, and templates are difficult to profile properly.

    May I suggest that, in the long term, logic should be moved from templates into the views and models? This has the following benefits:

    • It's the recommended programming style for Django
    • It's easier to profile and optimize
    • It's faster by multiple orders of magnitude
  • SERVICE_URL in ccnet/ccnet.conf ignored

    SERVICE_URL in ccnet/ccnet.conf ignored

    Hi, since server 1.7.0.1 the option SERVICE_URL in ccnet/ccnet.conf seems to be ignored: any shared URL now uses the default port-8000 URL instead of the one configured. It worked with 1.6.1 before.
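
    For reference, the setting in question lives in the [General] section of ccnet/ccnet.conf and would look roughly like this (host and port below are placeholders):

        [General]
        SERVICE_URL = https://seafile.example.com:8001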

  • Change in upload link handling.

    Change in upload link handling.

    This is a change in the upload API that happened somewhere between 6.3 and 7.1. I was able to use repos/<repo-id>/upload-link/ to get an upload URL and then use parent_dir=<...> to specify the directory I want to upload the file into.

    Now in 7.1.1, if I don't pass the p= parameter to the upload-link request, I get {"error": "Permission denied."} as the return value (with status code 200, by the way). If I only use p= in the first request but not parent_dir= in the second request, I get {"error": "Invalid URL. "} (with a newline before the closing quote, which might be invalid JSON; also status code 200). I had to provide the directory in both requests. This is fine to work with but a bit strange. The error responses are also a bit strange and give no useful information.
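
    A hedged sketch of the two-step flow described above, using the requests library; the server URL, token, repo id and paths are placeholders, and the exact parameters accepted by each Seafile version may differ:

        import requests

        SERVER = 'https://seafile.example.com'
        REPO_ID = 'repo-id'
        HEADERS = {'Authorization': 'Token your-api-token'}

        # Step 1: ask for an upload link; in 7.1 the target directory apparently
        # has to be passed as p= here as well.
        resp = requests.get(
            '%s/api2/repos/%s/upload-link/?p=/target-dir' % (SERVER, REPO_ID),
            headers=HEADERS)
        upload_url = resp.json()

        # Step 2: upload the file, repeating the target directory as parent_dir.
        with open('example.txt', 'rb') as fp:
            requests.post(upload_url,
                          headers=HEADERS,
                          data={'parent_dir': '/target-dir'},
                          files={'file': fp})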

  • OnlyOffice support generates document key of 0000000000000000000000000000000000000000 for empty files

    OnlyOffice support generates document key of 0000000000000000000000000000000000000000 for empty files

    If a file is empty (i.e. has a file size of 0), for instance if you have just created a new file in Seafile, the key passed to OnlyOffice is always 0000000000000000000000000000000000000000.

    For instance, the generated script looks like:

    var config = {
        "document": {
            "fileType": "docx",
            "key": "0000000000000000000000000000000000000000",
            "title": "yetanother.docx",

    The problem is that this creates a key clash if you attempt to open multiple empty files, as OnlyOffice thinks that these are the same file.

  • mail notifications not working

    mail notifications not working

    I've tried to set up mail delivery as described here: http://manual.seafile.com/config/sending_email.html, but wasn't successful - neither with Gmail (and an app password) nor via my own mail server.

    Is there any option to get debug info? I couldn't find anything in the logs. I've only found out that some mails have been sent to my local mailbox (on the server, not the mail server).
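
    For reference, the SMTP settings from the manual page linked above go into seahub_settings.py and look roughly like this (host, user and password are placeholders):

        EMAIL_USE_TLS = True
        EMAIL_HOST = 'smtp.example.com'
        EMAIL_HOST_USER = 'username@example.com'
        EMAIL_HOST_PASSWORD = 'password'
        EMAIL_PORT = 587
        DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
        SERVER_EMAIL = EMAIL_HOST_USER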

  • Bump decode-uri-component from 0.2.0 to 0.2.2 in /frontend

    Bump decode-uri-component from 0.2.0 to 0.2.2 in /frontend

    Bumps decode-uri-component from 0.2.0 to 0.2.2.

    Release notes

    Sourced from decode-uri-component's releases.

    v0.2.2

    • Prevent overwriting previously decoded tokens 980e0bf

    https://github.com/SamVerschueren/decode-uri-component/compare/v0.2.1...v0.2.2

    v0.2.1

    • Switch to GitHub workflows 76abc93
    • Fix issue where decode throws - fixes #6 746ca5d
    • Update license (#1) 486d7e2
    • Tidelift tasks a650457
    • Meta tweaks 66e1c28

    https://github.com/SamVerschueren/decode-uri-component/compare/v0.2.0...v0.2.1


    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

  • Download links do not get deleted upon deletion of the referenced file - possible information leakage

    Download links do not get deleted upon deletion of the referenced file - possible information leakage

    We noticed some strange behavior that we consider a bug:

    After deleting a file that was shared via a download link, a user added a new file with the same name. The newly created file was still accessible through the previously created download link. We think this is a bug and the download link should be deleted together with the file. Otherwise a user could leak information without noticing.

    How to reproduce:

    • add file "test.md"
    • create download-link for "test.md"
    • delete "test.md"
    • create new file "test.md"
    • test the formerly created download link

    Possible solutions:

    • check for download-links upon deletion of file or folder and delete them with the file/folder
    • use unique file-id as reference in download-links (like the internal link does)
    • ...

    Thank you for looking into this!

    regards

  • How to package the source code and use it for a Docker installation

    How to package the source code and use it for a Docker installation

    Background

    Hello, I have made some custom modifications on top of the seahub and seafile-server source code. Everything runs fine in the development environment; now I need to package the source code.

    Problem

    It looks like seahub/scripts/build/build-server.py is the packaging script. Following https://manual.seafile.com/build_seafile/rpi/#run-the-packaging-script, I ran the relevant commands, but found that some seemingly unrelated base components are required, such as seafdav and seafobj, which never appeared in the development environment setup.

    Expected

    I would like to package the seahub and seafile-server source code for installation into a Docker image, with a result like https://seafile-downloads.oss-cn-shanghai.aliyuncs.com/seafile-server_9.0.9_x86-64.tar.gz

  • [Feature Request] Access files with tags from side panel

    [Feature Request] Access files with tags from side panel

    Since file tagging is supported within Seafile, it would be great to quickly access all tagged files from the side panel, with the ability to disable this from the user settings if the feature is not used or needed.

  • Bug: Error when logging in with keycloak 20. Userinfo is None

    Bug: Error when logging in with keycloak 20. Userinfo is None

    The error happens when I try to use single sign-on. The Keycloak auth is working, but during the callback there is an error. It looks like it happens when Seafile calls the userinfo API.

    It was working with Keycloak 17.

    • Keycloak 20
    • Seafile Server 9.0.5


    This is the Keycloak log:

    2022-11-17 11:19:38,131 WARN  [org.keycloak.events] (executor-thread-30) type=USER_INFO_REQUEST_ERROR, realmId=master, clientId=null, userId=null, ipAddress=10.10.10.101, error=access_denied, auth_method=validate_access_token
    
    
    2022-11-17 10:55:52,387 [INFO] seahub.oauth.views:156 format_user_info user info resp:
    2022-11-17 10:55:52,387 [ERROR] django.request:224 log_response Internal Server Error: /oauth/callback/
    Traceback (most recent call last):
      File "/haiwen/seafile-server-9.0.9/seahub/thirdpart/requests/models.py", line 910, in json
        return complexjson.loads(self.text, **kwargs)
      File "/usr/lib/python3.9/json/__init__.py", line 346, in loads
        return _default_decoder.decode(s)
      File "/usr/lib/python3.9/json/decoder.py", line 337, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/usr/lib/python3.9/json/decoder.py", line 355, in raw_decode
        raise JSONDecodeError("Expecting value", s, err.value) from None
    json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/haiwen/seafile-server-9.0.9/seahub/thirdpart/django/core/handlers/exception.py", line 47, in inner
        response = get_response(request)
      File "/haiwen/seafile-server-9.0.9/seahub/thirdpart/django/core/handlers/base.py", line 181, in _get_response
        response = wrapped_callback(request, *callback_args, **callback_kwargs)
      File "/haiwen/seafile-server-9.0.9/seahub/seahub/oauth/views.py", line 87, in _decorated
        return func(request)
      File "/haiwen/seafile-server-9.0.9/seahub/seahub/oauth/views.py", line 177, in oauth_callback
        user_info, error = format_user_info(user_info_resp)
      File "/haiwen/seafile-server-9.0.9/seahub/seahub/oauth/views.py", line 159, in format_user_info
        user_info_json = user_info_resp.json()
      File "/haiwen/seafile-server-9.0.9/seahub/thirdpart/requests/models.py", line 917, in json
        raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
    requests.exceptions.JSONDecodeError: [Errno Expecting value] : 0
    
    

    My Investigation:

    The userinfo is empty for some reason:

    https://github.com/haiwen/seahub/blob/4d38b499411012e963d6381731260e1ef06174b5/seahub/oauth/views.py#L159

    This should be caught by a try/except.

    Additionally, Keycloak removed deprecated functionality between versions 18 and 20. I think some access_token handling changed, so the userinfo request is unauthenticated. Maybe related to this comment: https://github.com/oauth2-proxy/oauth2-proxy/issues/1216#issuecomment-857086005
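
    A minimal sketch of the guard suggested above; this is not the actual seahub code, only an illustration of catching an empty or non-JSON userinfo response instead of letting .json() raise (the returned attribute names are placeholders):

        import logging

        logger = logging.getLogger(__name__)

        def format_user_info(user_info_resp):
            # Parse the userinfo response defensively: an empty body (as seen
            # with Keycloak 20 above) raises a ValueError from .json().
            user_info = {}
            try:
                user_info_json = user_info_resp.json()
            except ValueError:
                logger.error('Invalid userinfo response: %r', user_info_resp.text)
                return user_info, True
            for attr in ('name', 'email', 'id'):  # placeholder attribute names
                if attr in user_info_json:
                    user_info[attr] = user_info_json[attr]
            return user_info, False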
