SeaweedFS is a fast distributed storage system for blobs, objects, files, and data lakes, for billions of files

Overview

SeaweedFS



Sponsor SeaweedFS via Patreon

SeaweedFS is an independent Apache-licensed open source project with its ongoing development made possible entirely thanks to the support of these awesome backers. If you'd like to grow SeaweedFS even stronger, please consider joining our sponsors on Patreon.

Your support will be really appreciated by me and other supporters!

Gold Sponsors

shuguang


Table of Contents

Quick Start for S3 API on Docker

docker run -p 8333:8333 chrislusf/seaweedfs server -s3

Quick Start with single binary

  • Download the latest binary from https://github.com/chrislusf/seaweedfs/releases and unzip it to get a single binary file, weed or weed.exe
  • Run weed server -dir=/some/data/dir -s3 to start one master, one volume server, one filer, and one S3 gateway.

Also, to increase capacity, just add more volume servers by running weed volume -dir="/some/data/dir2" -mserver="<master-host>:9333" -port=8081 locally, on a different machine, or on thousands of machines. That is it!

Introduction

SeaweedFS is a simple and highly scalable distributed file system. There are two objectives:

  1. to store billions of files!
  2. to serve the files fast!

SeaweedFS started as an Object Store to handle small files efficiently. Instead of managing all file metadata in a central master, the central master only manages volumes on volume servers, and these volume servers manage files and their metadata. This relieves concurrency pressure from the central master and spreads file metadata into volume servers, allowing faster file access (O(1), usually just one disk read operation).

There are only 40 bytes of disk storage overhead for each file's metadata. It is so simple with O(1) disk reads that you are welcome to challenge the performance with your actual use cases.

SeaweedFS started by implementing Facebook's Haystack design paper. Also, SeaweedFS implements erasure coding with ideas from f4: Facebook's Warm BLOB Storage System, and has a lot of similarities with Facebook's Tectonic Filesystem.

On top of the object store, an optional Filer can support directories and POSIX attributes. The Filer is a separate linearly-scalable stateless server with customizable metadata stores, e.g., MySql, Postgres, Redis, Cassandra, HBase, Mongodb, Elastic Search, LevelDB, RocksDB, Sqlite, MemSql, TiDB, Etcd, CockroachDB, etc.

For any distributed key-value store, large values can be offloaded to SeaweedFS. With its fast access speed and linearly scalable capacity, SeaweedFS can work as a distributed Key-Large-Value store.

SeaweedFS can transparently integrate with the cloud. With hot data on local cluster, and warm data on the cloud with O(1) access time, SeaweedFS can achieve both fast local access time and elastic cloud storage capacity. What's more, the cloud storage access API cost is minimized. Faster and Cheaper than direct cloud storage!

Back to TOC

Additional Features

  • Can choose no replication or different replication levels, rack and data center aware.
  • Automatic master servers failover - no single point of failure (SPOF).
  • Automatic Gzip compression depending on file mime type.
  • Automatic compaction to reclaim disk space after deletion or update.
  • Automatic entry TTL expiration.
  • Any server with some disk space can be added to the total storage space.
  • Adding/Removing servers does not cause any data re-balancing unless triggered by admin commands.
  • Optional picture resizing.
  • Support ETag, Accept-Ranges, Last-Modified, etc.
  • Support in-memory/leveldb/readonly mode tuning for memory/performance balance.
  • Support rebalancing the writable and readonly volumes.
  • Customizable Multiple Storage Tiers: Customizable storage disk types to balance performance and cost.
  • Transparent cloud integration: unlimited capacity via tiered cloud storage for warm data.
  • Erasure Coding for warm storage: Rack-Aware 10.4 erasure coding reduces storage cost and increases availability.

Back to TOC

Filer Features

Kubernetes

Back to TOC

Example: Using Seaweed Object Store

By default, the master node runs on port 9333, and the volume nodes run on port 8080. Let's start one master node, and two volume nodes on port 8080 and 8081. Ideally, they should be started from different machines. We'll use localhost as an example.

SeaweedFS uses HTTP REST operations to read, write, and delete. The responses are in JSON or JSONP format.

Start Master Server

> ./weed master

Start Volume Servers

> weed volume -dir="/tmp/data1" -max=5  -mserver="localhost:9333" -port=8080 &
> weed volume -dir="/tmp/data2" -max=10 -mserver="localhost:9333" -port=8081 &

Write File

To upload a file: first, send an HTTP POST, PUT, or GET request to /dir/assign to get an fid and a volume server URL:

> curl http://localhost:9333/dir/assign
{"count":1,"fid":"3,01637037d6","url":"127.0.0.1:8080","publicUrl":"localhost:8080"}

Second, to store the file content, send an HTTP multi-part POST request to url + '/' + fid from the response:

> curl -F file=@/home/chris/myphoto.jpg http://127.0.0.1:8080/3,01637037d6
{"name":"myphoto.jpg","size":43234,"eTag":"1cc0118e"}

To update, send another POST request with updated file content.

For deletion, send an HTTP DELETE request to the same url + '/' + fid:

> curl -X DELETE http://127.0.0.1:8080/3,01637037d6
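The two-step flow above can be sketched with a couple of Python helpers (hypothetical names, not part of any official client; they only assume the JSON fields shown in the example responses):

```python
# Hypothetical helpers that build the upload/read/delete URL from the
# /dir/assign response: url + '/' + fid.
def file_url(assign):
    # Internal URL, used for uploads and deletes.
    return "http://" + assign["url"] + "/" + assign["fid"]

def public_file_url(assign, ext=""):
    # Public URL for reads; the extension is optional and only hints
    # the content type to clients.
    return "http://" + assign["publicUrl"] + "/" + assign["fid"] + ext
```

Applied to the assign response above, `file_url` yields `http://127.0.0.1:8080/3,01637037d6`.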

Save File Id

Now, you can save the fid, 3,01637037d6 in this case, to a database field.

The number 3 at the start represents a volume id. After the comma come a file key, 01, and a file cookie, 637037d6.

The volume id is an unsigned 32-bit integer. The file key is an unsigned 64-bit integer. The file cookie is an unsigned 32-bit integer, used to prevent URL guessing.

The file key and file cookie are both coded in hex. You can store the tuple in your own format, or simply store the fid as a string.

If stored as a string, in theory, you would need 8+1+16+8=33 bytes. A char(33) would be enough, if not more than enough, since most uses will not need 2^32 volumes.

If space is really a concern, you can store the file id in your own format. You would need one 4-byte integer for volume id, 8-byte long number for file key, and a 4-byte integer for the file cookie. So 16 bytes are more than enough.
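That 16-byte packing can be sketched in Python (the struct layout and function names are ours, not an official SeaweedFS client API; the parser assumes a fid with the 8-hex-char cookie suffix shown in the example):

```python
import struct

def parse_fid(fid):
    # "3,01637037d6" -> (volume id, file key, file cookie).
    volume, rest = fid.split(",")
    # The last 8 hex chars are the cookie; the rest is the file key.
    key, cookie = rest[:-8], rest[-8:]
    return int(volume), int(key, 16), int(cookie, 16)

def pack_fid(fid):
    # 4-byte volume id + 8-byte file key + 4-byte cookie = 16 bytes.
    return struct.pack(">IQI", *parse_fid(fid))

def unpack_fid(raw):
    volume, key, cookie = struct.unpack(">IQI", raw)
    return "%d,%02x%08x" % (volume, key, cookie)
```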

Read File

Here is an example of how to render the URL.

First look up the volume server's URLs by the file's volumeId:

> curl http://localhost:9333/dir/lookup?volumeId=3
{"volumeId":"3","locations":[{"publicUrl":"localhost:8080","url":"localhost:8080"}]}

Since (usually) there are not too many volume servers, and volumes don't move often, you can cache the results most of the time. Depending on the replication type, one volume can have multiple replica locations. Just randomly pick one location to read.
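That caching strategy can be sketched as a small client-side cache (a hypothetical class of ours; `fetch` stands in for the /dir/lookup call):

```python
import random

class VolumeLocationCache:
    def __init__(self, fetch):
        # fetch(volume_id) -> parsed JSON dict like the /dir/lookup
        # response above.
        self._fetch = fetch
        self._cache = {}

    def locations(self, volume_id):
        # Volumes rarely move, so cached entries stay valid most of the time.
        if volume_id not in self._cache:
            self._cache[volume_id] = self._fetch(volume_id)["locations"]
        return self._cache[volume_id]

    def pick_url(self, volume_id):
        # With replication, any replica will do; pick one at random.
        return random.choice(self.locations(volume_id))["url"]
```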

Now you can take the publicUrl and render the URL, or read directly from the volume server:

 http://localhost:8080/3,01637037d6.jpg

Notice we add a file extension ".jpg" here. It's optional and just one way for the client to specify the file content type.

If you want a nicer URL, you can use one of these alternative URL formats:

 http://localhost:8080/3/01637037d6/my_preferred_name.jpg
 http://localhost:8080/3/01637037d6.jpg
 http://localhost:8080/3,01637037d6.jpg
 http://localhost:8080/3/01637037d6
 http://localhost:8080/3,01637037d6

If you want to get a scaled version of an image, you can add some params:

http://localhost:8080/3/01637037d6.jpg?height=200&width=200
http://localhost:8080/3/01637037d6.jpg?height=200&width=200&mode=fit
http://localhost:8080/3/01637037d6.jpg?height=200&width=200&mode=fill

Rack-Aware and Data Center-Aware Replication

SeaweedFS applies the replication strategy at a volume level. So, when you are getting a file id, you can specify the replication strategy. For example:

curl http://localhost:9333/dir/assign?replication=001

The replication parameter options are:

000: no replication
001: replicate once on the same rack
010: replicate once on a different rack, but same data center
100: replicate once on a different data center
200: replicate twice on two different data centers
110: replicate once on a different rack, and once on a different data center
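Reading the three digits as <copies on other data centers><copies on other racks in the same data center><copies on other servers in the same rack>, the total number of copies is one plus their sum. A tiny sketch (our helper name, not part of SeaweedFS):

```python
def replica_count(code):
    # "xyz": x extra copies on other data centers, y on other racks in
    # the same data center, z on other servers in the same rack.
    dc, rack, server = (int(d) for d in code)
    return 1 + dc + rack + server  # one original copy plus the extras
```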

More details about replication can be found on the wiki.

You can also set the default replication strategy when starting the master server.

Allocate File Key on Specific Data Center

Volume servers can be started with a specific data center name:

 weed volume -dir=/tmp/1 -port=8080 -dataCenter=dc1
 weed volume -dir=/tmp/2 -port=8081 -dataCenter=dc2

When requesting a file key, an optional "dataCenter" parameter can limit the assigned volume to the specific data center. For example, this specifies that the assigned volume should be limited to 'dc1':

 http://localhost:9333/dir/assign?dataCenter=dc1

Other Features

Back to TOC

Object Store Architecture

Usually distributed file systems split each file into chunks; a central master keeps a mapping from filenames to chunk indices and chunk handles, and also tracks which chunks each chunk server has.

The main drawback is that the central master can't handle many small files efficiently, and since all read requests need to go through the chunk master, it might not scale well for many concurrent users.

Instead of managing chunks, SeaweedFS manages data volumes in the master server. Each data volume is 32GB in size, and can hold a lot of files. And each storage node can have many data volumes. So the master node only needs to store the metadata about the volumes, which is a fairly small amount of data and is generally stable.

The actual file metadata is stored in each volume on volume servers. Since each volume server only manages metadata of files on its own disk, with only 16 bytes for each file, all file access can read file metadata just from memory and only needs one disk operation to actually read file data.

For comparison, consider that an xfs inode structure in Linux is 536 bytes.

Master Server and Volume Server

The architecture is fairly simple. The actual data is stored in volumes on storage nodes. One volume server can have multiple volumes, and supports both read and write access with basic authentication.

All volumes are managed by a master server. The master server contains the volume id to volume server mapping. This is fairly static information, and can be easily cached.

On each write request, the master server also generates a file key, which is a growing 64-bit unsigned integer. Since write requests are not generally as frequent as read requests, one master server should be able to handle the concurrency well.

Write and Read files

When a client sends a write request, the master server returns (volume id, file key, file cookie, volume node url) for the file. The client then contacts the volume node and POSTs the file content.

When a client needs to read a file based on (volume id, file key, file cookie), it asks the master server by the volume id for the (volume node url, volume node public url), or retrieves this from a cache. Then the client can GET the content, or just render the URL on web pages and let browsers fetch the content.

Please see the example for details on the write-read process.

Storage Size

In the current implementation, each volume can hold 32 gibibytes (32GiB or 8x2^32 bytes). This is because we align content to 8 bytes. We can easily increase this to 64GiB, or 128GiB, or more, by changing 2 lines of code, at the cost of some wasted padding space due to alignment.

There can be 2^32 volumes, since the volume id is a 32-bit integer. So the total system size is 8 x 2^32 x 2^32 bytes, which is 128 exbibytes (128EiB or 2^67 bytes).
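The arithmetic behind these limits checks out in a few lines:

```python
# 8-byte alignment lets a 32-bit offset address 8 * 2**32 bytes per
# volume, and 32-bit volume ids allow 2**32 volumes.
VOLUME_SIZE = 8 * 2**32              # 32 GiB per volume
MAX_VOLUMES = 2**32
TOTAL = VOLUME_SIZE * MAX_VOLUMES
assert VOLUME_SIZE == 32 * 2**30     # 32 GiB
assert TOTAL == 2**67                # 128 EiB
```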

Each individual file size is limited to the volume size.

Saving memory

All file meta information stored on a volume server is readable from memory without disk access. Each file takes just a 16-byte map entry of <64bit key, 32bit offset, 32bit size>. Of course, each map entry has its own space cost for the map. But usually the disk space runs out before the memory does.
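A back-of-the-envelope estimate from that 16-byte entry (our helper name; it ignores the map's own overhead, which the text notes exists):

```python
def index_memory_bytes(num_files, entry_bytes=16):
    # Memory needed to keep every file's map entry resident.
    return num_files * entry_bytes

# One billion files need only about 16 GB of entry data.
```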

Tiered Storage to the cloud

The local volume servers are much faster, while cloud storages have elastic capacity and are actually more cost-efficient if not accessed often (usually free to upload, but relatively costly to access). With the append-only structure and O(1) access time, SeaweedFS can take advantage of both local and cloud storage by offloading the warm data to the cloud.

Usually hot data are fresh and warm data are old. SeaweedFS puts newly created volumes on local servers, and optionally uploads the older volumes to the cloud. If the older data are accessed less often, this literally gives you unlimited capacity with limited local servers, while staying fast for new data.

With the O(1) access time, the network latency cost is kept at minimum.

If the hot/warm data is split as 20/80, with 20 servers, you can achieve storage capacity of 100 servers. That's a cost saving of 80%! Or you can repurpose the 80 servers to store new data also, and get 5X storage throughput.

Back to TOC

Compared to Other File Systems

Most other distributed file systems seem more complicated than necessary.

SeaweedFS is meant to be fast and simple, in both setup and operation. If you do not understand how it works when you reach here, we've failed! Please raise an issue with any questions or update this file with clarifications.

SeaweedFS is constantly moving forward. Same with other systems. These comparisons can be outdated quickly. Please help to keep them updated.

Back to TOC

Compared to HDFS

HDFS uses the chunk approach for each file, and is ideal for storing large files.

SeaweedFS is ideal for serving relatively smaller files quickly and concurrently.

SeaweedFS can also store extra large files by splitting them into manageable data chunks and storing the file ids of the data chunks in a meta chunk. This is managed by the "weed upload/download" tool, and the weed master and volume servers are agnostic about it.

Back to TOC

Compared to GlusterFS, Ceph

The architectures are mostly the same. SeaweedFS aims to store and read files fast, with a simple and flat architecture. The main differences are:

  • SeaweedFS optimizes for small files, ensuring O(1) disk seek operation, and can also handle large files.
  • SeaweedFS statically assigns a volume id for a file. Locating file content becomes just a lookup of the volume id, which can be easily cached.
  • SeaweedFS Filer metadata store can be any well-known and proven data store, e.g., Redis, Cassandra, HBase, Mongodb, Elastic Search, MySql, Postgres, Sqlite, MemSql, TiDB, CockroachDB, Etcd, etc., and is easy to customize.
  • SeaweedFS Volume server also communicates directly with clients via HTTP, supporting range queries, direct uploads, etc.
| System | File Metadata | File Content Read | POSIX | REST API | Optimized for large number of small files |
| --- | --- | --- | --- | --- | --- |
| SeaweedFS | lookup volume id, cacheable | O(1) disk seek | | Yes | Yes |
| SeaweedFS Filer | Linearly Scalable, Customizable | O(1) disk seek | FUSE | Yes | Yes |
| GlusterFS | hashing | | FUSE, NFS | | |
| Ceph | hashing + rules | | FUSE | Yes | |
| MooseFS | in memory | | FUSE | | No |
| MinIO | separate meta file for each file | | | Yes | No |

Back to TOC

Compared to GlusterFS

GlusterFS stores files, both directories and content, in configurable volumes called "bricks".

GlusterFS hashes the path and filename into ids, assigns them to virtual volumes, and then maps those to "bricks".

Back to TOC

Compared to MooseFS

MooseFS chooses to neglect the small file issue. From the MooseFS 3.0 manual: "even a small file will occupy 64KiB plus additionally 4KiB of checksums and 1KiB for the header", because it "was initially designed for keeping large amounts (like several thousands) of very big files".

MooseFS Master Server keeps all metadata in memory. This is the same issue as the HDFS namenode.

Back to TOC

Compared to Ceph

Ceph can be set up similarly to SeaweedFS as a key->blob store. It is much more complicated, with the need to support layers on top of it. Here is a more detailed comparison.

SeaweedFS has a centralized master group to look up free volumes, while Ceph uses hashing and metadata servers to locate its objects. Having a centralized master makes it easy to code and manage.

Like SeaweedFS, Ceph is also built on top of an object store, RADOS. Ceph is rather complicated, with mixed reviews.

Ceph uses CRUSH hashing to automatically manage data placement, which is efficient for locating data. But the data has to be placed according to the CRUSH algorithm. Any wrong configuration could cause data loss. Topology changes, such as adding new servers to increase capacity, cause data migration with high IO cost to fit the CRUSH algorithm. SeaweedFS places data by assigning it to any writable volume. If writes to one volume fail, just pick another volume to write to. Adding more volumes is also as simple as it can be.

SeaweedFS is optimized for small files. Small files are stored as one continuous block of content, with at most 8 unused bytes between files. Small file access is O(1) disk read.
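The alignment arithmetic can be sketched as rounding each stored entry up to a multiple of 8 (our helper name; a sketch, not the actual storage code):

```python
def padded_size(n):
    # Round n up to the next multiple of 8, so only a handful of bytes
    # sit unused between consecutive files.
    return (n + 7) & ~7
```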

SeaweedFS Filer uses off-the-shelf stores, such as MySql, Postgres, Sqlite, Mongodb, Redis, Elastic Search, Cassandra, HBase, MemSql, TiDB, CockroachDB, Etcd, to manage file directories. These stores are proven, scalable, and easier to manage.

| SeaweedFS | comparable to Ceph | advantage |
| --- | --- | --- |
| Master | MDS | simpler |
| Volume | OSD | optimized for small files |
| Filer | Ceph FS | linearly scalable, Customizable, O(1) or O(logN) |

Back to TOC

Compared to MinIO

MinIO follows AWS S3 closely and is ideal for testing the S3 API. It has a good UI, policies, versioning, etc. SeaweedFS is trying to catch up here. It is also possible to put MinIO as a gateway in front of SeaweedFS later.

MinIO metadata is kept in simple files. Each file write incurs extra writes to the corresponding meta file.

MinIO has no optimization for lots of small files. The files are simply stored as-is on local disks. With the extra meta file and the shards from erasure coding, this only amplifies the LOSF problem.

MinIO has multiple disk IO to read one file. SeaweedFS has O(1) disk reads, even for erasure coded files.

MinIO has full-time erasure coding. SeaweedFS uses replication on hot data for faster speed and optionally applies erasure coding on warm data.

MinIO does not have POSIX-like API support.

MinIO has specific requirements on storage layout and is not flexible for adjusting capacity. In SeaweedFS, just start one volume server pointing to the master. That's all.

Dev Plan

  • More tools and documentation, on how to manage and scale the system.
  • Read and write stream data.
  • Support structured data.

This is a super exciting project! And we need helpers and support!

Back to TOC

Installation Guide

Installation guide for users who are not familiar with golang

Step 1: install Go on your machine and set up the environment by following the instructions at:

https://golang.org/doc/install

Make sure you set up your $GOPATH.

Step 2: checkout this repo:

git clone https://github.com/chrislusf/seaweedfs.git

Step 3: download, compile, and install the project by executing the following command:

make install

Once this is done, you will find the executable "weed" in your $GOPATH/bin directory.

Back to TOC

Disk Related Topics

Hard Drive Performance

When testing read performance on SeaweedFS, it basically becomes a performance test of your hard drive's random read speed. Hard drives usually get 100MB/s~200MB/s.

Solid State Disk

To modify or delete small files, an SSD must erase a whole block at a time and move content in existing blocks to a new block. SSDs are fast when brand new, but get fragmented over time, forcing garbage collection and block compaction. SeaweedFS is friendly to SSDs since it is append-only. Deletion and compaction are done at the volume level in the background, neither slowing reads nor causing fragmentation.

Back to TOC

Benchmark

My own unscientific single-machine results on a MacBook with a solid state disk, CPU: 1 Intel Core i7 2.6GHz.

Write 1 million 1KB files:

Concurrency Level:      16
Time taken for tests:   66.753 seconds
Complete requests:      1048576
Failed requests:        0
Total transferred:      1106789009 bytes
Requests per second:    15708.23 [#/sec]
Transfer rate:          16191.69 [Kbytes/sec]

Connection Times (ms)
              min      avg        max      std
Total:        0.3      1.0       84.3      0.9

Percentage of the requests served within a certain time (ms)
   50%      0.8 ms
   66%      1.0 ms
   75%      1.1 ms
   80%      1.2 ms
   90%      1.4 ms
   95%      1.7 ms
   98%      2.1 ms
   99%      2.6 ms
  100%     84.3 ms

Randomly read 1 million files:

Concurrency Level:      16
Time taken for tests:   22.301 seconds
Complete requests:      1048576
Failed requests:        0
Total transferred:      1106812873 bytes
Requests per second:    47019.38 [#/sec]
Transfer rate:          48467.57 [Kbytes/sec]

Connection Times (ms)
              min      avg        max      std
Total:        0.0      0.3       54.1      0.2

Percentage of the requests served within a certain time (ms)
   50%      0.3 ms
   90%      0.4 ms
   98%      0.6 ms
   99%      0.7 ms
  100%     54.1 ms

Back to TOC

License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

The text of this page is available for modification and reuse under the terms of the Creative Commons Attribution-Sharealike 3.0 Unported License and the GNU Free Documentation License (unversioned, with no invariant sections, front-cover texts, or back-cover texts).

Back to TOC

Stargazers over time


    2987.108 Quark/2.4.1.985 Mobile Safari/537.36
    I0223 06:48:32 12067 volume_server_handlers_read.go:75] request /660/31c4f2ae71121f.jpg with unmaching cookie seen: 2926645791 expected: 269826002 from 12
    7.0.0.1:45390 agent Mozilla/5.0 (Linux; U; Android 7.0; zh-Hans-CN; SM-G9500 Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.
    2987.108 Quark/2.4.1.985 Mobile Safari/537.36
    I0223 06:48:41 12067 volume_server_handlers_read.go:75] request /659/334073b80964a9.jpg with unmaching cookie seen: 3087623337 expected: 387060166 from 12
    7.0.0.1:45390 agent Mozilla/5.0 (Linux; U; Android 7.0; zh-Hans-CN; SM-G9500 Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.
    2987.108 Quark/2.4.1.985 Mobile Safari/537.36
    I0223 06:48:41 12067 volume_server_handlers_read.go:75] request /660/31c4f2ae71121f.jpg with unmaching cookie seen: 2926645791 expected: 269826002 from 12
    7.0.0.1:45390 agent Mozilla/5.0 (Linux; U; Android 7.0; zh-Hans-CN; SM-G9500 Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.
    2987.108 Quark/2.4.1.985 Mobile Safari/537.36
    I0223 06:49:16 12067 volume_server_handlers_read.go:69] read error: Not Found /661,33289a3ecd5edd_1
    I0223 06:49:16 12067 volume_server_handlers_read.go:75] request /661,33289a3ecd5edd with unmaching cookie seen: 1053646557 expected: 1674074305 from 192.1
    68.254.8:40632 agent Apache-HttpClient/4.5.3 (Java/1.8.0_131)
    I0223 06:49:16 12067 volume_server_handlers_read.go:75] request /661,33289a3ecd5edd with unmaching cookie seen: 1053646557 expected: 1674074305 from 127.0
    .0.1:45652 agent Dalvik/2.1.0 (Linux; U; Android 8.0.0; MHA-AL00 Build/HUAWEIMHA-AL00)
    I0223 06:49:23 12067 volume_server_handlers_read.go:69] read error: Not Found /661,33289a3ecd5edd_1
    I0223 06:49:23 12067 volume_server_handlers_read.go:75] request /661,33289a3ecd5edd with unmaching cookie seen: 1053646557 expected: 1674074305 from 192.1
    68.254.8:40642 agent Apache-HttpClient/4.5.3 (Java/1.8.0_131)
    I0223 06:49:23 12067 volume_server_handlers_read.go:75] request /661,33289a3ecd5edd with unmaching cookie seen: 1053646557 expected: 1674074305 from 127.0
    .0.1:45652 agent Dalvik/2.1.0 (Linux; U; Android 8.0.0; MHA-AL00 Build/HUAWEIMHA-AL00)
    I0223 06:49:36 12067 volume_server_handlers_read.go:75] request /659/334073b80964a9.jpg with unmaching cookie seen: 3087623337 expected: 387060166 from 12
    7.0.0.1:45706 agent Mozilla/5.0 (Linux; U; Android 7.0; zh-Hans-CN; SM-G9500 Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.
    2987.108 Quark/2.4.1.985 Mobile Safari/537.36
    I0223 06:49:36 12067 volume_server_handlers_read.go:75] request /660/31c4f2ae71121f.jpg with unmaching cookie seen: 2926645791 expected: 269826002 from 12
    7.0.0.1:45706 agent Mozilla/5.0 (Linux; U; Android 7.0; zh-Hans-CN; SM-G9500 Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.
    2987.108 Quark/2.4.1.985 Mobile Safari/537.36
    I0223 06:49:43 12067 volume_server_handlers_read.go:75] request /659/2af2e996acd9ce.jpg with unmaching cookie seen: 2527910350 expected: 810789430 from 12
    7.0.0.1:45706 agent Mozilla/5.0 (Linux; U; Android 7.0; zh-cn; MI 5s Plus Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/53.0.278
    5.146 Mobile Safari/537.36 XiaoMi/MiuiBrowser/9.4.11
    I0223 06:49:51 12067 volume_server_handlers_read.go:69] read error: Not Found /661,33289a3ecd5edd_1
    I0223 06:49:51 12067 volume_server_handlers_read.go:75] request /661,33289a3ecd5edd with unmaching cookie seen: 1053646557 expected: 1674074305 from 192.1
    68.254.8:40772 agent Apache-HttpClient/4.5.3 (Java/1.8.0_131)
    I0223 06:49:51 12067 volume_server_handlers_read.go:75] request /661,33289a3ecd5edd with unmaching cookie seen: 1053646557 expected: 1674074305 from 127.0
    .0.1:45706 agent Dalvik/2.1.0 (Linux; U; Android 8.0.0; MHA-AL00 Build/HUAWEIMHA-AL00)
    I0223 06:49:52 12067 volume_server_handlers_read.go:69] read error: Not Found /661,33289a3ecd5edd_1
    I0223 06:49:52 12067 volume_server_handlers_read.go:75] request /661,33289a3ecd5edd with unmaching cookie seen: 1053646557 expected: 1674074305 from 192.1
    68.254.8:40776 agent Apache-HttpClient/4.5.3 (Java/1.8.0_131)
    I0223 06:49:52 12067 volume_server_handlers_read.go:75] request /661,33289a3ecd5edd with unmaching cookie seen: 1053646557 expected: 1674074305 from 127.0
    .0.1:45706 agent Dalvik/2.1.0 (Linux; U; Android 8.0.0; MHA-AL00 Build/HUAWEIMHA-AL00)
    I0223 06:50:16 12067 volume_server_handlers_read.go:75] request /655/2f47f8dc2842e1.jpg with unmaching cookie seen: 3693626081 expected: 590474922 from 12
    7.0.0.1:45834 agent Dalvik/2.1.0 (Linux; U; Android 7.1.2; M6 Note Build/N2G47H)
    I0223 06:50:21 12067 volume_server_handlers_read.go:69] read error: Not Found /655,29509f2e850bb0_1
    I0223 06:50:21 12067 volume_server_handlers_read.go:75] request /655,29509f2e850bb0 with unmaching cookie seen: 780471216 expected: 773671554 from 192.168
    [[email protected] seaweedfs]# 
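    The "unmaching cookie" lines above compare a 32-bit cookie embedded in the requested file id against the one stored with the needle; a mismatched cookie means the client is using a stale or fabricated fid. As a rough illustration (a sketch, not the actual SeaweedFS parser), the last 8 hex characters of the fid's hex part are the cookie:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseFid splits a SeaweedFS file id of the form "volumeId,<key><cookie>",
// where the trailing 8 hex characters encode the 32-bit cookie. Illustrative
// only; the real parsing lives inside SeaweedFS's needle package.
func parseFid(fid string) (volumeID uint64, key uint64, cookie uint32, err error) {
	parts := strings.SplitN(fid, ",", 2)
	if len(parts) != 2 || len(parts[1]) <= 8 {
		return 0, 0, 0, fmt.Errorf("malformed fid: %q", fid)
	}
	if volumeID, err = strconv.ParseUint(parts[0], 10, 64); err != nil {
		return
	}
	hexPart := parts[1]
	if key, err = strconv.ParseUint(hexPart[:len(hexPart)-8], 16, 64); err != nil {
		return
	}
	c, err := strconv.ParseUint(hexPart[len(hexPart)-8:], 16, 32)
	cookie = uint32(c)
	return
}

func main() {
	v, k, c, _ := parseFid("661,33289a3ecd5edd")
	// The cookie value matches the "seen: 1053646557" from the log above.
	fmt.Println(v, k, c) // 661 3352730 1053646557
}
```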
    
    
    opened by yourchanges 68
  • Possible performance problems on some platforms

    Possible performance problems on some platforms

    As I said, we are developing a .NET client for SeaweedFS, but we see pretty low performance on our test deployment, so we ran some tests.

    We tried several test environments, but the results were mostly the same. We tried launching everything on localhost with:

    .\weed.exe master -mdir="D:\SeaweedFs\Master"
    .\weed.exe volume -mserver="localhost:9333" -dir="D:\SeaweedFs\VData2" -port=9101
    .\weed.exe volume -mserver="localhost:9333" -dir="D:\SeaweedFs\VData2" -port=9102
    .\weed.exe volume -mserver="localhost:9333" -dir="D:\SeaweedFs\VData2" -port=9103

    We also tried a similar setup on 3 Windows computers (one master plus volume, two volume-only), and on 7 Linux computers (one master plus volume, the rest volume-only).

    Then we benchmarked it.

    The results are

    Concurrency Level:      16
    Time taken for tests:   486.478 seconds
    Complete requests:      1048576
    Failed requests:        0
    Total transferred:      1106793636 bytes
    Requests per second:    2155.44 [#/sec]
    Transfer rate:          2221.79 [Kbytes/sec]

    Connection Times (ms)
                min   avg   max    std
    Total:      1.4   7.4   94.5   2.7

    Percentage of the requests served within a certain time (ms)
       50%    6.9 ms
       66%    8.0 ms
       75%    8.5 ms
       80%    9.0 ms
       90%   11.0 ms
       95%   12.5 ms
       98%   14.5 ms
       99%   16.0 ms
      100%   94.5 ms

    And the readtest:

    Concurrency Level:      16
    Time taken for tests:   237.319 seconds
    Complete requests:      1048576
    Failed requests:        0
    Total transferred:      1106781421 bytes
    Requests per second:    4418.42 [#/sec]
    Transfer rate:          4554.38 [Kbytes/sec]
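    As a quick sanity check on the figures above, requests per second is simply complete requests divided by elapsed time (plain arithmetic, no SeaweedFS API involved):

```go
package main

import "fmt"

// rps computes requests per second from a request count and elapsed time.
func rps(requests, seconds float64) float64 { return requests / seconds }

func main() {
	write := rps(1048576, 486.478) // matches the reported 2155.44 #/sec
	read := rps(1048576, 237.319)  // matches the reported 4418.42 #/sec
	fmt.Printf("%.2f %.2f\n", write, read) // 2155.44 4418.42
}
```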

    We have a major problem on the 7-Linux-machine setup: the write speed was ~320 items/sec and reads were almost ~700 items/sec. These are 2-core machines, but we did some monitoring and the CPU is idling, the HDD is idling, RAM is idling, and the network is idling... there is no reason for such slow performance, yet it is that slow. This does not correspond to the benchmark presented here for a single computer (not even close), so we must be doing something wrong.

    We tried plenty of values for the -c and -size benchmark parameters, and we also experimented with -max on the volumes, but the results are almost the same.

    We also noticed that during the benchmark pretty much every volume except one does absolutely nothing, which, if I understand correctly, is bad, because volume servers should be targeted at random. We are using the latest version, 0.70. We can reproduce these results almost every time. Is there some kind of utility to see what is keeping SeaweedFS from going... faster?

    opened by LightCZ 49
  • Slow performance in replication mode 010 while executing volume.fix.replication

    Slow performance in replication mode 010 while executing volume.fix.replication

    Hello. We have a cluster in replication mode 010 (before that there was one rack with 000, which we decided to expand). At the moment there are about 150,000 volumes, with a volume size limit of 1024 MB. We are now going through the volume.fix.replication stage to replicate the data from the first rack to the second. We have huge problems with the speed of data access, especially during replication. Also, under a large data flow on the second rack, errors occur: requests.exceptions.ReadTimeout: HTTPConnectionPool(host='node2.example.com', port=11009): Read timed out

    Our cluster: 3 master nodes, 2 racks of 18 RAID arrays (12 terabytes each), each 70% full on the first rack.

    The masters are launched with the command:

    weed master -defaultReplication="010" -port="9333" -peers=node1.example.com:9333,node2.example.com:9333,node3.example.com:9333 -volumeSizeLimitMB="1024" -ip="node2.example.com" -metrics.address="node2.example.com:9091"

    The volumes are launched with the command:

    weed volume -dataCenter="netrack" -rack="node2" -port="11001" -port.public="11001" -mserver="node1.example.com:9333,node2.example.com:9333,node3.example.com:9333" -ip="node2.example.com" -max=10000 -dir=/mnt/disk1/swfs

    PS: At the moment, about 40% are replicated and it is impossible to work further

    opened by vankosa 47
  • Lots of "volume_server_handlers.go:75] read error: <nil> /5,1001e1b02c1b01" errors in our production server log files

    Lots of "volume_server_handlers.go:75] read error: /5,1001e1b02c1b01" errors in our production server log files

    Here is the deploy structure:

    10.252.130.159:9333(master)         10.252.130.159:5088(haproxy)
                                       /                                      \ 
                        10.252.133.22:5083 (volume1)               10.252.135.207:5084(volume2)  
    
    Replication strategy: 001
    

    So, it always gets the content via haproxy from the volume server.

    But after running for a few days, we sometimes get a 404 error from 10.252.130.159:5088 for a specific fid such as 5,1001e1b02c1b01.

    We did a check and found that one of the volume servers (a different fid on a random, different server each time) keeps outputting the following logs:

    I0303 15:44:23 21828 volume_server_handlers.go:75] read error: <nil> /5,1001e1b02c1b01
    I0303 15:45:14 21828 volume_server_handlers.go:75] read error: <nil> /5,1001e1b02c1b01
    I0303 15:52:52 21828 volume_server_handlers.go:75] read error: <nil> /5,1001e1b02c1b01
    I0303 15:52:52 21828 volume_server_handlers.go:75] read error: <nil> /5,1001e1b02c1b01
    I0303 15:53:05 21828 volume_server_handlers.go:75] read error: <nil> /5,1001e1b02c1b01
    I0303 15:53:06 21828 volume_server_handlers.go:75] read error: <nil> /5,1001e1b02c1b01
    I0303 15:53:07 21828 volume_server_handlers.go:75] read error: <nil> /5,1001e1b02c1b01
    I0303 15:53:07 21828 volume_server_handlers.go:75] read error: <nil> /5,1001e1b02c1b01
    I0303 15:53:18 21828 volume_server_handlers.go:75] read error: <nil> /5,1001e1b02c1b01
    

    Does this mean that the file for fid 5,1001e1b02c1b01 has been damaged?

    How can this be fixed? How could it have happened? Any suggestions?
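    One way to narrow this down is to ask the master which servers currently hold the volume and query each replica directly, bypassing haproxy. A minimal sketch of building the lookup request (the /dir/lookup?volumeId= endpoint is the same one visible in the master logs elsewhere on this page; issuing the HTTP call is left out):

```go
package main

import (
	"fmt"
	"strings"
)

// lookupURL builds the master's /dir/lookup query for the volume holding a
// given fid. The response lists the volume servers hosting that volume, which
// helps distinguish a stale replica returning 404 from a truly missing needle.
func lookupURL(master, fid string) string {
	volumeID := strings.SplitN(fid, ",", 2)[0]
	return fmt.Sprintf("http://%s/dir/lookup?volumeId=%s", master, volumeID)
}

func main() {
	fmt.Println(lookupURL("10.252.130.159:9333", "5,1001e1b02c1b01"))
	// http://10.252.130.159:9333/dir/lookup?volumeId=5
}
```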

    opened by yourchanges 45
  • Corrupted Files in Cluster

    Corrupted Files in Cluster

    Describe the bug Files copied to FUSE filer mount are being corrupted.

    System Setup

    • OS version Debian GNU/Linux 11 (bullseye)
    • output of weed version version 8000GB 3.11 d4ef06cdcf320f8b8b17279586e0738894869eff linux amd64
    [filer.options]
    # with http DELETE, by default the filer would check whether a folder is empty.
    # recursive_delete will delete all sub folders and files, similar to "rm -Rf"
    recursive_delete = true
    
    [leveldb3]
    # similar to leveldb2.
    # each bucket has its own meta store.
    enabled = true
    dir = "/opt/seaweedfs/REDACTED/filer/filerldb3"                    # directory to store level db files
    

    Expected behavior Files copied via FUSE mount should match md5sum like any other mount.

    Additional context fstab entry:

    fuse /mnt/swfs/localshared fuse.weed filer='REDACTED:8889,REDACTED:8889,REDACTED:8889',filer.path=/,nofail 0 0
    
    root:~# md5sum /var/lib/pve/local-btrfs/images/526/vm-526-disk-0/disk.raw 
    fba3db35d8d631a8014bbf89dbb3e9ca  /var/lib/pve/local-btrfs/images/526/vm-526-disk-0/disk.raw
    root:~# cp /var/lib/pve/local-btrfs/images/526/vm-526-disk-0/disk.raw /mnt/swfs/localshared/
    root:~# md5sum /mnt/swfs/localshared/disk.raw 
    f8555d7c8bc4e0081dde769255ddc815  /mnt/swfs/localshared/disk.raw
    
    opened by ProjectInitiative 40
  • Hot, warm, cold storage [feature]

    Hot, warm, cold storage [feature]

    I'm currently running a setup with 3 dedicated masters/filers and 4 volume servers with a 10 TB HDD each. Our app writes data to daily buckets. When the workload is mostly write-only, everything performs quite well, but when random reads kick in, performance seems to suffer a lot. I'm currently investigating which side is struggling, our app or the storage. But I want to clear this up for myself: is it possible to organize hot, warm, and cold tiers inside one cluster?

    I mean, create new buckets on hot storage, for example NVMe SSD-based volume servers, and later move them, with a single call, to less frequently accessed HDD-based volume servers (warm). I've read about cloud tier uploads, so that would cover the cold phase, I guess. But what about the hot-to-warm transition?

    Or maybe I'm missing something and I have just misconfigured something so I can really speed up my cluster without any extra abstractions?

    opened by divanikus 40
  • brain split happened when network interrupts between dc

    brain split happened when network interrupts between dc

    weed version: 1.15

    1. weed master: deployed across 3 DCs, with 3 nodes in dc1, 2 in dc2, and 2 in dc3.
    2. Volume servers: 3 nodes in dc1, 3 in dc2, and 3 in dc3.
    3. When the network between dc3 and dc2 was interrupted, and the link between dc3 and dc1 was also down, while the network between dc1 and dc2 stayed up, dc3 became an isolated network island.
    4. The original leader was in dc1, but when the network issue happened, a second leader was elected in dc3, and dc3's volume servers connected to that new leader, forming two clusters: dc1 plus dc2 as one cluster, dc3 as another.
    5. Even after the network issue between dc3 and dc1/dc2 was resolved, there were still two leaders in the whole cluster until dc3's leader was restarted.
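    Raft's safety argument rests on majority quorum, which is worth spelling out for the 3+2+2 master layout described above. A small sketch of the arithmetic (illustrative, not SeaweedFS code):

```go
package main

import "fmt"

// quorum returns the majority size a Raft cluster of n voters needs to elect
// a leader. With 7 masters (3 in dc1, 2 in dc2, 2 in dc3), an isolated dc3
// holds only 2 of 7 votes and should never reach quorum, so a second leader
// appearing there points to a bug or a misconfigured peer list.
func quorum(n int) int { return n/2 + 1 }

func main() {
	fmt.Println(quorum(7)) // 4: dc1+dc2 together (5 votes) can elect, dc3 alone (2 votes) cannot
}
```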
    opened by accumulatepig 34
  • [bug:filer] Continues to stick not to the leader raft.Server: Not current leader (critical)

    [bug:filer] Continues to stick not to the leader raft.Server: Not current leader (critical)

    Describe the bug After shutting down one volume server, only one filer lost the leader.

    Jan 13, 2022 @ 09:10:35.681 | I0113 04:10:35     1 common.go:69] response method:PUT URL:/buckets/reports/report_631214642010_e674c402-c87e-4443-b942-d39d6417225c.pdf with httpStatus:500 and JSON:{"error":"rpc error: code = Unknown desc = raft.Server: Not current leader"}
    -- | --
    
      | Jan 13, 2022 @ 09:10:35.681 | E0113 04:10:35     1 s3api_object_handlers.go:421] upload to filer error: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 09:10:35.681 | E0113 04:10:34     1 filer_server_handlers_write.go:43] failing to assign a file id: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 09:10:35.681 | E0113 04:10:35     1 filer_server_handlers_write.go:43] failing to assign a file id: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 09:10:35.681 | E0113 04:10:35     1 filer_server_handlers_write_upload.go:172] upload error: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 09:10:34.680 | E0113 04:10:34     1 filer_server_handlers_write_upload.go:172] upload error: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 09:10:34.680 | I0113 04:10:34     1 filer_notify.go:103] log write failed /topics/.system/log/2022-01-13/00-01.cec7f54d: AssignVolume: rpc error: code = Unknown desc = raft.Server: Not current leader
    

    System Setup weed version

    version 30GB 2.85 ea8e4ec2 linux amd64
    

    Additional context

    logs

    
    Jan 13, 2022 @ 05:02:29.653 | E0113 00:01:24     1 filer_grpc_server_sub_meta.go:133] processed to 2022-01-13 00:01:23.092106505 +0000 UTC: rpc error: code = Unavailable desc = transport is closing
    -- | --
    
      | Jan 13, 2022 @ 05:02:29.653 | I0113 00:01:23     1 filer_grpc_server_sub_meta.go:226] => client filer:10.106.65.20:[email protected]:47818: rpc error: code = Unavailable desc = transport is closing
    
      | Jan 13, 2022 @ 05:02:29.653 | I0113 00:02:18     1 filer_notify.go:103] log write failed /topics/.system/log/2022-01-13/00-01.cec7f54d: AssignVolume: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 05:02:29.653 | I0113 00:01:23     1 filer_grpc_server_sub_meta.go:226] => client filer:10.106.65.121:[email protected]:53630: rpc error: code = Unavailable desc = transport is closing
    
      | Jan 13, 2022 @ 05:02:29.653 | E0113 00:01:24     1 filer_grpc_server_sub_meta.go:133] processed to 2022-01-13 00:01:23.092106505 +0000 UTC: rpc error: code = Unavailable desc = transport is closing
    
      | Jan 13, 2022 @ 05:02:29.653 | I0113 00:01:24     1 filer_grpc_server_sub_meta.go:255] - listener filer:10.106.65.121:[email protected]:53630
    
      | Jan 13, 2022 @ 05:02:29.653 | I0113 00:01:24     1 filer_grpc_server_sub_meta.go:255] - listener filer:10.106.65.20:[email protected]:47818
    
      | Jan 13, 2022 @ 05:02:30.493 | I0113 00:02:19     1 filer_notify.go:103] log write failed /topics/.system/log/2022-01-13/00-01.cec7f54d: AssignVolume: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 05:02:30.493 | I0113 00:02:19     1 filer_notify.go:103] log write failed /topics/.system/log/2022-01-13/00-01.cec7f54d: AssignVolume: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 05:02:30.494 | I0113 00:02:22     1 filer_notify.go:103] log write failed /topics/.system/log/2022-01-13/00-01.cec7f54d: AssignVolume: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 05:02:30.494 | I0113 00:02:21     1 filer_notify.go:103] log write failed /topics/.system/log/2022-01-13/00-01.cec7f54d: AssignVolume: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 05:02:30.494 | I0113 00:02:23     1 filer_notify.go:103] log write failed /topics/.system/log/2022-01-13/00-01.cec7f54d: AssignVolume: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 05:02:30.494 | I0113 00:02:20     1 filer_notify.go:103] log write failed /topics/.system/log/2022-01-13/00-01.cec7f54d: AssignVolume: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 05:02:30.494 | I0113 00:02:22     1 filer_notify.go:103] log write failed /topics/.system/log/2022-01-13/00-01.cec7f54d: AssignVolume: rpc error: code = Unknown desc = raft.Server: Not current leader
    
      | Jan 13, 2022 @ 05:02:30.494 | I0113 00:02:24     1 filer_notify.go:103] log write failed /topics/.system/log/2022-01-13/00-01.cec7f54d: AssignVolume: rpc error: code = Unknown desc = raft.Server: Not current leader
    
    
    opened by kmlebedev 32
  • Filer hangs or restarts on deleting large buckets

    Filer hangs or restarts on deleting large buckets

    Describe the bug I'm running single master, single filer setup with s3 gateway. Filer's store is leveldb2.

    I store lots of small files (up to 1 MB) in separate per-day buckets. Files are stored in nested directories (not one dir for all). It works pretty well, until I try to drop a bucket, either through the API or via bucket.delete. The filer might just hang, with other components spitting rpc error: code = Unavailable desc = transport is closing, or it simply restarts. Deleting collections is breezily fast, though. Am I missing something, so that I can painlessly delete a whole bucket in one operation? Or should I move to another filer store?

    System Setup

    • Debian 10
    • version 30GB 2.16 6912bf9 linux amd64
    opened by divanikus 30
  • [Emergency] Cluster failed after upgrading weedfs to version 2.62

    [Emergency] Cluster failed after upgrading weedfs to version 2.62

    Hi, I upgraded the masters and volume servers from v2.40 8000G to v2.62 8000G, but the cluster doesn't work. Logs on the masters:

    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 1180 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 308 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 1182 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 309 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 1179 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 1181 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 307 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 1177 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 220 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 312 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 310 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 311 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 219 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 222 becomes crowded
    Aug 10 15:06:27 weed-master-1 seaweedfs-master[1145]: I0810 15:06:27  1145 volume_layout.go:425] Volume 1178 becomes crowded
    Aug 10 15:06:33 weed-master-1 snapd[1002]: stateengine.go:150: state ensure error: Get https://api.snapcraft.io/api/v1/snaps/sections: dial tcp: lookup api.snapcraft.io on 172.30.100.3:53: server misbehaving
    

    Other master:

    Aug 10 15:06:20 weed-master-2 seaweedfs-master[1154]: I0810 15:06:20  1154 common.go:50] response method:GET URL:/dir/lookup?volumeId=370 with httpStatus:404 and JSON:{"volumeId":"370","error":"volume id 370 not found"}
    Aug 10 15:06:20 weed-master-2 seaweedfs-master[1154]: I0810 15:06:20  1154 common.go:50] response method:GET URL:/dir/lookup?volumeId=1067 with httpStatus:404 and JSON:{"volumeId":"1067","error":"volume id 1067 not found"}
    Aug 10 15:06:20 weed-master-2 seaweedfs-master[1154]: I0810 15:06:20  1154 common.go:50] response method:GET URL:/dir/lookup?volumeId=1234 with httpStatus:404 and JSON:{"volumeId":"1234","error":"volume id 1234 not found"}
    Aug 10 15:06:20 weed-master-2 seaweedfs-master[1154]: I0810 15:06:20  1154 common.go:50] response method:GET URL:/dir/lookup?volumeId=1236 with httpStatus:404 and JSON:{"volumeId":"1236","error":"volume id 1236 not found"}
    Aug 10 15:06:20 weed-master-2 seaweedfs-master[1154]: I0810 15:06:20  1154 common.go:50] response method:GET URL:/dir/lookup?volumeId=609 with httpStatus:404 and JSON:{"volumeId":"609","error":"volume id 609 not found"}
    Aug 10 15:06:20 weed-master-2 seaweedfs-master[1154]: I0810 15:06:20  1154 common.go:50] response method:GET URL:/dir/lookup?volumeId=24 with httpStatus:404 and JSON:{"volumeId":"24","error":"volume id 24 not found"}
    Aug 10 15:06:20 weed-master-2 seaweedfs-master[1154]: I0810 15:06:20  1154 common.go:50] response method:GET URL:/dir/lookup?volumeId=83 with httpStatus:404 and JSON:{"volumeId":"83","error":"volume id 83 not found"}
    Aug 10 15:06:20 weed-master-2 seaweedfs-master[1154]: I0810 15:06:20  1154 common.go:50] response method:GET URL:/dir/lookup?volumeId=320 with httpStatus:404 and JSON:{"volumeId":"320","error":"volume id 320 not found"}
    Aug 10 15:06:20 weed-master-2 seaweedfs-master[1154]: I0810 15:06:20  1154 common.go:50] response method:GET URL:/dir/lookup?volumeId=1356 with httpStatus:404 and JSON:{"volumeId":"1356","error":"volume id 1356 not found"}
    Aug 10 15:06:20 weed-master-2 seaweedfs-master[1154]: I0810 15:06:20  1154 common.go:50] response method:GET URL:/dir/lookup?volumeId=1352 with httpStatus:404 and JSON:{"volumeId":"1352","error":"volume id 1352 not found"}
    
    

    Master03:

    Aug 10 15:06:18 weed-master-3 seaweedfs-master[1149]: I0810 15:06:18  1149 volume_layout.go:376] Volume 210 has 0 replica, less than required 2
    Aug 10 15:06:18 weed-master-3 seaweedfs-master[1149]: I0810 15:06:18  1149 topology_event_handling.go:79] Removing Volume 387 from the dead volume server weed-volume-012:8086
    Aug 10 15:06:18 weed-master-3 seaweedfs-master[1149]: I0810 15:06:18  1149 volume_layout.go:376] Volume 387 has 0 replica, less than required 2
    Aug 10 15:06:18 weed-master-3 seaweedfs-master[1149]: I0810 15:06:18  1149 topology_event_handling.go:79] Removing Volume 537 from the dead volume server weed-volume-012:8086
    Aug 10 15:06:18 weed-master-3 seaweedfs-master[1149]: I0810 15:06:18  1149 volume_layout.go:376] Volume 537 has 0 replica, less than required 2
    Aug 10 15:06:18 weed-master-3 seaweedfs-master[1149]: I0810 15:06:18  1149 topology_event_handling.go:79] Removing Volume 1362 from the dead volume server weed-volume-012:8086
    Aug 10 15:06:18 weed-master-3 seaweedfs-master[1149]: I0810 15:06:18  1149 volume_layout.go:376] Volume 1362 has 0 replica, less than required 2
    Aug 10 15:06:18 weed-master-3 seaweedfs-master[1149]: I0810 15:06:18  1149 topology_event_handling.go:79] Removing Volume 759 from the dead volume server weed-volume-012:8086
    Aug 10 15:06:18 weed-master-3 seaweedfs-master[1149]: I0810 15:06:18  1149 volume_layout.go:376] Volume 759 has 1 replica, less than required 2
    Aug 10 15:06:18 weed-master-3 seaweedfs-master[1149]: I0810 15:06:18  1149 topology_event_handling.go:79] Removing Volume 801 from the dead volume server weed-volume-012:8086
    
    
    opened by hamidreza-hosseini 29
  • Migration problem

    Migration problem

    We have 2 servers involved in a migration; the migration is failing and we need guidance. The end goal is to migrate the data stored in MySQL into Redis. We followed https://github.com/chrislusf/seaweedfs/wiki/Async-Replication-to-another-Filer#replicate-existing-files

    The configuration is as follows:

    Host            Database
    192.168.20.51   mysql
    192.168.20.55   redis

    On 192.168.20.51 we started one master, three volume servers, and one filer:

    nohup /home/seaweedfs/weed master -mdir=/home/seaweedfs/data/master -port=9333 -ip 192.168.20.51 -defaultReplication=010 &
    /home/seaweedfs/weed volume -dir=/home/seaweedfs/volume/volume1 -max=100 -mserver=192.168.20.51:9333 -port=9001 -ip=192.168.20.51 -dataCenter=dc1 -rack=rack1 & >> /home/seaweedfs/logs/vol1.log &
    /home/seaweedfs/weed volume -dir=/home/seaweedfs/volume/volume2 -max=100 -mserver=192.168.20.51:9333 -port=9002 -ip=192.168.20.51 -dataCenter=dc1 -rack=rack2 & >> /home/seaweedfs/logs/vol2.log &
    /home/seaweedfs/weed volume -dir=/home/seaweedfs/volume/volume3 -max=100 -mserver=192.168.20.51:9333 -port=9003 -ip=192.168.20.51 -dataCenter=dc1 -rack=rack3 & >> /home/seaweedfs/logs/vol3.log &
    /home/seaweedfs/weed filer -master=192.168.20.51:9333 -ip=192.168.20.51 & >> /home/seaweedfs/logs/filer.log &

    In /etc/seaweedfs, filer.toml reads:

    [mysql]
    enabled = true
    hostname = "192.168.20.51"
    port = 3306
    username = "root"
    password = "123456"
    database = "filer"          # create or use an existing database
    connection_max_idle = 2
    connection_max_open = 100

    notification.toml:

    [notification.kafka]
    enabled = true
    hosts = [ "localhost:9092" ]
    topic = "seaweedfs_filer"

    replication.toml:

    [source.filer]
    enabled = true
    grpcAddress = "192.168.20.51:18888"
    directory = "/"

    [sink.filer]
    enabled = true
    grpcAddress = "192.168.20.55:18888"
    directory = "/"
    replication = ""
    collection = ""
    ttlSec = 0

    On 192.168.20.55 we started one master, three volume servers, and one filer:

    nohup /home/seaweedfs/weed master -mdir=/home/seaweedfs/data/master -port=9333 -ip 192.168.20.55 -defaultReplication=010 &
    /home/seaweedfs/weed volume -dir=/home/seaweedfs/volume/volume1 -max=100 -mserver=192.168.20.55:9333 -port=9001 -ip=192.168.20.55 -dataCenter=dc1 -rack=rack1 & >> /home/seaweedfs/logs/vol1.log &
    /home/seaweedfs/weed volume -dir=/home/seaweedfs/volume/volume2 -max=100 -mserver=192.168.20.55:9333 -port=9002 -ip=192.168.20.55 -dataCenter=dc1 -rack=rack2 & >> /home/seaweedfs/logs/vol2.log &
    /home/seaweedfs/weed volume -dir=/home/seaweedfs/volume/volume3 -max=100 -mserver=192.168.20.55:9333 -port=9003 -ip=192.168.20.55 -dataCenter=dc1 -rack=rack3 & >> /home/seaweedfs/logs/vol3.log &
    /home/seaweedfs/weed filer -master=192.168.20.55:9333 -ip=192.168.20.55 & >> /home/seaweedfs/logs/filer.log &

    In /etc/seaweedfs, filer.toml reads:

    [redis]
    enabled = true
    address = "localhost:6379"
    password = ""
    db = 0

    Then, on 192.168.20.51, we 1) started Kafka, 2) started weed filer.replicate, and 3) started weed filer, all successfully. Finally we ran echo 'fs.meta.notify' | weed shell, and every time we get the following error:

    [screenshot of the error]

    The end result: [screenshots]

    The directories were all replicated over, so why wasn't the data inside the directories copied, and why wasn't the data in the volumes migrated either?

    When we re-uploaded the data, the two sides synced very quickly. [screenshots]

    Summary: with both sides configured, newly uploaded data syncs fine, but the pre-existing old data never syncs over. Could someone explain why?

    opened by gitlihua 28
  • adjust filer startup

    adjust filer startup

    What problem are we solving?

    During a master election, the filer cannot start listening on its port.

    How are we solving the problem?

    Remove unnecessary block.

    How is the PR tested?

    Checks

    • [ ] I have added unit tests if possible.
    • [ ] I will add related wiki document changes and link to this PR after merging.
    opened by zemul 4
  • Add useRaftHashicorp flag in helm chart

    Add useRaftHashicorp flag in helm chart

    Happy New Year!

    What problem are we solving?

    In my case, it was necessary to use the implementation of the Raft algorithm from Hashicorp. Helm chart did not allow setting the required parameter for the master server

    How are we solving the problem?

    Added the useRaftHashicorp flag to the Helm chart, which is disabled by default

    How is the PR tested?

    Enable useRaftHashicorp in your values.yaml

    Checks

    • [ ] I have added unit tests if possible.
    • [ ] I will add related wiki document changes and link to this PR after merging.
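Assuming the chart keeps its existing layout, enabling the new flag in values.yaml might look like the fragment below (the key name comes from this PR; the exact nesting under master is an assumption):

```yaml
master:
  # Pass -raftHashicorp to weed master, selecting HashiCorp's Raft
  # implementation for leader election; disabled by default.
  useRaftHashicorp: true
```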
    opened by Programmeris 3
  • mount: can not  ensure ordered file handle lock and unlock by orderedMutex

    mount: can not ensure ordered file handle lock and unlock by orderedMutex

    Describe the bug In my tests, SeaweedFS cannot keep IO ordered via the file handle lock. The semaphore.NewWeighted is created with math.MaxInt64. The code is in func newFileHandle (https://github.com/seaweedfs/seaweedfs/commit/22064c342585bddaa7ebdb21e39cac7db87826df):

    fh := &FileHandle{
        fh:           handleId,
        counter:      1,
        inode:        inode,
        wfs:          wfs,
        orderedMutex: semaphore.NewWeighted(int64(math.MaxInt64)),
    }

    I think this is a bug: semaphore.NewWeighted should be created with a capacity of 1. Only then is mutual exclusion actually guaranteed.

    System Setup

    OS version: ubuntu 18.04. weed version: 3.34 (the fault also happens in the latest version).

    opened by lizhengui007 5
  • Add S3 ACL support (already passed tests)

    Add S3 ACL support (already passed tests)

    What problem are we solving?

    Add S3 ACL support

    This pr merged the previous pr #3842 #3843 #3844 #3845 #3846 #3847 #3848 #3849 and passed the integration test

    How is the PR tested?

    Tested with Python scripts, 1500+ test cases. (screenshot of the test cases not reproduced)

    Checks

    • [ ] I have added unit tests if possible.
    • [ ] I will add related wiki document changes and link to this PR after merging.
    opened by shichanglin5 3
  • It is not possible to request a file via master if the volume is in a read-only state

    It is not possible to request a file via master if the volume is in a read-only state

    I have one master server and one volume server, I have moved one of the volumes to the read-only state. The server has lost information about volumes that are in the read-only state. The file can be requested only from the volume server.

    Procedure of actions:

    1. Uploading a file
    2. I make the volume read-only
    lock
    volume.mark -node 172.16.202.6:8080 -volumeId 3 -readonly
    unlock
    
    3. Trying to request the file
    wget 172.16.202.6:9333/3,013c6a774d
    --2022-12-27 11:52:18-- http://172.16.202.6:9333/3,013c6a774d
    Connecting to 172.16.202.6:9333... connected.
    HTTP request sent, awaiting response... 404 Not Found
    2022-12-27 11:52:18 ERROR 404: Not Found.
    

    System Setup ./weed master -ip=172.16.202.6 ./weed -v=4 volume -index=leveldb -pprof=true -max=100 -mserver="172.16.202.6:9333" -port=8080 -dir=/storage

    • ubuntu 22/04
    • The problem was found on version 3.37, 3.36, 3.28, the latest version on which it works is 3.26
    • version 3.27 has a problem with setting read-only from the shell:
    volume.mark -node 172.16.202.6:8080 -volumeId 3 -readonly
    error: rpc error: code = Unknown desc = grpc VolumeMarkReadonly with master: 172.16.202.6:9333%!(EXTRA *errors.errorString=set volume 3 to read only on master: rpc error: code = Unimplemented desc = method VolumeMarkReadonly not implemented)
    

    Expected behavior I expect that the master will allow requesting files from volumes that have moved to the read-only state.
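As a possible workaround while the master returns 404, the file can still be located and fetched from the volume server directly via the master's lookup API (addresses are from this report; /dir/lookup is the master's volume-lookup endpoint):

```shell
# Ask the master where volume 3 lives; lookup still answers even when
# the volume is read-only, and the response lists volume server URLs.
curl "http://172.16.202.6:9333/dir/lookup?volumeId=3"

# Then fetch the file from the volume server itself.
wget http://172.16.202.6:8080/3,013c6a774d
```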

    opened by kyklaed 0
  • Can not upgrade from 2.xx to 3.xx

    Can not upgrade from 2.xx to 3.xx

    Describe the bug

    I1227 07:33:51.513225 volume_grpc_client_to_master.go:43 checkWithMaster localhost:9333: get master localhost:9333 configuration: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
    I1227 07:34:13.307527 volume_grpc_client_to_master.go:43 checkWithMaster localhost:9333: get master localhost:9333 configuration: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
    

    I couldn't upgrade SeaweedFS from 2.xx to 3.xx.

    System Setup This is my command:

    sudo docker run --name weed_master11 --restart always -d \
    --net=host \
    -p 9333:9333 -p 19333:19333 \
    -v /home/seaweedfs/certs/weed:/etc/seaweedfs/certs \
    -v /home/seaweedfs/data/master:/data/seaweedfs/master \
    -v /home/seaweedfs/security.toml:/etc/seaweedfs/security.toml \
    chrislusf/seaweedfs master -mdir=/data/seaweedfs/master \
    -ip=(curl -s https://ipinfo.io/ip) \
    -defaultReplication=010
    
    sudo docker run --name weed_volume11 --restart always -d \
    --net=host \
    -p 8080:8080 -p 18080:18080 \
    -v /home/seaweedfs/data/volume:/data/seaweedfs/volume \
    -v /home/seaweedfs/security.toml:/etc/seaweedfs/security.toml \
    -v /home/seaweedfs/certs/weed:/etc/seaweedfs/certs \
    chrislusf/seaweedfs volume -dataCenter=dc1 -rack=rack1 -dir=/data/seaweedfs/volume -max=0 \
    -ip=(curl -s https://ipinfo.io/ip) \
    -mserver=localhost:9333
    
    sudo docker run --name weed_filer11 --restart always -d \
    --net=host \
    -p 8888:8888 -p 18888:18888 -p 8333:8333 \
    -v /home/seaweedfs/s3config.json:/s3config.json \
    -v /home/seaweedfs/filer.toml:/etc/seaweedfs/filer.toml \
    -v /home/seaweedfs/filer.conf:/etc/seaweedfs/filer.conf \
    -v /home/seaweedfs/data/filer:/data/ \
    -v /home/seaweedfs/security.toml:/etc/seaweedfs/security.toml \
    -v /home/seaweedfs/certs/weed:/etc/seaweedfs/certs \
    chrislusf/seaweedfs filer -s3 -s3.config=/s3config.json \
    -ip=(curl -s https://ipinfo.io/ip) \
    -master=localhost:9333
    

    ubuntu 20.04. Cannot use the weed shell command; it looks like it can't connect to the weed master. It works fine when I downgrade to the old version (from 8 months ago).
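Since the handshake fails between components, one thing worth checking is that the gRPC TLS sections of security.toml agree across master, volume, and clients. A sketch of the relevant fragment, assuming the certificates mounted under /etc/seaweedfs/certs in the commands above (file names here are placeholders, not taken from the report):

```toml
# security.toml — the [grpc] sections must be consistent on every node,
# otherwise the TLS handshake fails exactly as in the log above.
[grpc]
ca = "/etc/seaweedfs/certs/ca.crt"

[grpc.master]
cert = "/etc/seaweedfs/certs/master.crt"
key  = "/etc/seaweedfs/certs/master.key"

[grpc.volume]
cert = "/etc/seaweedfs/certs/volume.crt"
key  = "/etc/seaweedfs/certs/volume.key"

[grpc.client]
cert = "/etc/seaweedfs/certs/client.crt"
key  = "/etc/seaweedfs/certs/client.key"
```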

    opened by meotimdihia 0
Releases (3.38)

Owner
Chris Lu (https://github.com/chrislusf/seaweedfs)
SeaweedFS, the distributed file system and object store for billions of small files.