SeaweedFS Usage Guide


SeaweedFS is a highly scalable distributed file storage system. Its two main goals are: 1) to store billions of files, and 2) to serve them fast.

I originally adopted SeaweedFS to store images for a website, and it has since proven to be both convenient and reliable.

SeaweedFS on GitHub: https://github.com/chrislusf/seaweedfs

SeaweedFS releases: https://github.com/chrislusf/seaweedfs/releases

This guide uses the latest SeaweedFS release.

1. Using Weed

Running ./weed -h prints the help text, which lists every feature SeaweedFS supports.

root@trojansun:/application/seaweedfs# ./weed -h

SeaweedFS: store billions of files and serve them fast!

Usage:

	weed command [arguments]

The commands are:

    benchmark   benchmark on writing millions of files and read out
    backup      incrementally backup a volume to local folder
    compact     run weed tool compact on volume file
    filer.copy  copy one or a list of files to a filer folder
    download    download files by file id
    export      list or export files from one volume data file
    filer       start a file server that points to a master server, or a list of master servers
    filer.cat   copy one file to local
    filer.meta.tail see recent changes on a filer
    filer.replicate replicate file changes to another destination
    filer.sync  resumeable continuous synchronization between two active-active or active-passive SeaweedFS clusters
    fix         run weed tool fix on index file if corrupted
    master      start a master server
    mount       mount weed filer to a directory as file system in userspace(FUSE)
    s3          start a s3 API compatible server that is backed by a filer
    msgBroker   start a message queue broker
    scaffold    generate basic configuration files
    server      start a master server, a volume server, and optionally a filer and a S3 gateway
    shell       run interactive administrative commands
    upload      upload one or a list of files
    version     print SeaweedFS version
    volume      start a volume server
    webdav      start a webdav server that is backed by a filer

Use "weed help [command]" for more information about a command.

For Logging, use "weed [logging_options] [command]". The logging options are:
  -alsologtostderr
    	log to standard error as well as files (default true)
  -log_backtrace_at value
    	when logging hits line file:N, emit a stack trace
  -logdir string
    	If non-empty, write log files in this directory
  -logtostderr
    	log to standard error instead of files
  -stderrthreshold value
    	logs at or above this threshold go to stderr
  -v value
    	log level for V logs
  -vmodule value
    	comma-separated list of pattern=N settings for file-filtered logging

In my own project, the commands I mainly use are master, volume, s3, and filer, along with the logging-related options such as -v and -logdir.

2. Using Master

To use SeaweedFS you must start a master first. The master accepts many options; run ./weed master -h to see them.

root@trojansun:/application/seaweedfs# ./weed master -h
Example: weed master -port=9333
Default Usage:
  -cpuprofile string
    	cpu profile output file
  -defaultReplication string
    	Default replication type if not specified. (default "000")
  -disableHttp
    	disable http requests, only gRPC operations are allowed.
  -garbageThreshold float
    	threshold to vacuum and reclaim spaces (default 0.3)
  -ip string
    	master <ip>|<server> address (default "192.168.31.198")
  -ip.bind string
    	ip address to bind to (default "0.0.0.0")
  -mdir string
    	data directory to store meta data (default "/tmp")
  -memprofile string
    	memory profile output file
  -metrics.address string
    	Prometheus gateway address <host>:<port>
  -metrics.intervalSeconds int
    	Prometheus push interval in seconds (default 15)
  -peers string
    	all master nodes in comma separated ip:port list, example: 127.0.0.1:9093,127.0.0.1:9094,127.0.0.1:9095
  -port int
    	http listen port (default 9333)
  -resumeState
    	resume previous state on start master server
  -volumePreallocate
    	Preallocate disk space for volumes.
  -volumeSizeLimitMB uint
    	Master stops directing writes to oversized volumes. (default 30000)
  -whiteList string
    	comma separated Ip addresses having write permission. No limit if empty.
Description:
  start a master server to provide volume=>location mapping service and sequence number of file ids

	The configuration file "security.toml" is read from ".", "$HOME/.seaweedfs/", "/usr/local/etc/seaweedfs/", or "/etc/seaweedfs/", in that order.

	The example security.toml configuration file can be generated by "weed scaffold -config=security"

As the help text shows, there is a default startup example, Example: weed master -port=9333; the master's default port is 9333. This port is normally not exposed to the outside world, so the default is usually fine.

2.1 defaultReplication

When starting the master you can set its default replication rule (see the Replication wiki for the complete rule format). Choose it according to your actual server topology: if your volume servers are not differentiated by data center or rack, but the rule set here requires such copies, file uploads will fail with an error.
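The three digits of a replication rule count extra copies at each placement level, as described in the SeaweedFS Replication wiki. A minimal sketch of the decoding logic (the helper is hypothetical, for illustration only):

```python
def parse_replication(rule: str) -> dict:
    """Decode a SeaweedFS replication rule such as '010'.

    Digit 1: extra copies in other data centers.
    Digit 2: extra copies on other racks within the same data center.
    Digit 3: extra copies on other servers within the same rack.
    """
    dc, rack, server = (int(c) for c in rule)
    return {
        "other_data_centers": dc,
        "other_racks": rack,
        "other_servers_same_rack": server,
        "total_copies": 1 + dc + rack + server,
    }

# '000' keeps a single copy; '010' needs a second rack, which is why an
# undifferentiated cluster rejects uploads under that rule.
print(parse_replication("000"))
print(parse_replication("010"))
```

So a rule like 010 requires at least two racks to exist before a write can succeed.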

2.2 mdir

The directory where the master stores its metadata; point it at a directory of your own.

2.3 metrics.address

This option configures the metrics push address; the SeaweedFS wiki describes the setup.

Two pieces of non-SeaweedFS software are involved here: Prometheus, which stores the metrics data, and Grafana, which visualizes it. If you have not set these up yet, you can simply leave metrics.address unset.

2.4 peers

If you run multiple master nodes, list all of them here.

2.5 port

Sets the master's port. Configure this if you run multiple masters or need to change the default.

2.6 resumeState

Resume the previous state when starting the master server. (I have not figured out what this option actually means; resume which previous state?)

2.7 volumePreallocate

Preallocates disk space for volumes. If you have no particular need for it, leave it unset and keep the default.

2.8 volumeSizeLimitMB

The default size limit of each allocated volume. This parameter matters a lot, especially when using S3: on a small server, the default allocation can easily fill up your disk, particularly when space is preallocated. See the SeaweedFS Amazon-S3-API wiki for tuning guidance.
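To see why this matters, here is the worst-case arithmetic: the space a volume server may eventually claim is roughly its volume count times this limit. This is a back-of-the-envelope sketch; actual usage also depends on -volumePreallocate and compaction:

```python
def worst_case_disk_mb(max_volumes: int, volume_size_limit_mb: int = 30000) -> int:
    # Each volume may grow to the master's volumeSizeLimitMB before the
    # master stops directing writes to it.
    return max_volumes * volume_size_limit_mb

# The defaults (8 volumes x 30000 MB) can claim roughly 240 GB per volume server.
print(worst_case_disk_mb(8))        # 240000
print(worst_case_disk_mb(8, 1024))  # 8192 -> about 8 GB with a 1 GB limit
```

On a small VPS, lowering volumeSizeLimitMB (or -max on the volume server) keeps this bound manageable.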

2.9 security.toml

SeaweedFS can read a security.toml file to enable its own security settings; see the SeaweedFS Security wiki for details. If you run multiple servers that talk to each other over the public internet, you should configure this.

2.10 master.service

If you run ./weed master and then disconnect your remote session, weed keeps running in the background. Still, for easier management I created a master.service unit so the master can be enabled at boot and started, restarted, and stopped via systemd.

[Unit]
Description=Weed Master 9333
After=network.target

[Service]
Type=simple
User=seaweedfs
Group=seaweedfs

PermissionsStartOnly=true
ExecStart=/application/seaweedfs/weed -logdir /application/seaweedfs/logs master -mdir /application/seaweedfs/master
ExecStop=/bin/kill -s TERM $MAINPID
Restart=always
LimitNOFILE=1000000
LimitNPROC=1000000
LimitCORE=1000000

[Install]
WantedBy=multi-user.target

Now the master can be started. Once it is up, you can visit the master web UI at http://yourip:9333.
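Once the master is up, the usual write flow is: ask the master for a file id via GET /dir/assign, then POST the file body to the volume server it returns. The response shape below follows the documented SeaweedFS API; the fid and addresses are illustrative values only:

```python
def upload_url(assign: dict) -> str:
    """Build the volume-server upload URL from a /dir/assign response."""
    return f"http://{assign['url']}/{assign['fid']}"

# Example /dir/assign response (values illustrative):
assign = {"fid": "3,01637037d6", "url": "127.0.0.1:8080",
          "publicUrl": "127.0.0.1:8080", "count": 1}

# POSTing the file to this URL stores it; a GET on the same URL reads it back.
print(upload_url(assign))  # http://127.0.0.1:8080/3,01637037d6
```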

3. Using Volume

Volumes are where SeaweedFS actually stores data. Run ./weed volume -h to see the help text.

root@trojansun:/application/seaweedfs# ./weed volume -h
Example: weed volume -port=8080 -dir=/tmp -max=5 -ip=server_name -mserver=localhost:9333
Default Usage:
  -compactionMBps int
    	limit background compaction or copying speed in mega bytes per second
  -cpuprofile string
    	cpu profile output file
  -dataCenter string
    	current volume server's data center name
  -dir string
    	directories to store data files. dir[,dir]... (default "/tmp")
  -dir.idx string
    	directory to store .idx files
  -fileSizeLimitMB int
    	limit file size to avoid out of memory (default 256)
  -idleTimeout int
    	connection idle seconds (default 30)
  -images.fix.orientation
    	Adjust jpg orientation when uploading.
  -index string
    	Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance. (default "memory")
  -ip string
    	ip or server name (default "192.168.31.198")
  -ip.bind string
    	ip address to bind to (default "0.0.0.0")
  -max string
    	maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured. (default "8")
  -memprofile string
    	memory profile output file
  -metricsPort int
    	Prometheus metrics listen port
  -minFreeSpacePercent string
    	minimum free disk space (default to 1%). Low disk space will mark all volumes as ReadOnly. (default "1")
  -mserver string
    	comma-separated master servers (default "localhost:9333")
  -port int
    	http listen port (default 8080)
  -port.public int
    	port opened to public
  -pprof
    	enable pprof http handlers. precludes --memprofile and --cpuprofile
  -preStopSeconds int
    	number of seconds between stop send heartbeats and stop volume server (default 10)
  -publicUrl string
    	Publicly accessible address
  -rack string
    	current volume server's rack name
  -read.redirect
    	Redirect moved or non-local volumes. (default true)
  -whiteList string
    	comma separated Ip addresses having write permission. No limit if empty.
Description:
  start a volume server to provide storage spaces

As the help text shows, there is a default startup example: Example: weed volume -port=8080 -dir=/tmp -max=5 -ip=server_name -mserver=localhost:9333.

3.1 dataCenter

The data center this volume server belongs to. Data centers feed into the master's replication rules; set this if you span multiple regions, or plan to.

3.2 rack

The rack this volume server belongs to. Racks also feed into the master's replication rules (note that identical rack names in different data centers are treated as the same rack); set this if you have multiple racks.

3.3 dir

Sets where the volume data is stored.

3.4 dir.idx

Where the .idx files are stored. If unset, they are kept alongside the data directory set by -dir.

3.5 index

The index mode. SeaweedFS offers memory, leveldb, leveldbMedium, and leveldbLarge; the default is memory. If RAM is tight, use leveldb instead; the official docs note that leveldb is only marginally slower than memory.

3.6 max

The maximum number of volumes. Pay attention to this if you have few nodes but want to use S3: the default is 8, a single S3 bucket allocates 7 volumes, and when I was using a Mac its default files occupied one more volume, so all 8 were used up exactly. When you then create a second bucket, uploads suddenly fail even though nothing is misconfigured; the volume count has simply run out. I therefore recommend setting this higher.
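The arithmetic behind that failure can be sketched as follows. The 7-volumes-per-bucket figure comes from the default copy_1 = 7 growth setting that weed scaffold -config=master prints; the helper itself is hypothetical:

```python
def buckets_supported(max_volumes: int, volumes_per_bucket: int = 7,
                      reserved: int = 1) -> int:
    """How many S3 buckets fit before new volume allocation fails.

    reserved: volumes already consumed elsewhere (the author observed one
    extra volume taken by default files).
    """
    return (max_volumes - reserved) // volumes_per_bucket

print(buckets_supported(8))    # 1  -> creating a second bucket fails
print(buckets_supported(100))  # 14
```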

3.7 mserver

Defaults to localhost:9333; point it at your master server(s).

3.8 port

The port defaults to 8080; with multiple volume servers you will obviously need to change it.

3.9 port.public

The publicly exposed port.

3.10 publicUrl

The publicly exposed URL.

3.11 volume.service

If you run ./weed volume and then disconnect your remote session, weed keeps running in the background. Still, for easier management I created a volume.service unit so the volume server can be enabled at boot and started, restarted, and stopped via systemd.

[Unit]
Description=Weed Volume Data 1
After=network.target

[Service]
Type=simple
User=seaweedfs
Group=seaweedfs

PermissionsStartOnly=true
ExecStart=/application/seaweedfs/weed -logdir /application/seaweedfs/logs volume -max 100 -index leveldb -port 8011 -dir /application/seaweedfs/volume/data1 -dataCenter bj -rack v4
ExecStop=/bin/kill -s TERM $MAINPID
Restart=always
LimitNOFILE=1000000
LimitNPROC=1000000
LimitCORE=1000000

[Install]
WantedBy=multi-user.target

4. Using Filer

The filer can be thought of as a file manager: with it, file and directory relationships can be stored in a database.

Run ./weed filer -h to see the filer's help text.

root@trojansun:/application/seaweedfs# ./weed filer -h
Example: weed filer -port=8888 -master=<ip:port>[,<ip:port>]*
Default Usage:
  -collection string
    	all data will be stored in this collection
  -dataCenter string
    	prefer to read and write to volumes in this data center
  -defaultReplicaPlacement string
    	default replication type. If not specified, use master setting.
  -defaultStoreDir string
    	if filer.toml is empty, use an embedded filer store in the directory (default ".")
  -dirListLimit int
    	limit sub dir listing size (default 100000)
  -disableDirListing
    	turn off directory listing
  -disableHttp
    	disable http request, only gRpc operations are allowed
  -encryptVolumeData
    	encrypt data on volume servers
  -ip string
    	filer server http listen ip address (default "192.168.31.198")
  -ip.bind string
    	ip address to bind to (default "0.0.0.0")
  -master string
    	comma-separated master servers (default "localhost:9333")
  -maxMB int
    	split files larger than the limit (default 32)
  -metricsPort int
    	Prometheus metrics listen port
  -peers string
    	all filers sharing the same filer store in comma separated ip:port list
  -port int
    	filer server http listen port (default 8888)
  -port.readonly int
    	readonly port opened to public
  -rack string
    	prefer to write to volumes in this rack
  -s3
    	whether to start S3 gateway
  -s3.allowEmptyFolder
    	allow empty folders
  -s3.cert.file string
    	path to the TLS certificate file
  -s3.config string
    	path to the config file
  -s3.domainName string
    	suffix of the host name in comma separated list, {bucket}.{domainName}
  -s3.key.file string
    	path to the TLS private key file
  -s3.port int
    	s3 server http listen port (default 8333)
  -saveToFilerLimit int
    	files smaller than this limit will be saved in filer store
  -webdav
    	whether to start webdav gateway
  -webdav.cacheCapacityMB int
    	local cache capacity in MB (default 1000)
  -webdav.cacheDir string
    	local cache directory for file chunks (default "/tmp")
  -webdav.cert.file string
    	path to the TLS certificate file
  -webdav.collection string
    	collection to create the files
  -webdav.key.file string
    	path to the TLS private key file
  -webdav.port int
    	webdav server http listen port (default 7333)
Description:
  start a file server which accepts REST operation for any files.

	//create or overwrite the file, the directories /path/to will be automatically created
	POST /path/to/file
	//get the file content
	GET /path/to/file
	//create or overwrite the file, the filename in the multipart request will be used
	POST /path/to/
	//return a json format subdirectory and files listing
	GET /path/to/

	The configuration file "filer.toml" is read from ".", "$HOME/.seaweedfs/", "/usr/local/etc/seaweedfs/", or "/etc/seaweedfs/", in that order.
	If the "filer.toml" is not found, an embedded filer store will be craeted under "-defaultStoreDir".

	The example filer.toml configuration file can be generated by "weed scaffold -config=filer"

4.1 filer.toml Example

If you do not configure filer.toml, the default store is leveldb, with its database files kept in the directory where the filer was started. For manageability that is generally not recommended (LevelDB experts excepted), so in practice MySQL or PostgreSQL is used instead. Given the characteristics of my project, I chose PostgreSQL.

PS: the filer can start an S3 gateway at the same time, but since I have several S3 gateways to manage, I always start s3 separately.

The default filer config file can be generated with scaffold.

[postgres] # or cockroachdb, YugabyteDB
# CREATE TABLE IF NOT EXISTS filemeta (
#   dirhash     BIGINT,
#   name        VARCHAR(65535),
#   directory   VARCHAR(65535),
#   meta        bytea,
#   PRIMARY KEY (dirhash, name)
# );
enabled = true
hostname = "192.168.31.198"
port = 5432
username = "seaweedfs"
password = "1234qwertQ"
database = "seaweedfs"          # create or use an existing database
schema = "public"
sslmode = "disable"
connection_max_idle = 100
connection_max_open = 100
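For completeness, the role, database, and table matching the snippet above can be created like this. The table definition is the one given in the scaffold comments; the credentials mirror the example and should obviously be changed:

```sql
-- Run as a PostgreSQL superuser; credentials mirror the filer.toml above.
CREATE ROLE seaweedfs LOGIN PASSWORD '1234qwertQ';
CREATE DATABASE seaweedfs OWNER seaweedfs;

-- Then, connected to the seaweedfs database:
CREATE TABLE IF NOT EXISTS filemeta (
  dirhash     BIGINT,
  name        VARCHAR(65535),
  directory   VARCHAR(65535),
  meta        bytea,
  PRIMARY KEY (dirhash, name)
);
```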

4.2 collection

A collection can be thought of as a grouping; if set, everything written through this filer goes into that collection.

4.3 dataCenter

The data center; if set, writes through this filer prefer volumes in that data center.

4.4 rack

The rack; if set, writes through this filer prefer volumes in that rack.

4.5 encryptVolumeData

Encrypts data on the volume servers. Useful for confidential files; mine are just images, so I left this off.

4.6 master

The master node(s).

4.7 peers

The full list of filers sharing the same filer store, separated by commas (,).

4.8 filer.service

If you run ./weed filer and then disconnect your remote session, weed keeps running in the background. Still, for easier management I created a filer.service unit so the filer can be enabled at boot and started, restarted, and stopped via systemd.

[Unit]
Description=Weed Filer
After=network.target

[Service]
Type=simple
User=seaweedfs
Group=seaweedfs

PermissionsStartOnly=true
ExecStart=/application/seaweedfs/weed -logdir /application/seaweedfs/logs filer
ExecStop=/bin/kill -s TERM $MAINPID
Restart=always
LimitNOFILE=1000000
LimitNPROC=1000000
LimitCORE=1000000

[Install]
WantedBy=multi-user.target

5. Using S3

To enable S3-style access, you can start the gateway standalone with ./weed s3.

Run ./weed s3 -h to see the S3 help text.

root@trojansun:/application/seaweedfs# ./weed s3 -h
Example: weed s3 [-port=8333] [-filer=<ip:port>] [-config=</path/to/config.json>]
Default Usage:
  -allowEmptyFolder
    	allow empty folders
  -cert.file string
    	path to the TLS certificate file
  -config string
    	path to the config file
  -domainName string
    	suffix of the host name in comma separated list, {bucket}.{domainName}
  -filer string
    	filer server address (default "localhost:8888")
  -key.file string
    	path to the TLS private key file
  -metricsPort int
    	Prometheus metrics listen port
  -port int
    	s3 server http listen port (default 8333)
Description:
  start a s3 API compatible server that is backed by a filer.

	By default, you can use any access key and secret key to access the S3 APIs.
	To enable credential based access, create a config.json file similar to this:

{
  "identities": [
    {
      "name": "anonymous",
      "actions": [
        "Read"
      ]
    },
    {
      "name": "some_admin_user",
      "credentials": [
        {
          "accessKey": "some_access_key1",
          "secretKey": "some_secret_key1"
        }
      ],
      "actions": [
        "Admin",
        "Read",
        "List",
        "Tagging",
        "Write"
      ]
    },
    {
      "name": "some_read_only_user",
      "credentials": [
        {
          "accessKey": "some_access_key2",
          "secretKey": "some_secret_key2"
        }
      ],
      "actions": [
        "Read"
      ]
    },
    {
      "name": "some_normal_user",
      "credentials": [
        {
          "accessKey": "some_access_key3",
          "secretKey": "some_secret_key3"
        }
      ],
      "actions": [
        "Read",
        "List",
        "Tagging",
        "Write"
      ]
    },
    {
      "name": "user_limited_to_bucket1",
      "credentials": [
        {
          "accessKey": "some_access_key4",
          "secretKey": "some_secret_key4"
        }
      ],
      "actions": [
        "Read:bucket1",
        "List:bucket1",
        "Tagging:bucket1",
        "Write:bucket1"
      ]
    }
  ]
}
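A small sketch for sanity-checking such a config.json before deploying it. The Read:bucket1 form (an action optionally scoped to a bucket) is taken from the example above; the helper is hypothetical:

```python
import json

def check_identities(config_text: str) -> list:
    """Return (name, action, bucket) triples from an S3 identities config."""
    triples = []
    for ident in json.loads(config_text)["identities"]:
        for action in ident.get("actions", []):
            # Actions may be global ("Read") or bucket-scoped ("Read:bucket1").
            verb, _, bucket = action.partition(":")
            triples.append((ident["name"], verb, bucket or None))
    return triples

cfg = '{"identities": [{"name": "u", "actions": ["Read", "Write:bucket1"]}]}'
print(check_identities(cfg))
```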

5.1 config

The S3 config file, used mainly for authentication.

5.2 domainName

If a domain name is bound, buckets are addressed as {bucket}.{domainName}.

A deployment tip: if the backend bucket cannot be addressed as a subdomain prefix of domainName, either of the following approaches works (both using NGINX).

5.2.1 The proxy_set_header Approach

Keep the -domainName option and rewrite the Host header:

proxy_set_header Host tjs.trojansun.com;

5.2.2 The proxy_pass Approach

Without the -domainName option, proxy to the bucket path:

proxy_pass http://trojansun/tjs;
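Put together, the two approaches look roughly like this inside an NGINX server block. Pick one; the host names and the "trojansun" upstream are hypothetical placeholders:

```nginx
# Option 1: keep -domainName on the S3 gateway and rewrite the Host header
# so SeaweedFS sees {bucket}.{domainName}.
location / {
    proxy_set_header Host tjs.trojansun.com;
    proxy_pass http://127.0.0.1:8333;
}

# Option 2: run without -domainName and address the bucket by path.
# "trojansun" is a hypothetical upstream pointing at the S3 gateway.
location /tjs/ {
    proxy_pass http://trojansun/tjs;
}
```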

5.3 cert.file

If you serve the domain over HTTPS, this is the TLS certificate file.

5.4 key.file

If you serve the domain over HTTPS, this is the TLS private key file.

5.5 port

The port the S3 gateway listens on.

5.6 s3.service

[Unit]
Description=Weed S3
After=network.target

[Service]
Type=simple
User=seaweedfs
Group=seaweedfs

PermissionsStartOnly=true
ExecStart=/application/seaweedfs/weed -logdir /application/seaweedfs/logs s3
ExecStop=/bin/kill -s TERM $MAINPID
Restart=always
LimitNOFILE=1000000
LimitNPROC=1000000
LimitCORE=1000000

[Install]
WantedBy=multi-user.target

6. Using scaffold

Most SeaweedFS options can be discovered through the help text and the wiki, but the configuration files themselves cannot be obtained from help alone. For those, the project provides scaffold.

Run ./weed scaffold -h to see the scaffold help text.

root@trojansun:/application/seaweedfs# ./weed scaffold -h
Example: weed scaffold -config=[filer|notification|replication|security|master]
Default Usage:
  -config string
    	[filer|notification|replication|security|master] the configuration file to generate (default "filer")
  -output string
    	if not empty, save the configuration file to this directory
Description:
  Generate filer.toml with all possible configurations for you to customize.

	The options can also be overwritten by environment variables.
	For example, the filer.toml mysql password can be overwritten by environment variable
		export WEED_MYSQL_PASSWORD=some_password
	Environment variable rules:
		* Prefix the variable name with "WEED_"
		* Upppercase the reset of variable name.
		* Replace '.' with '_'

With this command we can generate each of the configuration files. Adding -output writes the file to a directory; below, for demonstration, I skip -output and just print to the console.
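The environment-variable override rules quoted in the help text above can be sketched as a one-liner (a hypothetical helper, not part of SeaweedFS):

```python
def weed_env_var(key: str) -> str:
    """Map a TOML key like 'mysql.password' to its WEED_ override name:
    prefix with WEED_, uppercase, and replace '.' with '_'."""
    return "WEED_" + key.upper().replace(".", "_")

print(weed_env_var("mysql.password"))  # WEED_MYSQL_PASSWORD
print(weed_env_var("redis2.address"))  # WEED_REDIS2_ADDRESS
```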

6.1 Generating the Master Config

6.1.1 Command

./weed scaffold --config master

6.1.2 Output

root@trojansun:/application/seaweedfs# ./weed scaffold --config master

# Put this file to one of the location, with descending priority
#    ./master.toml
#    $HOME/.seaweedfs/master.toml
#    /etc/seaweedfs/master.toml
# this file is read by master

[master.maintenance]
# periodically run these scripts are the same as running them from 'weed shell'
scripts = """
  lock
  ec.encode -fullPercent=95 -quietFor=1h
  ec.rebuild -force
  ec.balance -force
  volume.balance -force
  volume.fix.replication
  unlock
"""
sleep_minutes = 17          # sleep minutes between each script execution

[master.filer]
default = "localhost:8888"    # used by maintenance scripts if the scripts needs to use fs related commands


[master.sequencer]
type = "raft"     # Choose [raft|etcd] type for storing the file id sequence
# when sequencer.type = etcd, set listen client urls of etcd cluster that store file id sequence
# example : http://127.0.0.1:2379,http://127.0.0.1:2389
sequencer_etcd_urls = "http://127.0.0.1:2379"


# configurations for tiered cloud storage
# old volumes are transparently moved to cloud for cost efficiency
[storage.backend]
	[storage.backend.s3.default]
	enabled = false
	aws_access_key_id     = ""     # if empty, loads from the shared credentials file (~/.aws/credentials).
	aws_secret_access_key = ""     # if empty, loads from the shared credentials file (~/.aws/credentials).
	region = "us-east-2"
	bucket = "your_bucket_name"    # an existing bucket
	endpoint = ""

# create this number of logical volumes if no more writable volumes
# count_x means how many copies of data.
# e.g.:
#   000 has only one copy, copy_1
#   010 and 001 has two copies, copy_2
#   011 has only 3 copies, copy_3
[master.volume_growth]
copy_1 = 7                # create 1 x 7 = 7 actual volumes
copy_2 = 6                # create 2 x 6 = 12 actual volumes
copy_3 = 3                # create 3 x 3 = 9 actual volumes
copy_other = 1            # create n x 1 = n actual volumes

# configuration flags for replication
[master.replication]
# any replication counts should be considered minimums. If you specify 010 and
# have 3 different racks, that's still considered writable. Writes will still
# try to replicate to all available volumes. You should only use this option
# if you are doing your own replication or periodic sync of volumes.
treat_replication_as_minimums = false
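The copy_N comments in [master.volume_growth] above translate to simple arithmetic: each growth event creates count logical volumes, each replicated copies times. A trivial sketch to confirm the numbers in the comments:

```python
def actual_volumes(copies: int, count: int) -> int:
    # A growth event creates `count` logical volumes, each stored `copies` times.
    return copies * count

# Matches the [master.volume_growth] comments:
print(actual_volumes(1, 7))  # copy_1 = 7 -> 7 actual volumes
print(actual_volumes(2, 6))  # copy_2 = 6 -> 12 actual volumes
print(actual_volumes(3, 3))  # copy_3 = 3 -> 9 actual volumes
```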

6.2 Generating the Filer Config

6.2.1 Command

./weed scaffold --config filer

6.2.2 Output

root@trojansun:/application/seaweedfs# ./weed scaffold --config filer

# A sample TOML config file for SeaweedFS filer store
# Used with "weed filer" or "weed server -filer"
# Put this file to one of the location, with descending priority
#    ./filer.toml
#    $HOME/.seaweedfs/filer.toml
#    /etc/seaweedfs/filer.toml

####################################################
# Customizable filer server options
####################################################
[filer.options]
# with http DELETE, by default the filer would check whether a folder is empty.
# recursive_delete will delete all sub folders and files, similar to "rm -Rf"
recursive_delete = false
# directories under this folder will be automatically creating a separate bucket
buckets_folder = "/buckets"

####################################################
# The following are filer store options
####################################################

[leveldb2]
# local on disk, mostly for simple single-machine setup, fairly scalable
# faster than previous leveldb, recommended.
enabled = true
dir = "./filerldb2"					# directory to store level db files

[leveldb3]
# similar to leveldb2.
# each bucket has its own meta store.
enabled = false
dir = "./filerldb3"					# directory to store level db files

[rocksdb]
# local on disk, similar to leveldb
# since it is using a C wrapper, you need to install rocksdb and build it by yourself
enabled = false
dir = "./filerrdb"					# directory to store rocksdb files

[mysql]  # or memsql, tidb
# CREATE TABLE IF NOT EXISTS filemeta (
#   dirhash     BIGINT         COMMENT 'first 64 bits of MD5 hash value of directory field',
#   name        VARCHAR(1000)  COMMENT 'directory or file name',
#   directory   TEXT           COMMENT 'full path to parent directory',
#   meta        LONGBLOB,
#   PRIMARY KEY (dirhash, name)
# ) DEFAULT CHARSET=utf8;

enabled = false
hostname = "localhost"
port = 3306
username = "root"
password = ""
database = ""              # create or use an existing database
connection_max_idle = 2
connection_max_open = 100
connection_max_lifetime_seconds = 0
interpolateParams = false

[mysql2]  # or memsql, tidb
enabled = false
createTable = """
  CREATE TABLE IF NOT EXISTS %s (
    dirhash BIGINT, 
    name VARCHAR(1000), 
    directory TEXT, 
    meta LONGBLOB, 
    PRIMARY KEY (dirhash, name)
  ) DEFAULT CHARSET=utf8;
"""
hostname = "localhost"
port = 3306
username = "root"
password = ""
database = ""              # create or use an existing database
connection_max_idle = 2
connection_max_open = 100
connection_max_lifetime_seconds = 0
interpolateParams = false

[postgres] # or cockroachdb, YugabyteDB
# CREATE TABLE IF NOT EXISTS filemeta (
#   dirhash     BIGINT,
#   name        VARCHAR(65535),
#   directory   VARCHAR(65535),
#   meta        bytea,
#   PRIMARY KEY (dirhash, name)
# );
enabled = false
hostname = "localhost"
port = 5432
username = "postgres"
password = ""
database = "postgres"          # create or use an existing database
schema = ""
sslmode = "disable"
connection_max_idle = 100
connection_max_open = 100

[postgres2]
enabled = false
createTable = """
  CREATE TABLE IF NOT EXISTS %s (
    dirhash   BIGINT, 
    name      VARCHAR(65535), 
    directory VARCHAR(65535), 
    meta      bytea, 
    PRIMARY KEY (dirhash, name)
  );
"""
hostname = "localhost"
port = 5432
username = "postgres"
password = ""
database = "postgres"          # create or use an existing database
schema = ""
sslmode = "disable"
connection_max_idle = 100
connection_max_open = 100

[cassandra]
# CREATE TABLE filemeta (
#    directory varchar,
#    name varchar,
#    meta blob,
#    PRIMARY KEY (directory, name)
# ) WITH CLUSTERING ORDER BY (name ASC);
enabled = false
keyspace="seaweedfs"
hosts=[
	"localhost:9042",
]
username=""
password=""
# This changes the data layout. Only add new directories. Removing/Updating will cause data loss.
superLargeDirectories = []

[hbase]
enabled = false
zkquorum = ""
table = "seaweedfs"

[redis2]
enabled = false
address  = "localhost:6379"
password = ""
database = 0
# This changes the data layout. Only add new directories. Removing/Updating will cause data loss.
superLargeDirectories = []

[redis_cluster2]
enabled = false
addresses = [
    "localhost:30001",
    "localhost:30002",
    "localhost:30003",
    "localhost:30004",
    "localhost:30005",
    "localhost:30006",
]
password = ""
# allows reads from slave servers or the master, but all writes still go to the master
readOnly = false
# automatically use the closest Redis server for reads
routeByLatency = false
# This changes the data layout. Only add new directories. Removing/Updating will cause data loss.
superLargeDirectories = []

[etcd]
enabled = false
servers = "localhost:2379"
timeout = "3s"

[mongodb]
enabled = false
uri = "mongodb://localhost:27017"
option_pool_size = 0
database = "seaweedfs"

[elastic7]
enabled = false
servers = [
    "http://localhost1:9200",
    "http://localhost2:9200",
    "http://localhost3:9200",
]
username = ""
password = ""
sniff_enabled = false
healthcheck_enabled = false
# increase the value is recommend, be sure the value in Elastic is greater or equal here
index.max_result_window = 10000



##########################
##########################
# To add path-specific filer store:
#
# 1. Add a name following the store type separated by a dot ".". E.g., cassandra.tmp
# 2. Add a location configuraiton. E.g., location = "/tmp/"
# 3. Copy and customize all other configurations. 
#     Make sure they are not the same if using the same store type!
# 4. Set enabled to true
#
# The following is just using cassandra as an example
##########################
[redis2.tmp]
enabled = false
location = "/tmp/"
address  = "localhost:6379"
password = ""
database = 1

6.3 Generating the Security Config

6.3.1 Command

./weed scaffold --config security

6.3.2 Output

root@trojansun:/application/seaweedfs# ./weed scaffold --config security

# Put this file to one of the location, with descending priority
#    ./security.toml
#    $HOME/.seaweedfs/security.toml
#    /etc/seaweedfs/security.toml
# this file is read by master, volume server, and filer

# the jwt signing key is read by master and volume server.
# a jwt defaults to expire after 10 seconds.
[jwt.signing]
key = ""
expires_after_seconds = 10           # seconds

# jwt for read is only supported with master+volume setup. Filer does not support this mode.
[jwt.signing.read]
key = ""
expires_after_seconds = 10           # seconds

# all grpc tls authentications are mutual
# the values for the following ca, cert, and key are paths to the PERM files.
# the host name is not checked, so the PERM files can be shared.
[grpc]
ca = ""

[grpc.volume]
cert = ""
key  = ""

[grpc.master]
cert = ""
key  = ""

[grpc.filer]
cert = ""
key  = ""

[grpc.msg_broker]
cert = ""
key  = ""

# use this for any place needs a grpc client
# i.e., "weed backup|benchmark|filer.copy|filer.replicate|mount|s3|upload"
[grpc.client]
cert = ""
key  = ""


# volume server https options
# Note: work in progress!
#     this does not work with other clients, e.g., "weed filer|mount" etc, yet.
[https.client]
enabled = true
[https.volume]
cert = ""
key  = ""