Administrator
Published on 2023-02-20

MinIO Notes

Installation

Refer to: https://min.io/docs/minio/linux/index.html

Install the MinIO Server

wget https://dl.min.io/server/minio/release/linux-amd64/archive/minio-20221202191922.0.0.x86_64.rpm -O minio.rpm
sudo dnf install minio.rpm

Launch the MinIO Server

mkdir ~/minio
minio server ~/minio --console-address :9090

The mkdir command creates the folder explicitly at the specified path.

The minio server command starts the MinIO server. The path argument ~/minio identifies the folder in which the server operates.

Open the Firewall Ports

firewall-cmd --add-port=9090/tcp --permanent    # Console
firewall-cmd --add-port=9000/tcp --permanent    # API

firewall-cmd --reload

Connect Your Browser to the MinIO Server

Open http://127.0.0.1:9090 in a web browser to access the MinIO Console. You can alternatively use any of the network addresses listed in the server command output. For example, Console: http://192.0.2.10:9090 http://127.0.0.1:9090 in the example output indicates two possible addresses for connecting to the Console.

While port 9000 is used for connecting to the API, MinIO automatically redirects browser access to the MinIO Console.

Log in to the Console with the RootUser and RootPass credentials displayed in the output. These default to minioadmin | minioadmin.

# In a browser, either port 9000 or 9090 works; requests to port 9000 are redirected to 9090.
http://1.13.175.185:9000


http://1.13.175.185:9090

You can use the MinIO Console for general administration tasks like Identity and Access Management, Metrics and Log Monitoring, or Server Configuration. Each MinIO server includes its own embedded MinIO Console.

(Optional) Install the MinIO Client

The MinIO Client allows you to work with your MinIO server from the command line.

Download the mc client and install it to a location on your system PATH such as /usr/local/bin. You can alternatively run the binary from the download location.

wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin/mc

Use mc alias set to create a new alias associated to your local deployment. You can run mc commands against this alias:

mc alias set local http://127.0.0.1:9000 minioadmin minioadmin
mc admin info local

The mc alias set command takes four arguments:

  • The name of the alias
  • The hostname or IP address and port of the MinIO server
  • The Access Key for a MinIO user
  • The Secret Key for a MinIO user

The example above uses the root user.

Running mc alias set prints messages like the following:

[root@VM-0-14-opencloudos ~]# mc alias set local http://127.0.0.1:9000 minioadmin minioadmin
mc: Configuration written to `/root/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/root/.mc/share`.
mc: Initialized share uploads `/root/.mc/share/uploads.json` file.
mc: Initialized share downloads `/root/.mc/share/downloads.json` file.
Added `local` successfully.

Deploy MinIO: Single-Node Single-Drive

The following procedure deploys MinIO consisting of a single MinIO server and a single drive or storage volume.

Network File System Volumes Break Consistency Guarantees

MinIO’s strict read-after-write and list-after-write consistency model requires local drive filesystems.

MinIO cannot provide consistency guarantees if the underlying storage volumes are NFS or a similar network-attached storage volume.

For deployments that require using network-attached storage, use NFSv4 for best results.

1) Download the MinIO Server

wget https://dl.min.io/server/minio/release/linux-amd64/archive/minio-20221202191922.0.0.x86_64.rpm -O minio.rpm
# Alternatively, download from the following mirror:
wget http://dltest.minio.org.cn/server/minio/release/linux-amd64/archive/minio-20221202191922.0.0.x86_64.rpm -O minio.rpm

sudo dnf install minio.rpm

2) Create the systemd Service File

The .deb or .rpm packages install the following systemd service file to /etc/systemd/system/minio.service. For binary installations, create this file manually on all MinIO hosts:

[Unit]
Description=MinIO
Documentation=https://min.io/docs/minio/linux/index.html
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/usr/local/bin/minio

[Service]
WorkingDirectory=/usr/local

User=minio-user
Group=minio-user
ProtectProc=invisible

EnvironmentFile=-/etc/default/minio
ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /etc/default/minio\"; exit 1; fi"
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES

# Let systemd restart this service always
Restart=always

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536

# Specifies the maximum number of threads this process can create
TasksMax=infinity

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=infinity
SendSIGKILL=no

[Install]
WantedBy=multi-user.target

# Built for ${project.name}-${project.version} (${project.name})

The above is the default configuration. To make the Console listen on port 9090 by default, modify the ExecStart line to add --console-address :9090:

[Unit]
Description=MinIO
Documentation=https://min.io/docs/minio/linux/index.html
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/usr/local/bin/minio

[Service]
WorkingDirectory=/usr/local

User=minio-user
Group=minio-user
ProtectProc=invisible

EnvironmentFile=-/etc/default/minio
ExecStartPre=/bin/bash -c "if [ -z \"${MINIO_VOLUMES}\" ]; then echo \"Variable MINIO_VOLUMES not set in /etc/default/minio\"; exit 1; fi"
ExecStart=/usr/local/bin/minio server --console-address :9090 $MINIO_OPTS $MINIO_VOLUMES

# Let systemd restart this service always
Restart=always

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536

# Specifies the maximum number of threads this process can create
TasksMax=infinity

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=infinity
SendSIGKILL=no

[Install]
WantedBy=multi-user.target

# Built for ${project.name}-${project.version} (${project.name})

The minio.service file runs as the minio-user User and Group by default. You can create the user and group using the groupadd and useradd commands. The following example creates the user and group, then sets permissions on the folder paths intended for use by MinIO. These commands typically require root (sudo) permissions.

groupadd -r minio-user
useradd -M -r -g minio-user minio-user
passwd minio-user
chown minio-user:minio-user /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4


# Optional: grant minio-user passwordless sudo by adding this line via visudo
visudo
minio-user ALL=(ALL) NOPASSWD: ALL

The specified drive paths are provided as an example. Change them to match the path to those drives intended for use by MinIO.

Alternatively, change the User and Group values to another user and group on the system host with the necessary access and permissions.

MinIO publishes additional startup script examples on github.com/minio/minio-service.

3) Create the Environment Variable File

Create an environment variable file at /etc/default/minio. For Windows hosts, specify a Windows-style path similar to C:\minio\config. The MinIO server uses this file as the source of all environment variables.

The following example provides a starting environment file:

# MINIO_ROOT_USER and MINIO_ROOT_PASSWORD sets the root account for the MinIO server.
# This user has unrestricted permissions to perform S3 and administrative API operations on any resource in the deployment.
# Omit to use the default values 'minioadmin:minioadmin'.
# MinIO recommends setting non-default values as a best practice, regardless of environment

MINIO_ROOT_USER=weipeng
MINIO_ROOT_PASSWORD=weipeng@minio

# MINIO_VOLUMES sets the storage volume or path to use for the MinIO server.

MINIO_VOLUMES="/mnt/data"


Include any other environment variables as required for your local deployment.
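Any flags accepted by minio server can also be supplied through MINIO_OPTS, which the service file shown earlier already expands in its ExecStart line. For example, to serve the Console on port 9090 without editing the service file, the environment file could additionally contain:

```shell
# MINIO_OPTS passes additional command-line flags to the minio server binary.
MINIO_OPTS="--console-address :9090"
```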

4) Start the MinIO Service

Issue the following command on the local host to start the MinIO SNSD deployment as a service:

su minio-user
sudo systemctl start minio.service

Use the following commands to confirm the service is online and functional:

sudo systemctl status minio.service
journalctl -f -u minio.service

MinIO may log an increased number of non-critical warnings while the server processes connect and synchronize. These warnings are typically transient and should resolve as the deployment comes online.

Changed in version RELEASE.2023-02-09T05-16-53Z: MinIO starts if it detects enough drives to meet the write quorum for the deployment.

If any drives remain offline after starting MinIO, check and cure any issues blocking their functionality before starting production workloads.

The journalctl output should resemble the following:

Status:         1 Online, 0 Offline.
API: http://192.168.2.100:9000  http://127.0.0.1:9000
RootUser: myminioadmin
RootPass: minio-secret-key-change-me
Console: http://192.168.2.100:9090 http://127.0.0.1:9090
RootUser: myminioadmin
RootPass: minio-secret-key-change-me

Command-line: https://min.io/docs/minio/linux/reference/minio-mc.html
   $ mc alias set myminio http://10.0.2.100:9000 myminioadmin minio-secret-key-change-me

Documentation: https://min.io/docs/minio/linux/index.html

The API block lists the network interfaces and port on which clients can access the MinIO S3 API. The Console block lists the network interfaces and port on which clients can access the MinIO Web Console.

5) Connect to the MinIO Service

MinIO Console

You can access the MinIO Console by entering any of the hostnames or IP addresses from the MinIO server Console block in your preferred browser, such as http://localhost:9090.

Log in with the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD configured in the environment file.

Core Operational Concepts

Refer to: https://min.io/docs/minio/linux/operations/concepts.html#id5

What are the components of a MinIO Deployment?

A MinIO deployment consists of a set of storage and compute resources running one or more minio server nodes that together act as a single object storage repository.

A standalone instance of MinIO consists of a single Server Pool with a single minio server node. Standalone instances are best suited for initial development and evaluation.

A MinIO deployment can run directly on a physical device in a bare-metal, non-virtualized infrastructure, or within a virtual machine or container managed by tools such as Docker, Podman, or Kubernetes. MinIO can run locally, on a private cloud, or in any of the many public clouds available on the market.

The specific way you design, architect, and build your system is called the system’s topology.

What system topologies does MinIO support?

MinIO can deploy to three types of topologies:

  1. Single Node Single Drive, one MinIO server with a single drive or folder for data

    For example, testing on a local PC using a folder on the computer’s hard drive.

  2. Single Node Multi Drive, one MinIO server with multiple mounted drives or folders for data

    For example, a single container with two or more mounted volumes.

  3. Multi Node Multi Drive, multiple MinIO servers with multiple mounted drives or volumes for data

    For example, a production deployment using Ansible, Terraform, or manual processes

How does a distributed MinIO deployment work?

A distributed deployment makes use of the resources of more than one physical or virtual machine’s compute and storage resources. In modern situations, this often means running MinIO in a private or public cloud environment, such as with Amazon Web Services, the Google Cloud Platform, Microsoft’s Azure platform, or many others.

How does MinIO manage multiple virtual or physical servers?

While testing MinIO may only involve a single drive on a single computer, most production MinIO deployments use multiple compute and storage devices to create a high availability environment. A server pool is a set of minio server nodes that pool their drives and resources to support object storage write and retrieval requests.

MinIO supports adding one or more server pools to existing MinIO deployments for horizontal expansion. When MinIO has multiple server pools available, an individual object always writes to the same server pool. If one server pool goes down, objects on other pools remain accessible.

The HOSTNAME argument passed to the minio server command represents a Server Pool:

Consider the following example startup command, which creates a single Server Pool with 4 minio server nodes of 4 drives each for a total of 16 drives.

minio server https://minio{1...4}.example.net/mnt/disk{1...4}

             |                    Server Pool                |

Starting server pools in the same minio server startup command enables awareness of all server pool peers.

See minio server for complete syntax and usage.

A cluster refers to an entire MinIO deployment consisting of one or more Server Pools.

Consider the command below that creates a cluster consisting of two Server Pools, each with 4 minio server nodes and 4 drives per node for a total of 32 drives.

minio server https://minio{1...4}.example.net/mnt/disk{1...4} \
             https://minio{5...8}.example.net/mnt/disk{1...4}

Each line of the command above defines one of the two Server Pools.

Within a cluster, MinIO always stores each unique object and all versions of that object on the same Server Pool.

MinIO strongly recommends production clusters consist of a minimum of 4 minio server nodes in a Server Pool for proper high availability and durability guarantees.

Can I change the size of an existing MinIO deployment?

MinIO distributed deployments support expansion and decommissioning as functions to increase or decrease the available storage.

Expansion consists of adding one or more server pools to an existing deployment. Each server pool consists of dedicated nodes and storage that contribute to the overall capacity of the deployment.

See Expand a MinIO deployment for more information.

For deployments which have multiple server pools, you can decommission the older pools and migrate that data to the newer pools in the deployment. Once started, decommissioning cannot be stopped. MinIO intends decommissioning for use with removing older pools with aged hardware, and not as an operation performed regularly within any deployment.

How do I manage one or more MinIO instances or clusters?

There are several options to manage your MinIO deployments and clusters, including the MinIO Console and the mc command-line tools described later in these notes.

How do I manage object distribution across a MinIO deployment?

MinIO optimizes storage of objects across available pools by writing new objects (that is, objects with no existing versions) to the server pool with the most free space relative to the total free space across all available server pools. MinIO does not perform the costly action of rebalancing objects from older pools to newer pools. Instead, new objects typically route to the new pool, as it has the most free space. As that pool fills, new write operations eventually balance out across all pools in the deployment. For more information on write preference calculation logic, see the MinIO documentation.
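As a toy illustration (not MinIO's actual implementation), the write-preference idea can be sketched as picking the pool whose free space is the largest share of the total:

```python
# Toy sketch of write preference: new objects route to the pool with the
# most free space relative to the total free space across all pools.
def pick_pool(free_space):
    """Return the index of the pool a new object would most likely land on."""
    total = sum(free_space)
    # Each pool's share of the total free space acts as its write preference.
    weights = [f / total for f in free_space]
    return max(range(len(free_space)), key=lambda i: weights[i])

# A freshly added pool (index 2) has the most free space, so new writes
# tend to route there until usage evens out across the deployment.
print(pick_pool([100, 150, 900]))  # -> 2
```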

Rebalancing data across all pools after an expansion is an expensive operation that requires scanning the entire deployment and moving objects between pools. This may take a long time to complete depending on the amount of data to move.

Starting with MinIO Client version RELEASE.2022-11-07T23-47-39Z, you can manually initiate a rebalancing operation across all server pools using mc admin rebalance.

Rebalancing does not block ongoing operations and runs in parallel to all other I/O. This can result in reduced performance of regular operations. Consider scheduling rebalancing operations during non-peak periods to avoid impacting production workloads. You can start and stop rebalancing at any time.

How do I upload objects to MinIO?

You can use any S3-compatible SDK to upload objects to a MinIO deployment. Each SDK performs the equivalent of a PUT operation which transmits the object to MinIO for storage.

MinIO also implements support for multipart uploads, where clients can split an object into multiple parts for better throughput and reliability of transmission. MinIO reassembles these parts until it has a completed object, then stores that object at the specified path.
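The multipart idea can be sketched as follows. This is a conceptual illustration only; real clients drive the S3 multipart-upload API rather than splitting bytes locally like this:

```python
# Toy sketch of multipart upload: split an object into fixed-size parts,
# transmit each independently, then reassemble the parts in order.
def split_parts(data: bytes, part_size: int) -> list[bytes]:
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

def reassemble(parts: list[bytes]) -> bytes:
    return b"".join(parts)

obj = b"0123456789" * 100            # a 1000-byte "object"
parts = split_parts(obj, 256)        # four parts: 256 + 256 + 256 + 232 bytes
assert reassemble(parts) == obj      # the server stores the completed object
print(len(parts))  # -> 4
```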

How does MinIO provide availability, redundancy, and reliability?

MinIO Uses Erasure Coding for Data Redundancy and Reliability

MinIO Erasure Coding is a data redundancy and availability feature that allows MinIO deployments with multiple drives to automatically reconstruct objects on-the-fly despite the loss of multiple drives or nodes in the cluster. Erasure Coding provides object-level healing with significantly less overhead than adjacent technologies such as RAID or replication.

MinIO Implements Bit Rot Healing to Protect Data At Rest

Bit rot is the random, silent corruption to data that can happen on any storage device. Bit rot corruption is not prompted by any activity from a user, nor does the system’s operating system alone have awareness of the corruption to notify a user or administrator about a change to the data.

Some common reasons for bit rot include:

  • ageing drives
  • current spikes
  • bugs in drive firmware
  • phantom writes
  • misdirected reads/writes
  • driver errors
  • accidental overwrites

MinIO uses a hashing algorithm to confirm the integrity of an object. If an object becomes corrupted by bit rot, MinIO can automatically heal the object, depending on the system topology and availability.
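The integrity-check idea can be illustrated with a short sketch. MinIO's own implementation uses the HighwayHash algorithm; SHA-256 below is only a stand-in for the concept:

```python
import hashlib

# Illustration of hash-based integrity checking (SHA-256 as a stand-in;
# MinIO internally uses HighwayHash).
def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

stored = b"object contents"
recorded = checksum(stored)           # hash recorded at write time

# A single silently flipped byte changes the hash, exposing the corruption.
corrupted = b"object#contents"
assert checksum(corrupted) != recorded
print(checksum(stored) == recorded)   # -> True
```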

MinIO Distributes Data Across Erasure Sets for High Availability and Resiliency

An erasure set is a group of multiple drives that supports MinIO Erasure Coding. Erasure Coding provides high availability, reliability, and redundancy of data stored on a MinIO deployment.

MinIO divides objects into chunks, called shards, and evenly distributes them among each drive in the Erasure Set. MinIO can continue seamlessly serving read and write requests despite the loss of any single drive. At the highest redundancy levels, MinIO can serve read requests with minimal performance impact despite the loss of up to half (N/2) of the total drives in the deployment.

MinIO calculates the size and number of Erasure Sets in a Server Pool based on the total number of drives in the set and the number of minio servers in the set. See Erasure Sets for more information.
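As a minimal illustration of shard reconstruction, the sketch below uses a single XOR parity shard, which can rebuild any one lost shard. MinIO itself uses Reed-Solomon erasure coding, which tolerates the loss of multiple shards:

```python
import functools

# Minimal illustration: one XOR parity shard can rebuild any single lost
# shard. MinIO actually uses Reed-Solomon coding over larger erasure sets.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data_shards = [b"AAAA", b"BBBB", b"CCCC"]          # the object, split into shards
parity = functools.reduce(xor_bytes, data_shards)  # parity shard written alongside

# Simulate losing shard 1, then rebuild it from the survivors plus parity.
rebuilt = functools.reduce(xor_bytes, [data_shards[0], data_shards[2], parity])
assert rebuilt == data_shards[1]
```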

MinIO Automatically Heals Corrupt or Missing Data On-the-fly

Healing is MinIO’s ability to restore data after some event causes data loss. Data loss can come from bit rot, drive loss, or node loss.

Erasure coding provides continued read and write access if an object has been partially lost.

MinIO Writes Data Protection at the Object Level with Parity

A MinIO deployment with multiple drives divides the available drives into data drives and parity drives. MinIO Erasure Coding adds additional hashing information about the contents of an object to the parity drives when writing an object. MinIO uses the parity information to confirm the integrity of an object and, if necessary, to restore a lost, missing, or corrupted object shard on a given drive or set of drives.

MinIO can tolerate the loss of up to as many drives as there are parity shards in the erasure set while still providing full access to an object.

Deliver Read and Write Functions with Quorum

Quorum is the minimum number of drives that must be available to perform a task. MinIO has one quorum for reading data and a separate quorum for writing data.

Typically, MinIO requires a higher number of available drives to maintain the ability to write objects than what is required to read objects.
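Based on the documented behavior for a single erasure set (read quorum is drives minus parity; write quorum is the same unless parity equals exactly half the drives, in which case one extra drive is required), the arithmetic can be sketched as:

```python
# Sketch of read/write quorum arithmetic for one erasure set.
def quorums(drives: int, parity: int) -> tuple[int, int]:
    read_q = drives - parity
    # At maximum parity (half the drives), writes need one extra drive
    # to avoid two disjoint halves both accepting writes.
    write_q = drives - parity + (1 if parity == drives // 2 else 0)
    return read_q, write_q

print(quorums(16, 4))  # 16-drive set, EC:4      -> read 12, write 12
print(quorums(16, 8))  # maximum parity (EC:8)   -> read 8,  write 9
```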

Policy

Refer to: https://min.io/docs/minio/linux/administration/identity-access-management/policy-based-access-control.html

A policy describes two things:

  • an action
  • a resource

In short, a policy describes which actions may be performed on which resources.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}
Group        Policy                                                  Members
Operations   readwrite on finance bucket, readonly on audit bucket   john.doe, jane.doe
Auditing     readonly on audit bucket                                jen.doe, joe.doe
Admin        admin:*                                                 greg.doe, jen.doe

The built-in policies include:

  • consoleAdmin
  • readonly
  • writeonly
  • readwrite

The consoleAdmin policy is as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "admin:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}

The readwrite policy is as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}

Comparing the two policies, the consoleAdmin policy can perform more actions on more resources.

Besides the built-in policies above, you can also create custom policies.

For permission definitions, refer to: https://min.io/docs/minio/linux/administration/identity-access-management/policy-based-access-control.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-arn-format.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/list_amazons3.html

resource

https://awspolicygen.s3.amazonaws.com/policygen.html

The following common Amazon Resource Name (ARN) format identifies resources in AWS:

arn:partition:service:region:namespace:relative-id

For information about ARNs, see Amazon Resource Names (ARNs) in the AWS General Reference.

For information about resources, see IAM JSON Policy Elements: Resource in the IAM User Guide.

An Amazon S3 ARN excludes the AWS Region and namespace, but includes the following:

  • Partition: aws is a common partition name. If your resources are in the China (Beijing) Region, aws-cn is the partition name.
  • Service: s3.
  • Relative ID: bucket-name or bucket-name/object-key. You can use wildcards.

The ARN format for Amazon S3 resources reduces to the following:

arn:aws:s3:::bucket_name/key_name
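A small sketch (illustrative only) of taking such an ARN apart into its components:

```python
# Parse an S3-style ARN of the form arn:partition:service:region:namespace:relative-id,
# where for S3 the region and namespace fields are empty and the relative ID
# is bucket_name or bucket_name/key_name.
def parse_s3_arn(arn: str) -> dict:
    partition, service, region, namespace, relative_id = arn.split(":")[1:6]
    bucket, _, key = relative_id.partition("/")   # key is "" for bucket-only ARNs
    return {"partition": partition, "service": service,
            "bucket": bucket, "key": key}

print(parse_s3_arn("arn:aws:s3:::czsw-bucket/cloud-graph/a.txt"))
# -> {'partition': 'aws', 'service': 's3', 'bucket': 'czsw-bucket', 'key': 'cloud-graph/a.txt'}
```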

condition

https://min.io/docs/minio/linux/administration/identity-access-management/policy-based-access-control.html

https://min.io/docs/minio/linux/administration/identity-access-management/policy-based-access-control.html#minio-selected-conditional-actions

https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging-and-policies.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_condition.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-policies.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-best-practices.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazon-s3-policy-keys.html

"Condition" : { "{condition-operator}" : { "{condition-key}" : "{condition-value}" }}
"Condition" : { "StringEquals" : { "aws:username" : "johndoe" }}
"Condition" : { "StringEqualsIgnoreCase" : { "aws:username" : "johndoe" }}

example

folder

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::czsw-bucket"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::czsw-bucket"
            ],
			"Condition": {
                "StringEquals": {
                    "s3:prefix": ["cloud-graph/"],
                    "s3:delimiter": ["/"]
                }
            }
        }
    ]
}
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::czsw-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::czsw-bucket"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "cloud-graph/*"
                    ]
                }
            }
        }
    ]
}

For a specific file:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::czsw-bucket"
            ]
        },
		 {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::czsw-bucket"
            ],
			"Condition": {
                "StringEquals": {
                    "s3:prefix": ["cloud-graph/a.txt"],
                    "s3:delimiter": ["/"]
                }
            }
        }
    ]
}

Note that in the following example the Resource includes /*, which must not be omitted.

This is required so that the policy also grants permissions on the objects inside the bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::czsw-bucket/*"
            ]
        }
    ]
}
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": [
				"s3:GetBucketLocation"
			],
			"Resource": [
				"arn:aws:s3:::czsw-bucket/*"
			]
		},
		{
			"Effect": "Allow",
			"Action": [
				"s3:ListBucket"
			],
			"Resource": [
				"arn:aws:s3:::czsw-bucket"
			],
			"Condition": {
				"StringLike": {
                    "s3:prefix": ["cloud-graph/*"]
                }
			}
		},
		{
			"Effect": "Allow",
			"Action": [
				"s3:GetObject",
				"s3:GetObjectVersion"
			],
			"Resource": [
				"arn:aws:s3:::czsw-bucket/*"
			]
		}
	]
}
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": [
				"s3:GetBucketLocation"
			],
			"Resource": [
				"arn:aws:s3:::czsw-bucket/*"
			]
		},
		{
			"Effect": "Allow",
			"Action": [
				"s3:ListBucket"
			],
			"Resource": [
				"arn:aws:s3:::czsw-bucket"
			],
			"Condition": {
				"StringLike": {
                    "s3:prefix": ["cloud-graph/*"]
                }
			}
		},
		{
			"Effect": "Allow",
			"Action": [
				"s3:GetObject",
				"s3:GetObjectVersion"
			],
			"Resource": [
				"arn:aws:s3:::czsw-bucket/cloud-graph/a.txt*"
			]
		}
	]
}
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Principal": {
				"AWS": [
					"*"
				]
			},
			"Action": [
				"s3:ListBucketMultipartUploads",
				"s3:GetBucketLocation",
				"s3:ListBucket"
			],
			"Resource": [
				"arn:aws:s3:::czsw-bucket"
			]
		},
		{
			"Effect": "Allow",
			"Principal": {
				"AWS": [
					"*"
				]
			},
			"Action": [
				"s3:ListMultipartUploadParts",
				"s3:PutObject",
				"s3:AbortMultipartUpload",
				"s3:DeleteObject",
				"s3:GetObject"
			],
			"Resource": [
				"arn:aws:s3:::czsw-bucket/*"
			]
		}
	]
}

generator

https://awspolicygen.s3.amazonaws.com/policygen.html

MinIO Client & MinIO Admin Client

Refer to: https://min.io/docs/minio/linux/reference/minio-mc.html

List Aliases

https://min.io/docs/minio/linux/reference/minio-mc/mc-alias.html

[root@VM-0-14-opencloudos ~]# mc alias list 
gcs  
  URL       : https://storage.googleapis.com
  AccessKey : YOUR-ACCESS-KEY-HERE
  SecretKey : YOUR-SECRET-KEY-HERE
  API       : S3v2
  Path      : dns

local
  URL       : http://127.0.0.1:9000
  AccessKey : minioadmin
  SecretKey : minioadmin
  API       : s3v4
  Path      : auto

play 
  URL       : https://play.min.io
  AccessKey : Q3AM3UQ867SPQQA43P2F
  SecretKey : zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG
  API       : S3v4
  Path      : auto

s3   
  URL       : https://s3.amazonaws.com
  AccessKey : YOUR-ACCESS-KEY-HERE
  SecretKey : YOUR-SECRET-KEY-HERE
  API       : S3v4
  Path      : dns

List Buckets

https://min.io/docs/minio/linux/reference/minio-mc/mc-ls.html

mc ls [--recursive] ALIAS/PATH


[root@VM-0-14-opencloudos ~]# mc ls  local
[2023-02-15 16:08:53 CST]     0B czsw-bucket/
[2023-02-15 16:09:16 CST]     0B hxhb-bucket/



[root@VM-0-14-opencloudos ~]# mc ls  local/czsw-bucket
[2023-02-15 16:34:04 CST]  48KiB STANDARD fbox.postman_collection.json

Create a Bucket

mc mb --with-versioning local/weipeng-bucket
