In this tutorial, I will explain how to set up a MinIO server for storage architecture usage. For anyone who does not already know what MinIO is: it is a high-performance, distributed object storage system. It is software-defined, runs on industry-standard hardware, and is 100% open source. It is purpose-built to serve objects in a single-layer architecture that achieves all of the necessary functionality without compromise. The result is a cloud-native object server that is simultaneously scalable and lightweight.

As the world of cloud engineering matures, a question comes to mind: why do we need MinIO in the first place?

Consider that when you serve your solution in the cloud, you may end up using a managed storage service such as AWS S3, Azure Blob Storage, or Alibaba Cloud OSS. The same concept applies if your solution remains on-premises: MinIO serves as an alternative that provides the same storage architecture as those cloud storage services.


1. How does it work

In simple terms, MinIO comes in two parts: the client portion and the server portion, which also includes a dashboard via a web UI or file browser. Both the client and the server side are relatively easy to set up, and if you're familiar with the CLI (Command Line Interface), you will find it easy to grasp.

Yet when we design for production, everything must be distributed, which means the solution must perform well at large scale, grow with demand, and be ready for high availability. With this in mind, MinIO has its own concept for this, called distributed erasure code.

[Diagram: MinIO distributed erasure code]

This concept is a reliable approach to shard data across multiple drives and fetch it back even when some of the drives are unavailable. Using this concept, you can lose up to half of the drives and still recover your data.
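As a back-of-envelope sketch of that guarantee (the half-data, half-parity split is MinIO's default erasure layout; exact read and write quorums vary by release, so treat these numbers as illustrative, not a durability promise):

```shell
#!/bin/sh
# This tutorial's layout: 2 servers x 4 drives = 8 drives in one erasure set.
SERVERS=2
DRIVES_PER_SERVER=4
TOTAL=$((SERVERS * DRIVES_PER_SERVER))

# With the default erasure coding, half of the shards are parity, so
# objects remain readable with up to TOTAL/2 drives lost.
TOLERATED=$((TOTAL / 2))

echo "drives in erasure set:    $TOTAL"
echo "drive losses survivable:  $TOLERATED"
```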

For this tutorial, I'll show you how to install and configure MinIO servers as a distributed erasure code setup. After that, we'll take a quick look at the client side and how to use the MinIO service as an end user.

2. Installation Phase

For the installation phase, I'll configure two servers as a MinIO cluster to prepare for the distributed erasure code configuration.

Now we'll list the 4 disk drives that we'll partition as block devices for MinIO usage. Since our architecture uses multiple servers, the minimum number of drives per server is 2; if you were using a single server, the minimum would be 1 drive. Detailed requirements for the erasure code design can be seen here.

Below are the steps:

[root@server1 ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a4fd8

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   209715199   103808000   8e  Linux LVM

Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-root: 104.1 GB, 104144568320 bytes, 203407360 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

As you can see above, there are 4 additional drives attached to our server, 8 GB each.
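If the full fdisk listing is hard to scan, a quicker way to eyeball the attached block devices is to read /proc/partitions (just a convenience; the entries you see will reflect your own environment, not the one above):

```shell
# Print device name and size (in 1K blocks) for every block device the
# kernel knows about; the first two lines are headers, so skip them.
awk 'NR > 2 && $4 != "" {print $4, $3}' /proc/partitions
```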

Next, we will create a partition on each drive, then create a dedicated directory and mount it on each partition we've created. Below are the steps:

 

[root@server1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x4217c4d9.

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4217c4d9

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-16777215, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215):
Using default value 16777215
Partition 1 of type Linux and of size 8 GiB is set

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4217c4d9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    16777215     8387584   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@server1 ~]# ls /dev/sdb*
/dev/sdb  /dev/sdb1
[root@server1 ~]# mkfs.xfs -f /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=524224 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2096896, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server1 ~]# mkdir -p /opt/drive1
[root@server1 ~]# mkdir -p /opt/drive2
[root@server1 ~]# mkdir -p /opt/drive3
[root@server1 ~]# mkdir -p /opt/drive4
[root@server1 ~]# mount /dev/sdb1 /opt/drive1
[root@server1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   97G  3.8G   94G   4% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.6M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               1014M  145M  870M  15% /boot
tmpfs                    379M     0  379M   0% /run/user/0
/dev/sdb1                8.0G   33M  8.0G   1% /opt/drive1

Once done, repeat the same process to create a partition on each of the remaining drives, then mount them on the directories we've created. As a final result, you should see output like below:

[root@server1 ~]# mount /dev/sdb1 /opt/drive1
[root@server1 ~]# mount /dev/sdc1 /opt/drive2
[root@server1 ~]# mount /dev/sdd1 /opt/drive3
[root@server1 ~]# mount /dev/sde1 /opt/drive4
[root@server1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   97G  3.8G   94G   4% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.6M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               1014M  145M  870M  15% /boot
tmpfs                    379M     0  379M   0% /run/user/0
/dev/sdb1                8.0G   33M  8.0G   1% /opt/drive1
/dev/sdc1                8.0G   33M  8.0G   1% /opt/drive2
/dev/sdd1                8.0G   33M  8.0G   1% /opt/drive3
/dev/sde1                8.0G   33M  8.0G   1% /opt/drive4

Alright, now that the drive prerequisites are done for server 1, repeat the same configuration on server 2 as above.
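Walking through fdisk interactively for every drive on both servers gets tedious. Below is a non-interactive sketch of the same steps (the device names /dev/sdb..sde and mount points /opt/drive1..4 are this tutorial's layout; the script is destructive, so it defaults to a dry run that only prints the commands it would execute):

```shell
#!/bin/sh
# Partition, format and mount /dev/sdb..sde onto /opt/drive1..4.
# DRY_RUN=1 (the default) prints each command instead of executing it;
# set DRY_RUN=0 as root only after double-checking the device list.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

i=1
for dev in sdb sdc sdd sde; do
    # One whole-disk primary partition: the same answers we gave fdisk
    # interactively (n, p, 1, default start, default end, w).
    run sh -c "printf 'n\np\n1\n\n\nw\n' | fdisk /dev/$dev"
    run mkfs.xfs -f "/dev/${dev}1"
    run mkdir -p "/opt/drive$i"
    run mount "/dev/${dev}1" "/opt/drive$i"
    i=$((i + 1))
done
```

To make the mounts survive a reboot you would also add an entry per partition to /etc/fstab, which the interactive walkthrough above skips as well.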

3. Configuration Phase

Now that both server configurations are done, let's continue and install the MinIO service. First, download the MinIO package as shown below:

[root@server1 ~]# wget https://dl.min.io/server/minio/release/linux-amd64/minio && chmod +x minio
--2019-09-29 22:23:57--  https://dl.min.io/server/minio/release/linux-amd64/minio
Resolving dl.min.io (dl.min.io)... 178.128.69.202
Connecting to dl.min.io (dl.min.io)|178.128.69.202|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 43831296 (42M) [application/octet-stream]
Saving to: ‘minio’
 3% [=>                               ] 1,335,296    106KB/s  eta 6m 33s

Now repeat the same as above on server 2.

Once everything is done, let's start the MinIO configuration. We will define MINIO_ACCESS_KEY and MINIO_SECRET_KEY as the authentication credentials. The configuration is as below:

[root@server1 ~]# export MINIO_ACCESS_KEY=shahril && export MINIO_SECRET_KEY=shahril123
[root@server1 ~]# ./minio server http://10.124.12.{141..142}:9000/opt/drive{1..4}
Waiting for a minimum of 4 disks to come online (elapsed 0s)
Waiting for a minimum of 4 disks to come online (elapsed 2s)
Waiting for a minimum of 4 disks to come online (elapsed 3s)
Waiting for a minimum of 4 disks to come online (elapsed 3s)
Waiting for all other servers to be online to format the disks.

Status:         8 Online, 0 Offline.
Endpoint:       http://10.124.12.141:9000  http://10.124.12.142:9000
AccessKey:      shahril
SecretKey:      shahril123

Browser Access:
   http://10.124.12.141:9000  http://10.124.12.142:9000

Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://10.124.12.141:9000 shahril shahril123

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide

Now that the configuration is done on server 1, repeat the same steps on server 2.
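Note that running ./minio in a foreground shell stops as soon as the session ends. A sketch of a systemd unit that keeps it running on each server is below (the binary path, credentials, and addresses are the ones assumed in this tutorial, not official defaults; MinIO's documentation ships a reference unit file if you need a complete one). The endpoint ranges use MinIO's own `{x...y}` ellipsis notation because systemd does not perform shell brace expansion:

```shell
# Write a minimal unit file in the current directory (sketch only;
# adjust ExecStart's binary path and volume list to your layout).
cat > minio.service <<'EOF'
[Unit]
Description=MinIO object storage
After=network-online.target
Wants=network-online.target

[Service]
Environment=MINIO_ACCESS_KEY=shahril
Environment=MINIO_SECRET_KEY=shahril123
ExecStart=/root/minio server http://10.124.12.{141...142}:9000/opt/drive{1...4}
Restart=always

[Install]
WantedBy=multi-user.target
EOF

# On each server, install and start it:
#   cp minio.service /etc/systemd/system/
#   systemctl daemon-reload && systemctl enable --now minio
```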

Once everything is done, we can proceed to test the result.

4. Testing Phase

Now that everything is done, let's look at the usability of the MinIO services. As shown in the configuration above, we can access its dashboard via a web browser. For our example, let's log in to http://10.124.12.141:9000 with the access key shahril and the secret key shahril123 as configured.

The result will be shown as below:

[Screenshot: MinIO dashboard login page]

Once done, it will redirect us to the bucket dashboard. Now let’s create our first bucket.

Click on the folder icon with the plus button and name our first bucket mylove. An example is shown below:

[Screenshots: creating the mylove bucket]

Once done, you will notice that a new bucket is created and shown on the left panel as below screenshot.

[Screenshot: the new bucket listed in the left panel]

Next, let's add a file from your local machine to the bucket.

[Screenshot: uploading a file to the bucket]

You will notice that the new file was successfully uploaded into the bucket, as shown below.

[Screenshot: the uploaded file inside the bucket]

To ensure that the distributed concept is implemented properly, let's make a simple test by accessing the MinIO dashboard via the other server. The other server's URL is http://10.124.12.142:9000.

[Screenshot: the same bucket and file on the second server's dashboard]

As expected, the bucket and the files we've inserted also exist at the other server's URL, as shown above.

Now, let's make another test. This time we will use another workstation to access our MinIO server using the client console, called mc.

From the client side, we will create a file and upload it into the existing bucket.

Then, as a final result, we expect the dashboard to show the new file uploaded from the client side.

First, open the client workstation and download the minio client package. An example is shown below:

[root@client ~]# wget https://dl.min.io/client/mc/release/linux-amd64/mc
--2019-09-30 11:47:38--  https://dl.min.io/client/mc/release/linux-amd64/mc
Resolving dl.min.io (dl.min.io)... 178.128.69.202
Connecting to dl.min.io (dl.min.io)|178.128.69.202|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16592896 (16M) [application/octet-stream]
Saving to: ‘mc’
100%[==============================================================================>] 16,592,896   741KB/s   in 1m 59s
2019-09-30 11:49:37 (137 KB/s) - ‘mc’ saved [16592896/16592896]
[root@client ~]# chmod +x mc

Then, configure the client side to access the dedicated bucket using the access key and secret created earlier. Example as below:

[root@client ~]# ./mc config host add myminio http://10.124.12.142:9000 shahril shahril123
mc: Configuration written to `/root/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/root/.mc/share`.
mc: Initialized share uploads `/root/.mc/share/uploads.json` file.
mc: Initialized share downloads `/root/.mc/share/downloads.json` file.
Added `myminio` successfully.

Once configured, you should be able to see the contents of the existing bucket. Example as below:

[root@client ~]# ./mc ls myminio
[2019-09-30 11:16:25 +08] 0B mylove/
[root@client ~]# ./mc ls myminio/mylove/
[2019-09-30 11:16:25 +08] 55KiB myself.jpg

Now, create or upload a file from the client side into the bucket. Example as below:

[root@client ~]# ./mc cp new_file.txt myminio/mylove
new_file.txt: 38 B / 38 B  100.00%  1.02 KiB/s  0s
[root@client ~]# ./mc ls myminio/mylove/
[2019-09-30 11:16:25 +08] 55KiB myself.jpg
[2019-09-30 11:58:16 +08] 38B new_file.txt

Once done, as expected, when you refresh the dashboard via any of the server URLs you should see the new file shown there, as below.

[Screenshot: the new file visible in the dashboard]

You will see the full link to the object when you click the share icon on the right side, as below. This is the unique link for each object inside the bucket, which you can use on the application side via curl or an API.

[Screenshots: the object's share link dialog]

Thumbs up! We have now successfully set up and configured a self-hosted storage service on-premises using MinIO. For more in-depth detail, you can check its documentation here.