S3 replication is an Amazon S3 facility for replicating data from one S3 bucket to another, within the same AWS region or across regions. When replication is enabled on a bucket, objects uploaded to it are automatically copied to the destination bucket. AWS provides several features for S3 replication, including the following:

  • Cross Region Replication
  • Cross Account Replication
  • S3 Replication Time Control
  • Multi Destination Replication
  • Two-way Replication
  • Replication metrics and notifications

Cross Region Replication

In cross region replication, the source and destination S3 buckets are in different AWS regions, so objects are replicated across regions.

Cross Account Replication

In cross account replication, the source and destination S3 buckets belong to different AWS accounts. Data is replicated from a bucket in one AWS account to a bucket in another account.

S3 Replication Time Control

S3 Replication Time Control (S3 RTC) is a feature of S3 replication designed to replicate 99.99% of objects within 15 minutes of upload, backed by a service level agreement, even for workloads that replicate billions of objects.

Multi Destination Replication

To replicate data from one source S3 bucket to multiple destination buckets, AWS S3 provides multi destination replication. This capability can be used to replicate data into multiple buckets within the same region or across regions.

Two-way Replication

In two-way replication, AWS S3 replicates data from the source bucket to the destination bucket and vice versa. Because replication runs in both directions, this can be used to maintain a shared dataset across regions.

Replication Metrics and Notifications

S3 replication can publish metrics and notifications about data replication. When these are enabled, you can check the replication progress minute by minute in the console.

In this blog, we will discuss how to enable replication on AWS S3 buckets to replicate objects across different S3 buckets.

Creating S3 Replication Rules on AWS S3

First of all, we need to create two buckets in the AWS S3 console, one as the source and the other as the destination. To enable S3 replication, versioning must be enabled on both buckets. Visit the following link to learn how to configure S3 bucket versioning.

https://linuxhint.com/configure-aws-s3-bucket-versioning/
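Versioning can also be enabled outside the console. The following Python sketch (using boto3, the AWS SDK for Python) shows the idea; the bucket names are hypothetical placeholders, and the actual API call is kept inside a function because it requires real buckets and AWS credentials:

```python
import json

# Hypothetical bucket names for illustration; S3 bucket names must be globally unique.
SOURCE_BUCKET = "demo-replication-source"       # created in us-east-1
DEST_BUCKET = "demo-replication-destination"    # created in ap-southeast-2

# Both buckets must be in this versioning state before a replication rule can be saved.
VERSIONING_CONFIG = {"Status": "Enabled"}

def enable_versioning(bucket_name, region):
    """Enable versioning on one bucket (requires AWS credentials)."""
    import boto3  # AWS SDK for Python
    s3 = boto3.client("s3", region_name=region)
    s3.put_bucket_versioning(
        Bucket=bucket_name,
        VersioningConfiguration=VERSIONING_CONFIG,
    )

# enable_versioning(SOURCE_BUCKET, "us-east-1")
# enable_versioning(DEST_BUCKET, "ap-southeast-2")
print(json.dumps(VERSIONING_CONFIG))
```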

For this demo, we have created two S3 buckets in different AWS regions, as shown in the following image. Both buckets have versioning enabled.

<img alt="Source and destination S3 buckets in the S3 console" src="https://kirelos.com/wp-content/uploads/2022/02/echo/1-12.jpg" width="737" height="149">

The source S3 bucket is in the us-east-1 (N. Virginia) region and the destination S3 bucket is in the ap-southeast-2 (Sydney) region.

After creating the buckets, open the source bucket by clicking on it and go to its Management tab.

<img alt="Management tab of the source S3 bucket" src="https://kirelos.com/wp-content/uploads/2022/02/echo/2-12.jpg" width="714" height="260">

In the Management tab, scroll down to the Replication rules section and click on the Create replication rule button.

<img alt="Create replication rule button in the Replication rules section" src="https://kirelos.com/wp-content/uploads/2022/02/echo/3-11.jpg" width="879" height="302">

It will open a new page asking for the details of the replication rule. Enter a name for the replication rule and select Enabled to activate it.

<img alt="Replication rule name and status settings" src="https://kirelos.com/wp-content/uploads/2022/02/echo/4-10.jpg" width="821" height="398">

Scroll down to the source bucket configuration, which asks whether the rule should apply to all objects in the bucket or only to a specific subset. For this demo, we will apply the replication rule to all objects in the bucket.

<img alt="Source bucket rule scope settings" src="https://kirelos.com/wp-content/uploads/2022/02/echo/5-10.jpg" width="835" height="311">

For the destination bucket configuration, it asks for the bucket to which the source bucket will replicate its objects. The destination bucket may be in the same AWS account or in a different one. For this demo, we will select a destination bucket in the same account but in a different region.

<img alt="Destination bucket settings" src="https://kirelos.com/wp-content/uploads/2022/02/echo/6-10.jpg" width="791" height="366">

After selecting the source and destination S3 buckets, it is time to attach an IAM role that allows S3 to replicate data from the source bucket to the destination bucket. You can either select an existing role or create a new one. For this demo, we will create a new role.

<img alt="IAM role settings for replication" src="https://kirelos.com/wp-content/uploads/2022/02/echo/7-8.jpg" width="812" height="230">
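If you prefer to create the role yourself rather than let the console do it, the role needs a trust policy that lets the S3 service assume it, plus permissions to read from the source bucket and write replicas to the destination. A sketch of the two policy documents, using hypothetical bucket ARNs:

```python
import json

# Hypothetical bucket ARNs; replace with your own bucket names.
SOURCE_ARN = "arn:aws:s3:::demo-replication-source"
DEST_ARN = "arn:aws:s3:::demo-replication-destination"

# Trust policy: allows the S3 service to assume the role on your behalf.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: read object versions from the source bucket and
# replicate objects, deletes, and tags into the destination bucket.
PERMISSIONS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetReplicationConfiguration", "s3:ListBucket"],
            "Resource": SOURCE_ARN,
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTagging",
            ],
            "Resource": SOURCE_ARN + "/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ReplicateObject", "s3:ReplicateDelete", "s3:ReplicateTags"],
            "Resource": DEST_ARN + "/*",
        },
    ],
}

print(json.dumps(PERMISSIONS_POLICY, indent=2))
```

These documents can be pasted into the IAM console or passed to `iam create-role` / `put-role-policy`.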

After the IAM role configuration, it asks whether you want to enable additional replication features: Replication Time Control, replication metrics and notifications, delete marker replication, and replica modification sync.

<img alt="Additional replication options" src="https://kirelos.com/wp-content/uploads/2022/02/echo/8-6.jpg" width="799" height="405">

Now leave all these options at their defaults and click the Save button at the bottom of the page to create the replication rule. You can see the newly created replication rule in the Management tab.

<img alt="Newly created replication rule in the Management tab" src="https://kirelos.com/wp-content/uploads/2022/02/echo/9-5.jpg" width="773" height="265">
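The same rule can be expressed programmatically with the `put_bucket_replication` API. A sketch of a configuration equivalent to the console choices above (bucket, role, and rule names are hypothetical; the API call is wrapped in a function because it needs real resources and credentials):

```python
import json

# Hypothetical ARNs; the role is the one created in the previous step.
DEST_BUCKET_ARN = "arn:aws:s3:::demo-replication-destination"
ROLE_ARN = "arn:aws:iam::123456789012:role/demo-s3-replication-role"

# Equivalent of the console rule: apply to all objects, replicate to the
# destination bucket, and leave delete marker replication disabled
# (the default used in this demo).
REPLICATION_CONFIG = {
    "Role": ROLE_ARN,
    "Rules": [{
        "ID": "demo-replication-rule",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {},  # an empty filter applies the rule to all objects
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {"Bucket": DEST_BUCKET_ARN},
    }],
}

def apply_rule(source_bucket):
    """Attach the replication rule to the source bucket (needs credentials)."""
    import boto3  # AWS SDK for Python
    s3 = boto3.client("s3")
    s3.put_bucket_replication(
        Bucket=source_bucket,
        ReplicationConfiguration=REPLICATION_CONFIG,
    )

print(json.dumps(REPLICATION_CONFIG, indent=2))
```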

After creating the rule, go to the source S3 bucket and upload some data to it.

<img alt="Uploading data to the source bucket" src="https://kirelos.com/wp-content/uploads/2022/02/echo/10-5.jpg" width="843" height="392">

After uploading data to the source bucket, go to the destination bucket and check whether the data has been replicated.

<img alt="Replicated objects in the destination bucket" src="https://kirelos.com/wp-content/uploads/2022/02/echo/11-6.jpg" width="599" height="404">

We can see that the data has been successfully replicated from the source bucket to the destination bucket across regions.
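Rather than eyeballing the destination bucket, you can also inspect an object's `ReplicationStatus` metadata field with a head request. A small boto3 sketch (bucket and key names are hypothetical; the call needs AWS credentials):

```python
# Meanings of the ReplicationStatus values S3 reports on object metadata.
STATUS_MEANING = {
    "PENDING": "queued for replication",
    "COMPLETED": "replicated to the destination",
    "FAILED": "replication failed",
    "REPLICA": "this object is a replica in the destination bucket",
}

def replication_status(bucket, key, region):
    """Return the ReplicationStatus of one object (needs AWS credentials)."""
    import boto3  # AWS SDK for Python
    s3 = boto3.client("s3", region_name=region)
    head = s3.head_object(Bucket=bucket, Key=key)
    # The field is absent for objects that are not subject to replication.
    return head.get("ReplicationStatus", "NOT_APPLICABLE")

# Example (requires real buckets):
# replication_status("demo-replication-source", "file.txt", "us-east-1")
```

On the source side the status moves from PENDING to COMPLETED; the copy in the destination bucket reports REPLICA.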

Now check whether deleting a file from the source bucket also deletes it from the destination bucket. Delete the file from the source bucket in the console and go to the destination bucket.

<img alt="Destination bucket after deleting the file from the source" src="https://kirelos.com/wp-content/uploads/2022/02/echo/12-3.jpg" width="599" height="404">

The file in the destination bucket is still available; it has not been deleted. To understand this behavior, click the Show versions toggle in the source bucket, and you will see a delete marker on the deleted file. When a file is deleted from a bucket with versioning enabled, the object is not actually removed; S3 simply places a delete marker on it. Because delete marker replication is disabled in our rule, the marker is not replicated, and the destination bucket keeps its copy.

<img alt="Delete marker shown with the Show versions toggle" src="https://kirelos.com/wp-content/uploads/2022/02/echo/13-3.jpg" width="838" height="269">
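The Show versions view can also be reproduced programmatically: `list_object_versions` returns delete markers separately from regular object versions. A boto3 sketch (requires AWS credentials):

```python
def show_delete_markers(bucket, region):
    """Print the delete markers in a bucket, the programmatic equivalent
    of the console's Show versions toggle (needs AWS credentials)."""
    import boto3  # AWS SDK for Python
    s3 = boto3.client("s3", region_name=region)
    resp = s3.list_object_versions(Bucket=bucket)
    # Delete markers live in their own list, apart from "Versions".
    for marker in resp.get("DeleteMarkers", []):
        print(marker["Key"], marker["VersionId"], marker["IsLatest"])

# Example (requires a real bucket):
# show_delete_markers("demo-replication-source", "us-east-1")
```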

However, if a file is updated in the source S3 bucket, the change will be replicated to the destination S3 bucket.

Conclusion

AWS S3 provides a number of features to manage data replication across S3 buckets, within or across AWS regions and accounts. If replication metrics are enabled, we can monitor replication progress in the S3 console. In this demo, we discussed how to configure S3 replication between S3 buckets within or across regions.

About the author


Zain Abideen

A DevOps engineer with expertise in provisioning and managing servers on AWS and in software delivery lifecycle (SDLC) automation. I’m from Gujranwala, Pakistan, and currently work as a DevOps engineer.