Should you use an S3 alternative for object storage? – CloudSavvy IT


Amazon’s Simple Storage Service (S3) provides a very useful interface for storing objects in managed cloud storage, where you don’t have to worry about the underlying hardware. As well as being a service offered by Amazon, its API has become an industry standard, and there are many services that are compatible with it.

What is S3 compatible?

In many cases, moving to another cloud provider means rewriting much of your integration code. But if you’re using S3 as your object storage backend, you’ll be able to move to many other services with minimal disruption.

This is because S3 is effectively an open API standard. AWS’s Simple Storage Service is the original implementation, and it naturally has the best support, but there are other services that offer acceptable performance and stability, often at a lower cost.

Switching to these services is easy: you’ll need to change the endpoint URL your application points to, and after some minor changes to key handling, you’re usually good to go. You’ll also have to transfer your data, which you can do with a tool like rclone; it isn’t a difficult process, though in some cases it can be a long one.
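As a sketch of what the transfer step can look like with rclone: you define one remote per provider in rclone’s config file and then sync between them. The remote names, region, endpoint, and keys below are all placeholders.

```ini
# ~/.config/rclone/rclone.conf (placeholder names and keys)

[aws]
type = s3
provider = AWS
access_key_id = AWS_KEY
secret_access_key = AWS_SECRET
region = us-east-1

[spaces]
type = s3
provider = DigitalOcean
access_key_id = SPACES_KEY
secret_access_key = SPACES_SECRET
endpoint = nyc3.digitaloceanspaces.com
```

With that in place, `rclone sync aws:my-bucket spaces:my-bucket` copies everything over; both remotes speak the same S3 API, which is the whole point.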

It’s no secret that AWS is expensive, and S3 is no different. While simply storing files is fairly cheap, serving those files to users or running read/write-heavy workloads is not: the biggest costs are usually AWS’s data transfer charges and S3’s per-request charges.

Seeing your bill broken down in Cost Explorer this way, you may be tempted to consider a third-party service that will be cheaper on data transfer charges for your workload.

The two main competitors to AWS S3 are from Google and Microsoft. Google has the uncreatively named “Cloud Storage,” and Microsoft has Azure Blob Storage. Google’s storage is compatible with S3, and it’s relatively easy to move to. Azure, on the other hand, is not S3 compatible, although there are tools like S3Proxy that can bridge the two.

However, the storage services from all three major cloud providers will charge you high fees for data transfer. They are designed for enterprise customers, and if you are a small business trying to reduce your costs, you should look elsewhere. There are alternative cloud providers, such as DigitalOcean and Vultr, that offer simpler pricing models for the same standard service.

DigitalOcean

DigitalOcean is a cloud provider designed around simplicity. While it doesn’t offer as many features as major providers like AWS, it generally does the services it offers well. One of those services is object storage, where buckets are called Spaces, and if you want to move away from AWS, we recommend it.

Spaces pricing is very simple. The base rate is $5 per month, which includes 250 GB of storage as well as a whole 1 TB of outbound data transfer. That’s a decent deal on its own, and it doesn’t end there.

Additional storage is $0.02 per GB, which is fairly standard compared to S3 (unless you plan on using cheaper archival storage), and additional data transfer is $0.01 per GB, which is about 90% cheaper than AWS’s prices.
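To make the comparison concrete, the quoted rates can be turned into a quick back-of-the-envelope estimate. This helper is our own; it just restates the numbers above:

```python
def spaces_monthly_cost(storage_gb: float, egress_gb: float) -> float:
    """Estimate a monthly DigitalOcean Spaces bill from the rates quoted above."""
    base = 5.00                                      # $5/month base rate
    extra_storage = max(storage_gb - 250, 0) * 0.02  # beyond the included 250 GB
    extra_egress = max(egress_gb - 1024, 0) * 0.01   # beyond the included 1 TB
    return base + extra_storage + extra_egress

# 500 GB stored, 2 TB served: $5 + 250 * $0.02 + 1024 * $0.01 = $20.24
print(round(spaces_monthly_cost(500, 2048), 2))
```

For comparison, serving that same 2 TB out of AWS at roughly $0.09/GB would cost around $180 in egress alone.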

Of course, it does come with some limitations, though luckily there aren’t too many strings attached to this great deal. The main ones are rate limits:

  • 750 requests per second, per IP address, across all of your Spaces.
  • 150 combined operations per second on any one Space, not including GET requests.
  • 240 total operations per second, including GET requests.
  • 5 PUT or COPY requests per 5 minutes to any individual object in a Space.

While rate limits are never fun, these are wide enough that you probably won’t hit them. If you’re getting close, you can reduce their impact by spreading the load across multiple Spaces. If you’re not sure, you can enable bucket metrics in S3 to check your current usage before migrating.
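If you do brush up against the limits, the usual client-side fix is to retry throttled requests with exponential backoff. A minimal sketch, assuming your S3 client surfaces a 503 “Slow Down” response as an exception (the `RateLimited` class here is a stand-in for whatever your client actually raises):

```python
import random
import time

class RateLimited(Exception):
    """Raised by a request callable when the service returns HTTP 503."""

def with_backoff(request, max_attempts=5, base_delay=0.5):
    """Call `request()` until it succeeds, backing off exponentially on 503s."""
    for attempt in range(max_attempts):
        try:
            return request()
        except RateLimited:
            if attempt == max_attempts - 1:
                raise  # out of attempts, let the caller deal with it
            # Exponential backoff plus jitter, so retry bursts spread out
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

Spreading retries out like this keeps a burst of throttled clients from hammering the same Space in lockstep.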

Related: How to enable and view request metrics for an AWS S3 bucket in CloudWatch

Also, Spaces with more than 3 million objects, or more than 1.5 million object versions, may require “planned maintenance windows” to ensure consistent performance. However, I personally have a bucket with over 2 million versioned objects that hasn’t experienced any significant downtime in 6 months, so this may not be a regular occurrence.

One of the major drawbacks of Spaces is the interface, compared to S3’s. Spaces is simple, and if you just want to upload your website content or store some basic files, the web interface will let you upload, download, and edit most settings. However, if you’re storing a lot of files or need advanced configuration, it quickly falls short, and you’ll have to work with Spaces primarily through the S3 API.

For example, Spaces doesn’t even have a web editor for versioning or lifecycle configuration; versioning keeps older versions of objects as backups in the event of accidental deletion. This also means there’s no way to view or delete versioned objects without listing the versions through the API and accessing them directly by version ID.
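In practice that means scripting against the ListObjectVersions response yourself. As an illustration, here’s a small helper that picks out the noncurrent versions from such a listing; the records are plain dicts shaped like the `Versions` entries S3 returns (`Key`, `VersionId`, `IsLatest`), and the function name is our own:

```python
def noncurrent_versions(versions):
    """Return (key, version_id) pairs for every noncurrent object version.

    These are the entries you would pass to DeleteObject (with its
    VersionId parameter) to purge superseded versions by hand.
    """
    return [(v["Key"], v["VersionId"]) for v in versions if not v["IsLatest"]]

listing = [  # shaped like ListObjectVersions "Versions" entries
    {"Key": "a.txt", "VersionId": "v2", "IsLatest": True},
    {"Key": "a.txt", "VersionId": "v1", "IsLatest": False},
]
print(noncurrent_versions(listing))  # [('a.txt', 'v1')]
```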

The documentation isn’t great either. To turn on versioning, for example, we had to consult S3’s own documentation for the PutBucketVersioning endpoint, which is overlooked in DO’s docs but thankfully still supported on Spaces. You’ll need to enable it through this endpoint:

PUT /?versioning

<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Status>Enabled</Status>
</VersioningConfiguration>

And then enable version expiration:

PUT /?lifecycle

<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>expire-old-versions</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <NoncurrentVersionExpiration>
      <NoncurrentDays>30</NoncurrentDays>
    </NoncurrentVersionExpiration>
  </Rule>
</LifecycleConfiguration>

(The rule ID, empty prefix, and 30-day window here are examples; adjust them for your bucket.)

API keys are also very basic. You won’t get granular control over individual buckets or objects, or anything like what AWS IAM offers. This can be a problem if you plan on giving keys to a third party.

Overall, the DigitalOcean experience is certainly not as polished as AWS’s S3. But if you’re fine with the limits, and don’t mind using the API for some tasks, it can definitely save you a ton of money on bandwidth costs.

Self-hosting

Since S3 is an open standard, it’s also something you can host yourself, which will be a great fit for a lot of people. There are many tools for this, but one of the best is MinIO, which runs on Kubernetes (K8s).
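If you just want to kick the tires before setting up a full Kubernetes deployment, MinIO can also run as a single container. A minimal Docker Compose sketch, with placeholder credentials and data path:

```yaml
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: admin            # placeholder credentials:
      MINIO_ROOT_PASSWORD: changeme123  # use real secrets in production
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # web console
    volumes:
      - ./minio-data:/data
```

Once it’s up, any S3-compatible client pointed at http://localhost:9000 with those credentials should work.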

Being on K8s means you can run it on any public cloud, including on managed K8s services like AWS EKS. However, you would still be subject to bandwidth costs in that case.

Where MinIO really shines is on dedicated servers, hybrid cloud setups, and on-premises datacenters. If you’re paying for a dedicated network connection to a server, you won’t be nickel-and-dimed for the traffic you push over it. This can make self-hosted storage much cheaper if you plan on serving a lot of data to end users.

Also, running on your own hardware isn’t subject to the same limitations as services like S3. You can host MinIO on high-performance servers and get better speeds on read/write-heavy workloads (and you won’t be charged per request). Of course, you’ll have to pay for the hardware to get that performance.

Where it falls flat is redundancy. Because S3 stores your data in many different places, it’s basically guaranteed to always be up and to never lose your data, barring a particularly large meteor. MinIO, on the other hand, can be hosted on a single server or as a distributed deployment. If you’re hosting on a single server, you’ll be out of luck if that instance goes down. It’s the cheapest option, but we highly recommend running multiple servers in a cluster, or at the very least backing up to some kind of external storage such as S3.

MinIO is free to self-host under the GNU AGPL license, but you won’t receive any support. Commercial licenses start at $1,000 per month and provide 24/7 support, as well as a “panic button” that puts their data recovery engineers on call to help you fix serious server failures.
