AWS Aurora Serverless v2

Serverless computing is gradually becoming the most desirable way of deploying applications to cloud platforms. It started as a paradigm that let developers write standard functional code that could be deployed to production without bundling any dependencies; the runtime platform would supply the dependencies, and developers would only run their functional code on top of it. The concept has several benefits, such as high scalability, shorter resource-reservation periods, high availability, and high performance. However, it was initially limited to programs written in specific languages. Two of its most common early use cases were infrastructure maintenance through monitoring and control scripts, and tasks such as minifying images, large files, and code files. Interestingly, the serverless concept is now available for database services as well.

Can a database work in serverless mode?

Software architects have always had this question at the top of their minds while designing a solution architecture. Traditionally, the database has been the persistent component within a solution architecture: it represents the DBMS software configuration and its storage as a single unit. The other solution components are built around the database with the basic assumption that the database will continue to exist at all times.

While talking about a serverless database, it is important to understand that the database itself is not serverless; the database cluster is. In the case of AWS Aurora Serverless v2, this means that cluster nodes of the Aurora database are provisioned in response to rising demand for database computing power. A serverless database is ideal for cloud-based applications that see a sudden spike in demand and transaction load for a short duration. Unlike traditional database clustering, there are no servers dedicated to the database and no storage to monitor and manage regularly.

The Aurora Serverless model provides database resources in units called Aurora Capacity Units (ACUs). Each ACU provides 2 GiB of memory with the associated virtual processing power (vCPU). When the cluster needs to scale up, it adds one or more ACUs to support the load on the database. The ACUs are drawn from a pool of warm capacity that AWS Aurora keeps available and are assigned to the user's cluster. When the transaction load calms down, the additional ACUs are returned to the pool.
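As a rough illustration of how an ACU range is expressed, the sketch below creates a serverless v2 cluster with boto3 and caps its capacity between 0.5 and 16 ACUs. The cluster name, credentials, and region are placeholders, and the ServerlessV2ScalingConfiguration parameter reflects the SDK as it stands today rather than anything specific to the preview.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # placeholder region

# Create an Aurora MySQL cluster whose serverless v2 capacity floats
# between 0.5 and 16 ACUs (each ACU is roughly 2 GiB of memory plus vCPU).
rds.create_db_cluster(
    DBClusterIdentifier="demo-serverless-cluster",   # placeholder name
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me",                  # use Secrets Manager in practice
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,    # floor the cluster never scales below
        "MaxCapacity": 16.0,   # ceiling that bounds cost during spikes
    },
)

# A serverless v2 instance uses the special "db.serverless" instance class,
# which tells Aurora to scale this instance within the ACU range above.
rds.create_db_instance(
    DBInstanceIdentifier="demo-serverless-instance-1",
    DBClusterIdentifier="demo-serverless-cluster",
    DBInstanceClass="db.serverless",
    Engine="aurora-mysql",
)
```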

The AWS Aurora serverless database offers some notable benefits over the traditional model.

  • Cost: Just like serverless computing, the pricing model of the serverless database is driven by usage, i.e. by the number of ACUs added to the cluster. This can save up to 90% of the cost compared with provisioning capacity for peak load in the traditional style.
  • Unlimited connections: Traditional databases are provisioned for a fixed number of peak and long-running connections. With Aurora Serverless, when the client application demands more connections, the cluster adds more ACUs, and tools such as RDS Proxy reroute connections to the newly added resources. This effectively gives the database unlimited connection capacity.
  • Security and privacy: Databases have always been a primary target of attacks. On the AWS cloud, databases are protected behind several layers, especially within the private subnets of a VPC. AWS has extended the same protections to the serverless database.
  • Access API: Aurora Serverless does not offer a persistent public endpoint for accessing the data, because there is no guarantee which nodes, or how many, will be serving the cluster at a given point in time. Instead, it offers the Data API, which allows data manipulation through a REST-based interface; a minimal usage sketch follows this list.
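
The following sketch shows how such a query could look with the boto3 rds-data client. It assumes the Data API is enabled on the cluster and that the database credentials live in AWS Secrets Manager; the ARNs, database name, and table are all placeholders.

```python
import boto3

# Hypothetical ARNs; substitute the cluster and Secrets Manager secret for your account.
CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:demo-serverless-cluster"
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:123456789012:secret:demo-db-secret"

client = boto3.client("rds-data", region_name="us-east-1")

# Run a SQL statement over HTTPS without holding a persistent database connection.
response = client.execute_statement(
    resourceArn=CLUSTER_ARN,
    secretArn=SECRET_ARN,
    database="demo",
    sql="SELECT id, name FROM customers WHERE active = :active",
    parameters=[{"name": "active", "value": {"booleanValue": True}}],
)

for record in response["records"]:
    print(record)
```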

What are the cool features in v2?

With Aurora Serverless v2, AWS has introduced some very cool and usable features for its users.

  • Seamless auto-scaling: Autoscaling is the most important feature of Aurora Serverless. In v1, autoscaling worked through a standard mechanism of doubling the cluster's existing ACU capacity. Although that was better than a statically provisioned size, it was still costly because the capacity provisioned during autoscaling was not necessarily in sync with demand. In v2, AWS has introduced fine-grained scaling: the cluster adds ACUs in exactly the increments needed to support a spike in transaction load. This yields additional cost savings for users.
  • Logging with AWS CloudWatch: Error logging is enabled by default for the entire Aurora Serverless cluster. In v2, all logs are redirected to Amazon CloudWatch, and that is the only place users can view them. Various types of logs are available in CloudWatch, such as general logs, query logs, audit logs, and audit events.
  • Capacity monitoring: Aurora Serverless v2 introduces a new Amazon CloudWatch metric, ServerlessDatabaseCapacity, for monitoring the capacity of the serverless ACUs. It gives a minute-by-minute view of how much capacity the serverless database cluster is using; a query sketch follows this list.
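
As an example of how that metric can be read, the sketch below pulls the last hour of ServerlessDatabaseCapacity values at one-minute granularity with the boto3 CloudWatch client; the cluster identifier and region are placeholders.

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

# Average ACU capacity over the last hour, one data point per minute.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="ServerlessDatabaseCapacity",
    Dimensions=[{"Name": "DBClusterIdentifier", "Value": "demo-serverless-cluster"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=60,                  # one-minute granularity
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), "ACU")
```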

Conclusion

AWS Aurora Serverless v2 is currently available in preview. It is going to create a huge buzz in the market with its instant-scaling concept. Not only does it allow faster addition of resources to the database cluster, with very low latency and near-zero cold starts, it also reduces the cost of the entire service by adding resources based on exact demand. It looks like a great option for common use cases such as a sudden surge in chatbot traffic during the business hours of banks and other financial institutions, or a sudden spike in transactions during flash sales on eCommerce websites. Now it is time to wait for the GA of AWS Aurora Serverless v2 to use these features in a production environment.