AWS Migration Considerations Series: Ongoing Migrations Within the Cloud

In our previous article, we discussed the history of pre-cloud migrations and the initial steps to migrating to the modern cloud. However, cloud migrations are an ongoing process, and there are opportunities to optimize workloads even after they’ve been transferred. These ongoing migrations can involve moving between cloud providers or transitioning from legacy systems to more modern architectures within the same cloud environment.


23rd of May, 2024


This article is the third of a series of blog posts showcasing Akkodis’ experience with AWS Cloud Migrations.

 

Migrating Compute within AWS: Classic Virtual Machines

Once you launch an EC2 instance, there is a good chance that the same virtual machine will keep running for years. A single, stand-alone instance may not follow the Well-Architected principles, but you can easily set and forget, which is fine.

  • You will not have to maintain the hypervisor; that is managed for you.
  • You will not have to replace the hardware; that is managed for you.
  • You will not have to replace the disks behind the instance's volumes (EBS); those are managed for you.

But you may want to migrate anyway. And there are two main reasons why.

The first is easy to understand. The underlying compute of an EC2 instance, the CPU and memory provided, is periodically revised. Each wave of new CPU releases typically brings a newly defined instance family.

While performance improvements from the newer CPUs are a given, they are not the primary driver of compute migrations. That driver is the tactic AWS uses to encourage the adoption of newer, more efficient hardware: a big lever called “price.”

Every one to two years, we see a price reduction of roughly 10% in compute between the current generation of instance types and the one that succeeds it. This encourages more customers to migrate, at their leisure, to obtain the advantage.

As the older equipment falls idle, its space in the AWS data center can be reclaimed and redeployed with newer hardware (or not, if the data center is being drained in preparation for retirement).

The second reason is legacy: older instance families are labeled “previous generation,” and while they remain available, they are not guaranteed to remain so forever.

Moving to a newer instance family is often simple. For a stand-alone EC2 instance, stop the instance (do not terminate it); once it is stopped, you can reconfigure its instance type and start it again. If your instances are in an Auto Scaling group, you can update the Launch Template (formerly the Launch Configuration) and initiate a replacement.
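For a stand-alone instance, the stop/modify/start cycle can be driven from the AWS CLI. A minimal sketch, assuming live AWS credentials, a placeholder instance ID, and a same-architecture target type of m6i.large (moving to a different CPU architecture, such as ARM, instead requires a rebuilt operating system):

```shell
# Placeholder instance ID; substitute your own.
INSTANCE_ID=i-0123456789abcdef0

# 1. Stop (do not terminate) the instance.
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

# 2. Reconfigure the instance type while it is stopped.
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
    --instance-type '{"Value": "m6i.large"}'

# 3. Start it again on the newer instance family.
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```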

CPU Disruption: ARM

One consideration we have seen is whether the installed operating system supports the new CPU type. While upgrades from older Intel to newer Intel instance families are generally straightforward, we did see an issue with an older version of Red Hat Linux: it did not have the required drivers in the kernel to operate on the new EC2 instance family. In this case, since older versions of RHEL do not support in-place upgrades, the solution was a redeployment on a newer version of Red Hat.

There is a new-ish contender in this space. For a long time, the only options within the AWS EC2 environment were Intel x86 (32-bit and 64-bit) compatible CPUs, namely those from Intel itself and also from AMD.

They are largely compatible with each other but sit at different price points. The new contender, however, is the ARM-based CPU architecture, historically used in embedded devices and mobile phones but now powering full EC2 instances.

ARM-based instances require a fresh operating system, specifically compiled for the ARM CPU architecture. However, the power consumption (and thus cooling) is much lower, the pricing is much lower, and the performance has grown from underwhelming a decade ago to impressive today.

To take a data point, the m6g.large instance is 1.65 times faster than the previous m5.large, yet on-demand pricing in Sydney drops from US$0.12 to US$0.096 per hour. This is a 20% price decrease and a 33% price/performance improvement.
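As a quick check of the arithmetic on the two quoted hourly prices:

```shell
# Derive the price decrease from the two on-demand hourly prices
# quoted above (Sydney region).
awk 'BEGIN {
    m5  = 0.12;    # m5.large, US$/hour
    m6g = 0.096;   # m6g.large, US$/hour
    printf "%.0f%% cheaper\n", (1 - m6g / m5) * 100
}'
# prints: 20% cheaper
```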

Migrating Block Storage within AWS: EBS Volumes

When it comes to storage, there is a nuance here: your operational disk volumes, as provided by the Elastic Block Store (EBS) service, run from a single availability zone (recall that a Region consists of multiple availability zones, and an availability zone is a cluster of data centers).

For example, the EBS volume you use as a 30 GB C: drive on a Windows instance is provided as a mirrored (RAID-1-like), managed disk. You only ever see the one disk, on which you have your formatted file system.

However, should the actual storage used by EBS to provide you with your 30 GB need to be replaced, then that operation happens transparently without user impact or cost. 

You can happily keep using the same 30 GB EBS Volume for years. You will not be isolated from massive failures, but maintenance should be transparent.

EBS offers several underlying storage implementations, such as mechanical hard disks, general-purpose SSD storage, and performance-optimized storage. In 2017, AWS announced the ability to live-modify EBS volumes, changing both the storage type and the capacity provisioned.

But once again, you may want to migrate your block storage. At AWS re:Invent 2020, a new general-purpose Solid-State Disk (SSD) option, gp3, was announced. It offered the same baseline performance as its predecessor, but with a few new capabilities and a 20% price drop.
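Moving an existing gp2 volume to gp3 is one such live modification. A sketch with the AWS CLI, assuming live credentials and a placeholder volume ID:

```shell
# Placeholder volume ID; substitute your own.
VOLUME_ID=vol-0123456789abcdef0

# Change the volume type from gp2 to gp3 while it remains attached
# and in use; the size could be grown in the same call if desired.
aws ec2 modify-volume --volume-id "$VOLUME_ID" --volume-type gp3

# Track the progress of the modification.
aws ec2 describe-volumes-modifications --volume-ids "$VOLUME_ID"
```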

Migrating Operating Systems within AWS: In-place or Replace

As part of the shared responsibility model of the AWS cloud, the deployed OS is the customer's (or partner's) responsibility to patch and update. There are two major approaches, as alluded to earlier:

  • In-place patching, updates, and major version upgrades
  • EC2 instance replacement

Older versions of RedHat Enterprise Linux and others did not support in-place updates from one major version to another. Other distributions, such as Debian, have supported in-place major version upgrades since the last millennium. Microsoft Windows Server on AWS is updated via an instance replacement approach: fresh server, reinstalled applications.

The advantage of an in-place upgrade is that most of your installed applications will hopefully be unaffected. The instance will maintain the same address; it will just reboot after an update.

The advantage of a replacement strategy is that you restart from a known base image, removing any bit-rot that may have crept into a long-lived instance. As noted above, a lack of operating system support may prevent you from changing instance families, so eventually you will have to factor this activity in.

A key win here is to move to a more DevOps approach: fully script and automate the deployment of your instance fleets via mechanisms such as Auto Scaling groups.
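For fleets already behind an Auto Scaling group, that replacement can be automated with an instance refresh. A sketch with the AWS CLI, assuming live credentials and a hypothetical group name:

```shell
# Hypothetical Auto Scaling group name; substitute your own.
ASG_NAME=my-web-fleet

# Replace instances with fresh ones from the current Launch Template,
# keeping at least 90% of capacity in service throughout.
aws autoscaling start-instance-refresh \
    --auto-scaling-group-name "$ASG_NAME" \
    --preferences '{"MinHealthyPercentage": 90}'
```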

Migrating Object Storage within AWS: S3

This is where the magic lives. The Simple Storage Service (S3), as an object store, takes care of the entire lifecycle of the storage layer, transparently to the customer. Once you have data in S3, you can leave it there and enjoy the durability the service provides on an ongoing basis.

Any improvements to the service, including pricing improvements, generally trickle down automatically. While new storage tiers and other features may come along over time, you can choose to adopt them, or you can just keep using S3 in an ongoing manner.

S3 automatically and periodically re-checksums and validates the content it stores for you. And while you pay per GB for the service (regularly metered), multiple copies are stored in separate data centers. If any copy is detected to fail a checksum, that copy is discarded and replaced from a known-good copy in another data center.

Migrating Other Managed Platform Services within AWS

For any AWS service you adopt, it is worth reviewing what the long-term upgrade roadmap looks like. Without a crystal ball, we can look at how this has played out in the past for those services that have been around long enough.

As an example, take the managed Elasticsearch service: when it launched, and for a period afterwards, adopting a new major version of the service meant customers had to rip and replace their deployment (and manage copying data from the old cluster to the new). Then came in-place version upgrades, making upgrades a much simpler customer experience.

The upgrade path for databases run via the Relational Database Service (RDS) is also worth reviewing: the service natively supports an “auto minor version upgrade” option. While an “auto major version upgrade” attribute is defined in some CloudFormation documentation, as of 2020 it does not actually do that.
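Both the maintenance window and the minor-upgrade opt-in are per-instance settings. A sketch with the AWS CLI, assuming live credentials; the identifier, engine, and versions here are illustrative placeholders:

```shell
# Placeholder DB instance identifier; substitute your own.
DB_ID=my-database

# Opt in to automatic minor version upgrades, applied within the
# configured weekly maintenance window.
aws rds modify-db-instance \
    --db-instance-identifier "$DB_ID" \
    --auto-minor-version-upgrade \
    --preferred-maintenance-window "Sun:14:00-Sun:15:00"

# List the versions the current engine version can be upgraded to.
aws rds describe-db-engine-versions \
    --engine postgres --engine-version 15.4 \
    --query 'DBEngineVersions[].ValidUpgradeTarget[].EngineVersion'
```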

Diving deeper into the auto-minor version upgrade capability, it's worth asking when this applies. While customers can define a maintenance window (day of week and time window) that is least impactful to their business, that does not mean a new minor update release gets applied in the next maintenance window.

Our experience has shown that the trigger for the minor version upgrade to kick in is the deprecation of the in-use version, not the availability of a newer minor version. When this finally triggers, the upgrade path may be to the latest minor version or something in between.

When dealing with relational databases that have read replicas, there are additional complications around version matching between the primary and its replicas.

This level of understanding comes from the experience Akkodis has developed, with a deep understanding of how AWS has historically operated these services. That is not to say they will not change in the future.

Akkodis has been an AWS Consulting Partner since 2013. Learn more about our AWS Practice and services.

By James Bromberger, VP Cloud Computing, Akkodis Australia