
Creating a Code Migration Strategy as part of a Modernization

Posted by Marbenz Antonio on August 30, 2022


When it comes to updating an existing code base, there are two options: start from scratch (essentially building an entirely new application) or put in the effort to refactor your current code and configuration.

Rewriting is a major decision that depends on the enterprise’s resources, including time, skill, and budget. If there is a push to rewrite, a later post in this series on feedback loops and opinionated workflows to help drive product development may be of interest.

Changing an existing code base will be the main topic of this article.

By this point, the modernization effort’s desired future state has been identified. Moving away from an outdated Java version or switching to a less expensive application server are a couple of examples.

If the code is being updated, the following two tactics can be very advantageous regardless of the desired future state:

  • Write tests and improve the testability of the code.
  • Update the code and configuration to be more container friendly.

These can be done in any sequence. It might be simple and non-breaking to make the code more container friendly. On the other hand, the code can be in a condition that makes it difficult to add test coverage (perhaps because it is manually tested). In this situation, getting the code running in a container environment for better feedback loops might make sense before making it more testable.

These strategies’ overarching objectives are:

  1. Ensure that the code can be changed without breaking functionality.
  2. Make the applications more “feedback loop” friendly, which means making them simpler to deploy and to get feedback from (logs and metrics), in order to encourage innovation.

Let’s look at these points more closely. Don’t worry if you are unfamiliar with containers; an explanation is provided below. Hopefully, some of the discussion points here will also be useful if your modernization aim IS to move the code into containers (or will at least reinforce your own project goals).

Refactoring: A method for improving the current state

Refactoring: Improving the Design of Existing Code by Martin Fowler is the standard text on the subject. We advise everyone with an interest in this topic to read this book.

Fowler defines refactoring as a “change made to the internal structure of software to make it easier to understand and cheaper to modify without changing the observable behavior.”

If done correctly, making a code base more testable and container friendly will make the code easier to understand and cheaper to modify. This is highly desirable, since we have decided to modernize, that is, to move from the current state to a better future one.

The drive to make code more testable results in code that is cleaner and more modular, which means code that is simpler to modify and easier for new engineers to understand. Having test coverage means you can make changes and quickly determine, in lower-level environments, whether the changes have broken anything, without the need for expensive manual testing teams.

Making the code container friendly also requires following some recommended practices that make the code easier to read and deploy (see the Twelve-Factor App practices below). Once the code is container friendly, the application can run on a container platform, which opens the door to some intriguing operational options (which we’ll get into later) as well as enhanced feedback loops.

The importance of testing

Modernization entails change, and because the changes are being made to an existing application, verification is necessary to ensure that nothing is being broken by the changes.

One strategy is to put the refactored application into a lower-level environment and hire a large group of human testers to hammer it, reporting back on what works and what doesn’t. This approach is expensive and slow, and you also miss out on the many advantages that come with improving the test coverage of the code.

Testing is organized in a hierarchy, with each level offering advantages. In these articles, we put a lot of emphasis on unit tests: tests written by the developers as part of the code base that run automatically during the build phase.
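
As a minimal sketch of what such a test can look like (JUnit 5 is assumed, and PriceCalculator is a hypothetical class under test):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical class under test: applies a percentage discount to a price.
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscount() {
        PriceCalculator calculator = new PriceCalculator();

        double discounted = calculator.applyDiscount(100.0, 0.10);

        // This runs as part of the build (for example, `mvn test`), so a
        // behavior-breaking change is caught before anything is deployed.
        assertEquals(90.0, discounted, 0.001);
    }
}
```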

Mocking can help deal with entanglement

We strongly believe that any functionality that depends on downstream services should be mocked. If you’re unfamiliar with the word “mocking,” it refers to simulating an external dependency by creating objects that behave like it, so that our classes and functions can be tested in isolation.

Where third-party dependencies exist, they should be mocked in the self-running test cases as much as possible. This gives teams writing tests a lot of flexibility.

We once got into a disagreement with someone who insisted that a database be spun up to run the test suites against while building the application’s artifact. They argued that the artifact shouldn’t be created if there was a problem with the database logic. Putting efficiency aside, it makes sense why this person would be concerned. However, unless the database they were testing against was an exact duplicate of the one in use, their sense of assurance from a passing test could be unfounded. Not to mention, if the database the build tests against experiences problems, the build itself may become unstable. There is a situation where an external service can be incorporated into the test, but before introducing it, let’s talk about mocking.

Even if you mock every aspect of the downstream service’s behavior, simulating both desirable and undesirable outputs, the database can still blow up in production; however, if you have done your mocking and testing correctly, you will at least know your code can handle it.

For this, projects like Mockito are especially useful. Used with a framework like the Spring Framework or Quarkus, they offer simple ways to stub your dependencies so they return:

  • expected results to test the happy path,
  • incorrect data to test the unhappy path, and
  • errors rather than results to test error handling and logging.

You can also spy on specific components to make sure they are called at the appropriate times. Mocking makes your tests portable and efficient. Mocking all the positive and negative responses from a third-party service, though, can be a lot of work.
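
As a rough sketch of how this can look with Mockito and JUnit 5 (the PaymentGateway, PaymentResult, and CheckoutService types are hypothetical stand-ins for a downstream dependency and the class under test), stubbing covers the happy path, the unhappy path, and error handling, and verification checks that the dependency is actually called:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class CheckoutServiceTest {

    // Hypothetical downstream dependency (mocked) and class under test.
    private final PaymentGateway gateway = mock(PaymentGateway.class);
    private final CheckoutService service = new CheckoutService(gateway);

    @Test
    void happyPath_paymentAccepted() {
        when(gateway.charge("order-1", 25.00)).thenReturn(PaymentResult.accepted());

        assertTrue(service.checkout("order-1", 25.00));
        verify(gateway).charge("order-1", 25.00); // confirm the downstream call happened
    }

    @Test
    void unhappyPath_paymentDeclined() {
        when(gateway.charge("order-2", 25.00)).thenReturn(PaymentResult.declined());

        assertFalse(service.checkout("order-2", 25.00));
    }

    @Test
    void errorPath_gatewayUnavailable() {
        when(gateway.charge("order-3", 25.00)).thenThrow(new IllegalStateException("gateway down"));

        // The service is expected to catch the failure and report it cleanly.
        assertFalse(service.checkout("order-3", 25.00));
    }
}
```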

Testcontainers offers a good middle ground between mocking and testing against a real service. The project, which describes itself as a “Java library that enables JUnit tests,” provides lightweight, disposable instances of popular databases, Selenium web browsers, or anything else that can run in a Docker container.

To put it simply, this means a JUnit test can quickly start a container with an instance of a database or cache that is under your control, allowing you to configure it with a good (or bad) state depending on what you want to test.

The Testcontainers team has thought through pretty much everything in terms of container administration and access to and from the service. Testcontainers even includes modules for popular databases, caches, message brokers, and more. The only requirements are that you build your application with Maven or Gradle and that your test framework is JUnit or Spock.
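
As an illustrative sketch (assuming JUnit 5, the Testcontainers PostgreSQL module, and a hypothetical OrderRepository and Order), a test can start a throwaway database and exercise real JDBC code against it:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class OrderRepositoryIT {

    // A disposable PostgreSQL instance, started before the tests and thrown away afterwards.
    @Container
    private static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:15-alpine");

    @Test
    void storesAndReadsOrders() {
        // Hypothetical repository that takes plain JDBC connection details.
        OrderRepository repository = new OrderRepository(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());

        repository.save(new Order("order-1", 25.00));

        assertEquals(1, repository.countOrders());
    }
}
```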

Of course, adding mocks is easier said than done if your code base lacks tests or does not use Maven or Gradle. We will return to this in a later article that covers specific testing techniques.

Code that is friendly to containers: Using the “Twelve-Factor App” approach

Our second aim is to make the code more container friendly. As mentioned, your modernization project’s end goal may already be to get the code base into containers, in which case you can skip this part. If it isn’t, the following are some benefits that moving toward containers will bring to any code base that is going to change.

More “container-friendly” apps are typically simpler to deploy. As a result, you can launch a working version of the application quickly, which is useful for testing functionality and gathering user feedback. Once the application can launch and run in a container, it can run on a Kubernetes platform to truly supercharge the development feedback loop.

A Kubernetes platform (like Red Hat OpenShift) significantly enhances both your ability to deploy and your ability to monitor the deployment, because many of these systems offer simple workflows for obtaining logs and metrics. We’ll go into greater detail about feedback loops and safely changing software later in this series.

The Twelve-Factor App is a very helpful methodology for building adaptable software. Its guidelines can help developers create code that works in every environment (including containers).
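
As a small example of just one of the factors (storing config in the environment), a hard-coded, environment-specific value can be replaced with one read at runtime; the DATABASE_URL variable name below is purely illustrative:

```java
// Before: environment-specific values baked into the code, so every
// environment (dev, test, prod) needs its own build:
// String databaseUrl = "jdbc:postgresql://db.internal.example:5432/orders";

// After: the same build runs anywhere; the platform injects the value.
public final class DatabaseConfig {

    public static String databaseUrl() {
        String url = System.getenv("DATABASE_URL"); // illustrative variable name
        if (url == null || url.isBlank()) {
            throw new IllegalStateException("DATABASE_URL must be set");
        }
        return url;
    }
}
```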

In a later article, we’ll go into more depth about the Twelve-Factor App’s most helpful aspects. It should be noted, however, that attempting to adhere to all twelve factors can be impractical, particularly when dealing with old code. Following the factors closely enough lets you iterate on both the product and operations cycles and bring your application closer to its ideal state.

For instance, getting the application to deploy and run in the container, with some observability around it, may be sufficient if the goal is to make the code container-friendly so it can run in a Kubernetes distribution. Only a couple of the twelve factors are necessary to do this. As previously indicated, we’ll go into more detail about this in a later blog article.

Having reviewed the Twelve-Factor App methodology, testing options, and containers, you now have some helpful approaches to employ as you begin your project. The shape of your work is starting to emerge. Before you start, though, make sure the team has the tooling it needs to work efficiently. That will be the main topic of the next article.

Blockers to container friendliness: Tight coupling to middleware

A code base may occasionally be tightly coupled to middleware or to a container-unfriendly service. For instance, when pursuing a modernization target of getting rid of an expensive Enterprise Java application server, you might discover that your code base contains annotations that tie it to that application server. Simple issues, such as the application server being used for connection pooling to a database or for JNDI lookups, can be resolved through refactoring. However, it might be more difficult to untangle things like message-driven beans (MDBs).
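
As an illustrative sketch of the simpler end of that spectrum, a JNDI lookup that ties the code to the application server can be replaced with a connection pool the application configures itself (HikariCP is used here only as an example pool, and the JNDI name and environment variable names are assumptions):

```java
import javax.naming.InitialContext;
import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public final class DataSourceFactory {

    // Before: tied to the application server's JNDI tree (illustrative JNDI name).
    public static DataSource fromAppServer() throws Exception {
        return (DataSource) new InitialContext().lookup("java:/jdbc/OrdersDS");
    }

    // After: the application owns its own connection pool, configured from the
    // environment, so it no longer depends on the application server.
    public static DataSource fromEnvironment() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl(System.getenv("DATABASE_URL"));     // illustrative
        config.setUsername(System.getenv("DATABASE_USER"));   // variable names
        config.setPassword(System.getenv("DATABASE_PASSWORD"));
        return new HikariDataSource(config);
    }
}
```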

Here, the advice from our earlier post on choosing the best patterns to start with can be important in ensuring project success early on while also avoiding wasting valuable resources on tasks that cannot be completed in a timely way (or at all).

What about containers?

Linux container technology lets applications be packaged and isolated together with their entire runtime environment (all the files they need to run). This makes it simple to move the containerized application between environments (dev, test, production, and so on) while retaining its full functionality.

Moving code to operate in containers can be viewed as a modernization objective because it makes Kubernetes-based container platforms, such as Red Hat OpenShift, more accessible. Teams working on modernization projects can benefit greatly from container platforms.

A brief overview of containers

Docker makes it easier

When Docker first appeared in 2013, it offered a quick and effective approach to controlling how the container environment is configured before an application runs in it. Dockerfiles, which describe what should be included in the Linux image and how it should be built, along with image repositories like Docker Hub, are now essential tools for the majority of software projects.
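
As a minimal, illustrative example of a Dockerfile for a Java application (the base image tag, artifact path, and jar name are assumptions, not recommendations):

```dockerfile
# Start from a base image that already contains a Java runtime.
FROM eclipse-temurin:17-jre

# Copy the built artifact into the image.
COPY target/app.jar /opt/app/app.jar

# Describe how the container should start the application.
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```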

Containers versus Virtual Machines

Unlike virtual machines (VMs), which must be managed by a hypervisor and require a full guest operating system (OS) to be set up before they are usable, containers can pack more workloads onto a single host machine without the overhead of the hypervisor and guest OS.

Container orchestration platforms

Compared to VMs, containers are less durable: a container might fail, or its host OS might crash, wiping out all the containers running on that host.

To deal with the transient nature of containers, numerous container-orchestration platforms have been developed to handle the work of keeping container workloads up and running and controlling the traffic in and out. Red Hat OpenShift, Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE) are examples of platforms built on the Kubernetes project. Google Cloud Run and Amazon Elastic Container Service (ECS) are alternatives to Kubernetes.

How do containers affect development?

Containers are transient. They can be scaled up, so that three instances of the same workload (each running in a separate container) suddenly become available, and they can be moved from one resource pool to another to meet redundancy needs.

This makes it extremely challenging, if not impossible, to get a program to run by manually carrying out a set of procedures (as might be done with apps running in a VM).

Additionally, some middleware that an application needs (such as certain application servers) may not function properly in a container. The Twelve-Factor App was developed as a set of guidelines for development teams to follow to help create applications that will succeed in such an environment.

Stay focused on the goal

Here, we’ve talked about a few broad goals that can make a code base more adaptable. However, there is a trap. Going overboard on test coverage or on chasing the Twelve Factors can still lead to failure. Ultimately, however attractive passing tests may seem, the application needs to be moved toward the desired state (proving out the value promised).

The project lead will have to balance these high ideals with the practical effort required to advance the application to the intended future state. That may not be simple, which is why putting together the right team is important.



