Applications Built For Tomorrow
Cloud-native software development breaks away from the traditional model of building monolithic applications and focuses instead on creating applications that leverage cloud environments. These applications are built independently of any underlying infrastructure and are engineered to take full advantage of cloud architecture. Within each cloud-native application are four underlying pillars: Microservices, Containers, Dynamic Orchestration, and Continuous Delivery.
Microservices in cloud-native applications are small services broken out to address a specific task. Because these services are loosely coupled with the rest of the application, they can be updated or repaired without impacting other services.
This distinction allows microservices to be deployed and maintained more efficiently than traditional applications.
Just as virtual machines allow multiple operating systems to live on one machine, containers allow multiple apps to live on one virtual machine.
With containerization, applications are bundled with dependencies like libraries, configuration files, and binaries to expedite the process of deploying new applications and deploying updates to existing ones.
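A Dockerfile makes this bundling explicit: it declares exactly which binaries, libraries, and configuration files ship with the application. The sketch below is illustrative — the service name, paths, and base images are assumptions, not any real project's configuration:

```dockerfile
# Hypothetical multi-stage build for a small Go service.
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /bin/orders ./cmd/orders

# The runtime image carries only the compiled binary and its configuration,
# so the deployed artifact is identical in every environment.
FROM alpine:3.19
COPY --from=build /bin/orders /usr/local/bin/orders
COPY config/orders.yaml /etc/orders/config.yaml
ENTRYPOINT ["orders", "--config", "/etc/orders/config.yaml"]
```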
Orchestration was introduced to the software development process to eliminate the slow, error-prone work of manually provisioning and configuring applications.
With DevOps teams' time consumed by setting up new web servers, databases, and load balancers, orchestration creates efficiency by automating those processes.
Applications can easily be replicated to create a more efficient deployment workflow. While code can be updated through a code repository, the environment and configuration can just as easily be re-created by cloning existing applications and publishing them as needed.
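In Kubernetes — one common orchestrator, used here purely as an illustration — replicating an application is a one-line declaration. The names and image below are hypothetical:

```yaml
# Hypothetical Deployment manifest; the orchestrator keeps the declared
# number of identical copies running and replaces any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.4.2
```

Cloning an environment then amounts to applying the same manifest (perhaps with a different name or namespace), rather than re-provisioning servers by hand.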
From here, developers can use their individual workstations to write code, push to staging/development servers, and then ultimately have the code from development pushed to production.
Engineering new software for the cloud around the four pillars outlined above provides major benefits for both the software itself and the development process.
In traditional software development, before any code is written, servers need to be provisioned with compute resources, memory, and dedicated disk space. This has been a costly and inefficient way to deploy applications, and the burden has historically fallen on IT departments in the form of expensive capex purchases.
In modern cloud-native development, applications can be dynamically allocated resources to fit current demand. As an example, if you’re running a data warehousing server that runs load-intensive queries in the evenings, it can automatically be allocated the additional memory and compute resources it needs, then release those resources back into a pool available to other applications once the queries complete.
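One concrete mechanism for this elasticity, sketched in Kubernetes terms, is the Vertical Pod Autoscaler add-on, which grows and shrinks a workload's resource allocation based on observed usage. The workload name below is hypothetical:

```yaml
# Hypothetical autoscaler for the data-warehousing example: the add-on
# raises the pod's CPU/memory allocation during heavy query windows and
# lowers it again afterward, returning capacity to the shared pool.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: warehouse-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: warehouse
  updatePolicy:
    updateMode: Auto
```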
Gone are the days of having to request beefier hardware to handle ever-increasing application load. The ability to scale easily is one of the main reasons cloud-native applications stand out as a preferred method of software development. As demand for an application grows, it can automatically scale to meet that demand and ensure availability.
Software applications have two main ways of scaling: up (vertical scaling) and out (horizontal scaling). Applications that require more power such as compute or memory resources will scale up, while applications requiring additional servers to handle load will scale out.
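Scaling out is typically declarative in an orchestrated environment. As an illustrative sketch (names and thresholds are hypothetical), a Kubernetes HorizontalPodAutoscaler adds servers — here, pod replicas — when load rises and removes them when it falls:

```yaml
# Hypothetical horizontal-scaling policy: keep average CPU utilization
# near 70% by varying the replica count between 2 and 10.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Scaling up, by contrast, means giving each instance more CPU or memory — the vertical counterpart to adding instances.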
A major issue facing all applications is the risk of downtime. Application downtime can be a result of a variety of issues including misconfiguration, hardware failure, and unplanned upgrades.
The introduction of microservices has been a big win for increasing application availability by relying on loosely coupled services that can be modified or swapped out without affecting the larger application as a whole.
This, accompanied by cloud platform features provided by AWS, Pivotal, Google, and others, allows these applications to live on distributed hardware that is resilient to a single failure. Often these applications are running on resources from multiple physical locations to prevent not only the failure of hardware but the outage of an entire geographic region.
Our Global B2B Ecommerce client sought a scalable wholesale order management portal for their EU business arm. Our client sports a very modern technical infrastructure, which is largely based on cloud-native practices.
What we did
Natural system boundaries were aggressively sought out and, where feasible, areas of responsibility were split off into separate services. This approach provided several important benefits.
For example, service separation facilitated the incremental transition of our client's server code from Scala to Go; some subsystems have already been implemented in Go, and these are already running smoothly alongside the older Scala services.
When deployed, each service runs in a Docker container, based on a Docker image custom-tailored to the specific service. From an organizational perspective, this simplifies the overall problem of maintaining a robust deployment and secure environment by breaking it into smaller problems.
Each service lives in a semi-isolated container whose boundaries are very clear and comprehensible, and which contains only the system dependencies necessary to accomplish a very specific purpose.
For example, the container that enables customers' browser experience is dead simple; it is very nearly a straight Alpine Linux server, with a creatively configured static web server. It actually CAN'T do anything else, which from a security and systems management perspective is a very good thing.
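A sketch of what such a deliberately limited image might look like — the file names are hypothetical, and the specific static server shown is an assumption, not necessarily what the client runs:

```dockerfile
# Hypothetical minimal static-serving image: nearly bare Alpine Linux
# plus a configured static web server and the built front-end assets.
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY dist/ /usr/share/nginx/html/
# No language runtime, no build tooling, no extra packages: the container
# can serve these files and do nothing else.
```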
Given that our client's codebase has been separated into containerized services, it is natural to formalize the problem of how to ensure that the different pieces consistently work well together.
For example, the Orders service may experience a much heavier load at certain times of the year, while the need for the User Identity service may remain consistent.
For another example, we may want to ensure that a certain version of the Web (front-end) cannot be seen by consumers without a compatible version of the Carts service being available for Web to talk to.
Our client relies on Kubernetes to address these and many other types of concerns. Kubernetes provides a formal language for defining and enforcing the layout of a production deployment and the relationships between various service instances.
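As an illustrative fragment of that formal language (service names and version tags are hypothetical), pinning compatible versions together in a single reviewed manifest means one cannot roll out without the other:

```yaml
# Hypothetical manifest fragment: web:2.3 and carts:1.8 are declared and
# rolled out together, so consumers never see Web without a compatible Carts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: web
          image: registry.example.com/web:2.3.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: carts
spec:
  replicas: 2
  selector:
    matchLabels: {app: carts}
  template:
    metadata:
      labels: {app: carts}
    spec:
      containers:
        - name: carts
          image: registry.example.com/carts:1.8.4
```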
Each time a developer pushes source code to a given service, our client's continuous integration server publishes a fully deployable image for that service (provided the build passes all the tests). This means that there are very few barriers between development and deployment. The normal process of development produces the actual artifact that is deployed.
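One way such a pipeline might look — sketched here with Google Cloud Build, since the infrastructure runs on Google Cloud, though the client's actual CI configuration is not shown and the service name is hypothetical:

```yaml
# Hypothetical cloudbuild.yaml: run the tests, and only if they pass,
# build and publish a deployable image tagged with the commit SHA.
steps:
  - name: golang:1.22
    entrypoint: go
    args: ["test", "./..."]
  - name: gcr.io/cloud-builders/docker
    args: ["build", "-t", "gcr.io/$PROJECT_ID/orders:$SHORT_SHA", "."]
images:
  - "gcr.io/$PROJECT_ID/orders:$SHORT_SHA"
```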
Nearly all of our client's infrastructure is hosted on the Google Cloud platform. This ensures easy access to additional scale when necessary, and allows our client to effectively serve their customers on multiple continents (currently North America and Europe) with minimal administrative overhead.
In the end, the benefits of the engagement were manifold. Our client got a global wholesale order management portal that is scalable, internationalized, and accessible, with a React front-end that works smoothly within their cloud deployment environment — supporting custom catalogs, account management, order processing, and customer support through an API abstraction layer.
Leveraging cloud-native development enhanced the capabilities of their web-centric application, keeping it running consistently across continents with less pain and lower overall expense.