Microservices, Service-oriented architecture, scaling, and failure

At the heart of our enterprise package is a service-oriented architecture. We break our service delivery down to its underlying protocols and data structures, then abstract those out wherever we can, so that what you reach is a redundant, auto-healing, self-configuring cluster accessed as a single endpoint.

In other articles, I talk about Load Balancers and SQL Servers; now we look at how it all fits together.

The benefit of our service-oriented architecture is that we can break down all the tasks it takes to run our application into core protocols and communication points. We then abstract those into service-providing applications with shared configuration, data, and assets. Rather than run a mail server on your PHP node, we use SMTP or an HTTP API in the application so that mail delivery is handled by another, independent system. Load balancers are the front door to this infrastructure, making logical choices about where to send traffic next. Within our environment, each component uses APIs and other hard or soft links to communicate data quickly and efficiently, giving everyone from our first to our millionth visitor an unparalleled experience.
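To make that concrete, here is a minimal sketch of what offloading mail looks like from the application's side, using Python's standard smtplib. The hostname mail.internal.example.com and the credentials are placeholders for whatever endpoint your mail layer actually exposes.

import smtplib
from email.message import EmailMessage

# Build the message; the compute node never touches a local mail queue.
msg = EmailMessage()
msg["From"] = "app@example.com"
msg["To"] = "customer@example.com"
msg["Subject"] = "Welcome aboard"
msg.set_content("Thanks for signing up!")

# Hand delivery to the dedicated mail service over SMTP.
# "mail.internal.example.com" and the credentials below are placeholders.
with smtplib.SMTP("mail.internal.example.com", 587) as smtp:
    smtp.starttls()
    smtp.login("app-user", "app-password")
    smtp.send_message(msg)

The compute node never runs a mail queue of its own; if the mail service is rebuilt or replaced, the application only needs that endpoint to keep resolving.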

Beyond the ability to scale, we also gain fault tolerance. From time to time even the best-designed system runs into an error or needs a reboot for an update. In a classic monolithic infrastructure, this results in downtime. Sometimes it is just a few minutes, but outages can last much longer when something goes wrong and the path of troubleshooting and tracking a fault leads through a tightly coupled service layer with little to no documentation.

Our service-oriented architecture is also compatible with a microservice architecture, helping to further abstract application roles beyond services and into job-specific containers. We can automate the whole process and include failure detection and recovery elements. If we break our jobs into tasks that can run on different nodes or containers in parallel, one block can fail and all traffic can be routed around it automatically. Our management system will detect, diagnose, and repair or replace the block, limiting any potential outage to the smallest possible impact. With either architecture, data loss is unlikely because the storage node is itself a complex redundant system that exposes a single endpoint for all systems to access.
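As an illustration of routing around a failed block, the sketch below assumes each job block is a replica reachable over HTTP at a hypothetical /run path. The caller simply skips any replica that fails and moves on to the next, while the management system repairs the one that dropped out.

import urllib.error
import urllib.request

# Hypothetical replicas of the same job block; in practice the management
# system maintains this list and swaps out failed members automatically.
REPLICAS = [
    "http://worker-1.internal:8080",
    "http://worker-2.internal:8080",
    "http://worker-3.internal:8080",
]

def run_job(payload: bytes) -> bytes:
    """Send the job to the first healthy replica, routing around failures."""
    last_error = None
    for base in REPLICAS:
        try:
            req = urllib.request.Request(base + "/run", data=payload, method="POST")
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            # This block is down or unreachable; remember why and try the next one.
            last_error = exc
    raise RuntimeError(f"all replicas failed, last error: {last_error}")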

The largest downside of a microservice or service-oriented architecture is that instead of one system to work with, you have an environment. Isolated and interconnected systems perform jobs, and the shift in the way things run can make some legacy systems and practices obsolete. Some call this the pets vs. cattle debate, where you treat your systems either as precious pets that need to be protected, nurtured, watched, and loved, or as cattle, which are bred to be slaughtered. You don't need to abuse your cattle, but you often don't individually name them or have much direct interaction with any one of them. Instead, you focus on monitoring and managing the herd.

Each setup will be unique, but they all share some common principles and components. I describe the common layers and how they interact below.

A) Load Balancer & DNS: This layer directs traffic to and from the compute nodes and occasionally serves cached assets.
B) Compute Nodes (PHP, Ruby, Python, Node.js, Go, etc.): This layer listens for client requests and builds the responses. It connects to assets, databases, and any other interconnected system we need, sometimes spanning multiple languages and API integrations.
C) Asset Storage: This layer stores and distributes assets to the compute and other relevant nodes. This is often handled by a SAN of sorts that provides fast storage to any node within a data center.
D) DB Cluster: As with the load balancer, we can create a database cluster that handles all our queries. Internally the cluster has its own load balancers, compute, storage, and management, but your application sees a single database endpoint. Data can replicate across data centers for near real-time updates across a global network.
E) Management/Cron System: Unless we use a static configuration for our cloud, we need some form of management and automation logic to monitor and react to our environment; a minimal sketch of that logic follows this list.
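Here is a rough sketch of what that monitoring logic might look like, assuming each layer exposes a hypothetical /health endpoint. The hostnames are placeholders, and the repair step is left as a print statement where real automation would trigger a replacement.

import urllib.error
import urllib.request

# Hypothetical health endpoints for each layer; a real environment would pull
# this inventory from the management system rather than hard-code it.
COMPONENTS = {
    "load-balancer": "http://lb.internal:8080/health",
    "compute-node-1": "http://app-1.internal:8080/health",
    "asset-storage": "http://assets.internal:8080/health",
    "db-endpoint": "http://db.internal:8080/health",
}

def is_healthy(url: str) -> bool:
    """Return True if the component answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def run_once() -> None:
    # Meant to be run from cron or the management system's scheduler.
    for name, url in COMPONENTS.items():
        if not is_healthy(url):
            # Placeholder: here the real system would detect, diagnose,
            # and repair or replace the failed block.
            print(f"{name} failed its health check; flagging for replacement")

if __name__ == "__main__":
    run_once()

Run from cron (or the management system's own scheduler), a loop like this is what turns a static collection of servers into the self-healing environment described above.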

Contact our team and we can describe your application in terms of these blocks, build the exact infrastructure you need, and set it up to live and breathe along with your application.

 


