In today’s rapidly evolving business landscape, digital transformation is not just a luxury, but a necessity for enterprises to thrive and stay competitive. With the constant emergence of new technologies and the ever-changing demands of customers, companies need to adapt quickly to maintain their edge.
However, for large-scale complex enterprises, this can be a daunting task. These organisations often grapple with a tangled mess of inhomogeneous legacy systems. In many cases, this is a direct consequence of mergers and acquisitions.
In other cases, it is the result of many years of constant development creating a long tail of legacy technology and varying business requirements. One way to tackle such a complex environment and achieve a successful digital transformation is to introduce a decoupling layer of common backends.
System Decoupling and Common Backends
As noted, the primary challenge of digital transformation in complex enterprises is the disorder and entanglement of legacy systems. We need a way to reboot the entire platform to lift ourselves out of “the mess”. We need to compose the digital foundation of the enterprise from the ground up and hence reach a unified backbone of business services that supports new, innovative frontends.
This is where the idea of common backends comes into play. Acting as a decoupling layer, common backends provide a new backbone for the business, allowing the development of better frontends while gradually integrating all the inhomogeneous legacy systems on the backside.
While the concept of common backends may seem simple, there are several crucial aspects to consider when you set out to build them. In this section, we will discuss those factors.
Architectural leadership and the domain decomposition structure
You need an influential and highly effective technical leader or architect (or a group of such people) with one job: to ensure that the domain decomposition structure is decided upon, communicated and enforced. This role will often be filled by the CTO, CIO or an enterprise architect.
The domain decomposition structure is one of the most important organisational factors in your digital transformation program as it provides the structure around which the common backends are organised and developed.
In essence, the domain decomposition structure dictates what common backends you will have, how they are named and what responsibility each will have (the so-called bounded context).
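To make this concrete, the decomposition structure can itself be captured as a small, version-controlled artefact. The sketch below (in Python, with entirely hypothetical backend and capability names) shows one way to record which bounded context each common backend owns and to detect accidental overlaps between them:

```python
# Hypothetical domain decomposition registry: each common backend owns one
# bounded context, expressed here as a set of business capabilities.
DOMAIN_DECOMPOSITION = {
    "customer-backend": {"customer-profile", "consent-management"},
    "order-backend": {"order-capture", "order-tracking"},
    "billing-backend": {"invoicing", "payment-reconciliation"},
}

def find_overlaps(decomposition):
    """Return the capabilities claimed by more than one backend."""
    seen, overlaps = {}, set()
    for backend, capabilities in decomposition.items():
        for cap in capabilities:
            if cap in seen:
                overlaps.add(cap)
            seen[cap] = backend
    return overlaps

# A clean decomposition has no capability owned by two backends.
assert find_overlaps(DOMAIN_DECOMPOSITION) == set()
```

Keeping the registry as reviewable data gives the architect a single place to enforce the bounded contexts discussed above.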
It is important to understand how difficult it can be to create the domain decomposition structure. The decomposition structure will have vast long-term consequences; most importantly, as we know from Conway’s law, it will dictate the long-term organisational structure of your IT organisation.
It is crucial to realise that there is no silver bullet in this matter. Any decomposition structure will be full of trade-offs and various people will have different opinions on what is the correct decomposition structure. Every time you create a separation between two sub-domains, you will find several problematic couplings between them and you will be tempted to join them back together.
Therefore, you need a technical leader with a strong intuition about what makes up a good domain decomposition structure. Only such a person can drive decisions forward regarding the decomposition.
Solution Architects
You need a group of effective solution architects who can build up a rich, concise and complete set of design principles and drive architecture decisions from them.
Each of the common backends should have a dedicated solution architect to embody a clear vision of the particular common backend and to establish ownership and proper empowerment so that decision making can happen swiftly and effectively.
In addition to this, a solution architect will need to handle the following things:
- Business requirements: Solution architects work closely with business stakeholders, product owners, and development teams to gather and analyse business requirements, ensuring that the proposed technology solution aligns with the organisation’s goals and objectives.
- Technical design and architecture: They create detailed technical designs and specifications, outlining the components, interfaces, and interactions between different systems and services that comprise the solution. They also ensure that the proposed architecture adheres to best practices, standards, and guidelines established by the organisation.
- Technology selection: Solution architects evaluate and recommend technologies, tools, and frameworks that are best suited for the implementation of the proposed solution. They consider factors such as functionality, scalability, performance, and maintainability when making their selections. Clearly, the technology proposal should comply with the constraints set up by the CTO or leading enterprise architect.
- Implementation guidance: They collaborate with development teams to provide guidance and oversight during the implementation phase, ensuring that the solution is built according to the design specifications and that any challenges or issues are addressed in a timely manner.
- Integration and testing: Solution architects are responsible for ensuring that the developed solution integrates seamlessly with existing systems and that it undergoes thorough testing to validate its functionality, performance, and security.
Cloud-Native Infrastructure
In order to ensure a smooth deployment process and to focus on building the right business services, it is essential to utilise the power of modern cloud-native technologies such as containerisation, managed Kubernetes, Azure App Service, Amazon ECS, managed databases, load balancers, API gateways, Content Delivery Networks, Web Application Firewalls and more.
These platforms provide robust, resilient and scalable infrastructure, enabling you to concentrate on the development and management of your common backends without getting bogged down in infrastructure concerns.
Another benefit of this approach is that each development team can be empowered and held responsible not only for building a common backend, but also for shipping and running it.
API Gateways
An API gateway is a key component of your common backends, acting as a single entry point for managing, securing, and routing API requests between frontends and backends, and between the backends themselves.
By implementing an API gateway, organisations can achieve several objectives that help streamline development, improve security, and enhance overall system performance. Some of the primary objectives of having an API gateway in front of your common backends are:
- Simplified API management: An API gateway consolidates the management of multiple APIs, making it easier for developers to handle versioning, documentation, and access control. This centralisation simplifies the API lifecycle management process and provides a consistent interface for clients to interact with the backend endpoints.
- Load balancing and routing: API gateways enable efficient load balancing and routing of API requests to the appropriate backends. This helps distribute the workload across multiple instances of a backend service, ensuring high availability and optimal resource utilisation, which are crucial for the performance and scalability of cloud-native applications.
- Security and access control: API gateways act as a security layer, protecting backend services from unauthorised access and potential threats. They can enforce authentication and authorisation policies, such as API keys, OAuth tokens, or JWT tokens, to verify the identity of clients and ensure they have the necessary permissions to access specific resources. Additionally, API gateways (possibly combined with some kind of Web Application Firewall) can provide protection against common security vulnerabilities, such as DDoS attacks or SQL injections.
- Rate limiting and throttling: To prevent excessive usage or abuse of API resources, API gateways can enforce rate limiting and throttling policies. These policies help maintain the stability and performance of the system by limiting the number of requests a client can make within a specific time frame.
- Monitoring and analytics: API gateways can gather valuable metrics and logs related to API usage, performance, and errors. This data can be used to monitor the health of the system, identify potential bottlenecks or issues, and make informed decisions about optimising the architecture.
- Request transformation and protocol translation: API gateways can handle request transformations and protocol translations, such as converting between REST and gRPC or modifying request/response payloads. This capability simplifies the integration of different clients and services, providing a consistent and seamless experience for developers and users. It should be used with some caution, though; generally, it is advised to strive for what is called “smart endpoints and dumb pipes”.
- Caching and response optimisation: API gateways can improve performance and reduce the load on backend services by caching frequently requested data and optimising response payloads, for example by compressing response data or removing unnecessary fields. As with the previous bullet, this capability should be used with caution: caching and response optimisation violate the principle of “dumb pipes”, and these concerns are often better left for the applications themselves.
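To make a couple of these objectives concrete, here is a minimal, illustrative sketch in Python of two gateway responsibilities: routing requests to backends and fixed-window rate limiting. It is a toy model under simplified assumptions, not a substitute for a real gateway product, and all names in it are invented:

```python
import time
from collections import defaultdict

class RateLimiter:
    """Fixed-window rate limiter: at most `limit` requests per client per window."""
    def __init__(self, limit, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        # Requests are counted per (client, time window) pair.
        window_key = (client_id, int(now // self.window))
        self.counts[window_key] += 1
        return self.counts[window_key] <= self.limit

class ApiGateway:
    """Routes request paths to backend handlers and enforces rate limits."""
    def __init__(self, limiter):
        self.routes = {}  # path prefix -> backend handler
        self.limiter = limiter

    def register(self, prefix, handler):
        self.routes[prefix] = handler

    def handle(self, client_id, path):
        if not self.limiter.allow(client_id):
            return 429, "rate limit exceeded"
        for prefix, handler in self.routes.items():
            if path.startswith(prefix):
                return 200, handler(path)
        return 404, "no backend for path"

# Hypothetical wiring: one common backend behind the gateway.
gateway = ApiGateway(RateLimiter(limit=100))
gateway.register("/orders", lambda path: f"order-backend handled {path}")
status, body = gateway.handle("client-a", "/orders/42")
```

In a real deployment these responsibilities would of course be delegated to a managed product such as Azure API Management rather than hand-rolled.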
Consider using API management tools provided by your chosen cloud platform, such as Azure API Management or similar offerings on AWS, to streamline and simplify the management of your APIs.
Logging and Monitoring
To maintain the health and performance of your common backends, invest in a robust logging and monitoring solution. Leading market technologies, such as the ELK Stack (Elasticsearch, Logstash, and Kibana) or Azure Application Insights, can provide valuable insights into your system’s performance and potential issues, allowing you to quickly address them and maintain optimal operations.
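As an illustration of what such logging can look like in practice, the sketch below emits one JSON object per log line using Python’s standard `logging` module, so that a tool like the ELK Stack or Application Insights can index individual fields. The field names here are assumptions, not a prescribed schema:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Format each log record as a single JSON object for easy indexing."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Extra field identifying which common backend emitted the line.
            "backend": getattr(record, "backend", "unknown"),
        })

def make_logger(name):
    logger = logging.getLogger(name)
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

log = make_logger("order-backend")
log.info("order created", extra={"backend": "order-backend"})
```

Structured, uniform log lines across all common backends make it far easier to correlate behaviour across team boundaries.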
Identity Management
A comprehensive identity management solution is necessary for handling both B2C and B2B users. B2B (business-to-business) and B2C (business-to-consumer) identity management both pertain to the process of authenticating and authorising users in a digital environment.
However, they cater to distinct user groups and address different use cases, which leads to differences in their implementation, features, and requirements. B2B identity management revolves around managing and securing access for users from different organisations, such as partners, vendors, or clients.
These users often need access to a company’s internal systems or resources, and the primary goal is to facilitate secure collaboration and information sharing among businesses. Key aspects of B2B identity management include:
- Federation: B2B identity management often requires federated identity systems that enable users from one organisation to access resources in another organisation securely, without the need for multiple credentials. This is achieved using protocols like SAML, OAuth, or OpenID Connect.
- Role-Based Access Control (RBAC): B2B scenarios involve assigning users different roles and permissions based on their relationship with the organisation. This ensures that users only have access to the specific resources they need to perform their tasks.
- Audit and compliance: Organisations need to monitor and track access to their resources, ensuring compliance with industry regulations and internal policies. B2B identity management systems should provide comprehensive auditing and reporting capabilities.
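The RBAC bullet above can be illustrated with a few lines of Python. The roles and permissions here are purely hypothetical:

```python
# Hypothetical role-to-permission mapping for a B2B partner portal.
ROLE_PERMISSIONS = {
    "partner-viewer": {"orders:read"},
    "partner-admin": {"orders:read", "orders:write", "users:manage"},
}

def is_allowed(user_roles, permission):
    """True if any of the user's roles grants the requested permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

assert is_allowed(["partner-viewer"], "orders:read")
assert not is_allowed(["partner-viewer"], "orders:write")
```

In practice the roles would come from the federated identity token (e.g. SAML assertions or OAuth scopes) rather than a hard-coded table.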
On the other hand, B2C identity management focuses on managing the identities of end customers who interact with a company’s products or services, such as customers using a web or mobile application. The primary goal is to provide a seamless and secure user experience that encourages customer engagement and loyalty. Key aspects of B2C identity management include:
- User-friendly registration and authentication: B2C scenarios require simple and convenient registration and authentication processes for users. This may include social media logins, single sign-on (SSO), password recovery, multi-factor authentication (MFA) and more.
- Scalability: B2C identity management systems must be capable of handling large numbers of users, often with unpredictable usage patterns. This requires a robust and scalable infrastructure that can accommodate fluctuations in demand.
- Privacy and data protection: Companies need to ensure that customers’ personal data is protected and handled according to data protection regulations, such as GDPR or CCPA. B2C identity management systems should provide tools and features to support data privacy and compliance requirements.
In summary, B2B identity management deals with secure access and collaboration among organisations, whereas B2C identity management focuses on providing a seamless and secure experience for end consumers. While both share some common goals, such as authentication and authorisation, their distinct use cases lead to differences in implementation, features, and requirements.
Data Privacy Compliance
Given the mega-trend of ever-growing data privacy regulations, it is essential to establish a strong foundation around data privacy. One approach is to implement a PII (Personally Identifiable Information) service, as explained in this blog post: https://sprintingretail.com/blog/why-you-need-a-pii-service-and-how-to-design-it/.
If you establish a PII service, you can realise a data segregation strategy in which the common backends never store any personal data, only references to person records in the PII service. This will help your enterprise remain compliant with privacy regulations while ensuring secure data handling and storage.
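A minimal sketch of this segregation idea, assuming a toy in-memory PII service (a real one would be a separate, hardened backend with its own storage and access controls):

```python
import uuid

class PiiService:
    """Toy in-memory PII store: personal data lives only here, and other
    backends keep just the opaque reference it returns."""
    def __init__(self):
        self._records = {}

    def store(self, personal_data):
        ref = str(uuid.uuid4())
        self._records[ref] = personal_data
        return ref  # opaque reference, safe to store elsewhere

    def resolve(self, ref):
        return self._records.get(ref)

    def erase(self, ref):
        # Right-to-erasure: deleting here orphans every reference at once.
        self._records.pop(ref, None)

pii = PiiService()
ref = pii.store({"name": "Jane Doe", "email": "jane@example.com"})
# A common backend stores only the reference, never the personal data itself.
order = {"order_id": 1001, "customer_ref": ref}
```

Because the common backends hold only opaque references, honouring an erasure request means deleting a single record in one place.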
Lean and Agile Project Management
To efficiently manage multiple autonomous teams with minimal overhead, you should seek to adopt the leanest possible project management approach. This will allow your teams to focus on the substance of business requirements and get things done with a minimum of ceremony and formalism. The following virtues should be valued:
- Product ownership: Find a good product owner for each common backend who is able to keep a strong focus on the delivery of real features and improvements to customers and provide continuous feedback to the development team.
- Incremental development: Break work down into smaller, manageable tasks and deliver functionality in short cycles, allowing for frequent adjustments based on new information or changing requirements.
- Just-enough collaboration: Understand that communication between the common backend teams is only needed to the extent that they have couplings not expressed in the contracts between them. Strive for truly autonomous teams where the need for collaboration, communication and synchronisation is kept to a minimum.
- Continuous improvement: Find simple ways to judge the team’s performance and output. It should be the same approach across all the common backends so you can compare them.
- Healthy competition: Create a healthy competition between the common backends. This will increase the amount of self-organisation and self-motivation of the teams.
Finally, a solid collaborative knowledge-sharing platform, such as Confluence or similar, is vital for fostering collaboration, sharing best practices, and documenting processes among your teams. This will help ensure that all team members stay informed and aligned, ultimately contributing to the success of your common backends initiative.
Now let us address some of the objections that will arise when discussing the idea of common backends.
Is this not just microservices in disguise?
You may wonder whether what is being proposed here is just old wine in new bottles, a revamp of the promises of microservices. This is not really the case. Microservices and common backends are related concepts in the realm of software architecture, each addressing different aspects of system design and integration.
Let us examine the differences between the two:
Microservices is an architectural style that structures a complex software system or application as a collection of small, autonomous, and independently deployable services, each responsible for a specific business capability. These services communicate with each other through lightweight protocols, such as HTTP/REST, gRPC or message queues.
According to microservice advocates, this style of architecture yields scalability, resilience, and the ability to adapt to change quickly, as each service can be developed, deployed, and scaled independently.
The concept of common backends refers to a break-down structure of the entire enterprise into a relatively small set of distinct backends, providing a unified layer of business capabilities that serves as a decoupling layer between the frontends and the legacy systems. Common backends act as a new backbone for the business, allowing the development of better frontends while gradually integrating all the disparate legacy systems on the backside.
In summary, microservices focus on the internal structure of a software system, breaking it down into smaller, independently deployable units, while common backends focus on the break-down structure of the entire enterprise into a uniform set of distinct business capabilities.
Is this not just the re-introduction of the Enterprise Service Bus?
Enterprise Service Bus (ESB) and Common Backends are both architectural approaches to integration and harmonisation within an organisation. However, they emerged in different eras and address different challenges, which leads to differences in their design principles, implementation, and use cases.
The Enterprise Service Bus is an architectural pattern that became popular in the early 2000s, primarily addressing the integration challenges faced by organisations with multiple, disparate systems. An ESB aims to provide a centralised and standardised platform to connect, manage, and route messages between different systems and applications.
Key characteristics of ESB include:
- Centralisation: ESB acts as a central hub for managing communication and integration between various systems, often leading to a tightly-coupled architecture.
- Orchestration: ESB can coordinate and manage complex business processes by orchestrating the flow of messages and data between different systems and services.
- Message transformation: ESB provides capabilities for transforming messages and data between different formats and protocols, enabling seamless communication between systems with varying data structures and requirements.
- Routing and mediation: ESB handles the routing and mediation of messages between systems, allowing for content-based routing, protocol conversion, and other advanced routing capabilities.
The primary differences between ESB and Common Backends lie in their design principles, implementation, and use cases. ESB focuses on centralised integration, orchestration, transformation and routing done on a single coherent platform by a single team.
In contrast, Common Backends places the emphasis on the domain decomposition and the organisation around each sub-domain in autonomous teams, each delivering a common backend for its sub-domain. The common backends constitute the smart endpoints, while the API gateway constitutes the dumb pipes.
Is common backends equivalent to the idea of Data Mesh?
Data Mesh and Common Backends are both architectural concepts that address different aspects of modern, complex enterprise platforms. While they share some similarities in promoting a more decentralised and modular approach, they have distinct goals and focus areas.
Data Mesh, introduced by Thoughtworks, is a paradigm within data platform architecture that addresses the challenges of scaling and managing data in large, distributed organisations. It emphasises decentralisation, domain-oriented ownership, and self-serve data infrastructure, with the aim of making data more accessible, discoverable, and usable across the organisation.
Key aspects of Data Mesh include:
- Domain-oriented data ownership: Data Mesh proposes that data should be owned and managed by individual domain teams, who are responsible for producing, maintaining, and serving their data as a product.
- Self-serve data infrastructure: Data Mesh encourages the creation of a self-serve data platform that allows domain teams to easily discover, access, and use data from other domains, without relying on centralised data teams or bottlenecks.
- Standardised data exchange and governance: Data Mesh promotes the use of standardised data schemas, metadata, and protocols to facilitate data exchange between domains. It also emphasises the importance of implementing data governance practices to ensure data quality, privacy, and compliance.
- Decentralised architecture: Data Mesh advocates for a decentralised architecture, where domain teams can build, deploy, and maintain their own data pipelines, storage, and processing infrastructure, fostering greater agility and scalability.
The Common Backends approach shares many ideas with Data Mesh, but the focus and objectives differ. The starting point of Common Backends is a strong architectural leadership that creates a clear, well-defined domain decomposition structure, around which the whole development organisation is organised in teams, each developing a common backend. Once established, the common backends should make up a data mesh in the sense defined by Thoughtworks, but this is a by-product, albeit a very valuable one.
In short: Data Mesh aims to improve data accessibility, discoverability, and usability in a decentralised manner, while Common Backends focus on enabling digital transformation programs via the creation of a clear domain decomposition structure and a corresponding organisation of the transformation program around it.
Conclusion
Digital transformation is crucial for the survival and success of complex enterprises. However, the path to transformation can be challenging, particularly when dealing with a myriad of inhomogeneous legacy software systems.
By implementing common backends as an organising principle, organisations can tackle the complexity of digital transformations via a divide-and-conquer approach, create a new business backbone and develop better frontends to drive innovation.