Commit d7a3115

Merge pull request #6922 from EnterpriseDB/ftouserkani-edb-patch-1
Update architecture.mdx
2 parents db48f34 + 96ca418

File tree

1 file changed (+10 -10 lines)


advocacy_docs/edb-postgres-ai/hybrid-manager/overview/architecture.mdx

Lines changed: 10 additions & 10 deletions
@@ -6,26 +6,26 @@ description: An overview of the core components and design principles that enabl
![High-level diagram of Hybrid Manager's architecture](../images/hcp_high-level_light.svg)

-The outer-most containers of the diagram illustrate a Kubernetes cluster running on a Cloud Service Provider's infrastructure, such as [EKS on AWS](https://aws.amazon.com/eks/) or [GKE on GCP](https://cloud.google.com/kubernetes-engine), or on-premises infrastructure. "On-premises" in this context includes deployments on physical servers (bare metal), virtual machines, or private clouds hosted within an organization’s infrastructure.
+The outer-most containers of the diagram illustrate a Kubernetes cluster running on a Cloud Service Provider's infrastructure, such as [EKS on AWS](https://aws.amazon.com/eks/) or [GKE on GCP](https://cloud.google.com/kubernetes-engine), or on-premise infrastructure. "On-premise" in this context includes deployments on physical servers (bare metal), virtual machines, or private cloud(s) hosted within an organization’s infrastructure.
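For orientation, you can check which cluster and context a deployment is running against with standard `kubectl` commands (generic commands shown for illustration; the output depends on your provider):

```shell
# Show the API server endpoint of the cluster hosting Hybrid Manager
kubectl cluster-info

# Show which kubeconfig context (EKS, GKE, or an on-premises cluster) is in use
kubectl config current-context
```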
Working inward through the diagram, there are three main logical groupings to note:

1. **The Kubernetes control plane**: The Kubernetes control plane is comprised of the core Kubernetes components, such as `kube-apiserver`, `etcd`, `kube-scheduler`, `kube-controller-manager`, and, if on a cloud setup, `cloud-controller-manager`, and is implemented by two or three Kubernetes control nodes.
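As a rough way to see this grouping on a running cluster, standard `kubectl` commands list the control nodes and, on self-managed clusters, the control-plane pods (on managed services such as EKS and GKE these components are hosted by the provider and don't appear as pods):

```shell
# List nodes and their roles; the control nodes report the control-plane role
kubectl get nodes

# On self-managed clusters the core components run as pods in the kube-system namespace
kubectl get pods -n kube-system -o wide
```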
-2. **The Control Compute Grouping**: The Control Compute Grouping of HM is a logical grouping of 70+ pods running on 2 or 3 Kubernetes worker nodes that implement the HM's core management components, such as Grafana, Loki, Thanos, Prometheus for observability, Cert Manager for certification management, Istio for networking, a trust manager for securing software releases, and many others.
-The 70+ pods for the components are distributed across the worker nodes according to how the `kube-scheduler` places them, which is not as simply as pictured in the diagram.
+2. **The Control Compute Grouping**: The Control Compute Grouping of HM is a logical grouping of 70+ pods running on two or three Kubernetes worker nodes that implement the HM's core management components, such as Grafana, Loki, Thanos, and Prometheus for observability, Cert Manager for certificate management, Istio for networking, a trust manager for securing software releases, and many others.
+The 70+ pods for the components are distributed across the worker nodes according to how the `kube-scheduler` places them, which is not as simple as illustrated in the diagram.

If needed, more worker nodes can be added to support more database-as-a-service resources.
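One way to see where the scheduler has actually placed these pods is a quick tally per node (a minimal sketch using standard `kubectl` output):

```shell
# Tally pods per worker node to see where kube-scheduler placed the management components
kubectl get pods --all-namespaces -o custom-columns=NODE:.spec.nodeName --no-headers | sort | uniq -c
```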
-3. **The Data Compute Grouping**: The Data Compute Grouping of HM is a logical grouping of Postgres clusters organized by namespaces and running on a number of pods distributed across at least 3 Kubernetes worker nodes as the `kube-scheduler` places them, which may not be as simply as pictured.
-The number of worker nodes can be increased from three as more databases are added using HM, but maybe require either adding the worker nodes manually or configuring auto scaling alongside your infrastructure to do this automatically.
+3. **The Data Compute Grouping**: The Data Compute Grouping of HM is a logical grouping of Postgres clusters organized by namespaces and running on a number of pods distributed across at least three Kubernetes worker nodes as the `kube-scheduler` places them, which may not be as simple as illustrated in the diagram.
+The number of worker nodes can be increased from three as more databases are added using HM, but this may require either adding the worker nodes manually or configuring auto-scaling alongside your infrastructure to do this automatically.
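To see which worker nodes host a given database cluster, you can list the pods in that cluster's namespace (the namespace name below is a placeholder):

```shell
# "my-postgres-ns" is a placeholder; substitute the namespace of your database cluster
kubectl get pods -n my-postgres-ns -o wide
```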
-Both the Control Compute Grouping's worker nodes and the and the Data Compute Grouping's worker nodes are backed up nightly to either Snapshots (on-premises using a SAN, or using EBS with EKS) or S3 compatible object store using Barman ([for the cloud](https://docs.pgbarman.org/release/3.12.0/user_guide/barman_cloud.html)).
+Both the Control Compute Grouping's worker nodes and the Data Compute Grouping's worker nodes are backed up nightly either to snapshots (on-premises using a SAN, or using EBS with EKS) or to an S3-compatible object store using [Barman for the cloud](https://docs.pgbarman.org/release/3.12.0/user_guide/barman_cloud.html).
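For the object-store path, the underlying tooling is Barman's cloud client; a minimal illustrative invocation looks like the following (the bucket path and server name are placeholders, and in HM backups are scheduled by the platform rather than run by hand):

```shell
# Ship a base backup of the "pg-cluster-1" server to an S3-compatible bucket
# (the bucket path and server name are placeholders)
barman-cloud-backup --cloud-provider aws-s3 s3://my-backup-bucket/backups pg-cluster-1
```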
### Scalability

To support scalability, HM integrates with Kubernetes' auto-scaling capabilities.
-This means that on platforms like AWS, worker nodes in the Data Compute Grouping can be automatically added to the cluster as needed using AWS's manged service [Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html).
+This means that on platforms like AWS, worker nodes in the Data Compute Grouping can be automatically added to the cluster as needed using AWS's managed service [Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html).
However, the HM's Kubernetes auto-scaling capability can be disabled to control costs.
-In addition, it's important to note that enabling auto-scaling functionality for on-premises deployments requires additional infrastructure configuration since there is no out-of-the-box auto-scaling feature like AWS's Auto Scaling groups for on-premises deployments.
+In addition, enabling auto-scaling functionality for on-premise deployments requires additional infrastructure configuration, since there is no out-of-the-box auto-scaling feature like AWS's Auto Scaling groups in such environments.
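For reference, the kind of adjustment that auto-scaling performs can also be made manually with the AWS CLI (the Auto Scaling group name and capacity below are placeholders):

```shell
# Grow the node group backing the Data Compute Grouping to four worker nodes
# ("hm-data-nodes" is a placeholder Auto Scaling group name)
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name hm-data-nodes \
  --desired-capacity 4
```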
Monitoring the distribution and resource usage of pods and worker nodes implementing the database clusters can be achieved through the [included Grafana dashboards](../using_hybrid_manager/cluster_management/manage-clusters/trace-clusters/#grafana-hardware-utilization-dashboard) or by [using Kubernetes `kubectl` commands](../using_hybrid_manager/cluster_management/manage-clusters/trace-clusters/#using-kubectl-to-check-underlying-cluster-resources).
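The `kubectl` route comes down to a couple of standard commands (`kubectl top` requires the metrics API, for example metrics-server, to be available in the cluster):

```shell
# Node-level CPU and memory usage (requires the metrics API, e.g. metrics-server)
kubectl top nodes

# Pod placement and status across all namespaces
kubectl get pods --all-namespaces -o wide
```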
### Backup architecture
@@ -40,9 +40,9 @@ For more robust, multi-availability zone recovery, Barman Cloud backups can be s
Alternatively, local object stores might require additional configuration to ensure backups are replicated to remote sites.
Stretch clusters mitigate these limitations by enabling data replication and redundancy across geographically dispersed locations.

-### High Availability and resilience
+### High Availability and resiliency

HM leverages Kubernetes' proven technologies to ensure high availability through automated failover mechanisms.
These mechanisms function within stretch clusters, across availability zones, or between zones in the same region.
The architecture supports faster backups, via snapshot backups, as well as recovery for large datasets, though multi-site recovery capabilities depend on the underlying storage infrastructure.
-For customers with advanced storage configurations, such as SANs (Storage Area Networks) spanning multiple racks or data centers, HM can leverage the infrastructure to enhance resilience and ensure rapid failover. However, these configurations may not be common among all users.
+For customers with advanced storage configurations, such as SANs (Storage Area Networks) spanning multiple racks or data centers, HM can leverage the infrastructure to enhance resiliency and ensure rapid failover. However, these configurations may not be common across all environments.
