
Commit 3f2e4c2

Proposal document for improvement to accurate estimator for CRD scheduling
Signed-off-by: mszacillo <mszacillo@bloomberg.net>
1 parent c8acebc commit 3f2e4c2

3 files changed: +164 -0 lines changed

---
title: CRD Component Scheduler Estimation
authors:
- "@mszacillo"
- "@Dyex719"
reviewers:
- "@RainbowMango"
- "@XiShanYongYe-Chang"
- "@zhzhuang-zju"
approvers:
- "@RainbowMango"

create-date: 2024-06-17
---

# CRD Component Scheduler Estimation

## Summary

Users may want to use Karmada for resource-aware scheduling of Custom Resources (CRDs). This can be done
if the CRD is comprised of a single podTemplate, which Karmada can already parse if the user defines
the ReplicaRequirements with this in mind. Resource-aware scheduling becomes more difficult, however,
if the CRD is comprised of multiple podTemplates or of pods with differing resource requirements.

In the case of [FlinkDeployments](https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/pod-template/), there are two podTemplates, representing the jobManager and the taskManagers. These components can
have different resourceRequirements, which Karmada cannot currently distinguish while making maxReplica estimates. This is due to a limitation
in the API definition of ReplicaRequirements, which assumes that all replicas scheduled by Karmada will have the same resource request.

We could technically add up all the individual component requirements and input those into the replicaRequirements, but Karmada would
treat this like a "super replica", and try to find a node in the destination namespace that could fit the entire replica. In many cases,
this is simply not possible. For example, if a JobManager requests 2 CPUs and each of four TaskManagers requests 4 CPUs, the summed 18-CPU "super replica" would not fit on any node with 8 allocatable CPUs, even though the individual pods could easily be spread across several such nodes.

For this proposal, we would like to enhance the accurate estimator to account for complex CRDs with multiple podTemplates or components.

## Background on our Use-Case

Karmada will be used as an intelligent scheduler for FlinkDeployments. We aim to use the accurate estimator (with the
ResourceQuota plugin enabled) to estimate whether a FlinkDeployment can be fully scheduled on the potential destination namespace.
In order to make this estimation, we need to take into account all of the resource requirements of the components that will be
scheduled by the Flink Operator. Once the CRD is scheduled by Karmada, the Flink Operator will take over the rest of the component
scheduling, as seen below.

![Karmada-Scheduler](Karmada-Scheduler.png)

In the case of Flink, these components are the JobManager(s) as well as the TaskManager(s). Both of these components can be comprised of
multiple pods, and the JM and TM frequently do not have the same resource requirements.

## Motivation

Karmada currently provides two methods of scheduling estimation:
1. The general estimator (which analyzes total cluster resources to determine scheduling)
2. The accurate estimator (which can inspect namespaced resource quotas and determine the
number of potential replicas via the ResourceQuota plugin)

This proposal aims to improve the second method by allowing users to define components for their replica
and provide precise resourceRequirements for each.

## Goals

- Provide a declarative pattern for defining the resourceRequests for individual replica components
- Allow more accurate scheduling estimates for CRDs

## Design Details

### API change

The main changes of this proposal are to the API definition of the ReplicaRequirements struct. We currently include the replica count and
replicaRequirements as root-level attributes of the ResourceBindingSpec. The limitation here is that we are unable to define unique
replicaRequirements in the case that the resource has more than one podTemplate.

To address this, we can move the concept of replicas and replicaRequirements into a struct describing the individual resource's `Components`.

Each `Component` will have a `Name`, the number of `Replicas`, and corresponding `replicaRequirements`.
These basic fields are necessary to allow the accurate estimator to determine whether all components of the CRD
will be able to fit on the destination namespace.

The definition of ReplicaRequirements will stay the same, with the drawback that the user will need to define how Karmada
interprets the individual components of the CRD. Karmada should also support a default component, which will use one of the resource's
podTemplates to find requirements.

```go
type ResourceBindingSpec struct {

	// ...

	// The total number of replicas scheduled by this resource. Each replica will be represented by exactly one component of the resource.
	TotalReplicas int32 `json:"totalReplicas,omitempty"`

	// Components defines the requirements of each individual component of the resource.
	// +optional
	Components []ComponentRequirements `json:"components,omitempty"`

	// ...
}

// A component is a unique representation of a resource's replica. For simple resources, like Deployments, there will only be
// one component, associated with the podTemplate in the Deployment definition.
//
// Complex resources can have multiple components controlled through different podTemplates.
// Each replica for the resource will fall into a component type with requirements defined by its relevant podTemplate.
type ComponentRequirements struct {

	// Name of this component.
	Name string `json:"name,omitempty"`

	// Replicas represents the replica number of the resource's component.
	// +optional
	Replicas int32 `json:"replicas,omitempty"`

	// ReplicaRequirements represents the requirements required by each replica of this component.
	// +optional
	ReplicaRequirements *ReplicaRequirements `json:"replicaRequirements,omitempty"`
}

// ReplicaRequirements represents the requirements required by each replica.
type ReplicaRequirements struct {

	// NodeClaim represents the node claim HardNodeAffinity, NodeSelector and Tolerations required by each replica.
	// +optional
	NodeClaim *NodeClaim `json:"nodeClaim,omitempty"`

	// ResourceRequest represents the resources required by each replica.
	// +optional
	ResourceRequest corev1.ResourceList `json:"resourceRequest,omitempty"`

	// Namespace represents the resource's namespace.
	// +optional
	Namespace string `json:"namespace,omitempty"`

	// PriorityClassName represents the component's priorityClassName.
	// +optional
	PriorityClassName string `json:"priorityClassName,omitempty"`
}
```
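
To make the shape of the proposed API concrete, below is a hypothetical ResourceBindingSpec for a FlinkDeployment with one JobManager and four TaskManagers. The component names and resource quantities are invented for illustration, and the snippet assumes the proposed types above are in scope.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// buildFlinkBindingSpec sketches how the proposed fields might be populated
// for a FlinkDeployment: one JobManager replica and four TaskManager replicas,
// each component carrying its own per-replica resource request.
func buildFlinkBindingSpec() ResourceBindingSpec {
	return ResourceBindingSpec{
		TotalReplicas: 5,
		Components: []ComponentRequirements{
			{
				Name:     "jobmanager",
				Replicas: 1,
				ReplicaRequirements: &ReplicaRequirements{
					ResourceRequest: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("2"),
						corev1.ResourceMemory: resource.MustParse("2Gi"),
					},
				},
			},
			{
				Name:     "taskmanager",
				Replicas: 4,
				ReplicaRequirements: &ReplicaRequirements{
					ResourceRequest: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("4"),
						corev1.ResourceMemory: resource.MustParse("8Gi"),
					},
				},
			},
		},
	}
}
```

With this shape, the estimator can reason about the 2-CPU JobManager request and the 4-CPU TaskManager requests separately, instead of a single 18-CPU "super replica".
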
### Accurate Estimator Changes

Besides the change to the ReplicaRequirements API, we will need to make a code change to the accurate estimator's implementation,
which can be found here: https://github.com/karmada-io/karmada/blob/5e354971c78952e4f992cc5e21ad3eddd8d6716e/pkg/estimator/server/estimate.go#L59.

Currently, the accurate estimator calculates the maxReplica count by:
1. Running the maxReplica calculation for each plugin enabled by the accurate estimator.
2. Looping through all nodes to determine whether the replica can fit on any of them, to account for the resource fragmentation issue (a simplified sketch of this flow follows below).
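
A minimal sketch of that per-node flow, using heavily simplified stand-in types; the real implementation in estimate.go (linked above) works through the estimator plugin framework and real Kubernetes quantities:

```go
package main

// ResourceList is a simplified stand-in for corev1.ResourceList, mapping a
// resource name to an integer quantity. Illustrative only.
type ResourceList map[string]int64

// Node is a simplified node carrying its allocatable capacity.
type Node struct {
	Allocatable ResourceList
}

// maxReplicasOnNode returns how many replicas with the given per-replica
// request fit on a single node, limited by the most constrained resource.
func maxReplicasOnNode(node Node, request ResourceList) int64 {
	best := int64(-1)
	for name, perReplica := range request {
		if perReplica == 0 {
			continue
		}
		fits := node.Allocatable[name] / perReplica
		if best == -1 || fits < best {
			best = fits
		}
	}
	if best < 0 {
		return 0
	}
	return best
}

// maxReplicas mirrors step 2: a replica only counts toward the estimate when
// it fits entirely on some node, which accounts for resource fragmentation.
func maxReplicas(nodes []Node, request ResourceList) int64 {
	var total int64
	for _, node := range nodes {
		total += maxReplicasOnNode(node, request)
	}
	return total
}

func main() {}
```
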

We can run a maxReplica estimation for each component as-is; the difficulty is determining whether all components can be scheduled on the same cluster. If we maintain a maxReplica
estimate for each component, it is possible to run into edge cases where the components cannot all fit on the same cluster even though individually each could be scheduled. How we implement
the estimation depends on whether we want to maintain a maxReplica estimation per component.

![Accurate-Scheduler-Steps](Accurate-Scheduler-Steps.png)

Here we have a couple of options we can think over:

1. Calculate the precise number of ways we can pack all components into existing nodes *(not recommended)*
- For this option we would have to loop through each subcomponent and through all nodes to calculate the total number of ways we can pack all components into the namespace.
- This would become very expensive, and we don't see the benefit of being that precise when all we care about is whether the CRD can be scheduled at all.

2. Confirm that all components can be scheduled into one combination of nodes
- We would instead confirm that each component could fit into one of the possible nodes constrained by our destination namespace.
- If we confirm that each component can fit in the available nodes, we would simply return the maxReplica estimation made by the plugin, since we know that the CRD can be fully scheduled on the namespace.
- If we notice that one or more of the components cannot fit in any available node, we would ignore the maxReplica estimation made by the plugin and return 0. A sketch of this check follows below.
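
One possible reading of option 2, reusing the simplified `ResourceList` and `Node` types from the sketch above: greedily place every replica of every component onto the nodes, deducting capacity as we go. The `Component` type and the greedy strategy are assumptions for illustration, not the final design.

```go
// Component is a simplified stand-in for the proposed ComponentRequirements.
type Component struct {
	Name     string
	Replicas int
	Request  ResourceList // per-replica resource request
}

// componentsFit reports whether every replica of every component can be
// placed onto the nodes, deducting capacity after each placement. If some
// replica fits on no remaining node, the CRD cannot be fully scheduled.
func componentsFit(nodes []Node, components []Component) bool {
	// Copy allocatable capacity so placements can be deducted safely.
	remaining := make([]ResourceList, len(nodes))
	for i, node := range nodes {
		remaining[i] = ResourceList{}
		for name, qty := range node.Allocatable {
			remaining[i][name] = qty
		}
	}
	for _, c := range components {
		for r := 0; r < c.Replicas; r++ {
			if !placeReplica(remaining, c.Request) {
				return false
			}
		}
	}
	return true
}

// placeReplica deducts the request from the first node that can hold it.
func placeReplica(remaining []ResourceList, request ResourceList) bool {
	for i := range remaining {
		if fitsOn(remaining[i], request) {
			for name, qty := range request {
				remaining[i][name] -= qty
			}
			return true
		}
	}
	return false
}

func fitsOn(available, request ResourceList) bool {
	for name, qty := range request {
		if available[name] < qty {
			return false
		}
	}
	return true
}

// estimateForCRD returns the plugin's maxReplica estimate when every
// component can be placed, and 0 otherwise, as described in option 2.
func estimateForCRD(pluginEstimate int32, nodes []Node, components []Component) int32 {
	if componentsFit(nodes, components) {
		return pluginEstimate
	}
	return 0
}
```

Note that greedy placement can reject tight bin-packing arrangements that an exhaustive search would accept; that conservatism seems acceptable given that option 1 is ruled out as too expensive.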