🌱 Contribute Kubeconfig-Based Provider #26
Conversation
Welcome @christensenjairus!
Some thoughts, mostly on simplifying the example as much as possible to focus on the must-haves.
Then I'm not sure I followed the breakdown of the interfaces. They seem very kubeconfig-centric, and given they are interfaces, I wonder if they should either apply to all providers or be simplified.
@corentone I've simplified the example quite a bit. It was a pretty large change; it should be more straightforward now and no longer have redundant interfaces. It follows the format of the other providers much more closely than my first draft did. Here's a demonstration of the logs. Tear it apart 😆
Hey, thanks so much for the contribution! I have some doubts about how the provider actually runs; I think we can streamline that to be in sync with the other provider implementations. Happy to discuss this further here or on Slack. :)
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: christensenjairus. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@sttts @embik @corentone I've simplified the example further. It works much faster than it used to, and I didn't end up needing much of the extra stuff I had originally. I could use another review when you get a minute.
EDIT: I modified the script and added RBAC to it, so it should help streamline getting started. Also, as written now, the provider treats the local cluster as if it were remote as far as reconciles go; a user would need to set up RBAC and add a kubeconfig secret for the local cluster to get reconciles running against it. It may not be ideal, but it's nice for code consistency. What do you think? I can add a local manager if we don't agree.
I know having separate commits makes reviewing incremental changes easier, but given the scale of the changes in some of the commits here, that’s somewhat moot. Could you squash this, or re-split it if you think separate (logical) commits make sense? |
…oviders. Remove extra unnecessary aspects like isReady, hashing, etc. Add script to create kubeconfig secrets using remote serviceaccounts.
if clusterExists {
	log.Info("Cluster already exists, updating it")
	if err := p.removeCluster(clusterName); err != nil {
		return fmt.Errorf("failed to remove existing cluster: %w", err)
	}
}
I don't remember what hashing you were doing earlier, but looking at this code, won't every update to the kubeconfig secret coming down the informer pipeline cancel the engaged cluster and re-engage it? It probably makes sense to ensure we only do that if the kubeconfig actually changed.
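A minimal sketch of that check, assuming the provider keeps a map of the last-seen kubeconfig bytes; the `p.kubeconfigs` field, the `secret` variable, and the `kubeconfig` data key are assumptions for illustration, not existing code in this PR:

```go
// Sketch only: compare the incoming kubeconfig with the one we last engaged
// with, and skip the remove/re-engage cycle if nothing changed.
// Requires "bytes" in the imports.
newKubeconfig := secret.Data["kubeconfig"] // assumed data key

if clusterExists {
	if bytes.Equal(p.kubeconfigs[clusterName], newKubeconfig) {
		log.Info("Kubeconfig unchanged, keeping engaged cluster")
		return nil
	}
	log.Info("Kubeconfig changed, re-engaging cluster")
	if err := p.removeCluster(clusterName); err != nil {
		return fmt.Errorf("failed to remove existing cluster: %w", err)
	}
}

// Remember what we engaged with so future secret updates can be compared.
p.kubeconfigs[clusterName] = newKubeconfig
```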
Jumping into the review train a bit late. A few questions, but I'll leave it to @embik and @corentone to say whether they make sense or not :)
1. The remote cluster has a service account with the necessary RBAC permissions for your operator
2. The service account exists in the namespace where you want to create the kubeconfig secret
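For illustration, a hedged sketch of what those two prerequisites might amount to on a remote cluster, expressed with client-go. The names, namespace, kubeconfig path, and the cluster-admin role are placeholders, not what the helper script in this PR actually creates:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client against the remote cluster from a kubeconfig on disk.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/remote-kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// ServiceAccount the operator will authenticate as on the remote cluster.
	sa := &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{Name: "multicluster-operator", Namespace: "default"},
	}
	if _, err := cs.CoreV1().ServiceAccounts("default").Create(ctx, sa, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Grant it the permissions the operator needs (scope this down in real use).
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "multicluster-operator"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "multicluster-operator",
			Namespace: "default",
		}},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```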
What is a remote cluster in this case? The cluster where the operator runs, or each remote cluster it accesses?
I think some topology documentation/explanation would answer this.
log.Info("Engaging cluster") | ||
|
||
// Start a goroutine to periodically list pods | ||
go func() { |
I'm not sure this is a good example.
1st: can we use the same ConfigMap pattern as in the other examples, so it's consistent?
2nd: can we make it a watcher (a true controller) rather than just a loop?
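To illustrate the second point, a minimal single-cluster sketch with plain controller-runtime that reconciles ConfigMaps on watch events instead of listing in a goroutine loop; the multicluster-runtime builder would look similar, but this is not the provider's or example's actual code:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// configMapReconciler reacts to ConfigMap events rather than polling on a timer.
type configMapReconciler struct {
	client.Client
}

func (r *configMapReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	var cm corev1.ConfigMap
	if err := r.Get(ctx, req.NamespacedName, &cm); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	logger.Info("Reconciling ConfigMap", "namespace", cm.Namespace, "name", cm.Name)
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}

	// Watch ConfigMaps and reconcile on every add/update/delete event.
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		Complete(&configMapReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```

The same shape, applied per engaged cluster, would let the example react to events instead of periodically listing pods.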
This is a functioning PoC for a kubeconfig-based multi-cluster provider. It reads secrets with a given label from the operator's namespace and engages the clusters those secrets apply to.
This works in real time for engaging and disengaging clusters as the provider watches all secrets in the operator's namespace.
I've included a helper script to create the secrets from kubeconfigs.
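As a rough illustration of the kind of secret such a script might create (the label key, secret name, namespace, kubeconfig path, and the `kubeconfig` data key below are assumptions, not necessarily the script's actual output):

```go
package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Read the remote cluster's kubeconfig from disk (path is illustrative).
	kubeconfigBytes, err := os.ReadFile("/path/to/remote-kubeconfig")
	if err != nil {
		panic(err)
	}

	// A labeled Secret holding the remote cluster's kubeconfig; the provider is
	// described as selecting on a label like this in the operator's namespace.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "cluster-us-east-1",
			Namespace: "operator-namespace",
			Labels: map[string]string{
				"multicluster-runtime.sigs.k8s.io/kubeconfig": "true", // assumed label
			},
		},
		Data: map[string][]byte{
			"kubeconfig": kubeconfigBytes, // assumed data key
		},
	}

	// Create the Secret in the cluster where the operator and provider run.
	cs := kubernetes.NewForConfigOrDie(ctrl.GetConfigOrDie())
	if _, err := cs.CoreV1().Secrets(secret.Namespace).Create(context.Background(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```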
This is currently working in this repo and I can provide logs demonstrating functionality if requested.
Related issue: link