Description
In Kubernetes, there is an analogy that pods should be treated as cattle rather than as pets. The idea is that a pet isn't interchangeable, while cattle are.
Our hub pod communicates with the proxy-api Kubernetes service, which redirects traffic to one of the available proxy pods. But when our hub pod does so, it actively configures only that one pod, while in reality it should configure all proxy pods.
Example issue scenario
Assume that, for some reason, there is more than a single proxy pod during some time interval. It could be that we want high availability (HA) and have made two run at all times, or that a Helm chart upgrade is rolling out a new proxy pod, or that the proxy pod crashed for some reason and a new one started up.
The hub pod will speak with the proxy-api network service, which will delegate traffic to one proxy pod, but not all of them. The hub will say things to the proxy pod like "Hey, when someone requests /user/erik, they should go to 10.68.3.5!". The hub will also ask "Hey, what routes are you already configured with?", and if the hub concludes a route should be added or removed, it will speak up about that. But, but, but... the hub doesn't really know who it is speaking with: it thinks it speaks with its single pet, but in reality it speaks with its cattle, and it does not try to make sure all the cattle behave the same way; instead it is focused on a single pet.
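To make this concrete, the hub's side of this conversation is plain REST calls against configurable-http-proxy's API. A minimal sketch of the two calls described above, assuming the API listens on port 8001, authenticates with the CONFIGPROXY_AUTH_TOKEN environment variable, and that the user pod serves on port 8888 (the path and IP are just the examples from above):

```python
import os
import requests

# The proxy-api Kubernetes service; each request lands on ONE proxy pod.
PROXY_API = "http://proxy-api:8001"
# configurable-http-proxy authenticates with a shared token.
HEADERS = {"Authorization": f"token {os.environ['CONFIGPROXY_AUTH_TOKEN']}"}

# "Hey, what routes are you already configured with?"
routes = requests.get(f"{PROXY_API}/api/routes", headers=HEADERS).json()
print(routes)

# "Hey, when someone requests /user/erik, they should go to 10.68.3.5!"
requests.post(
    f"{PROXY_API}/api/routes/user/erik",
    headers=HEADERS,
    json={"target": "http://10.68.3.5:8888"},
)
```

Note that each call goes to whichever proxy pod the proxy-api service happens to pick; the other pods never hear about the route.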
Goals
- I'd like to see that a proxy that starts up gets up to date directly somehow.
- I'd like to see that our solution allows multiple proxy pods to be kept up to date (one possible direction is sketched after this list).
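One possible direction, sketched only under assumptions (namespace jhub, API port 8001, and a hypothetical apply_route helper wrapping the POST from the sketch above): the hub could look up the individual pod IPs behind the proxy-api service and replay each route against every one of them, instead of letting the service pick a single pet.

```python
import os

import requests
from kubernetes import client, config

config.load_incluster_config()  # the hub pod runs inside the cluster
v1 = client.CoreV1Api()

HEADERS = {"Authorization": f"token {os.environ['CONFIGPROXY_AUTH_TOKEN']}"}

def apply_route(pod_ip, path, target):
    """Hypothetical helper: configure one specific proxy pod directly,
    bypassing the proxy-api service's load balancing."""
    requests.post(
        f"http://{pod_ip}:8001/api/routes{path}",
        headers=HEADERS,
        json={"target": target},
    )

# Look up every pod currently backing the proxy-api service ...
endpoints = v1.read_namespaced_endpoints("proxy-api", namespace="jhub")
pod_ips = [
    address.ip
    for subset in endpoints.subsets or []
    for address in subset.addresses or []
]

# ... and make sure all the cattle, not one pet, know about each route.
for ip in pod_ips:
    apply_route(ip, "/user/erik", "http://10.68.3.5:8888")
```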
Pain points
- We let the proxy pod await configuration by the hub pod.
- We communicate directly with a single proxy pod to update it.
Related I think
#1226 - I think the proxy pod restarted, and the hub was clueless and didn't update the proxy's state. Since then, automatic updates of the proxy pod's routes may have been implemented, so that particular issue will resolve itself after a while, but it will still occur briefly.