
Cic recreates Service Group with wrong netprofile #684


Open
pslijkhuis opened this issue May 9, 2025 · 2 comments
Labels
bug Something isn't working

Comments

@pslijkhuis

🐞 Bug Report: Netprofile Annotations Not Respected on CIC Restart

Summary

When the Citrix Ingress Controller (CIC) restarts, previously applied netprofile annotations on Service Groups are not retained. While the Load Balancing Virtual Server (LB vServer) continues to reference the correct netprofile, the associated Service Groups do not. This results in misconfigured traffic handling until a manual change is made to the Kubernetes Service to trigger a resync.


Steps to Reproduce

  1. Deploy a Kubernetes Service with the following annotations (a complete manifest sketch follows this list):

    annotations:
      service.citrix.com/lbvserver: '{"80-tcp":{"netProfile": "proxy-protocol-2"},"443-tcp":{"netProfile":"proxy-protocol-2"}}'
      service.citrix.com/servicegroup: '{"80-tcp":{"usip":"no", "netprofile": "proxy-protocol-2"},"443-tcp":{"usip":"no","netprofile": "proxy-protocol-2"}}'
  2. Confirm that the corresponding LB vServer and Service Groups on the NetScaler ADC are created with the correct proxy-protocol-2 netprofile.

  3. Restart the CIC pod.

  4. Observe that:

    • The LB vServer retains the correct netprofile.
    • The Service Groups are assigned an incorrect or default netprofile.
  5. Modify (or touch) the Kubernetes Service (e.g., add a dummy annotation) to trigger a resync.

  6. Verify that the Service Groups are corrected with the intended netprofile.
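
A complete Service manifest sketch matching step 1 above; only the two annotations are taken from this report, while the name, namespace, selector, and ports are placeholder assumptions:

    # Hypothetical Service; only the service.citrix.com annotations are from the report.
    apiVersion: v1
    kind: Service
    metadata:
      name: example-app          # assumption
      namespace: default         # assumption
      annotations:
        service.citrix.com/lbvserver: '{"80-tcp":{"netProfile": "proxy-protocol-2"},"443-tcp":{"netProfile":"proxy-protocol-2"}}'
        service.citrix.com/servicegroup: '{"80-tcp":{"usip":"no", "netprofile": "proxy-protocol-2"},"443-tcp":{"usip":"no","netprofile": "proxy-protocol-2"}}'
    spec:
      selector:
        app: example-app         # assumption
      ports:
        - name: http
          port: 80
          targetPort: 8080       # assumption
        - name: https
          port: 443
          targetPort: 8443       # assumption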


Expected Behavior

After a CIC restart, all configuration—including Service Group netprofiles—should be restored and remain consistent with the original Kubernetes Service annotations.


Observed Behavior

Upon CIC restart:

  • The LB vServer is correctly configured with proxy-protocol-2.
  • Service Groups lose their configured netprofile and are reset to an unintended state.
  • Manual editing of the Service triggers proper reconciliation and restores the correct configuration (see the command sketch below).
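
A minimal way to perform that "touch", assuming the example-app Service from the sketch above (both the Service name and the annotation key are placeholders); any dummy annotation change is enough to trigger the resync:

    kubectl annotate service example-app -n default resync-trigger="$(date +%s)" --overwrite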

Logs

The following log is observed during CIC startup:

2025-05-09 09:21:32,987 - INFO - [pbrconfighandler.py:multicluster_createbind_netprofile:92] (MainThread) Creating Netprofile: cnc-vdcnpogp_netprof and binding it to all the servicegroups

cic_logs.txt

This suggests that CIC applies a default netprofile during startup without checking or honoring existing annotations for Service Groups.
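
One way to observe the mismatch on the ADC is to compare the netprofile bindings of both entities via the NetScaler CLI; the entity names below are placeholders for whatever names CIC generated in your deployment:

    # Hypothetical entity names; substitute the ones CIC created in your setup.
    show lb vserver k8s-example-app_80_lbv    # still references proxy-protocol-2
    show serviceGroup k8s-example-app_80_sgp  # shows the default/incorrect netprofile after restart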


Environment

  • Citrix Ingress Controller Version: 3.0.5
  • ADC Firmware Version: 14.1 Build 29.72 (VPX/MPX/CPX)
  • Platform: Kubernetes

ConfigMap (citrix-cloud-native-cic-configmap):

JSONLOG: "false"
LOGLEVEL: info
NS_CACERT_PATH: /ca-certificates
NS_PORT: "443"
NS_PROTOCOL: https
NS_SNIPS: <REDACTED>
POD_IPS_FOR_SERVICEGROUP_MEMBERS: "true"
@subashd subashd added the bug Something isn't working label May 13, 2025
@subashd
Collaborator

subashd commented May 13, 2025

hi @pslijkhuis,
We will look into it.

We kindly request you to fill out the Requirement Gathering Questionnaire. You can also reach us at netscaler-appmodernization@cloud.com.

@apoorvak-citrix
Contributor

@pslijkhuis
Just to clarify a few points:

  1. Do you have the Node Controller deployed in your cluster for network plumbing between Kubernetes and NetScaler?
  2. Do you have multiple Ingress Controllers from different clusters configuring the same frontend NetScaler?
