ERROR - Nitro Exception while binding group member to servicegroup errorcode=258 message=No such resource #630
@Avneetdabas Could you kindly provide the YAML definition for the "apexportal-webservice-service" Kubernetes service, mainly the ports section?
We are creating two services: the ClusterIP one is for the NetScaler VPX and the NodePort one is for us to test. The NodePort one is working fine.

apiVersion: v1
OK, I was able to make it work by deleting the CIC pod. But it looks like there is a bug in the latest version.
We've got the same problem with the NSIC ingress controller and a NetScaler ADC appliance. We deploy a service of type LoadBalancer and also set the loadBalancerIP. Then servicegroups, LB vservers and CS vservers are created, but after a while the servicegroup for our endpoint is lost. I enabled DEBUG logging, so here are the log files and the service file for our deployment.
The big problem is mainly these lines here:
The service gateway manifest YAML looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.19.0 (f63a961c)
    service.citrix.com/service_type: 'SSL'
    service.citrix.com/preconfigured-certkey: '_.cert-wildcard-2025'
    service.citrix.com/backend-tcpprofile: '{"ws":"ENABLED", "sack" : "enabled"}'
    service.citrix.com/frontend-tcpprofile: '{"ws":"ENABLED", "sack" : "enabled"}'
  labels:
    imcservicename: gateway
  name: gateway
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.73.107
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
    - name: "https"
      port: 443
      targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600
  selector:
    imcservicename: gateway
status:
  loadBalancer: {}
```

This has been happening for about 3 months now and I could not find the underlying problem. After I reapply the service YAML it works again.
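The symptom described here (a servicegroup silently disappearing so that later member bindings fail until the Service manifest is reapplied) matches the errorcode=258 "No such resource" lines reported elsewhere in this issue. As a rough illustration of that failure mode only, not the actual NSIC/CIC implementation or the real NITRO SDK, here is a minimal mock where binding a member to a servicegroup that has been deleted out from under the controller raises the same error code. The class names and the 258/"No such resource" pairing are taken from the log lines in this thread; everything else is hypothetical:

```python
# Minimal mock of the failure mode seen in this issue: binding a member
# to a servicegroup that no longer exists on the ADC. The errorcode and
# message mirror the NSIC log lines; the classes are illustrative only.

class NitroException(Exception):
    def __init__(self, errorcode, message):
        super().__init__(f"errorcode={errorcode} message={message}")
        self.errorcode = errorcode
        self.message = message

class MockADC:
    """Toy stand-in for the ADC's servicegroup state."""
    def __init__(self):
        self.servicegroups = {}

    def add_servicegroup(self, name):
        self.servicegroups[name] = []

    def delete_servicegroup(self, name):
        self.servicegroups.pop(name, None)

    def bind_member(self, group, ip, port):
        # Mirrors the "No such resource" (258) error from the NSIC log.
        if group not in self.servicegroups:
            raise NitroException(258, "No such resource")
        self.servicegroups[group].append((ip, port))

adc = MockADC()
group = "k8s-gateway_8080_sgp_example"  # hypothetical generated name
adc.add_servicegroup(group)
adc.bind_member(group, "10.0.0.5", 8080)   # succeeds while the group exists

adc.delete_servicegroup(group)             # group lost, as in the report
try:
    adc.bind_member(group, "10.0.0.5", 8080)
except NitroException as e:
    print(e)  # errorcode=258 message=No such resource
```

The point of the sketch is that once the group is gone, every subsequent bind attempt fails identically, which is consistent with the repeated 258 errors in the logs until the manifest is reapplied and the group recreated.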
Hi @jeanz6, the service-type annotation takes an index as well, for example:
Regards,
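For reference, the indexed form (as the reporter later applies in this thread) maps one annotation to each entry of the Service's spec.ports list by position. A sketch, assuming 0-based indices matching the two ports (8080 and 443) of the gateway service above:

```yaml
# Sketch: per-port service-type annotations, indexed by the position
# of each entry under spec.ports (0-based, as used later in this thread).
metadata:
  annotations:
    service.citrix.com/service-type-0: 'HTTP'   # applies to ports[0] (8080)
    service.citrix.com/service-type-1: 'SSL'    # applies to ports[1] (443)
```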
Hi, first of all thanks for your fast answer. I have filled out the questionnaire. We only need TLS/SSL on port 443; I'll change the annotation tomorrow, give you an answer and also share the full logs with you! Cheers, J.
Hi, I attached the full NSIC log; the relevant timestamps are 2025-03-10 08:35 and 2025-03-13 13:05, which is when it happened. After we recognized that the servicegroup was gone, we had to reapply our gateway-service.yaml manifest. Now I am implementing the annotations in our dev namespace. Thank you very much for your help! Cheers, J.
So I added the annotations; here is the YAML file:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.19.0 (f63a961c)
    #service.citrix.com/service_type: 'SSL'
    service.citrix.com/service-type-0: 'HTTP'
    service.citrix.com/service-type-1: 'SSL'
    service.citrix.com/preconfigured-certkey: '_.cert-wildcard-2025'
    service.citrix.com/backend-tcpprofile: '{"ws":"ENABLED", "sack" : "enabled"}'
    service.citrix.com/frontend-tcpprofile: '{"ws":"ENABLED", "sack" : "enabled"}'
  creationTimestamp: null
  labels:
    imcservicename: gateway
  name: gateway
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.73.107
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
    - name: "https"
      port: 443
      targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600
  selector:
    imcservicename: gateway
status:
  loadBalancer: {}
```

After applying the YAML everything looks fine, but it's the same every time: the outage happens at random. Do you need more information? Cheers, J.
Hi @jeanz6, Regards,
Hi, DEBUG level is on, as you can see in the nsic.log that I shared here.
I'll also write an email to netscaler-appmodernization@cloud.com.
Describe the bug
CIC is not able to update the pod IP as the backend in the VPX service group members.
To Reproduce
We were able to reproduce it by deploying the Ingress with 3 services on the backend; 2 services are working fine, only one is showing down because the backend member is missing.
CIC Version/Image: quay.io/citrix/citrix-k8s-ingress-controller:1.37.5
Version of VPX: 14.1.12.30
Environment variables (minus secrets)
Expected behavior
After deploying the Ingress, all services should show the pod IP in the members so that clients can reach the API hosted on those pods.
Logs (kubectl logs):

```
2024-01-15 16:05:16,123 - ERROR - [nitrointerface.py:_configure_services_nondesired:2577] (MainThread) Nitro Exception while binding group member to servicegroup k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr errorcode=258 message=No such resource [serviceGroupName, k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr]
2024-01-15 16:05:16,154 - ERROR - [nitrointerface.py:_configure_services_nondesired:2577] (MainThread) Nitro Exception while binding group member to servicegroup k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr errorcode=258 message=No such resource [serviceGroupName, k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr]
2024-01-15 16:05:16,199 - ERROR - [nitrointerface.py:_configure_services_nondesired:2577] (MainThread) Nitro Exception while binding group member to servicegroup k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr errorcode=258 message=No such resource [serviceGroupName, k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr]
2024-01-15 16:06:04,053 - ERROR - [NSProfileHandler.py:bind_cipher_with_ssl_profile:352] (MainThread) Unable to bind cipher DEFAULT to SSL profile k8s-192.168.243.49_443_ssl
2024-01-15 17:39:14,301 - ERROR - [NSProfileHandler.py:bind_cipher_with_ssl_profile:352] (MainThread) Unable to bind cipher DEFAULT to SSL profile k8s-192.168.243.49_443_ssl
2024-01-15 19:10:39,618 - ERROR - [nitrointerface.py:set_ns_config:6968] (MainThread) Nitro exception during updating csvserver: error message=Profile does not exist
2024-01-15 19:32:38,235 - ERROR - [kubernetes.py:_parse_preconfigured_certs:419] (MainThread) certkey {'name': '.Apexanalytix.com2021-2022', 'type': 'Custom_SSL_Cipher_new'} does not have correct name/type
2024-01-15 19:32:38,235 - ERROR - [kubernetes.py:_parse_preconfigured_certs:421] (MainThread) preconfigured-certkey {"certs": [ {"name": ".Apexanalytix.com2021-2022", "type": "Custom_SSL_Cipher_new"} ] } is not in correct format,It should be in below format
```
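The last two log lines complain about the JSON passed in the preconfigured-certkey annotation. Judging only from this thread, the accepted shape is `{"certs": [{"name": ..., "type": ...}]}` and `"default"` is a type the controller accepts, while `"Custom_SSL_Cipher_new"` is rejected. A hedged validator sketch along those lines; the allowed-type set is an assumption inferred from this issue, not the controller's documented list, and `validate_preconfigured_certkey` is a hypothetical helper, not part of NSIC:

```python
import json

# Assumption: only "default" is confirmed as a valid cert type by this
# thread; the controller's real allowed set may be larger.
ALLOWED_TYPES = {"default"}

def validate_preconfigured_certkey(raw):
    """Return a list of problems with the annotation value (empty = OK)."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    certs = data.get("certs")
    if not isinstance(certs, list):
        return ['missing "certs" list']
    problems = []
    for cert in certs:
        name, ctype = cert.get("name"), cert.get("type")
        if not name:
            problems.append('cert entry missing "name"')
        if ctype not in ALLOWED_TYPES:
            problems.append(f"cert type {ctype!r} not in {sorted(ALLOWED_TYPES)}")
    return problems

# The value that failed in the log above:
bad = '{"certs": [ {"name": ".Apexanalytix.com2021-2022", "type": "Custom_SSL_Cipher_new"} ] }'
# The shape used in the Ingress annotation later in this report
# ("example-cert" is a placeholder name):
good = '{"certs": [ {"name": "example-cert", "type": "default"} ] }'
print(validate_preconfigured_certkey(bad))
print(validate_preconfigured_certkey(good))  # []
```

A pre-flight check like this on the annotation value would surface the name/type problem before the controller logs the parse error at runtime.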
Ingress YAML:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: citrix
    ingress.citrix.com/frontend-ip: "192.168.."
    ingress.citrix.com/secure-service-type: "ssl"
    ingress.citrix.com/secure-port: "443"
    ingress.citrix.com/frontend-sslprofile: "HSTS2022-23"
    ingress.citrix.com/preconfigured-certkey: '{"certs": [ {"name": "..com2021-2022", "type": "default"} ] }'
  name: services-ingress
spec:
  rules:
    - host: services.**
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: -webservice-service
                port:
                  number: 80
          - path: /F.V***
            pathType: Prefix
            backend:
              service:
                name: **-soapservice-service
                port:
                  number: 80
          - path: /odata
            pathType: Prefix
            backend:
              service:
                name: -odata-service
                port:
                  number: 80
  tls:
    - hosts:
        - *******..com
      secretName:
```