Description
Can someone tell me if this was ever resolved? It looks like a similar issue reported back in 2023: elasticsearchv8 cluster_settings unmarshal error #840.
With the following default settings:
GET /_cluster/settings?pretty
{
"persistent": {
"action": {
"auto_create_index": "false"
},
"cluster": {
"logsdb": {
"enabled": "true"
},
"routing": {
"allocation": {
"awareness": {
"attributes": "k8s_node_name"
},
"balance": {
"index": "0.85f",
"shard": "0.85f"
},
"enable": "all"
}
}
}
},
"transient": {}
}
Results from Grafana:
{name="elasticsearch_clustersettings_stats_max_shards_per_node", cluster="xxxxxxxx", container="exporter", endpoint="http", instance="xxx.xxx.xxx.xxx:9108", job="-prometheus-elasticsearch-exporter", namespace="", pod="<cluster>-prometheus-elasticsearch-exporter-bd84gvkpx", prometheus="monitoring/kube-prometheus-kube-prome-prometheus", receive="true", service="-prometheus-elasticsearch-exporter", tenant_id="env"}
1000
As soon as we manually set the watermarks:
PUT /_cluster/settings
{
"persistent": {
"cluster.routing.allocation.disk.watermark.low": "88%",
"cluster.routing.allocation.disk.watermark.high": "92%",
"cluster.routing.allocation.disk.watermark.flood_stage": "96%"
}
}
GET /_cluster/settings?pretty
{
"persistent": {
"action": {
"auto_create_index": "false"
},
"cluster": {
"logsdb": {
"enabled": "true"
},
"routing": {
"allocation": {
"disk": {
"watermark": {
"low": "88%",
"flood_stage": "96%",
"high": "92%"
}
},
"awareness": {
"attributes": "k8s_node_name"
},
"balance": {
"index": "0.85f",
"shard": "0.85f"
},
"enable": "all"
}
}
}
},
"transient": {}
}
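Note that the GET above does not include the defaults section, while the error below points at defaults.cluster.routing.allocation.disk.watermark. The defaults are only returned when the settings are queried with include_defaults=true, which is presumably what the collector does. A request like the following (filter_path is just a convenience to narrow the output) shows the exact shape it has to parse:
GET /_cluster/settings?include_defaults=true&filter_path=defaults.cluster.routing.allocation.disk.watermark&pretty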
Then we see this error message:
time=2025-06-02T08:42:48.194Z level=ERROR source=collector.go:188 msg="collector failed" name=clustersettings duration_seconds=0.092605402 err="json: cannot unmarshal object into Go struct field clusterSettingsWatermark.defaults.cluster.routing.allocation.disk.watermark.flood_stage of type string"
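For context, here is a minimal Go sketch of what that error implies, assuming the collector maps each watermark setting onto a plain string field (the struct below is hypothetical and only mirrors the field named in the error). Unmarshalling succeeds while flood_stage is a string, and fails as soon as the defaults report it as an object, for example when a nested max_headroom value ends up grouped under it:

package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical struct mirroring only the field named in the error message:
// each watermark value is expected to be a plain string.
type clusterSettingsWatermark struct {
	FloodStage string `json:"flood_stage"`
	High       string `json:"high"`
	Low        string `json:"low"`
}

func main() {
	// String form, as in the persistent settings above: unmarshals fine.
	asString := []byte(`{"flood_stage":"96%","high":"92%","low":"88%"}`)

	// Object form, as the defaults section can report it once the base value
	// is overridden (max_headroom is just an illustrative sub-key): this
	// reproduces "cannot unmarshal object ... of type string".
	asObject := []byte(`{"flood_stage":{"max_headroom":"100GB"},"high":"92%","low":"88%"}`)

	var w clusterSettingsWatermark
	fmt.Println(json.Unmarshal(asString, &w)) // <nil>
	fmt.Println(json.Unmarshal(asObject, &w)) // json: cannot unmarshal object into Go struct field ...
}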