
Commit ba56855

Merge branch '8.19' into update-8.19-apm-data-resource-version
2 parents: f1aad5f + 3caefee

File tree: 9 files changed (+267, -77 lines)

docs/changelog/130158.yaml

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
+pr: 130158
+summary: Handle unavailable MD5 in ES|QL
+area: ES|QL
+type: bug
+issues: []

docs/reference/index-modules/merge.asciidoc

Lines changed: 28 additions & 14 deletions
@@ -14,18 +14,32 @@ resources between merging and other activities like search.
 [[merge-scheduling]]
 === Merge scheduling

-The merge scheduler (ConcurrentMergeScheduler) controls the execution of merge
-operations when they are needed. Merges run in separate threads, and when the
-maximum number of threads is reached, further merges will wait until a merge
-thread becomes available.
-
-The merge scheduler supports the following _dynamic_ setting:
-
-`index.merge.scheduler.max_thread_count`::
-
-The maximum number of threads on a single shard that may be merging at once.
-Defaults to
-`Math.max(1, Math.min(4, <<node.processors, node.processors>> / 2))` which
-works well for a good solid-state-disk (SSD). If your index is on spinning
-platter drives instead, decrease this to 1.
+The merge scheduler controls the execution of merge operations when they are needed.
+Merges run on the dedicated `merge` thread pool.
+Smaller merges are prioritized over larger ones, across all shards on the node.
+Merge disk IO is throttled, so that bursts arriving while merging activity is otherwise low are smoothed out and do not impact indexing throughput.
+There is no limit on the number of merges that can be enqueued for execution on the thread pool.
+However, beyond a certain per-shard limit, once merging is already completely un-throttled for disk IO, indexing for the shard is itself throttled until merging catches up.
+
+The available disk space is periodically monitored, so that no new merge tasks are scheduled for execution when the available disk space is low.
+This prevents the temporary disk space required by in-progress merges from completely filling up the disk space on the node.
+
+The merge scheduler supports the following *dynamic* settings:
+
+`index.merge.scheduler.max_thread_count`
+: The maximum number of threads on a **single** shard that may be merging at once. Defaults to `Math.max(1, Math.min(4, <<node.processors, node.processors>> / 2))`, which works well for a good solid-state disk (SSD). If your index is on spinning platter drives instead, decrease this to 1.
+
+`indices.merge.disk.check_interval`
+: The time interval for checking the available disk space. Defaults to `5s`.
+
+`indices.merge.disk.watermark.high`
+: Controls the disk usage watermark, which defaults to `95%`, beyond which no merge tasks can start execution. The disk usage tally includes the estimated temporary disk space still required by all the currently executing merge tasks. Any merge task scheduled *before* the limit is reached continues execution, even if the limit is exceeded while executing (merge tasks are not aborted).
+
+`indices.merge.disk.watermark.high.max_headroom`
+: Controls the max headroom for the merge disk usage watermark, for the case where the watermark is specified as a percentage or ratio value. Defaults to `100GB` when `indices.merge.disk.watermark.high` is not explicitly set. This caps the amount of free disk space before merge scheduling is blocked.
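The default thread-count expression documented above is easy to check by hand. The following standalone sketch (illustrative only, not Elasticsearch code) evaluates the documented formula for a few processor counts:

```java
public class MergeSchedulerDefaults {

    // Default for index.merge.scheduler.max_thread_count:
    // clamp (allocated processors / 2) into the range [1, 4].
    static int maxThreadCount(int allocatedProcessors) {
        return Math.max(1, Math.min(4, allocatedProcessors / 2));
    }

    public static void main(String[] args) {
        System.out.println(maxThreadCount(1));  // 1: even a single-core node gets one merge thread
        System.out.println(maxThreadCount(8));  // 4
        System.out.println(maxThreadCount(64)); // 4: capped, so per-shard merge concurrency stays bounded
    }
}
```

The cap at 4 matches the SSD guidance in the setting's description: more concurrent per-shard merges rarely help, and on spinning disks even 4 is too many, hence the advice to decrease it to 1.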

docs/reference/modules/threadpool.asciidoc

Lines changed: 6 additions & 3 deletions
@@ -79,10 +79,13 @@ There are several thread pools, but the important ones include:
 default maximum size of `min(5, (`<<node.processors,
 `# of allocated processors`>>`) / 2)`.

+`merge`::
+For <<index-modules-merge,merge>> operations of all the shards on the node.
+Thread pool type is `scaling` with a keep-alive of `5m` and a default maximum size of <<node.processors,`# of allocated processors`>>.
+
 `force_merge`::
-For <<indices-forcemerge,force merge>> operations.
-Thread pool type is `fixed` with a size of `max(1, (`<<node.processors,
-`# of allocated processors`>>`) / 8)` and an unbounded queue size.
+For waiting on blocking <<indices-forcemerge,force merge>> operations.
+Thread pool type is `fixed` with a size of `max(1, (`<<node.processors,`# of allocated processors`>>`) / 8)` and an unbounded queue size.

 `management`::
 For cluster management.
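Both pool sizes in the hunk above are plain functions of the allocated processor count. A standalone sketch of the documented formulas (method names are illustrative, not Elasticsearch APIs):

```java
public class ThreadPoolSizes {

    // `merge` pool: scaling, default maximum size = # of allocated processors.
    static int mergePoolMaxSize(int allocatedProcessors) {
        return allocatedProcessors;
    }

    // `force_merge` pool: fixed, size = max(1, # of allocated processors / 8).
    static int forceMergePoolSize(int allocatedProcessors) {
        return Math.max(1, allocatedProcessors / 8);
    }

    public static void main(String[] args) {
        System.out.println(mergePoolMaxSize(16));   // 16
        System.out.println(forceMergePoolSize(4));  // 1
        System.out.println(forceMergePoolSize(32)); // 4
    }
}
```

The asymmetry is deliberate: regular merges now get a pool sized to the whole node, while force merges, which block a caller, keep a small fixed pool with an unbounded queue.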
Lines changed: 102 additions & 0 deletions
@@ -0,0 +1,102 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the "Elastic License
 * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
 * Public License v 1"; you may not use this file except in compliance with, at
 * your election, the "Elastic License 2.0", the "GNU Affero General Public
 * License v3.0 only", or the "Server Side Public License, v 1".
 */

package org.elasticsearch.common.util;

import org.elasticsearch.common.CheckedSupplier;

import java.util.Optional;

/**
 * A wrapper around either
 * <ul>
 *     <li>a successful result of parameterized type {@code V}</li>
 *     <li>a failure with exception type {@code E}</li>
 * </ul>
 */
public abstract class Result<V, E extends Exception> implements CheckedSupplier<V, E> {

    public static <V, E extends Exception> Result<V, E> of(V value) {
        return new Success<>(value);
    }

    public static <V, E extends Exception> Result<V, E> failure(E exception) {
        return new Failure<>(exception);
    }

    private Result() {}

    public abstract V get() throws E;

    public abstract Optional<E> failure();

    public abstract boolean isSuccessful();

    public boolean isFailure() {
        return isSuccessful() == false;
    }

    public abstract Optional<V> asOptional();

    private static class Success<V, E extends Exception> extends Result<V, E> {
        private final V value;

        Success(V value) {
            this.value = value;
        }

        @Override
        public V get() throws E {
            return value;
        }

        @Override
        public Optional<E> failure() {
            return Optional.empty();
        }

        @Override
        public boolean isSuccessful() {
            return true;
        }

        @Override
        public Optional<V> asOptional() {
            return Optional.of(value);
        }
    }

    private static class Failure<V, E extends Exception> extends Result<V, E> {
        private final E exception;

        Failure(E exception) {
            this.exception = exception;
        }

        @Override
        public V get() throws E {
            throw exception;
        }

        @Override
        public Optional<E> failure() {
            return Optional.of(exception);
        }

        @Override
        public boolean isSuccessful() {
            return false;
        }

        @Override
        public Optional<V> asOptional() {
            return Optional.empty();
        }
    }
}
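Given the changelog entry about handling unavailable MD5 in ES|QL, a plausible use of this pattern is wrapping `MessageDigest.getInstance("MD5")`, which throws on JVMs that do not provide MD5 (for example, some FIPS-restricted configurations). The sketch below is self-contained, so it inlines a minimal stand-in for the `Result` class rather than importing the committed one; `Md5Example` and `md5()` are illustrative names, not from this commit:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Optional;

// Minimal standalone stand-in for org.elasticsearch.common.util.Result,
// collapsed into a single class for the purposes of this sketch.
final class Result<V, E extends Exception> {
    private final V value;
    private final E exception;

    private Result(V value, E exception) {
        this.value = value;
        this.exception = exception;
    }

    static <V, E extends Exception> Result<V, E> of(V value) {
        return new Result<>(value, null);
    }

    static <V, E extends Exception> Result<V, E> failure(E exception) {
        return new Result<>(null, exception);
    }

    // Re-throws the original checked exception only when the caller asks for the value.
    V get() throws E {
        if (exception != null) {
            throw exception;
        }
        return value;
    }

    boolean isSuccessful() {
        return exception == null;
    }

    Optional<E> failure() {
        return Optional.ofNullable(exception);
    }
}

public class Md5Example {
    // Capture the digest lookup once; callers decide later whether to throw.
    static Result<MessageDigest, NoSuchAlgorithmException> md5() {
        try {
            return Result.of(MessageDigest.getInstance("MD5"));
        } catch (NoSuchAlgorithmException e) {
            return Result.failure(e);
        }
    }

    public static void main(String[] args) {
        // true on a stock JVM; on a JVM without MD5 this would be a failure
        // instead of an exception thrown at lookup time.
        System.out.println(md5().isSuccessful());
    }
}
```

The value of the pattern is that the lookup can happen eagerly (e.g. in a static initializer) while the decision to throw is deferred to the call site that actually needs the digest.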
Lines changed: 47 additions & 0 deletions
@@ -0,0 +1,47 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the "Elastic License
 * 2.0", the "GNU Affero General Public License v3.0 only", and the "Server Side
 * Public License v 1"; you may not use this file except in compliance with, at
 * your election, the "Elastic License 2.0", the "GNU Affero General Public
 * License v3.0 only", or the "Server Side Public License, v 1".
 */

package org.elasticsearch.common.util;

import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.test.ESTestCase;

import static org.elasticsearch.test.hamcrest.OptionalMatchers.isEmpty;
import static org.elasticsearch.test.hamcrest.OptionalMatchers.isPresentWith;
import static org.hamcrest.Matchers.is;
import static org.hamcrest.Matchers.sameInstance;

public class ResultTests extends ESTestCase {

    public void testSuccess() {
        final String str = randomAlphaOfLengthBetween(3, 8);
        final Result<String, ElasticsearchException> result = Result.of(str);
        assertThat(result.isSuccessful(), is(true));
        assertThat(result.isFailure(), is(false));
        assertThat(result.get(), sameInstance(str));
        assertThat(result.failure(), isEmpty());
        assertThat(result.asOptional(), isPresentWith(str));
    }

    public void testFailure() {
        final ElasticsearchException exception = new ElasticsearchStatusException(
            randomAlphaOfLengthBetween(10, 30),
            RestStatus.INTERNAL_SERVER_ERROR
        );
        final Result<String, ElasticsearchException> result = Result.failure(exception);
        assertThat(result.isSuccessful(), is(false));
        assertThat(result.isFailure(), is(true));
        assertThat(expectThrows(Exception.class, result::get), sameInstance(exception));
        assertThat(result.failure(), isPresentWith(sameInstance(exception)));
        assertThat(result.asOptional(), isEmpty());
    }

}
