
Commit ddd19f3

Adds missing content to the 'Handling delayed data' page (#1497)
### 📸 [Preview](https://docs-v3-preview.elastic.dev/elastic/docs-content/pull/1497/explore-analyze/machine-learning/anomaly-detection/ml-delayed-data-detection)

### Description

Due to an error during the migration, the _Handling delayed data_ page was missing some of its content. This PR restores the missing content.

### Related issue: #1460
1 parent f3bdcdf commit ddd19f3

File tree

3 files changed (+64, -0 lines changed)


explore-analyze/machine-learning/anomaly-detection/ml-delayed-data-detection.md

Lines changed: 64 additions & 0 deletions
@@ -17,3 +17,67 @@ When you create a {{dfeed}}, you can specify a [`query_delay`](https://www.elast
::::{important}
If you get an error that says `Datafeed missed XXXX documents due to ingest latency`, consider increasing the value of `query_delay`. If it doesn't help, investigate the ingest latency and its cause. You can do this by comparing event and ingest timestamps. High latency is often caused by bursts of ingested documents, misconfiguration of the ingest pipeline, or misalignment of system clocks.
::::
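
For illustration, a minimal sketch of setting `query_delay` when you create a {{dfeed}} follows; the {{dfeed}} ID, job ID, and index name are placeholders, not values from this page:

```console
PUT _ml/datafeeds/datafeed-sample-job
{
  "job_id": "sample-job",
  "indices": ["sample-logs-*"],
  "query": { "match_all": {} },
  "query_delay": "120s"
}
```

Here the {{dfeed}} waits two minutes after "now" before searching for new documents, which gives slower documents more time to be indexed before they are analyzed.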

## Why worry about delayed data?

If data are delayed randomly (and consequently are missing from analysis), the results of certain types of functions are not really affected. In these situations, it all comes out okay in the end as the delayed data is distributed randomly. An example would be a `mean` metric for a field in a large collection of data. In this case, checking for delayed data may not provide much benefit. If data are consistently delayed, however, {{anomaly-jobs}} with a `low_count` function may provide false positives. In this situation, it would be useful to see if data comes in after an anomaly is recorded so that you can determine a next course of action.

## How do we detect delayed data?

In addition to the `query_delay` field, there is a delayed data check config, which enables you to configure the datafeed to look in the past for delayed data. Every 15 minutes or every `check_window`, whichever is smaller, the datafeed triggers a document search over the configured indices. This search looks over a time span with a length of `check_window` ending with the latest finalized bucket. That time span is partitioned into buckets whose length equals the bucket span of the associated {{anomaly-job}}. The `doc_count` of those buckets is then compared with the job's finalized analysis buckets to see whether any data has arrived since the analysis. If data is indeed missing because of ingest delay, the end user is notified. For example, you can see annotations in {{kib}} for the periods where these delays occur:

:::{image} /explore-analyze/images/ml-annotations.png
:alt: Delayed data annotations in the Single Metric Viewer
:screenshot:
:::

::::{important}
The delayed data check will not work correctly in the following cases:

* if the {{dfeed}} uses aggregations that filter data,
* if the {{dfeed}} uses aggregations and the job's `analysis_config` does not have its `summary_count_field_name` set to `doc_count`,
* if the {{dfeed}} is _not_ using aggregations and `summary_count_field_name` is set to any value.

If the datafeed is using aggregations, set the job's `summary_count_field_name` to `doc_count`. If `summary_count_field_name` is set to any value other than `doc_count`, the delayed data check for the datafeed must be disabled.
::::
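
As a hedged sketch of how this check is configured (the {{dfeed}} ID below is a placeholder), the behavior is controlled by the `delayed_data_check_config` object of the {{dfeed}}; setting `"enabled": false` turns the check off, which the preceding note requires when `summary_count_field_name` is set to anything other than `doc_count`:

```console
POST _ml/datafeeds/datafeed-sample-job/_update
{
  "delayed_data_check_config": {
    "enabled": true,
    "check_window": "2h"
  }
}
```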

There is another tool for visualizing the delayed data on the *Annotations* tab in the {{anomaly-detect}} job management page:

:::{image} /explore-analyze/images/ml-datafeed-chart.png
:alt: Delayed data in the {{dfeed}} chart
:screenshot:
:::

## What to do about delayed data?

The most common course of action is simply to do nothing. For many functions and situations, ignoring the data is acceptable. However, if the amount of delayed data is too great or the situation calls for it, the next course of action to consider is to increase the `query_delay` of the datafeed. This increased delay allows more time for data to be indexed. If you have real-time constraints, however, an increased delay might not be desirable, in which case you would have to [tune for better indexing speed](/deploy-manage/production-guidance/optimize-performance/indexing-speed.md).
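
For example, a minimal sketch of raising the delay on an existing {{dfeed}} (the ID is a placeholder; pick a value that reflects the ingest latency you actually observe):

```console
POST _ml/datafeeds/datafeed-sample-job/_update
{
  "query_delay": "300s"
}
```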
