@@ -96,10 +96,10 @@ with the C<docker-compose> command.

=item C<cpanm> Installation

- As discussed in the task issue L<Docker Deployment Issue|https://github.com/Grinnz/perldoc-browser/issues/26> the
+ As discussed in the task issue L<Docker Deployment Task|https://github.com/Grinnz/perldoc-browser/issues/26> the
installation of the I<Perl> Modules for the SQLite Backend from the F<cpanfile> was executed at Image Build Time.
So on updates of the F<cpanfile> it is recommended to rebuild the Container Image as described above
- under L<B<IMAGE BUILD>>.
+ under L</"B<IMAGE BUILD>">.

The F<cpanfile> used can be found in F</usr/share/perldoc-browser/> within the Docker Image.
Also the C<cpanm> Installation Log is found inside the Image in F</usr/share/perldoc-browser/log/>.
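
To inspect these files in a running Cluster (see L</"B<Starting up the Docker Cluster>">), a command
like the following can be used (the C<ls> call is just an illustrative example):

docker-compose exec web ls /usr/share/perldoc-browser/log/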
@@ -114,12 +114,11 @@ Still the Container Start-Up Script F<entrypoint.sh> will detect a different bac
or the C<perldoc-browser.pl install> Command and check whether the key dependencies are met
and run the C<cpanm> Installation accordingly.

- =item starting up the Docker Cluster
+ =item Starting up the Docker Cluster

- The C<PostgreSQL> Database is only within the C<docker-compose> environment known with the hostname C<db>.
- So to use the database hostname C<db> any command must be run within the C<docker-compose> environment.
- To startup the Docker Cluster with the C<docker-compose> environment the following command
- is needed at first:
+ The Docker Cluster consists of three components.
+ To access any of these components, the Docker Cluster first needs to be started with
+ this C<docker-compose> command:

docker-compose up -d

@@ -128,7 +127,64 @@ It is important to verify that the containers are running correctly with:

docker-compose ps

- =item populating the search backend
+ This will make the three components accessible with the C<docker-compose exec> command.
+
+ =over 2
+
+ =item * The I<Web Site>
+
+ The I<Web Site> is accessible with the component name C<web>.
+ The I<Web Site> component runs the C<perldoc-browser.pl> I<Mojolicious> Web Application.
+ To access the I<Web Site> component and get a C<bash> prompt,
+ this C<docker-compose> command can be used:
+
+ docker-compose exec web bash
+
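+ A single command can also be run without an interactive shell by passing it directly to
+ C<docker-compose exec>; for example (C<perl -v> is just an illustrative command):
+
+ docker-compose exec web perl -v
+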
+ =item * The I<PostgreSQL> Database
+
+ The I<PostgreSQL> Database is only known by the hostname C<db> within the C<docker-compose> environment.
+ So, to use the database hostname C<db>, any command must be run within the C<docker-compose> environment.
+ On a new installation the placeholder file F<data/pg/.keep> obstructs the database initialization
+ and must be removed with this command:
+
+ rm data/pg/.keep
+
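+ As a quick check that the database accepts connections, a command like this can be used
+ (the C<postgres> superuser is an assumption, not taken from the project configuration):
+
+ docker-compose exec db psql -U postgres -c '\l'
+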
+ =item * The I<Elasticsearch> Engine
+
+ The I<Elasticsearch> Engine is only known by the hostname C<elasticsearch> within the C<docker-compose> environment.
+ So, to use the I<Elasticsearch> hostname C<elasticsearch>, any command must be run within the C<docker-compose> environment.
+ The I<Elasticsearch> API is also accessible on the external port C<9200>.
+ So, it can also be queried with the URL C<http://localhost:9200> from outside the Docker Cluster.
+ According to the official I<Elastic.co> documentation the Virtual Memory per process must
+ be increased as documented at
+ L<Virtual Memory Requirements|https://www.elastic.co/guide/en/elasticsearch/reference/6.8/vm-max-map-count.html>.
+ It can be increased temporarily with this command:
+
+ sysctl -w vm.max_map_count=262144
+
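+ The active value can be verified at any time with:
+
+ sysctl vm.max_map_count
+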
+ or it can be set permanently within a file like F</etc/sysctl.d/elasticsearch.conf> or directly in F</etc/sysctl.conf>:
+
+ vm.max_map_count = 262144
+
+ and then reloaded with the command C<sysctl -p> (or C<sysctl --system> to also load the files under F</etc/sysctl.d/>).
+ The I<Suse> Documentation explains very well how and whether this will affect the system;
+ see L<I<Suse> Documentation on C<vm.max_map_count>|https://www.suse.com/support/kb/doc/?id=000016692>.
+
+ B<NOTICE:>
+
+ =over 2
+
+ The I<Elasticsearch> Engine is known to start up slowly. It can take up to B<30 s>.
+ This is also understandable from the referenced I<Suse> Documentation on the Virtual Memory Requirement.
+ So, querying it too early can produce an error as documented in
+ L<Too early Query produces an Exception|https://github.com/Grinnz/perldoc-browser/issues/45>.
+ See L</"Querying the I<Elasticsearch> Engine"> for instructions to check whether
+ the engine is ready for service.
+
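+ A minimal shell sketch to wait for the engine (the B<2 s> polling interval is an arbitrary choice):
+
+ until curl -s http://localhost:9200 >/dev/null; do sleep 2; done
+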
+ =back
+
+ =back
+
+ =item Populating the Search Backend

The newly built Container Image contains an empty C<perldoc-browser.pl> Installation.
To run correctly, the Search Backend needs to be populated.
@@ -140,10 +196,10 @@ Now the command to populate the Search Backend is:
This will execute the command C<perldoc-browser.pl index all> in the project directory.
The results will be stored persistently in the project directory for further container launches.

- =item accessing the C<PostgreSQL> Database
+ =item Accessing the I<PostgreSQL> Database

To be able to access the database the Docker Cluster must be launched as described
- in L<B<starting up the Docker Cluster>>.
+ in L</"B<Starting up the Docker Cluster>">.

Next the command C<psql> can be used within the C<PostgreSQL> container.
The C<PostgreSQL> image is based on I<Alpine Linux>
@@ -178,6 +234,129 @@ The C<pods> table can contain for 1 I<Perl> Version 1456 entries:
1456
(1 row)

+
+ =item Querying the I<Elasticsearch> Engine
+
+ The I<Elasticsearch> Engine needs to be queried on different occasions to check its
+ availability and health, and the correctness of the indices.
+ This can be done easily over the I<Elasticsearch> API by using the Web Endpoints that it publishes.
+
+ =over 2
+
+ =item Ready for Service
+
+ The I<Elasticsearch> Engine is ready for service when its Root Web Endpoint produces
+ an HTTP Response with B<HTTP Status Code> C<200 OK>:
+
+ curl -v http://localhost:9200
+
+ The Response will look similar to this:
+
+ > GET / HTTP/1.1
+ > Host: localhost:9200
+ > User-Agent: curl/7.64.0
+ > Accept: */*
+ >
+ < HTTP/1.1 200 OK
+ < content-type: application/json; charset=UTF-8
+ < content-length: 490
+ <
+ {
+ "name" : "Evou766",
+ "cluster_name" : "elasticsearch",
+ "cluster_uuid" : "D7p_jR1TQBeK7J69Hk3QRg",
+ "version" : {
+ "number" : "6.8.13",
+ "build_flavor" : "oss",
+ "build_type" : "tar",
+ "build_hash" : "be13c69",
+ "build_date" : "2020-10-16T09:09:46.555371Z",
+ "build_snapshot" : false,
+ "lucene_version" : "7.7.3",
+ "minimum_wire_compatibility_version" : "5.6.0",
+ "minimum_index_compatibility_version" : "5.0.0"
+ },
+ "tagline" : "You Know, for Search"
+ }
+
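+ For scripted checks, C<curl> can print just the B<HTTP Status Code> instead of the full response:
+
+ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9200
+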
+ =item Cluster Health Status
+
+ The I<Elasticsearch> Cluster Health is an indicator for any error within the engine
+ or the indices. A status of "I<green>" or "I<yellow>" indicates a correct and healthy state,
+ while a status of "I<red>" indicates an error within the indices. This can originate
+ from incomplete or interrupted indexing or a sudden crash of the engine.
+ Re-indexing might fix this issue.
+ The Cluster Health can be checked over this API endpoint:
+
+ curl -v http://localhost:9200/_cat/health
+
+ The Response will look similar to this:
+
+ > GET /_cat/health HTTP/1.1
+ > Host: localhost:9200
+ > User-Agent: curl/7.64.0
+ > Accept: */*
+ >
+ < HTTP/1.1 200 OK
+ < content-type: text/plain; charset=UTF-8
+ < content-length: 65
+ <
+ 1636891612 12:06:52 elasticsearch yellow 1 1 5 5 0 0 5 0 - 50.0%
+
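+ The C<_cat> endpoints also accept the C<v> query parameter, which adds column headers to the
+ plain text output and makes it easier to interpret:
+
+ curl http://localhost:9200/_cat/health?v
+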
+ =item Indices Health Status
+
+ The I<Elasticsearch> Indices Health Status is an indicator for any error within the indices.
+ A status of "I<green>" or "I<yellow>" indicates a correct and healthy status,
+ while a status of "I<red>" indicates an error within the indices. This can originate
+ from incomplete or interrupted indexing or a sudden crash of the engine.
+ Re-indexing might fix this issue.
+ The Indices Health Status can be checked over this API endpoint:
+
+ curl -v http://localhost:9200/_cat/indices
+
+ The Response will look similar to this:
+
+ > GET /_cat/indices HTTP/1.1
+ > Host: localhost:9200
+ > User-Agent: curl/7.64.0
+ > Accept: */*
+ >
+ < HTTP/1.1 200 OK
+ < content-type: text/plain; charset=UTF-8
+ < content-length: 455
+ <
+ yellow open perldeltas_5.28.1_1636798290 BTS4QdaeQk6OJLFnyYUI9g 1 1 2164 0 3.2mb 3.2mb
+ yellow open faqs_5.28.1_1636798290 gyrqSq7mQrKXzAmQJ4cGVA 1 1 305 0 784.9kb 784.9kb
+ yellow open variables_5.28.1_1636798290 wjDlOrQrRaWb77HTKhdA5Q 1 1 150 0 17.2kb 17.2kb
+ yellow open pods_5.28.1_1636798290 PJ-EZ0IbQb67EOzkGrVj1w 1 1 1579 0 23.2mb 23.2mb
+ yellow open functions_5.28.1_1636798290 xzukrTriSNWiyPqKMpZU4w 1 1 292 0 570.6kb 570.6kb
+
+ =item Aliases Associations
+
+ The Project also uses aliases in I<Elasticsearch>. It is important that they are set correctly.
+ The Aliases Associations can be checked over this API endpoint:
+
+ curl -v http://localhost:9200/_cat/aliases
+
+ The Response will look similar to this:
+
+ > GET /_cat/aliases HTTP/1.1
+ > Host: localhost:9200
+ > User-Agent: curl/7.64.0
+ > Accept: */*
+ >
+ < HTTP/1.1 200 OK
+ < content-type: text/plain; charset=UTF-8
+ < content-length: 265
+ <
+ functions_5.28.1 functions_5.28.1_1636798290 - - -
+ perldeltas_5.28.1 perldeltas_5.28.1_1636798290 - - -
+ faqs_5.28.1 faqs_5.28.1_1636798290 - - -
+ variables_5.28.1 variables_5.28.1_1636798290 - - -
+ pods_5.28.1 pods_5.28.1_1636798290 - - -
+
+ =back
+
=back

=cut