This script uses Docker Compose to build and copy the compiled dashboard into the `./dist` directory. You can now deploy this directory to AWS behind [CloudFront](https://aws.amazon.com/cloudfront/). If you are in NGAP, follow the instructions for "Request Public or Protected Access to the APIs and Dashboard" on the earthdata wiki page [Using Cumulus with Private APIs](https://wiki.earthdata.nasa.gov/display/CUMULUS/Using+Cumulus+with+Private+APIs).
### Run the dashboard locally via Docker Image
You can also create a Docker container that will serve the dashboard behind a simple nginx configuration. Having a runnable Docker image is useful for testing a build before deployment or for NGAP Sandbox environments, where if you configure your computer to [access Cumulus APIs via SSM](https://wiki.earthdata.nasa.gov/display/CUMULUS/Accessing+Cumulus+APIs+via+SSM+Port+Forwarding), you can run the dashboard container locally against the live Sandbox Cumulus API.
The script `./bin/build_dashboard_image.sh` will build a docker image containing the dashboard bundle served behind a basic [nginx](https://www.nginx.com/) configuration. The script takes one optional parameter: the tag to name the generated image, which defaults to `cumulus-dashboard:latest`. The same customizations as described in the [previous section](#build-the-dashboard-using-docker-and-docker-compose) are available to configure your dashboard.
### Build the dashboard
The dashboard uses node v16.19.0. To build/run the dashboard on your local machine, install [nvm](https://github.com/creationix/nvm) and run `nvm install v16.19.0`.
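
The bare `nvm use` command used later in this README reads the required version from an `.nvmrc` file at the repository root; assuming it matches the version above, that file contains just:

```
v16.19.0
```

With such a file in place, `nvm use` selects the right Node version automatically whenever you enter the directory.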
#### Install requirements
We use npm for local package management. To install the requirements:
```bash
$ nvm use
$ npm ci
```

During development you can run the webpack development webserver to serve the dashboard:

```bash
APIROOT=http://<myapi>.com npm run serve
```
The dashboard should be available at `http://localhost:3000`
### Run a built dashboard
For **development** and **testing** purposes only, you can run a Cumulus API locally.
*Important Note: These `docker-compose` commands do not build distributable containers, but are provided as testing conveniences. The docker-compose[-\*].yml files show that they work by linking your local directories into the container.*
In order to run the Cumulus API locally you must first [build the dashboard](#build-the-dashboard) and then run the containers that provide LocalStack and Elasticsearch services.
These are started and stopped with the commands:
Once the logs show:

```bash
localstack_1 | Ready.
```
you should be able to verify access to the local Cumulus API at `http://localhost:5001/token`.
Then you can run the dashboard locally (without Docker) with `[HIDE_PDR=false APIROOT=http://localhost:5001] npm run serve` and open the Cypress tests with `npm run cypress`.

```bash
dashboard_1 | Hit CTRL-C to stop the server
```
#### Troubleshooting Docker Containers
If something is not running correctly, or you're just interested, you can view the logs with a helper script, which prints the logs from each of the running Docker containers.
```bash
ERROR: for shim Cannot start service shim: driver failed programming external connectivity on endpoint localapi_shim_1 (7105603a4ff7fbb6f92211086f617bfab45d78cff47232793d152a244eb16feb): Bind for 0.0.0.0:9200 failed: port is already allocated
```
#### Fully Contained Cypress Testing
With a single command, you can locally run all of the Cypress tests that Earthdata Bamboo runs:
This stands up the entire stack and starts the e2e service, which runs all Cypress commands and reports an exit code for their success or failure. This is primarily used for CI, but it can also be useful to developers.
#### <a name="dockerdiagram"></a> Docker Container Service Diagram

It is likely that no branch plan will exist for the `master` branch.
- Choose Branch Name `master` and then click `create`.
- Verify that the build has started for this plan.
<a name="bundlefootnote">1</a>: A dashboard bundle is just a ready-to-deploy compiled version of the dashboard and environment.
const granulesOrQueryDescription = 'add either an array of granule objects in the form:\n { "granuleId": "(value)", "collectionId": "(value)" } or an elasticsearch query and index.';
const granulesOrQueryText = `In the box below, ${granulesOrQueryDescription}`;

const bulkOperationsDefaultQuery = {
  workflowName: '',
  index: '',
  query: '',
  granules: '',
  meta: {}
};

const bulkDeleteDefaultQuery = {
  index: '',
  query: '',
  granules: [],
  forceRemoveFromCmr: false
};

const bulkReingestDefaultQuery = {
  index: '',
  query: '',
  granules: []
};

const bulkRecoveryDefaultQuery = {
  workflowName: '',
  index: '',
  query: '',
  granules: []
};
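
// For illustration only: hypothetical, filled-in versions of the default
// queries above. The granule IDs, collection IDs, index name, and query
// values are made-up examples, following the granule-object form described
// in granulesOrQueryDescription.
const exampleBulkDeleteByGranules = {
  index: '',
  query: '',
  granules: [
    { granuleId: 'MOD09GQ.A0123456.ABCDEF.006.0000000000001', collectionId: 'MOD09GQ___006' },
    { granuleId: 'MOD09GQ.A0123456.ABCDEF.006.0000000000002', collectionId: 'MOD09GQ___006' }
  ],
  forceRemoveFromCmr: false
};

// Alternatively, target granules with an elasticsearch query and index
// instead of an explicit granule list (index and query are hypothetical).
const exampleBulkDeleteByQuery = {
  index: 'cumulus-granules',
  query: { query: { term: { status: 'failed' } } },
  granules: [],
  forceRemoveFromCmr: true
};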

const BulkGranule = ({
<h4 className="modal_subtitle">To run and complete your bulk granule task:</h4>
<ol>
<li>In the box below, enter the <strong>workflowName</strong>.</li>
<li>Then {granulesOrQueryText}</li>
</ol>
</BulkGranuleModal>
<BulkGranuleModal
>
<h4 className="modal_subtitle">To run and complete your bulk delete task:</h4>
<ol>
<li>{granulesOrQueryText}</li>
<li>Set <strong>forceRemoveFromCmr</strong> to <strong>true</strong> to automatically have granules
removed from CMR as part of deletion.
If <strong>forceRemoveFromCmr</strong> is <strong>false</strong>, then the bulk granule deletion will
>
<h4 className="modal_subtitle">To run and complete your bulk reingest task:</h4>
<ol>
<li>{granulesOrQueryText}</li>
<li>Then select the workflow to rerun for all the selected granules. The workflows listed are the
intersection of the selected granules' workflows.</li>
</ol>
>
<h4 className="modal_subtitle">To run and complete your bulk granule task:</h4>
<ol>
<li>In the box below, enter the <strong>workflowName</strong>.</li>