
Commit 9e3a273

Merge pull request #6885 from EnterpriseDB/release-2025-06-13b
Release 2025-06-13b
2 parents 351e43a + ae521bb commit 9e3a273

File tree: 64 files changed, +1106 −409 lines


advocacy_docs/edb-postgres-ai/ai-accelerator/models/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ Pipelines has a model registry that manages configured instances of models. Any
 * How to [create models](./using-models) in Pipelines.
 * Discover the [primitives](./primitives) that can be used to interact with models.
 * See the [supported models](./supported-models) that come with Pipelines.
-* Using [models with OpenAI API-compatible services and Nvidia NIM](using-with) with Pipelines.
+* Using [models with OpenAI API-compatible services and NVIDIA NIM](using-with) with Pipelines.

 ## Next steps

advocacy_docs/edb-postgres-ai/ai-accelerator/models/supported-models/nim_clip.mdx

Lines changed: 1 addition & 1 deletion
@@ -48,4 +48,4 @@ The following configuration settings are available for CLIP models:

 The following credentials are required if executing inside NVIDIA NGC:

-* `api_key` — The NVIDIA Cloud API key to use for authentication.
+* `api_key` — The NVIDIA NGC API key to use for authentication.

advocacy_docs/edb-postgres-ai/ai-accelerator/models/supported-models/nim_paddle_ocr.mdx

Lines changed: 1 addition & 1 deletion
@@ -44,4 +44,4 @@ The following configuration settings are available for PADDLE_OCR models:

 The following credentials are required if executing inside NVIDIA NGC:

-* `api_key` — The NVIDIA Cloud API key to use for authentication.
+* `api_key` — The NVIDIA NGC API key to use for authentication.
Lines changed: 4 additions & 4 deletions
@@ -1,13 +1,13 @@
 ---
 title: Using models with...
 navTitle: Using models with...
-description: How to use OpenAI-compatible and Nvidia NIM models with AI Accelerator Pipelines.
+description: How to use OpenAI-compatible and NVIDIA NIM models with AI Accelerator Pipelines.
 ---

 This section describes some particular ways to use models with AI Accelerator Pipelines.
-These techniques show how to use API-compatible services, running locally or in the cloud, with Pipelines and how to make use of Nvidia NIM models.
+These techniques show how to use API-compatible services, running locally or in the cloud, with Pipelines and how to make use of NVIDIA NIM models.

 * [OpenAI API-compatible services](openai-api-compatibility) with Pipelines.
-* [Nvidia NIM models](using-nvidia-nim) with Pipelines.
-* [In the Nvidia cloud](using-nvidia-nim/using-nim-in-nvidia-cloud)
+* [NVIDIA NIM models](using-nvidia-nim) with Pipelines.
+* [In the NVIDIA NGC](using-nvidia-nim/using-nim-in-nvidia-ngc)
 * [In your environment](using-nvidia-nim/using-nim-in-your-environment)
Lines changed: 6 additions & 6 deletions
@@ -1,12 +1,12 @@
 ---
-title: Using Nvidia NIM models
-navTitle: Nvidia NIM models
-description: How to use Nvidia NIM models, either in your own environment or in the Nvidia cloud, with AI Accelerator Pipelines.
+title: Using NVIDIA NIM models
+navTitle: NVIDIA NIM models
+description: How to use NVIDIA NIM models, either in your own environment or in the NVIDIA NGC, with AI Accelerator Pipelines.
 ---

-You can use Nvidia NIM models with AI Accelerator. The models can run in the Nvidia cloud or in your environment under your control.
+You can use NVIDIA NIM models with AI Accelerator. The models can run in the NVIDIA NGC or in your environment under your control.

-You can learn how to use Nvidia NIM models with AI Accelerator in both scenarios:
+You can learn how to use NVIDIA NIM models with AI Accelerator in both scenarios:

-* [In the Nvidia cloud](using-nim-in-nvidia-cloud)
+* [In the NVIDIA NGC](using-nim-in-nvidia-ngc)
 * [In your environment](using-nim-in-your-environment)
Lines changed: 8 additions & 8 deletions
@@ -1,21 +1,21 @@
 ---
-title: Using Nvidia NIM models in the Nvidia cloud
-navTitle: In the Nvidia cloud
-description: Learn how to use Nvidia NIM models in the Nvidia cloud, hosted by Nvidia.
+title: Using NVIDIA NIM models in the NVIDIA NGC
+navTitle: In the NVIDIA NGC
+description: Learn how to use NVIDIA NIM models in the NVIDIA NGC (build.nvidia.com), hosted by NVIDIA.
 ---

-To use a Nvidia NIM that's hosted in Nvidia's cloud, you first need to select a model to use. This tutorial uses the Nvidia NIM model llama-3.3-70b-instruct.
+To use a NVIDIA NIM that's hosted in NVIDIA's NGC, you first need to select a model to use. This tutorial uses the NVIDIA NIM model llama-3.3-70b-instruct.

 ## Prerequisites

-* An Nvidia NGC account. (If you don't have one, you can create one [here](https://build.nvidia.com/explore/discover/).)
+* An NVIDIA NGC account. (If you don't have one, you can create one [here](https://build.nvidia.com/explore/discover/).)

-## Configuring the Nvidia cloud
+## Configuring the NVIDIA NGC

 ### 1. Select a model

-Choose a model from [Nvidia's model library](https://build.nvidia.com/models). This example uses the [llama-3.3-70b-instruct](https://build.nvidia.com/meta/llama3-70b) model.
+Choose a model from [NVIDIA's model library](https://build.nvidia.com/models). This example uses the [llama-3.3-70b-instruct](https://build.nvidia.com/meta/llama3-70b) model.

 ### 2. Generate an API Key

@@ -58,4 +58,4 @@ __OUTPUT__
 As the clock struck midnight, a single tear fell from the porcelain doll's glassy eye.
 ```
-Your output may vary. You've successfully used Nvidia NIM models, running on Nvidia's cloud, integrated with AI Accelerator.
+Your output may vary. You've successfully used NVIDIA NIM models, running on NVIDIA's NGC, integrated with AI Accelerator.
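As a cross-check outside Pipelines, the same hosted model can be called over plain HTTP. A minimal sketch, assuming NVIDIA's hosted-NIM endpoint `https://integrate.api.nvidia.com/v1/chat/completions` and the OpenAI chat-completions request shape (both taken from NVIDIA's API examples, not from this tutorial); the script only assembles and prints the request, so it runs without a real key or network access:

```shell
# Assumptions: the hosted-NIM endpoint URL and the OpenAI-style request body
# come from NVIDIA's API examples, not from this tutorial.
NGC_API_KEY="<NGC API KEY>"   # placeholder: substitute your generated key
ENDPOINT="https://integrate.api.nvidia.com/v1/chat/completions"
BODY='{"model": "meta/llama-3.3-70b-instruct", "messages": [{"role": "user", "content": "Tell me a story"}]}'

# Print the request instead of sending it, so the sketch runs offline:
echo curl -s "$ENDPOINT" \
     -H "Authorization: Bearer $NGC_API_KEY" \
     -H "Content-Type: application/json" \
     -d "$BODY"
```

Removing the leading `echo` sends the real request once a valid key is in place.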

advocacy_docs/edb-postgres-ai/ai-accelerator/models/using-with/using-nvidia-nim/using-nim-in-your-environment.mdx

Lines changed: 12 additions & 12 deletions
@@ -1,21 +1,21 @@
 ---
-title: Using Nvidia NIM model in your environment
+title: Using NVIDIA NIM model in your environment
 navTitle: In your environment
-description: Learn how to use Nvidia NIM models in your environment, locally or in your private cloud.
+description: Learn how to use NVIDIA NIM models in your environment, locally or in your private cloud.
 ---

-To use a Nvidia NIM that's hosted in your own environment, you first need an instance of the model. This tutorial shows how to configure an AWS-hosted instance with the Nvidia NIM model. It uses the Nvidia NIM model llama3-8b-instruct.
+To use a NVIDIA NIM that's hosted in your own environment, you first need an instance of the model. This tutorial shows how to configure an AWS-hosted instance with the NVIDIA NIM model. It uses the NVIDIA NIM model llama3-8b-instruct.

 ## Prerequisites

-* A system capable of running Nvidia CUDA Toolkit. For this tutorial, we recommend using an **EC2 g5.8xlarge instance** with **1024 GB of gp3 storage** running **Ubuntu 24.04 LTS**, although smaller instance sizes may also work.
-* A Nvidia NGC account. (If you don't have one, you can create one [here](https://build.nvidia.com/explore/discover/).)
+* A system capable of running NVIDIA CUDA Toolkit. For this tutorial, we recommend using an **EC2 g5.8xlarge instance** with **1024 GB of gp3 storage** running **Ubuntu 24.04 LTS**, although smaller instance sizes may also work.
+* A NVIDIA NGC account. (If you don't have one, you can create one [here](https://build.nvidia.com/explore/discover/).)

 ## Configuring the system

-### 1. Install Nvidia CUDA Toolkit
+### 1. Install NVIDIA CUDA Toolkit

-Download and install the CUDA Toolkit from [Nvidia's official page](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=24.04&target_type=deb_local).
+Download and install the CUDA Toolkit from [NVIDIA's official page](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=24.04&target_type=deb_local).

 ### 2. Install Docker

@@ -26,7 +26,7 @@ If your system doesn't have Docker installed, download and install it:

 ### 3. Generate an NGC API Key

-Obtain an API key from [Nvidia NGC](https://org.ngc.nvidia.com/setup/api-key). This example refers to this key as `<NGC API KEY>`.
+Obtain an API key from [NVIDIA NGC](https://org.ngc.nvidia.com/setup/api-key). This example refers to this key as `<NGC API KEY>`.

 Log in to the NGC container registry:

@@ -39,13 +39,13 @@ Use the following credentials:
 * Username: `$oauthtoken`
 * Password: `<NGC API KEY>`
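Putting step 3's credentials together, the login can be scripted. A minimal sketch — the registry host `nvcr.io` is taken from NVIDIA's NGC documentation rather than from this page, and the key value is a placeholder; the script prints the command rather than executing it, so it runs without Docker installed:

```shell
NGC_API_KEY="<NGC API KEY>"   # placeholder: substitute the key from step 3

# The username is the literal string $oauthtoken (not a shell variable),
# so it stays escaped/quoted; --password-stdin keeps the key out of the
# shell history and process list. nvcr.io is NGC's container registry
# host per NVIDIA's docs (an assumption, not stated on this page).
LOGIN_CMD="docker login nvcr.io --username \$oauthtoken --password-stdin"

# Print rather than execute, so the sketch runs without Docker installed:
echo "printf '%s' \"\$NGC_API_KEY\" | $LOGIN_CMD"
```

Running the printed pipeline performs the actual login.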
-### 4. Install Nvidia NGC CLI
+### 4. Install NVIDIA NGC CLI

-Download and install the Nvidia NGC CLI from [here](https://org.ngc.nvidia.com/setup/installers/cli).
+Download and install the NVIDIA NGC CLI from [here](https://org.ngc.nvidia.com/setup/installers/cli).

 ### 5. Run the NIM Model

-Save the following script as a shell script and execute it. (For more information, see [Nvidia's documentation](https://docs.nvidia.com/nim/large-language-models/latest/getting-started.html#serving-models-from-local-assets).) Remember to substitute `<NGC API KEY>` with the API key you generated.
+Save the following script as a shell script and execute it. (For more information, see [NVIDIA's documentation](https://docs.nvidia.com/nim/large-language-models/latest/getting-started.html#serving-models-from-local-assets).) Remember to substitute `<NGC API KEY>` with the API key you generated.

 ```shell
 # Choose a container name
@@ -124,4 +124,4 @@ __OUTPUT__
 As the clock struck midnight, a single tear fell from the porcelain doll's glassy eye.
 ```
-Your output may vary. You've successfully used Nvidia NIM models via the EDB AI Accelerator.
+Your output may vary. You've successfully used NVIDIA NIM models via the EDB AI Accelerator.
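The container started in step 5 can also be queried directly, outside Pipelines: locally run NIMs expose an OpenAI-compatible API, by default on port 8000 (the port and endpoint path come from NVIDIA's NIM getting-started guide, not from this tutorial). This sketch prints the request rather than sending it, so it runs even where the container isn't up:

```shell
# Assumptions: NIM's default port (8000) and the OpenAI-style completions
# path are taken from NVIDIA's NIM getting-started guide, not this page.
NIM_URL="http://localhost:8000/v1/completions"
BODY='{"model": "meta/llama3-8b-instruct", "prompt": "Tell me a story", "max_tokens": 64}'

# Print the request instead of sending it, so the sketch runs anywhere:
echo curl -s "$NIM_URL" -H "Content-Type: application/json" -d "$BODY"
```

Dropping the leading `echo` issues the real request against the running container.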

advocacy_docs/edb-postgres-ai/ai-accelerator/rel_notes/ai-accelerator_2.1.1_rel_notes.mdx

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ In this release, we add support for Nvida NIM, introduce new model names, and im

 ## Highlights

-- Support for Nvidia NIM added.
+- Support for NVIDIA NIM added.
 - `embeddings` and `completions` are new model names.
 - Reranking using NIM is now available.
 - Source tables in retriever pipelines now support schemas.

advocacy_docs/edb-postgres-ai/ai-accelerator/rel_notes/src/rel_notes_2.1.1.yml

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ date: 3 February 2025
 intro: |
   In this release, we add support for Nvida NIM, introduce new model names, and improve the retriever pipeline.
 highlights: |
-  - Support for Nvidia NIM added.
+  - Support for NVIDIA NIM added.
   - `embeddings` and `completions` are new model names.
   - Reranking using NIM is now available.
   - Source tables in retriever pipelines now support schemas.

advocacy_docs/edb-postgres-ai/cloud-service/known_issues/known_issues_pgd.mdx

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ redirects:
 These are currently known issues in EDB Postgres Distributed (PGD) on Cloud Service as deployed in distributed high availability clusters.
 These known issues are tracked in our ticketing system and are expected to be resolved in a future release.

-For general PGD known issues, refer to the [Known Issues](/pgd/latest/known_issues/) and [Limitations](/pgd/latest/planning/limitations/) in the PGD documentation.
+For general PGD known issues, refer to the [Known Issues](/pgd/latest/known_issues/) and [Limitations](/pgd/latest/known_issues/) in the PGD documentation.

 ## Management/administration

advocacy_docs/edb-postgres-ai/cloud-service/references/distributed_high_availability/pgd_cli_ba.mdx

Lines changed: 7 additions & 7 deletions
@@ -29,13 +29,13 @@ sudo yum install edb-pgd5-cli

 ### Discovering your database connection string

-To connect to your distributed high-availability Cloud Service cluster using the PGD CLI, you need to [discover the database connection string](/pgd/latest/cli/discover_connections/). From your Console:
+To connect to your distributed high-availability Cloud Service cluster using the PGD CLI, you need to [discover the database connection string](/pgd/latest/reference/cli/discover_connections/). From your Console:

-1. Log in to the [Cloud Service clusters](https://portal.biganimal.com/clusters) view.
-2. To show only clusters that work with PGD CLI, in the filter, set **Cluster Type** to **Distributed High Availability**.
-3. Select your cluster.
-4. In the view of your cluster, select the **Connect** tab.
-5. Copy the read/write URI from the connection info. This is your connection string.
+1. Log in to the [Cloud Service clusters](https://portal.biganimal.com/clusters) view.
+2. To show only clusters that work with PGD CLI, in the filter, set **Cluster Type** to **Distributed High Availability**.
+3. Select your cluster.
+4. In the view of your cluster, select the **Connect** tab.
+5. Copy the read/write URI from the connection info. This is your connection string.

 ### Using the PGD CLI with your database connection string

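The URI copied in step 5 can be stored once and passed to every invocation with `--dsn`. A minimal sketch using this page's example cluster values (`pgd nodes list` is assumed to be available in your PGD CLI version); the pgd command is printed rather than run, so the sketch works without a cluster:

```shell
# Example read/write URI, using this page's example cluster and host names:
PGD_DSN="postgres://edb_admin@p-w75f4ib1pu-a.vmk31wilqpjeopka.biganimal.io:5432/bdrdb?sslmode=require"

# Quick sanity check: pull the host back out of the URI with shell
# parameter expansion (strip through "@", then strip from the first ":"):
host="${PGD_DSN#*@}"; host="${host%%:*}"
echo "connecting to: $host"

# Any pgd subcommand takes the URI via --dsn; print one example invocation:
echo pgd nodes list --dsn "$PGD_DSN"
```

With a real cluster, drop the leading `echo` to run the command against your database.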
@@ -101,7 +101,7 @@ p-w75f4ib1pu-a world data 3

 ### `pgd group set-leader`

-`pgd group set-leader` manually changes the write leader of the group and can be used to simulate a [failover](/pgd/latest/quickstart/further_explore_failover).
+`pgd group set-leader` manually changes the write leader of the group and can be used to simulate a [failover](/pgd/5.8/quickstart/further_explore_failover/).

 ```
 pgd group p-w75f4ib1pu-a set-leader p-w75f4ib1pu-a-2 --dsn "postgres://edb_admin@p-w75f4ib1pu-a.vmk31wilqpjeopka.biganimal.io:5432/bdrdb?sslmode=require"

advocacy_docs/edb-postgres-ai/cloud-service/references/supported_cluster_types/distributed_highavailability.mdx

Lines changed: 2 additions & 2 deletions
@@ -13,7 +13,7 @@ Distributed high-availability clusters support both EDB Postgres Advanced Server

 Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. At any given time, one of these data nodes in each group is the leader and accepts writes, while the rest are referred to as [shadow nodes](/pgd/latest/terminology/#write-leader). We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/reference/commit-scopes/commit-scopes/).

-[PGD Proxy](/pgd/latest/routing/proxy) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. PGD Proxy leverages a distributed consensus model to determine availability of the data nodes in the cluster. On failure or unavailability of the leader, PGD Proxy elects a new leader and redirects application traffic. Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover.
+[PGD Proxy](/pgd/5.8/routing/proxy/) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. PGD Proxy leverages a distributed consensus model to determine availability of the data nodes in the cluster. On failure or unavailability of the leader, PGD Proxy elects a new leader and redirects application traffic. Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover.

 The witness node/witness group doesn't host data but exists for management purposes. It supports operations that require a consensus, for example, in case of an availability zone failure.

@@ -69,7 +69,7 @@ When you enable the read-only workloads option during the cluster creation, a re

 If you have more than one data group, you can choose whether to enable the read-only workloads option on a per-data-group basis.

-Since the infrastructure of a distributed high-availability cluster is almost entirely based on EDB Postgres Distributed, the same [PGD Proxy read-only routing rules](/pgd/latest/routing/readonly/) apply.
+Since the infrastructure of a distributed high-availability cluster is almost entirely based on EDB Postgres Distributed, the same [PGD Proxy read-only routing rules](/pgd/5.8/routing/readonly/) apply.

 !!! Important
