Commit b0d866a

Update for project_id as optional (#277)
1 parent 415f86c commit b0d866a

File tree

1 file changed: +16 −15 lines changed


integrations/google-vertex-ai.md

Lines changed: 16 additions & 15 deletions
@@ -57,7 +57,8 @@ Once installed, you will have access to various Haystack Generators:
 
 To use Vertex AI models, you need to have a Google Cloud Platform account and be logged in using Application Default Credentials (ADC). For more info, see the [official documentation](https://cloud.google.com/docs/authentication/provide-credentials-adc).
 
-To start using Vertex AI generators in Haystack, you need to set the `project_id` variable to a valid project ID that have enough authorization to use Vertex AI. Find your `project_id` in the [GCP resource manager](https://console.cloud.google.com/cloud-resource-manager) or locally by running `gcloud projects list` in your terminal. For more info on the gcloud CLI see the [official documentation](https://cloud.google.com/cli).
+To start using Vertex AI generators in Haystack, your account must have access to a project authorized to use Google Vertex AI endpoints. The `project_id` needed to initialize Vertex AI generators is set during the GCP authentication mentioned above. Additionally, you can set a different `project_id` by passing it as a variable when initializing the generator.
+You can find your `project_id` in the [GCP resource manager](https://console.cloud.google.com/cloud-resource-manager) or locally by running `gcloud projects list` in your terminal. For more info on the gcloud CLI, see the [official documentation](https://cloud.google.com/cli).
 
 ### Gemini API models
 
@@ -71,7 +72,7 @@ To use Gemini model for text generation, initialize a `VertexAIGeminiGenerator`
 from haystack_integrations.components.generators.google_vertex import VertexAIGeminiGenerator
 
 
-gemini_generator = VertexAIGeminiGenerator(model="gemini-pro", project_id=project_id)
+gemini_generator = VertexAIGeminiGenerator(model="gemini-pro")
 result = gemini_generator.run(parts = ["What is assemblage in art?"])
 print(result["replies"][0])
 ```
@@ -82,7 +83,7 @@ Assemblage in art refers to the creation of a three-dimensional artwork by combi
 
 **Multimodality with `gemini-1.5-flash`**
 
-To use `gemini-1.5-flash` model for visual question answering, initialize a `VertexAIGeminiGenerator` with `"gemini-1.5-flash"` and `project_id`. Then, run it with the images as well as the prompt:
+To use the `gemini-1.5-flash` model for visual question answering, initialize a `VertexAIGeminiGenerator` with `"gemini-1.5-flash"`. Then, run it with the images as well as the prompt:
 
 ```python
 import requests
@@ -99,7 +100,7 @@ images = [
     ByteStream(data=requests.get(url).content, mime_type="image/jpeg")
     for url in URLS
 ]
-gemini_generator = VertexAIGeminiGenerator(model="gemini-1.5-flash", project_id=project_id)
+gemini_generator = VertexAIGeminiGenerator(model="gemini-1.5-flash")
 result = gemini_generator.run(parts = ["What can you tell me about these robots?", *images])
 for answer in result["replies"]:
     print(answer)
@@ -116,15 +117,15 @@ The fourth image is of Marvin from the 1977 film The Hitchhiker's Guide to the G
 
 ### PaLM API Models
 
-You can leverage PaLM API models `text-bison`, `text-unicorn` and `text-bison-32k` through `VertexAITextGenerator` for task generation. To use PaLM models, initialize a `VertexAITextGenerator` with model name and `project_id`.
+You can leverage PaLM API models `text-bison`, `text-unicorn` and `text-bison-32k` through `VertexAITextGenerator` for task generation. To use PaLM models, initialize a `VertexAITextGenerator` with the model name.
 
 Here's an example of using the `text-unicorn` model with `VertexAITextGenerator` to extract information as a JSON file:
 
 ```python
 from haystack_integrations.components.generators.google_vertex import VertexAITextGenerator
 
 
-palm_llm = VertexAITextGenerator(model="text-unicorn", project_id=project_id)
+palm_llm = VertexAITextGenerator(model="text-unicorn")
 palm_llm_result = palm_llm.run(
     """Extract the technical specifications from the text below in a JSON format. Valid fields are name, network, ram, processor, storage, and color.
 Text: Google Pixel 7, 5G network, 8GB RAM, Tensor G2 processor, 128GB of storage, Lemongrass
@@ -135,14 +136,14 @@ print(palm_llm_result["replies"][0])
 
 ### Codey API Models
 
-You can leverage Codey API models, `code-bison`, `code-bison-32k` and `code-gecko`, through `VertexAICodeGenerator` for code generation. To use Codey models, initialize a `VertexAICodeGenerator` with model name and `project_id`.
+You can leverage Codey API models, `code-bison`, `code-bison-32k` and `code-gecko`, through `VertexAICodeGenerator` for code generation. To use Codey models, initialize a `VertexAICodeGenerator` with the model name.
 
 Here's an example of using the `code-bison` model for **code generation**:
 ```python
 from haystack_integrations.components.generators.google_vertex import VertexAICodeGenerator
 
 
-codey_llm = VertexAICodeGenerator(model="code-bison", project_id=project_id)
+codey_llm = VertexAICodeGenerator(model="code-bison")
 codey_llm_result = codey_llm.run("Write a code for calculating fibonacci numbers in JavaScript")
 print(codey_llm_result["replies"][0])
 ```
@@ -152,7 +153,7 @@ Here's an example of using the `code-gecko` model for **code completion**:
 from haystack_integrations.components.generators.google_vertex import VertexAICodeGenerator
 
 
-codey_llm = VertexAICodeGenerator(model="code-gecko", project_id=project_id)
+codey_llm = VertexAICodeGenerator(model="code-gecko")
 codey_llm_result = codey_llm.run("""function fibonacci(n) {
   // Base cases
   if (n <= 1) {
@@ -168,15 +169,15 @@ You can leverage Imagen models through three components: [VertexAIImageCaptioner
 
 **Image Generation with `imagegeneration`**
 
-To generate an image, initialize a VertexAIImageGenerator with the `imagegeneration` and the `project_id`, Then, you can run it with a prompt:
+To generate an image, initialize a `VertexAIImageGenerator` with the `imagegeneration` model. Then, run it with a prompt:
 
 ```python
 import io
 import PIL.Image as Image
 from haystack_integrations.components.generators.google_vertex import VertexAIImageGenerator
 
 
-image_generator = VertexAIImageGenerator(model="imagegeneration", project_id=project_id)
+image_generator = VertexAIImageGenerator(model="imagegeneration")
 image_generator_result = image_generator.run("magazine style, 4k, photorealistic, modern red armchair, natural lighting")
 
 ## (Optional) Save the generated image
@@ -186,13 +187,13 @@ image.save("output.png")
 
 **Image Captioning with `imagetext`**
 
-To use generate image captions, initialize a VertexAIImageCaptioner with the `imagetext` model and `project_id`. Then, you can run the VertexAIImageCaptioner with the image that you want to caption:
+To generate image captions, initialize a `VertexAIImageCaptioner` with the `imagetext` model. Then, run the `VertexAIImageCaptioner` with the image that you want to caption:
 
 ```python
 from haystack_integrations.components.generators.google_vertex import VertexAIImageCaptioner
 
 
-image_captioner = VertexAIImageCaptioner(model='imagetext', project_id=project_id)
+image_captioner = VertexAIImageCaptioner(model='imagetext')
 image = ByteStream.from_file_path("output.png") # you can use the generated image
 
 image_captioner_result = image_captioner.run(image=image)
@@ -201,14 +202,14 @@ print(image_captioner_result["captions"])
 
 **Visual Question Answering (VQA) with `imagetext`**
 
-To answer questions about an image, initialize a VertexAIImageQA with the `imagetext` model and `project_id`. Then, you can run it with the `image` and the `question`:
+To answer questions about an image, initialize a `VertexAIImageQA` with the `imagetext` model. Then, run it with the `image` and the `question`:
 
 ```python
 from haystack.dataclasses.byte_stream import ByteStream
 from haystack_integrations.components.generators.google_vertex import VertexAIImageQA
 
 
-visual_qa = VertexAIImageQA(model='imagetext', project_id=project_id)
+visual_qa = VertexAIImageQA(model='imagetext')
 image = ByteStream.from_file_path("output.png") # you can use the generated image
 question = "what's the color of the furniture?"
 