Update Deployment Docs #170

Merged: 2 commits, Apr 22, 2025
8 changes: 4 additions & 4 deletions deploy_ai_search_indexes/README.md
@@ -14,7 +14,7 @@ The associated scripts in this portion of the repository contains pre-built scri
**Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

3. Adjust `image_processing.py` with any changes to the index / indexer. The `get_skills()` method implements the skills pipeline; adjust the skills there as needed to enrich the data source.
- 4. Run `deploy.py` with the following args:
+ 4. Run `uv run deploy.py` with the following args:
- `index_type image_processing`. This selects the `ImageProcessingAISearch` subclass.
- `enable_page_wise_chunking True`. This determines whether page-wise chunking is applied in ADI, or whether the built-in TextSplit skill is used. This suits documents that are inherently page-wise, e.g. pptx files.
- `rebuild`. Whether to delete and rebuild the index.
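Assembled into a single command, the deployment call above might look like the following. The `--flag` spellings are an assumption (the bullets give only argument names); check `deploy.py --help` for the exact syntax your version accepts.

```shell
# Hypothetical invocation, run from deploy_ai_search_indexes/src/deploy_ai_search_indexes.
# Flag spellings are assumed, and AI Search credentials must already be
# configured in the environment for this to succeed.
uv run deploy.py --index_type image_processing --enable_page_wise_chunking True --rebuild
```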
Expand All @@ -34,7 +34,7 @@ The associated scripts in this portion of the repository contains pre-built scri
**Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

3. Adjust `text_2_sql_schema_store.py` with any changes to the index / indexer. The `get_skills()` method implements the skills pipeline. Make any adjustments here in the skills needed to enrich the data source.
- 4. Run `deploy.py` with the following args:
+ 4. Run `uv run deploy.py` with the following args:

- `index_type text_2_sql_schema_store`. This selects the `Text2SQLSchemaStoreAISearch` subclass.
- `rebuild`. Whether to delete and rebuild the index.
Expand All @@ -53,7 +53,7 @@ The associated scripts in this portion of the repository contains pre-built scri
**Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

3. Adjust `text_2_sql_column_value_store.py` with any changes to the index / indexer.
- 4. Run `deploy.py` with the following args:
+ 4. Run `uv run deploy.py` with the following args:

- `index_type text_2_sql_column_value_store`. This selects the `Text2SQLColumnValueStoreAISearch` subclass.
- `rebuild`. Whether to delete and rebuild the index.
Expand All @@ -71,7 +71,7 @@ The associated scripts in this portion of the repository contains pre-built scri
**Execute the following commands in the `deploy_ai_search_indexes/src/deploy_ai_search_indexes` directory:**

3. Adjust `text_2_sql_query_cache.py` with any changes to the index. **There is an optional provided indexer or skillset for this cache. You may instead want the application code to write directly to it. See the details in the Text2SQL README for different cache strategies.**
- 4. Run `deploy.py` with the following args:
+ 4. Run `uv run deploy.py` with the following args:

- `index_type text_2_sql_query_cache`. This selects the `Text2SQLQueryCacheAISearch` subclass.
- `rebuild`. Whether to delete and rebuild the index.
4 changes: 2 additions & 2 deletions text_2_sql/data_dictionary/README.md
@@ -232,7 +232,7 @@ To generate a data dictionary, perform the following steps:

2. Package and install the `text_2_sql_core` library. See [build](https://docs.astral.sh/uv/concepts/projects/build/) if you want to build as a wheel and install on an agent. Or you can run from within a `uv` environment and skip packaging.
- Install the optional dependencies if you need a database connector other than TSQL. `uv sync --extra <DATABASE ENGINE>`

3. Run `uv run data_dictionary <DATABASE ENGINE>`
- You can pass the following command line arguments:
- `--output_directory` or `-o`: Optional directory that the script will write the output files to.
Expand All @@ -242,7 +242,7 @@ To generate a data dictionary, perform the following steps:
- `entities`: A list of entities to extract. Defaults to None.
- `excluded_entities`: A list of entities to exclude.
- `excluded_schemas`: A list of schemas to exclude.

4. Upload these generated data dictionary files to the relevant containers in your storage account. Wait for them to be automatically indexed with the included skillsets.
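End to end, the generation steps above can be sketched as one shell session. The `--output_directory` spelling is an assumption, and `<DATABASE ENGINE>` is the same placeholder used in the steps; substitute the engine and flags your setup actually uses.

```shell
# Hypothetical session; assumes the database connection settings are already
# exported as environment variables.
uv sync --extra <DATABASE ENGINE>   # only needed for engines other than TSQL
uv run data_dictionary <DATABASE ENGINE> --output_directory ./data_dictionaries
```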

> [!IMPORTANT]