# adi_function_app/README.md
The properties returned from the ADI Custom Skill and Chunking are then used to:
- Keyphrase extraction
- Vectorisation
> [!NOTE]
> See `GETTING_STARTED.md` for a step-by-step guide on how to use the accelerator.
## Sample Output
Using the [Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone](https://arxiv.org/pdf/2404.14219) as an example, the following output can be obtained for page 7:
1. Set up Azure OpenAI in your subscription with **gpt-4o-mini** and an embedding model, alongside a SQL Server sample database, AI Search and a storage account.
2. Clone this repository and deploy the AI Search text2sql indexes from `deploy_ai_search`.
3. Run `uv sync` within the `text_2_sql` directory to install dependencies.
4. Configure the `.env` file based on the provided sample (see the sketch after this list).
5. Generate a data dictionary for your target server using the instructions in `data_dictionary`.
6. Upload these data dictionaries to the relevant containers in your storage account and wait for them to be automatically indexed.
7. Navigate to the `autogen` directory to view the AutoGen implementation. Follow the steps in `Iteration 5 - Agentic Vector Based Text2SQL.ipynb` to get started.
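As a quick sanity check for step 4, the sketch below loads and validates the `.env` file with `python-dotenv`. Both the package and the key names shown are assumptions for illustration only; copy the actual keys from the provided sample file.

```python
# Minimal sketch for checking the .env configuration (step 4).
# The key names below are hypothetical placeholders -- use the keys
# from the provided sample .env file.
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # loads key=value pairs from ./.env into the environment

required_keys = ["OPENAI_ENDPOINT", "OPENAI_API_KEY", "AI_SEARCH_ENDPOINT"]  # placeholders
missing = [key for key in required_keys if not os.getenv(key)]

if missing:
    raise RuntimeError(f"Missing required settings: {', '.join(missing)}")
print("All required settings are present.")
```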
# text_2_sql/README.md
This portion of the repo contains code to implement a multi-shot approach to Text2SQL.
The sample provided works with Azure SQL Server, although it has been easily adapted to other SQL sources such as Snowflake.
> [!NOTE]
>
> - Previous versions of this approach have now been moved to `previous_iterations/semantic_kernel`. These will not be updated.
*(Workflow diagram showing how the Text2SQL plugin would be incorporated.)*
> [!NOTE]
> See `GETTING_STARTED.md` for a step-by-step guide on how to use the accelerator.
## Why Text2SQL instead of indexing the database contents?
Generating SQL queries and executing them to provide context for the RAG application provided several benefits in the use case this was designed for.
#### Parallel Execution
After the first agent has rewritten and decomposed the user input, each of the individual questions is executed in parallel to minimise the time taken to generate an answer.
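As a rough illustration of this fan-out (not the accelerator's actual code), here is a minimal sketch using `asyncio.gather`; `answer_question` is a hypothetical stand-in for generating and executing the SQL for one sub-question.

```python
import asyncio


async def answer_question(question: str) -> str:
    """Hypothetical stand-in for generating and executing SQL for one sub-question."""
    await asyncio.sleep(0.1)  # placeholder for the real agent / database call
    return f"answer to: {question}"


async def answer_all(sub_questions: list[str]) -> list[str]:
    # Fan out: every decomposed question runs concurrently, so overall latency
    # is roughly that of the slowest sub-question rather than the sum of all.
    return await asyncio.gather(*(answer_question(q) for q in sub_questions))


if __name__ == "__main__":
    results = asyncio.run(answer_all([
        "What is the top performing product by quantity of units sold?",
        "What is its product model?",
    ]))
    print(results)
```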
### Caching Strategy
The cache strategy implementation is a simple way to prove that the system works. You can adopt several different strategies for cache population. Below are some of the strategies that could be used:
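Whichever population strategy is chosen, the lookup side stays the same: check the cache for a previously generated query before invoking the SQL-generation agent. Below is a minimal sketch of that lookup, assuming a plain in-memory dictionary keyed on the normalised question; the accelerator's actual cache store and matching logic will differ.

```python
# Minimal sketch of a question -> SQL cache lookup.
# A plain dict stands in for the real cache store.
query_cache: dict[str, str] = {}


def normalise(question: str) -> str:
    return " ".join(question.lower().split())


def generate_sql_with_llm(question: str) -> str:
    """Placeholder for the LLM-backed SQL generation step."""
    return "SELECT 1;"


def get_sql(question: str) -> str:
    key = normalise(question)
    if key in query_cache:
        return query_cache[key]            # cache hit: skip SQL generation
    sql = generate_sql_with_llm(question)  # expensive agent call on a miss
    query_cache[key] = sql                 # populate the cache for later users
    return sql
```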
## Sample Output
> [!NOTE]
>
> - Full payloads for inputs / outputs can be found in `text_2_sql_core/src/text_2_sql_core/payloads/interaction_payloads.py`.
### What is the top performing product by quantity of units sold?
#### SQL Query Generated
"answer": "The top-performing product by quantity of units sold is the **Classic Vest, S** from the **Classic Vest** product model, with a total of 87 units sold [1][2].",
"sql_query": "SELECT TOP 1 ProductID, SUM(OrderQty) AS TotalUnitsSold FROM SalesLT.SalesOrderDetail GROUP BY ProductID ORDER BY TotalUnitsSold DESC;"
87
97
},
88
98
{
89
-
"title": "Product and Description",
90
-
"chunk": "| Name | ProductModel |\n|----------------|---------------|\n| Classic Vest, S| Classic Vest |\n",
91
-
"reference": "SELECT Name, ProductModel FROM SalesLT.vProductAndDescription WHERE ProductID = 864;"
99
+
"sql_rows": "| Name | ProductModel |\n|----------------|---------------|\n| Classic Vest, S| Classic Vest |\n",
100
+
"sql_query": "SELECT Name, ProductModel FROM SalesLT.vProductAndDescription WHERE ProductID = 864;"
92
101
}
93
102
]
94
103
}
The top-performing product by quantity of units sold is the **Classic Vest, S** from the **Classic Vest** product model, with a total of 87 units sold [1][2].

| Name | ProductModel |
|----------------|---------------|
| Classic Vest, S| Classic Vest |
## Disambiguation Requests
If the LLM is unable to understand or answer the question asked, it can ask the user follow-up questions via a DisambiguationRequest. In cases where multiple columns may be the correct one, or where the user may be referring to several different filter values, the LLM can produce a series of options for the end user to select from.
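The exact payload shape is defined in `text_2_sql_core/src/text_2_sql_core/payloads/interaction_payloads.py`. Purely as an illustration of the idea, a hypothetical sketch of such a request could look like the following; the field names here are invented, not the accelerator's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class DisambiguationRequest:
    """Hypothetical illustration only -- see interaction_payloads.py for the real schema."""
    question: str       # the user question the LLM could not resolve
    clarification: str  # what the LLM needs to know to proceed
    options: list[str] = field(default_factory=list)  # candidate columns / filter values


request = DisambiguationRequest(
    question="Show me sales for the vest",
    clarification="Which product did you mean?",
    options=["Classic Vest, S", "Classic Vest, M", "Classic Vest, L"],
)
print(request.clarification, request.options)
```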
This portion of the repository contains the core prompts, code and config used to power the text2sql agentic flow. As much of the code as possible is kept separate from the AutoGen implementation to enable it to be easily rewritten for another framework in the future.