Commit a403ed8

Fix more wording and formatting
1 parent: 5317f60

2 files changed: +10 / -10 lines

notebooks/atlas-and-kai/notebook.ipynb

Lines changed: 2 additions & 2 deletions
@@ -30,7 +30,7 @@
 "## What you will learn in this notebook:\n",
 "\n",
 "1. Install libraries and import modules\n",
-"2. Connect to a MongoDB Atlas and SingleStoreDB Kai for Mongo endpoints\n",
+"2. Connect to a MongoDB Atlas and SingleStoreDB Kai endpoints\n",
 "3. Copy Atlas collections into SingleStoreDB - Synthetic collections are about retail sales transactions with customer information\n",
 "\n",
 "## Compare performance on same code from simple to more complex queries\n",
@@ -221,7 +221,7 @@
 "id": "a6f36725-4b74-4460-b1c9-a0144159a7b4",
 "metadata": {},
 "source": [
-"## 3. Copy Atlas collections into SingleStoreDB Kai for Mongo"
+"## 3. Copy Atlas collections into SingleStoreDB Kai"
 ]
 },
 {
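The retitled step 3 is the copy itself. A hedged sketch of that pattern, with placeholder clients and assumed database/collection names; the notebook's real code may batch or transform the documents differently:

```python
from pymongo import MongoClient

atlas_client = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net/")  # placeholder
kai_client = MongoClient("mongodb://admin:password@kai-endpoint.example.com:27017/")      # placeholder

# "retail" / "sales" are assumed names; the synthetic collections described in
# the notebook hold retail sales transactions with customer information.
source = atlas_client["retail"]["sales"]
target = kai_client["retail"]["sales"]

batch = []
for doc in source.find():
    batch.append(doc)
    if len(batch) == 1000:  # insert in chunks to keep memory bounded
        target.insert_many(batch)
        batch = []
if batch:
    target.insert_many(batch)
```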

notebooks/image-matching-with-sql/notebook.ipynb

Lines changed: 8 additions & 8 deletions
@@ -55,13 +55,13 @@
 "cell_type": "markdown",
 "id": "899100f0-bac6-4e56-a1e3-eaf5ba32d345",
 "metadata": {},
-"source": "## 3. Select the newly created `image_recognition` database\n\nThe notebook must have a database selected for the `%%sql` magic commands and SQLAlchemy connections\nto connect to the correct database. To do so, select the `image_recognition` database from the\ndrop-down menu at the top of this notebook.\n\n<img src=\"https://raw.githubusercontent.com/singlestore-labs/singlestoredb-samples/main/Tutorials/Face%20matching/pics/Use_Face_Matching_Database.png\" style=\"width: 500px; border: 1px solid darkorchid\">"
+"source": "<div class=\"alert alert-block alert-warning\">\n <b class=\"fa fa-solid fa-exclamation-circle\"></b>\n <div>\n <p><b>Action Required</b></p>\n <p>Make sure to select the <tt>image_recognition</tt> database from the drop-down menu at the top of this notebook.\n It updates the <tt>connection_url</tt> which is used by the <tt>%%sql</tt> magic command and SQLAlchemy to make connections to the selected database.</p>\n </div>\n</div>"
 },
 {
 "cell_type": "markdown",
 "id": "278053e8-7457-4655-a6fe-5c95ecb361de",
 "metadata": {},
-"source": "## 4. Install and import the following libraries\n\nThis will take approximately 40 seconds. We are using the `--quiet` option of `pip` here to keep\nthe log messages from filling the output. You can remove that option if you want to see\nthe installation process. \n\nYou may see messages printed about not being able to find cuda drivers or TensorRT. These can\nbe ignored."
+"source": "## 3. Install and import the following libraries\n\nThis will take approximately 40 seconds. We are using the `--quiet` option of `pip` here to keep\nthe log messages from filling the output. You can remove that option if you want to see\nthe installation process. \n\nYou may see messages printed about not being able to find cuda drivers or TensorRT. These can\nbe ignored."
 },
 {
 "cell_type": "code",
@@ -91,7 +91,7 @@
 "cell_type": "markdown",
 "id": "3bb47d4f-d54d-4fcc-835e-6a5066fa84bc",
 "metadata": {},
-"source": "## 5. Create a table of images of people\n\nThe table will contain two columns: 1) the filename containing the image and 2) the vector embedding\nof the image as a blob containing an array of 32-bit floats."
+"source": "## 4. Create a table of images of people\n\nThe table will contain two columns: 1) the filename containing the image and 2) the vector embedding\nof the image as a blob containing an array of 32-bit floats."
 },
 {
 "cell_type": "code",
@@ -121,7 +121,7 @@
 "cell_type": "markdown",
 "id": "41a990db-9e11-48e3-8011-8bd9770a27a2",
 "metadata": {},
-"source": "## 6. Import our sample dataset into the table\n\n**This dataset has 7000 vector embeddings of celebrities!**\n\nNote that we are using the `converters=` parameter of `pd.read_csv` to parse the text as a JSON array and convert it\nto a numpy array for the resulting DataFrame column."
+"source": "## 5. Import our sample dataset into the table\n\n**This dataset has 7000 vector embeddings of celebrities!**\n\nNote that we are using the `converters=` parameter of `pd.read_csv` to parse the text as a JSON array and convert it\nto a numpy array for the resulting DataFrame column."
 },
 {
 "cell_type": "code",
@@ -172,7 +172,7 @@
 "cell_type": "markdown",
 "id": "168be056-17da-4f94-8252-3e5d79459a8b",
 "metadata": {},
-"source": "## 7. Run our image matching algorithm using just 2 lines of SQL\n\nIn this example, we use an image of Adam Sandler and find the 5 closest images in our database to it. We use the `dot_product` function to measure cosine_similarity of each vector in the database to the input image. "
+"source": "## 6. Run our image matching algorithm using just 2 lines of SQL\n\nIn this example, we use an image of Adam Sandler and find the 5 closest images in our database to it. We use the `dot_product` function to measure cosine_similarity of each vector in the database to the input image. "
 },
 {
 "cell_type": "code",
@@ -206,7 +206,7 @@
 "cell_type": "markdown",
 "id": "1d0606a8-6503-4522-8a85-6366263e4b5e",
 "metadata": {},
-"source": "## 8. Pick an image of a celebrity and see which images matched closest to it!\n\n1. Run the code cell\n2. Pick a celebrity picture\n3. Wait for the match!"
+"source": "## 7. Pick an image of a celebrity and see which images matched closest to it!\n\n1. Run the code cell\n2. Pick a celebrity picture\n3. Wait for the match!"
 },
 {
 "cell_type": "code",
@@ -255,7 +255,7 @@
 "cell_type": "markdown",
 "id": "cea04465-6a69-42f1-8249-4c49488506f6",
 "metadata": {},
-"source": "## 9. See which celebrity you look most like! \n\nIn this step, you'll need to upload a picture of yourself.\nNote that your image MUST be at least 160x160 pixels. Head-shots and zoomed-in photos work better as we don't preprocess the image to just isolate the facial context! We only have 7,000 pictures so matching might be limited.\n\n1. Run the code cell\n2. Upload your picture\n3. Wait for the match!\n\n**A low score for matching is less than 0.6.**"
+"source": "## 8. See which celebrity you look most like! \n\nIn this step, you'll need to upload a picture of yourself.\nNote that your image MUST be at least 160x160 pixels. Head-shots and zoomed-in photos work better as we don't preprocess the image to just isolate the facial context! We only have 7,000 pictures so matching might be limited.\n\n1. Run the code cell\n2. Upload your picture\n3. Wait for the match!\n\n**A low score for matching is less than 0.6.**"
 },
 {
 "cell_type": "code",
@@ -304,7 +304,7 @@
 "cell_type": "markdown",
 "id": "f3f3c685-0335-46e2-9a8d-e46ec296f074",
 "metadata": {},
-"source": "## 10. Clean up"
+"source": "## 9. Clean up"
 },
 {
 "cell_type": "code",
