Commit 76f73fc

fix: Update all Learn the basics notebooks with the latest version of the source transpiler and remove legacy parts
1 parent d3c9cd6 commit 76f73fc

File tree

6 files changed: +441 −272 lines


learn_the_basics.rst

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-Learn the basics
+Learn the Basics
 ----------------
 
 .. grid:: 1 1 3 3

learn_the_basics/03_trace_code.ipynb

Lines changed: 31 additions & 48 deletions
@@ -23,7 +23,7 @@
     "source": [
     "⚠️ If you are running this notebook in Colab, you will have to install `Ivy` and some dependencies manually. You can do so by running the cell below ⬇️\n",
     "\n",
-    "If you want to run the notebook locally but don't have Ivy installed just yet, you can check out the [Get Started section of the docs.](https://unify.ai/docs/ivy/overview/get_started.html)"
+    "If you want to run the notebook locally but don't have Ivy installed just yet, you can check out the [Get Started section of the docs.](https://www.docs.ivy.dev/overview/get_started.html)"
     ]
    },
    {
@@ -40,24 +40,21 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-    "Firstly, let's pick up where we left off in the [last notebook](02_unify_code.ipynb), with our unified `normalize` function:"
+    "Let's begin with an implementation of the `normalize` function using `ivy`'s Functional API:"
     ]
    },
    {
     "cell_type": "code",
-    "execution_count": 2,
+    "execution_count": 4,
     "metadata": {},
     "outputs": [],
     "source": [
     "import ivy\n",
-    "import torch\n",
     "\n",
     "def normalize(x):\n",
-    "    mean = torch.mean(x)\n",
-    "    std = torch.std(x)\n",
-    "    return torch.div(torch.sub(x, mean), std)\n",
-    "\n",
-    "normalize = ivy.unify(normalize, source=\"torch\")"
+    "    mean = ivy.mean(x)\n",
+    "    std = ivy.std(x)\n",
+    "    return ivy.divide(ivy.subtract(x, mean), std)"
     ]
    },
    {
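The behavior of the `normalize` function added in this hunk can be sketched framework-agnostically. The following is a hypothetical NumPy stand-in, not code from the notebook; note that whether a sample or population standard deviation is used depends on the backend's defaults (`ddof=1` below mirrors PyTorch's default sample standard deviation):

```python
import numpy as np

def normalize(x):
    # subtract the mean, then divide by the standard deviation
    # (ddof=1 is an assumption here, matching torch.std's default)
    mean = np.mean(x)
    std = np.std(x, ddof=1)
    return (x - mean) / std

x = np.array([1.0, 2.0, 3.0, 4.0])
out = normalize(x)
```

After normalization the result has zero mean and unit sample standard deviation, which is what the notebook's output arrays reflect.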
@@ -70,47 +67,31 @@
    },
    {
     "cell_type": "code",
-    "execution_count": 4,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-    "# set ivy's backend to jax\n",
-    "ivy.set_backend(\"jax\")\n",
-    "\n",
-    "# Import jax\n",
-    "import jax\n",
-    "\n",
-    "# create random jax arrays for testing\n",
-    "key = jax.random.PRNGKey(42)\n",
-    "x = jax.random.uniform(key, shape=(10,))"
-    ]
-   },
-   {
-    "attachments": {},
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-    "As in the previous example, the Ivy function can be executed like so (in this case it will trigger lazy unification, see the [Lazy vs Eager](05_lazy_vs_eager.ipynb) section for more details):"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 5,
+    "execution_count": 7,
     "metadata": {},
     "outputs": [
     {
      "data": {
       "text/plain": [
-       "ivy.array([ 0.55563945, -0.65538704, -1.14150524,  1.46951997,  1.30220294,\n",
-       "       -1.14739668, -0.57017946, -0.91962677,  0.51029003,  0.59644395])"
+       "ivy.array([ 0.58569533, -0.69083852, -1.20325196,  1.5490098 ,  1.37264228,\n",
+       "       -1.20946217, -0.60102183, -0.96937162,  0.53789282,  0.62870705])"
       ]
      },
-     "execution_count": 5,
+     "execution_count": 7,
      "metadata": {},
      "output_type": "execute_result"
     }
    ],
    "source": [
+    "# set ivy's backend to jax\n",
+    "ivy.set_backend(\"jax\")\n",
+    "\n",
+    "# Import jax\n",
+    "import jax\n",
+    "\n",
+    "# create random jax arrays for testing\n",
+    "key = jax.random.PRNGKey(42)\n",
+    "x = jax.random.uniform(key, shape=(10,))\n",
    "normalize(x)"
    ]
   },
@@ -137,7 +118,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The traced function can be executed in exactly the same manner as the non-traced function (in this case it will also trigger lazy graph tracing, see the [Lazy vs Eager](05_lazy_vs_eager.ipynb) section for more details):"
+    "The traced function can be executed in exactly the same manner as the non-traced function:"
    ]
   },
   {
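The core idea behind graph tracing, which this notebook teaches (run the function once, record every primitive operation it executes, then replay the recorded graph without the wrapping overhead), can be illustrated with a deliberately tiny, hypothetical tracer. This is a sketch of the concept only, not Ivy's actual implementation, and all names below are made up:

```python
class TracedValue:
    """Wraps a number and records every arithmetic op onto a shared tape."""

    def __init__(self, value, tape):
        self.value, self.tape = value, tape

    def _apply(self, op, fn, other):
        other_v = other.value if isinstance(other, TracedValue) else other
        result = TracedValue(fn(self.value, other_v), self.tape)
        # record (op name, left operand, right operand, result)
        self.tape.append((op, self.value, other_v, result.value))
        return result

    def __add__(self, other):
        return self._apply("add", lambda a, b: a + b, other)

    def __mul__(self, other):
        return self._apply("mul", lambda a, b: a * b, other)


def trace(fn, x):
    """Run fn once on a recording wrapper; return (result, recorded tape)."""
    tape = []
    out = fn(TracedValue(x, tape))
    return out.value, tape


# tracing a tiny function records its two primitive ops, in execution order
result, tape = trace(lambda v: v * 2 + 1, 3.0)
# result == 7.0; tape is [("mul", 3.0, 2, 6.0), ("add", 6.0, 1, 7.0)]
```

A real tracer intercepts framework calls (`ivy.mean`, `ivy.divide`, ...) rather than Python operators, but the record-then-replay principle is the same.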
@@ -148,8 +129,8 @@
     {
      "data": {
       "text/plain": [
-       "Array([ 0.5556394 , -0.655387  , -1.1415051 ,  1.4695197 ,  1.3022028 ,\n",
-       "       -1.1473966 , -0.5701794 , -0.91962665,  0.51028997,  0.5964439 ], dtype=float32)"
+       "Array([ 0.5856953 , -0.6908385 , -1.203252  ,  1.5490098 ,  1.3726423 ,\n",
+       "       -1.2094622 , -0.6010218 , -0.9693716 ,  0.5378928 ,  0.62870705], dtype=float32)"
       ]
      },
      "execution_count": 9,
@@ -171,14 +152,14 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 11,
+   "execution_count": 10,
    "metadata": {},
    "outputs": [
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "985 µs ± 6.76 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)\n"
+      "138 ms ± 3.57 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n"
      ]
     }
    ],
@@ -196,7 +177,7 @@
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "69.5 µs ± 1.24 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n"
+      "122 µs ± 2.02 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)\n"
      ]
     }
    ],
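The `µs ± … per loop` lines in these output cells are standard IPython `%timeit` output. Outside a notebook, an equivalent measurement can be made with the standard library's `timeit` module; the function below is a plain-Python stand-in for whatever is being benchmarked, not the notebook's `normalize`:

```python
import timeit

def normalize_list(xs):
    # hypothetical stand-in for the benchmarked function:
    # zero-mean, unit sample-std normalization in pure Python
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / (len(xs) - 1)
    std = var ** 0.5
    return [(v - mean) / std for v in xs]

xs = [float(i) for i in range(10)]

# time 1,000 calls and report the mean per-call cost in microseconds
total = timeit.timeit(lambda: normalize_list(xs), number=1000)
print(f"{total / 1000 * 1e6:.1f} µs per loop")
```

Unlike `%timeit`, a single `timeit.timeit` call gives one aggregate figure rather than a mean ± std. dev. over several runs; use `timeit.repeat` for the latter.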
@@ -210,7 +191,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "As expected, we can see that `normalize` is slower, as it includes all `ivy` wrapping overhead. On the other hand, `traced` has no wrapping overhead and it's more efficient!"
+    "As expected, we can see that `normalize` is slower, as it includes all of `ivy`'s wrapping overhead. On the other hand, `traced` has no wrapping overhead, so it's more efficient!\n",
+    "\n",
+    "> Fun Fact: You can use the graph tracer with pretty much any code written in one of the ML frameworks Ivy supports (PyTorch, TensorFlow, JAX, NumPy, etc.). It extracts an efficient computation graph stitched together in the set backend framework, speeding your code up by removing any computations that don't contribute to the output!"
    ]
   },
   {
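The "removing computations that don't contribute to the output" claim in the Fun Fact above is classic dead-code elimination on a computation graph. A minimal, hypothetical sketch of the idea (all names invented; not Ivy's implementation) is to keep only the nodes the output transitively depends on:

```python
def prune(graph, output):
    """Keep only nodes that the output transitively depends on.

    graph: dict mapping result name -> (op name, list of operand names)
    output: name of the final result
    """
    keep, stack = set(), [output]
    while stack:
        name = stack.pop()
        if name in keep or name not in graph:
            continue  # already visited, or an input/constant with no producer
        keep.add(name)
        stack.extend(graph[name][1])  # visit this node's operands
    return {k: v for k, v in graph.items() if k in keep}


# "d" is computed during tracing but never feeds the output "c",
# so pruning drops it from the graph
graph = {
    "b": ("mul", ["a", "a"]),
    "c": ("add", ["b", "a"]),
    "d": ("sub", ["a", "b"]),  # dead: does not contribute to "c"
}
pruned = prune(graph, "c")
# pruned keeps only "b" and "c"
```

Real tracers do this over framework operations and also fold constants, but the reachability pass is the essence of why a traced graph can run faster than the original eager code.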
@@ -226,13 +209,13 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "That's it, you can now trace `ivy` code for more efficient inference! However, there are several other important topics to master before you're ready to unify ML code like a pro 🥷. Next, we'll be learning how to transpile code from one framework to another in a single line of code 🔄"
+    "That's it, you can now trace `ivy` code for more efficient inference! However, there are several other [important topics](https://www.docs.ivy.dev/demos/learn_the_basics.html) to master before you're ready to play with ML code like a pro 🥷."
    ]
   }
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "tracer-transpiler",
    "language": "python",
    "name": "python3"
   },
@@ -246,7 +229,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.8.10"
+   "version": "3.10.13"
   },
  "orig_nbformat": 4
  },
