Commit 2d4ef2d

committed · Created using Colab
1 parent 6ee1c4b commit 2d4ef2d

File tree

1 file changed: +7 -6 lines changed

IntroCS_12_ArtificialIntelligence.ipynb

Lines changed: 7 additions & 6 deletions
@@ -4,7 +4,7 @@
 "metadata": {
 "colab": {
 "provenance": [],
-"authorship_tag": "ABX9TyPa0nxOUD78w3bdVL7ThQtY",
+"authorship_tag": "ABX9TyMOT096jkPXSz/S3MmpEEpS",
 "include_colab_link": true
 },
 "kernelspec": {
@@ -30,6 +30,7 @@
 "cell_type": "markdown",
 "source": [
 "# Introduction to Artificial Intelligence: What is AI?\n",
+"### Brendan Shea, PhD\n",
 "\n",
 "Artificial Intelligence (AI) refers to computer systems that can perform tasks that typically require human intelligence. These systems are designed to mimic human cognitive functions such as learning, problem-solving, and pattern recognition. In today's world, AI has become increasingly integrated into our daily lives, from voice assistants on our phones to recommendation systems on streaming platforms.\n",
 "\n",
@@ -224,7 +225,7 @@
 "source": [
 "# Understanding Inputs, Weights, and Outputs\n",
 "\n",
-"The perceptron processes information through a series of mathematical steps that transform inputs into an output. Each component plays a specific role in this transformation, and understanding these components is crucial to building a working perceptron. This section explores how inputs, weights, and the activation function work together to make decisions.\n",
+"The perceptron processes information through a series of mathematical steps that transform inputs into an output. Each component plays a specific role in this transformation, and understanding these components is crucial to building a working perceptron.\n",
 "\n",
 "* **Inputs (x)** are the values that the perceptron receives, such as features from data (e.g., pixel values in an image or test scores for students).\n",
 "* **Weights (w)** determine how important each input is to the final decision, with larger weights giving more importance to their associated inputs.\n",
@@ -256,7 +257,7 @@
 "source": [
 "# Building a Perceptron in Python: Class Structure\n",
 "\n",
-"Now that we understand how a perceptron works conceptually, let's implement one in Python. We'll use object-oriented programming to create a Perceptron class that will help us predict whether a student will pass a test based on their study hours and previous quiz score. This simple example will make the perceptron's function easy to understand.\n",
+"Now that we understand how a perceptron works conceptually, let's implement one in Python. We'll use object-oriented programming to create a Perceptron class that will help us predict whether a student will pass a test based on their study hours and previous quiz score.\n",
 "\n",
 "* Our Perceptron class will need to store the **weights** and **bias** for our model.\n",
 "* We'll need methods to **predict** outputs for given inputs and to **train** the perceptron.\n",
@@ -328,7 +329,7 @@
 "source": [
 "# Training Our Perceptron: The Learning Process\n",
 "\n",
-"Training a perceptron involves showing it examples and adjusting its weights to improve its predictions. This process, known as supervised learning, requires a dataset with inputs and their correct outputs (labels). The perceptron learns by comparing its predictions with the actual labels and making small adjustments to reduce the error.\n",
+"Training a perceptron involves showing it examples and adjusting its weights to improve its predictions. This process, known as **supervised learning**, requires a dataset with inputs and their correct outputs (labels). The perceptron learns by comparing its predictions with the actual labels and making small adjustments to reduce the error.\n",
 "\n",
 "* **Training data** consists of input features and their corresponding correct outputs (labels).\n",
 "* The **learning rate** determines how quickly the perceptron's weights are adjusted during training (smaller values mean slower but more stable learning).\n",
@@ -345,7 +346,7 @@
 " * Calculate the error: error = actual_output - predicted_output\n",
 " * Update each weight: weight_i = weight_i + learning_rate * error * input_i\n",
 " * Update bias: bias = bias + learning_rate * error\n",
-"3. Repeat step 2 for multiple epochs (complete passes through the training data)\n",
+"3. Repeat step 2 for multiple **epochs** (complete passes through the training data)\n",
 "\n",
 "This algorithm adjusts weights more when errors are larger and in proportion to the input values, gradually improving the perceptron's ability to correctly classify inputs."
 ],
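The numbered training steps in this cell can be sketched as a standalone loop. The AND-gate dataset below is a hypothetical stand-in for the notebook's study-hours data, chosen because it is linearly separable and so guaranteed to converge:

```python
def predict(weights, bias, inputs):
    # Steps 2a-2b: weighted sum plus bias, then step activation
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total >= 0 else 0

def train(weights, bias, data, learning_rate=0.1, epochs=20):
    """Perceptron learning rule: nudge weights in proportion to the error."""
    for _ in range(epochs):                       # step 3: multiple epochs
        for inputs, label in data:                # step 2: each example
            error = label - predict(weights, bias, inputs)  # step 2c
            for i in range(len(weights)):         # step 2d: update each weight
                weights[i] += learning_rate * error * inputs[i]
            bias += learning_rate * error         # step 2e: update the bias
    return weights, bias

# Hypothetical dataset: the logical AND function
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train([0.0, 0.0], 0.0, data)
print([predict(weights, bias, x) for x, _ in data])  # [0, 0, 0, 1]
```

Note that larger errors and larger inputs produce larger weight adjustments, exactly as the closing sentence of the cell describes.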
@@ -931,7 +932,7 @@
 "\n",
 "## From Training to ChatGPT: How Large Language Models Work\n",
 "\n",
-"Modern Large Language Models (LLMs) like ChatGPT and Claude are transformer-based systems trained through several stages:\n",
+"Modern Large Language Models (LLMs) like ChatGPT, Gemini, and Claude are transformer-based systems trained through several stages:\n",
 "\n",
 "1. **Pretraining**: The model learns language patterns by processing trillions of words from books, websites, and articles\n",
 " * It develops general understanding of grammar, facts, and reasoning\n",
