Six Sigma Pro SMART
The Math behind Neural Networks | Backpropagation simplified for beginners | Deep Learning basics
In this video, we'll dive into the concept of backpropagation using the chain rule, focusing on a simple neural network example. 🧠✨
🔍 What We've Covered So Far:
In Part 1, we discussed the forward pass, where we started with inputs, passed through a hidden layer, and reached an output neuron. This process gave us a predicted value which was 0.95. However, our actual target value was 1. 📊➡️🔄
🔙 Now, Let's Work Backwards:
Calculating the Loss: We begin by calculating the loss. This loss depends on our predicted value. 📉
Our prediction is the output of the sigmoid activation function applied to a previous aggregation. 🧩
The aggregation in turn depends on the weights from the previous layer. 🏋️‍♂️
✨ Updating Weights Using the Chain Rule:
To show how a specific weight gets updated, we'll use the chain rule and partial derivatives. This tells us how much the loss changes with respect to a specific weight. 🧮
Then, using the old weight (which was randomly initialized), the learning rate η (eta), and the gradient dL/dw, we compute the new weight. 🔄📈
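For readers who want to see the arithmetic, here's a minimal Python sketch of that chain-rule update. The prediction 0.95 and target 1 are from the video; the hidden activation, old weight, learning rate, and squared-error loss are illustrative assumptions. 🧮

    # Chain rule for one output weight, assuming a sigmoid output neuron
    # and a squared-error loss L = (y - y_hat)^2. h, w_old and eta are
    # hypothetical placeholders, not values from the video.
    y, y_hat = 1.0, 0.95             # target and prediction from the forward pass
    h = 0.6                          # hidden activation feeding this weight
    w_old, eta = 0.4, 0.1            # randomly initialized weight, learning rate

    dL_dyhat = -2 * (y - y_hat)      # dL/dy_hat for squared-error loss
    dyhat_dz = y_hat * (1 - y_hat)   # sigmoid derivative, in terms of its output
    dz_dw = h                        # z = w*h + ..., so dz/dw = h
    dL_dw = dL_dyhat * dyhat_dz * dz_dw

    w_new = w_old - eta * dL_dw      # the gradient descent update
    print(w_new)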
🌟 Stay Tuned for More!
By the end of this video, you'll have a solid understanding of how backpropagation works and how it helps in training neural networks.
Don't forget to like, subscribe, and hit the notification bell so you don't miss any updates. 👍🔔
Views: 22

Videos

The Math behind Neural Networks | Forward Pass simplified for beginners | Deep Learning basics
146 views · 16 hours ago
👋 Welcome to our hands-on tutorial on neural networks! In this video, we dive into the math behind the forward pass of a neural network. 🎓✨ Recommended Playlist - tinyurl.com/3c5rpnfm 📋 What We Cover: Simple Neural Network Setup: We start with a straightforward neural network architecture designed for a binary classification problem. Our network consists of an input layer with 3 neurons, a hidd...
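If you'd like to follow along in code, here's a minimal NumPy sketch of such a forward pass. Only the 3-neuron input layer comes from the description; the hidden width, weights, and input values are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.5, 0.1, 0.9])      # 3 input neurons (illustrative values)
    W1 = np.random.randn(3, 2) * 0.1   # input -> hidden weights (assumed width 2)
    b1 = np.zeros(2)
    W2 = np.random.randn(2, 1) * 0.1   # hidden -> single output neuron
    b2 = np.zeros(1)

    h = sigmoid(x @ W1 + b1)           # hidden-layer aggregation + activation
    y_hat = sigmoid(h @ W2 + b2)       # predicted probability (binary classification)
    print(y_hat)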
Optimizers in Neural Networks | Adagrad | RMSprop | ADAM | Deep Learning basics
54 views · 21 hours ago
In deep learning, choosing the right learning rate is crucial. If it's too high, we might overshoot the optimal solution. If it's too low, training becomes painfully slow. An adaptive learning rate can help tackle these issues effectively. Previous tutorial - ua-cam.com/video/V39sJEANQDo/v-deo.html Deep Learning Playlist - tinyurl.com/4auxcm66 🛠 Dense vs Sparse Features: Dense Features: Get upd...
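For reference, here is a minimal NumPy sketch of the adaptive updates named in the title; the hyperparameter values are common defaults, not prescriptions.

    import numpy as np

    # Adagrad: accumulate squared gradients, so frequently updated (dense)
    # parameters get a smaller effective learning rate over time.
    def adagrad_step(w, grad, cache, eta=0.01, eps=1e-8):
        cache = cache + grad ** 2
        w = w - eta * grad / (np.sqrt(cache) + eps)
        return w, cache

    # RMSprop: an exponential moving average instead of an ever-growing sum,
    # so the effective learning rate does not decay to zero.
    def rmsprop_step(w, grad, cache, eta=0.01, beta=0.9, eps=1e-8):
        cache = beta * cache + (1 - beta) * grad ** 2
        w = w - eta * grad / (np.sqrt(cache) + eps)
        return w, cache

    # Adam: combines momentum (first moment m) with RMSprop-style scaling
    # (second moment v), plus bias correction for the early steps t = 1, 2, ...
    def adam_step(w, grad, m, v, t, eta=0.001, b1=0.9, b2=0.999, eps=1e-8):
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad ** 2
        m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
        v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
        w = w - eta * m_hat / (np.sqrt(v_hat) + eps)
        return w, m, v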
Dropout | Regularization in Neural Networks | Deep Learning basics
29 views · 1 day ago
In this video, we introduce the concept of dropout in neural networks. Dropout is a regularization technique that helps prevent a network from overfitting and makes it more efficient. Using a simple analogy, we help you get a good grasp of this topic. Imagine a data science team where members have different strengths. Some are experts in specific tasks, so the team relies on them heavily. As a ...
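In code, the idea looks roughly like this inverted-dropout sketch (the keep probability is an illustrative choice):

    import numpy as np

    # Inverted dropout: keep each activation with probability p during
    # training and scale survivors by 1/p, so the expected activation is
    # unchanged and no extra scaling is needed at inference time.
    def dropout(a, p=0.8, training=True):
        if not training:
            return a                        # the full "team" works at inference
        mask = (np.random.rand(*a.shape) < p) / p
        return a * mask

    h = np.array([0.2, 0.9, 0.5, 0.7])      # hypothetical hidden activations
    print(dropout(h))                       # some entries zeroed, rest scaled up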
Optimizers in Neural Networks | Gradient Descent with Momentum | NAG | Deep Learning basics
62 views · 14 days ago
In this video, we'll do a recap of Gradient Descent and understand its drawbacks, then we'll look at how Momentum and Nesterov Accelerated Gradient (NAG) come to the rescue. 🌟 Gradient Descent: The Basics Gradient Descent is a fundamental optimization technique used to minimize loss functions in machine learning. It works by iteratively adjusting the model parameters in the directi...
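A minimal Python sketch of the two updates (eta and gamma are typical illustrative values):

    # Momentum: a velocity term accumulates past gradients, damping
    # oscillations and accelerating movement along consistent directions.
    def momentum_step(w, v, grad_fn, eta=0.01, gamma=0.9):
        v = gamma * v + eta * grad_fn(w)
        return w - v, v

    # NAG: evaluate the gradient at the look-ahead point w - gamma*v
    # before stepping, correcting the course earlier than plain momentum.
    def nag_step(w, v, grad_fn, eta=0.01, gamma=0.9):
        v = gamma * v + eta * grad_fn(w - gamma * v)
        return w - v, v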
The Softmax Activation Function | Deep Learning basics
65 views · 14 days ago
Related videos - ua-cam.com/video/aV37n_N1v98/v-deo.html Deep Learning Playlist - tinyurl.com/4auxcm66 In this video, we'll explore: Why Sigmoid is not ideal for the output layer in multiclass classification ❌. Using the Softmax function in real-world examples: Stock market decisions 📈: Buy, Sell, or Hold. Handwritten digit recognition ✏️: Classifying digits from 0 to 9. Why Sigmoid Falls Short...
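Here's a minimal NumPy version of softmax; the three scores stand in for Buy/Sell/Hold and are purely illustrative.

    import numpy as np

    # Softmax turns raw scores into probabilities that sum to 1,
    # which is exactly what sigmoid cannot guarantee across classes.
    def softmax(z):
        z = z - np.max(z)          # shift by the max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    scores = np.array([2.0, 1.0, 0.1])   # e.g. Buy / Sell / Hold scores
    print(softmax(scores))               # ~[0.66, 0.24, 0.10], sums to 1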
Comparing Machine Learning Models | Logistic Regression | LDA | Decision Trees | Random Forest
132 views · 14 days ago
🏭⚙️ Welcome to Part 2 of our preventive maintenance case study. In this video, we're going to compare four powerful machine learning models: Logistic Regression, Linear Discriminant Analysis, Decision Trees, and Random Forest. 🌳🧠 Important links: Part 1 of this case study - ua-cam.com/video/VH5YgpI0Dc4/v-deo.html Complete Data Preparation Playlist - tinyurl.com/yraup5jm Complete Supervised Lear...
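A minimal scikit-learn sketch of such a comparison (the data below is a synthetic placeholder, not the case-study dataset):

    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, random_state=42)  # placeholder data

    models = {
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "LDA": LinearDiscriminantAnalysis(),
        "Decision Tree": DecisionTreeClassifier(random_state=42),
        "Random Forest": RandomForestClassifier(random_state=42),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)   # 5-fold accuracy
        print(f"{name}: {scores.mean():.3f}")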
All-in-one Data Preparation | Missing Values | Outliers | Scaling | Multicollinearity | Encoding
186 views · 21 days ago
Welcome to the first part of our exciting hands-on case study from the UCI Machine Learning Repository! Dataset Link - tinyurl.com/zncb8u9b 🌟 In this video, we're diving deep into the crucial process of data preparation, ensuring you're equipped with all the skills you need for a successful machine learning project. Let's get started! 🚀 🔍 What's Inside? Checking for Duplicates 📑🔍 Learn how to iden...
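As a taste of the workflow, here is a minimal pandas sketch of the first checks; the file name and column names are placeholders, not those of the actual dataset.

    import pandas as pd

    df = pd.read_csv("data.csv")                # placeholder path

    print(df.duplicated().sum())                # count duplicate rows
    df = df.drop_duplicates()

    print(df.isna().sum())                      # missing values per column
    df["num_col"] = df["num_col"].fillna(df["num_col"].median())

    df = pd.get_dummies(df, columns=["cat_col"])  # encode a categorical column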
Gradient Descent Continued | Deep Learning basics
47 views · 21 days ago
Hello, everyone! 👋 Welcome back to our channel! In this video, we continue to dive deeper into the concept of Gradient Descent. 🌟 Gradient Descent Part 1 - ua-cam.com/video/kzIRRY8n_R4/v-deo.html 🔎 What We'll Cover: 🏞️ Convex Loss Function: We'll start by understanding a convex loss function. Imagine a smooth, parabola-shaped curve with a single minimum point. 🌐 The derivative at any point on t...
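The idea in a few lines of Python, using an illustrative convex loss L(w) = (w - 3)^2 with its single minimum at w = 3:

    def grad(w):
        return 2 * (w - 3)          # derivative of the convex loss (w - 3)^2

    w, eta = 0.0, 0.1               # illustrative start and learning rate
    for _ in range(50):
        w = w - eta * grad(w)       # step opposite the derivative's sign
    print(w)                        # approaches the minimum at w = 3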
Gradient Descent | Deep Learning basics
75 views · 21 days ago
Hey there! 👋 Imagine playing a super fun game on a 2D contour map where the goal is to score the lowest possible point! 🏆 But there's a twist! 🎢 We have two sliders: one for weight ⚖️ and one for bias 🧩. Adjusting these sliders helps us navigate to the lowest point on the map, which is our ultimate goal. Sounds exciting, right? 🎯 🌟 The Challenge: Now, let's make it more interesting! What if we ...
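A tiny two-slider version in Python, with an illustrative bowl-shaped loss over weight w and bias b:

    # Illustrative convex loss over the (w, b) "contour map"; minimum at (2, -1).
    def grads(w, b):
        return 2 * (w - 2), 2 * (b + 1)    # partial derivatives dL/dw, dL/db

    w, b, eta = 0.0, 0.0, 0.1              # starting slider positions, learning rate
    for _ in range(100):
        gw, gb = grads(w, b)
        w, b = w - eta * gw, b - eta * gb  # nudge both sliders downhill
    print(w, b)                            # approaches (2, -1)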
Batch Normalization in Neural Networks | Deep Learning basics
75 views · 28 days ago
In this video, we'll dive into batch normalization and understand its importance in training neural networks. 🧠 🔍 What is Batch Normalization? Batch normalization is a technique used to normalize the inputs of each layer, making the training process more stable and efficient. 📊 It helps in addressing the vanishing/exploding gradients problem and allows for faster convergence of the network. 🏃‍♂...
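For the curious, a minimal NumPy sketch of normalizing one layer's batch of pre-activations; gamma and beta are the learnable scale and shift.

    import numpy as np

    # x: (batch_size, features). Normalize each feature over the batch,
    # then let learnable gamma/beta restore representational flexibility.
    def batch_norm(x, gamma, beta, eps=1e-5):
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance
        return gamma * x_hat + beta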
Neural Networks throw their weights around 😊 | Xavier & He Initialization | Deep Learning basics
91 views · 1 month ago
In this video, we'll guide you through the crucial concept of weight initialization and why it matters so much in building effective neural networks. Let's dive in! 🌊 Common Mistakes in Weight Initialization 🚫 🔍 Symmetry Breaking Problem: One common mistake is initializing all weights to the same value. If weights are equal, neurons learn the same features, leading to the symmetry breaking prob...
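In NumPy terms, the two schemes look like this (the layer sizes are illustrative):

    import numpy as np

    fan_in, fan_out = 256, 128   # illustrative layer sizes

    # Xavier/Glorot (suits sigmoid/tanh): scale variance by fan-in and
    # fan-out so activations neither vanish nor explode layer to layer.
    W_xavier = np.random.randn(fan_in, fan_out) * np.sqrt(2.0 / (fan_in + fan_out))

    # He (suits ReLU): scale by fan-in only, compensating for ReLU
    # zeroing out roughly half of its inputs.
    W_he = np.random.randn(fan_in, fan_out) * np.sqrt(2.0 / fan_in)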
Importance of Weight Initialization in Neural Networks | Deep Learning basics
125 views · 1 month ago
🧠 In this video, we'll start with a simple neural network setup: input layer ➡️ hidden layer ➡️ output layer. Each layer is connected by weights - let's call them W1s between the input and hidden layers, and W2s between the hidden and output layers. As we journey through the feedforward process, 🚶‍♂️ we'll witness the aggregations and activations at each neuron. But as we begin to do backpropag...
All-in-one Activation Functions | Sigmoid, tanh, ReLU, Leaky ReLU, ELU, Swish | Deep Learning basics
1.6K views · 1 month ago
🌟 In this video, we dive deep into some of the popular activation functions, exploring their unique properties and how they impact our neural network models. Get ready to level up your understanding of these crucial elements in deep learning! 📈 🎬 Here's what we cover: Step Function: 🚶‍♂️ Simple yet powerful, we examine its binary nature and its implications for neural networks. Sigmoid: 🌈 The s...
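All of them fit in a few lines of NumPy, for reference while you watch:

    import numpy as np

    def step(z):        return (z > 0).astype(float)
    def sigmoid(z):     return 1.0 / (1.0 + np.exp(-z))
    def tanh(z):        return np.tanh(z)
    def relu(z):        return np.maximum(0.0, z)
    def leaky_relu(z, alpha=0.01): return np.where(z > 0, z, alpha * z)
    def elu(z, alpha=1.0):         return np.where(z > 0, z, alpha * (np.exp(z) - 1))
    def swish(z):       return z * sigmoid(z)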
Activation functions | Better Neural Networks | Deep Learning basics
375 views · 1 month ago
🚀 In this video, we'll explore five key properties that make activation functions essential in neural networks: Non-linearity, Differentiability, Zero-centeredness, Computational efficiency, and Saturation effects. Let's break them down one by one! 💡 Non-linearity: Activation functions add non-linearity to neural networks, allowing them to learn complex patterns and relationships within data. 🔄...
Importance of Activation Functions in Neural Networks | Deep Learning basics
352 views · 1 month ago
Where and how to find the best datasets? | Data Science
129 views · 1 month ago
Getting started with Neural Networks
136 views · 1 month ago
The evolution of Perceptrons: From Sigmoid to Multilayer marvels
60 views · 1 month ago
The journey of a neuron | Geometric intuition
81 views · 1 month ago
The journey of a neuron | The Perceptron
50 views · 1 month ago
The journey of a neuron | McCulloch Pitts Neuron
108 views · 1 month ago
Kano model
65 views · 1 month ago
What's the difference? | Data Science vs Data Analytics
88 views · 1 month ago
Tracing the roots of Data Science
51 views · 1 month ago
Python hands-on tutorial: Lambda Functions | Lambda Expressions
50 views · 2 months ago
The better Hyperparameter Tuning using Bayesian Search & Optuna
118 views · 2 months ago
Bayesian Search | The better Hyperparameter Tuning approach
102 views · 2 months ago
Hands-on Hyperparameter Tuning | Randomized Search
47 views · 2 months ago
Hands-on Hyperparameter Tuning | Grid Search
82 views · 2 months ago

COMMENTS

  • @user-fn3vg8zn1o
    @user-fn3vg8zn1o 2 days ago

    Very smooth, good audio and use of visuals. I hardly ever comment, but I had to because you explained this so well, and I would love to see more of your videos. Good job!

    • @prosmartanalytics
      @prosmartanalytics 2 days ago

      Thank you! It means a lot. Please explore our playlists on related topics.😊

  • @Neriamonde
    @Neriamonde 6 days ago

    This video is awesome, great content

  • @avinpereira8495
    @avinpereira8495 8 days ago

    Hi, do I need to know linear algebra to understand linear regression better?

    • @prosmartanalytics
      @prosmartanalytics 8 days ago

      Good to do, but not mandatory, particularly if you are interested only in the applied side of it.

  • @EfirDop
    @EfirDop 8 days ago

    Hello sir, I'm from Ethiopia. Thanks a lot for your explanation!!

  • @angesaguiah4740
    @angesaguiah4740 9 days ago

    You fulfilled your promise

  • @bobby9423
    @bobby9423 9 days ago

    Excellent video on the topic!!!!

  • @jaikumarmeena4769
    @jaikumarmeena4769 12 days ago

    thanks FOR THIS ❤❤❤❤❤❤❤❤

  • @thatchayaniravanan1291
    @thatchayaniravanan1291 13 days ago

    Great Video! Can you share your code file link? Thanks❤

    • @prosmartanalytics
      @prosmartanalytics 13 days ago

      Thank you! We faced some IP infringement issues in the past, hence the code file is not shared. Hope you can understand.

  • @mechdoudmohammed6558
    @mechdoudmohammed6558 13 days ago

    It's a good explanation, thanks.

  • @simonkibera4722
    @simonkibera4722 14 days ago

    Clearly explained, Thank you

  • @ullu65
    @ullu65 14 days ago

    A really good explanation... I'll never forget this now.

  • @HEADSPACE820
    @HEADSPACE820 14 days ago

    Honestly struggled with this for so long 😭thank you so much for helping

  • @robertdr2010
    @robertdr2010 17 days ago

    Thank you!

    • @prosmartanalytics
      @prosmartanalytics 17 days ago

      Wow! We see you are following the sequence. That's what a committed learner does. Keep it up. 👍

  • @robertdr2010
    @robertdr2010 17 days ago

    Thank you!!!

  • @sergiyvasylyk2943
    @sergiyvasylyk2943 18 days ago

    Thank you for explaining

  • @vishalkarmakar418
    @vishalkarmakar418 20 days ago

    Tomorrow I have my exam. The explanation is top notch, thank you so much.

  • @devalpatel552
    @devalpatel552 21 days ago

    Very useful videos to revise as well as learn the concepts!!

  • @abdolrahimtooraanian5615
    @abdolrahimtooraanian5615 25 days ago

    Thanks!!!

  • @sairamteja6785
    @sairamteja6785 25 days ago

    Thanks man for the best explanation I have seen.

  • @trustoriakhi5786
    @trustoriakhi5786 25 days ago

    Thank you so much for your content

  • @Abdulmoiz-lx8vv
    @Abdulmoiz-lx8vv 28 days ago

    boss

  • @pradeepmallampalli4541
    @pradeepmallampalli4541 29 days ago

    Well explained, Thanks for your work

  • @idrees7861000
    @idrees7861000 29 days ago

    Great learning video, it helped me with my assignment.

  • @Someoneelse_XD
    @Someoneelse_XD 1 month ago

    Most underrated problem in data science. I have seen so many models deployed into production that never delivered any value because of this.

  • @sreelakshmikb3927
    @sreelakshmikb3927 1 month ago

    Thank you, this was a lifesaver.

  • @ryanmwise1
    @ryanmwise1 1 month ago

    This is an important question and a great explanation. Thank you! I'd love to see a follow up video that gives an example of how non-linear activation functions help a NN actually fit a non-linear function.

  • @menrmennotwomenlul
    @menrmennotwomenlul 1 month ago

    Bruh I just found your channel and this is an absolute goldmine. Thank you so much for the videos

    • @prosmartanalytics
      @prosmartanalytics 1 month ago

      Thank you! Stay tuned, there's a lot more coming your way. 😊

  • @user-qv7on3dl9y
    @user-qv7on3dl9y 1 month ago

    Should the 2/4 not be 3/4 in the second part of the equation, since they are not independent? I.e., two different scenarios that we are adding as possible probabilities? (at 21:31)

    • @prosmartanalytics
      @prosmartanalytics 1 month ago

      Please check Trial 2 (the second one) on the left: 2 red and 2 black, hence 2/4. Hope it helps.

    • @user-qv7on3dl9y
      @user-qv7on3dl9y 1 month ago

      @@prosmartanalytics thanks man makes sense :)

  • @user-ff6gf8gk5m
    @user-ff6gf8gk5m 1 month ago

    You've made Bayes' theorem lovable for me. I am looking forward to more videos on probability from you.

    • @prosmartanalytics
      @prosmartanalytics 1 month ago

      Thank you! Please try these: ua-cam.com/video/jGG3cIox1co/v-deo.htmlfeature=shared ua-cam.com/video/Ab4viREnP74/v-deo.htmlfeature=shared

  • @DanielTok-bs5mn
    @DanielTok-bs5mn 1 month ago

    awesome, but what about stratify when splitting?

    • @prosmartanalytics
      @prosmartanalytics 1 month ago

      Thank you! Stratify maintains the same proportion of 0s and 1s in the train and val/test sets as in the overall data, but it won't resolve the class imbalance issue. We may stratify at the time of the split to preserve whatever imbalance we have, and then apply imbalance treatment only to the train set.
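      A minimal sketch of that split (the dataset here is a synthetic stand-in, not the video's data):

        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split

        # Synthetic 90/10 imbalanced data as a placeholder
        X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)

        # stratify=y keeps the 90/10 ratio in both splits; apply any imbalance
        # treatment (e.g. oversampling) to the train set only, after this split.
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=42)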

  • @Anandhu-X
    @Anandhu-X 1 month ago

    10:52 I am running this and I get an error on the vif["value"] = [variance_inflation_factor(df.values, i)… line. It's saying "TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types".

  • @avinpereira8495
    @avinpereira8495 1 month ago

    Hi sir, would you explain when to use a t-distribution and when to use a z-distribution?

    • @prosmartanalytics
      @prosmartanalytics 1 month ago

      Hello! Please refer to this link: ua-cam.com/video/QyrvYvgdKLI/v-deo.htmlfeature=shared

  • @user-if7uy4sb6x
    @user-if7uy4sb6x 1 month ago

    Most underrated video, but we will praise your efforts. Thank you so much 😎😎

  • @beaverbuoy3011
    @beaverbuoy3011 1 month ago

    Very interesting

  • @aryankumarsingh402
    @aryankumarsingh402 1 month ago

    Thanks!

  • @vinaykashyap2319
    @vinaykashyap2319 1 month ago

    Hello, I was just going through the video. How would you explain the prime-number for loop for the digit 2? I'm thinking that in the first for loop num gets the value 2, and in the nested for loop 'i' also gets the value 2 while num still retains 2; since 2 % 2 gives a 0 remainder, per the if statement it should be non-prime. So how is it being appended to the prime numbers? Kindly answer.

    • @prosmartanalytics
      @prosmartanalytics 1 month ago

      Hint: Try checking the output of list(range(2, 2)). So when we write for i in range(2, num) where num is 2, what would it do?
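      To spell the hint out (illustrative snippet):

        # range(2, 2) is empty, so for num = 2 the inner loop never runs,
        # no divisor is found, and 2 is correctly appended to the primes.
        print(list(range(2, 2)))   # []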

    • @vinaykashyap2319
      @vinaykashyap2319 1 month ago

      Understood. When 2 is assigned in the first for loop, the second for loop's range ends at num - 1, so it never runs. Thank you.

  • @prernaupadhyay74
    @prernaupadhyay74 1 month ago

    good

  • @avinpereira8495
    @avinpereira8495 1 month ago

    Thanks for your explanations; both topics, conditional probability and Bayes' theorem, were quite clear. However, I am confused about when to apply Bayes' theorem and when to use conditional probability.

    • @prosmartanalytics
      @prosmartanalytics 1 month ago

      Bayes' theorem is an extension of the conditional probability concept. You may need more practice; it would help. 👍

  • @walterhuang5075
    @walterhuang5075 2 months ago

    This is definitely the best video explaining CPK that I've ever seen! Thank you so much!

  • @minalgupta7456
    @minalgupta7456 2 months ago

    great video

  • @vinaykashyap2319
    @vinaykashyap2319 2 months ago

    Thanks for this video; your way of explaining the topic is awesome. I can't believe your channel hasn't reached more people yet.

    • @prosmartanalytics
      @prosmartanalytics 2 months ago

      Thank you! The journey of a thousand miles begins with a single step. We are happy it has reached you. 😊

  • @8eck
    @8eck 2 months ago

    Very good explanation. Thank you.

  • @foysal_BD_CSE
    @foysal_BD_CSE 2 months ago

    For the shortcut form: in problem 1, how did you take the total products as 700? And in problem 2, how did you take the total vehicles as 1000? If I take a random value, I can't get the exact answer.

    • @prosmartanalytics
      @prosmartanalytics 2 months ago

      Give it more time and work through it from the beginning. Try to do it yourself. You'll always get the same answer (since it is a probability), because whatever total you assume will cancel out while calculating the probability, as it appears in both the numerator and the denominator.

  • @user-gz3nh4wk9g
    @user-gz3nh4wk9g 2 months ago

    After hours of searching and learning I didn't even get 1% of GMM, but you have explained it in 9 minutes. Thank you. Subscribed.

  • @aswinimechiri3157
    @aswinimechiri3157 2 months ago

    what is the best way to choose initial centroid points?

    • @prosmartanalytics
      @prosmartanalytics 2 months ago

      Starting randomly for the first cluster center is OK, but thereafter, for subsequent cluster centers, we would vote in favor of the logic used by k-means++.

  • @Thomas-ft4jk
    @Thomas-ft4jk 2 months ago

    Hi, great video! I have reached a good final point in my analysis and am wondering how I can export this sorted table to CSV?

    • @prosmartanalytics
      @prosmartanalytics 2 months ago

      Thank you! You may use pandas' to_csv() method.
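      For example (result_df is a hypothetical stand-in for your sorted table):

        import pandas as pd

        result_df = pd.DataFrame({"feature": ["a", "b"], "score": [0.9, 0.7]})
        result_df.to_csv("sorted_table.csv", index=False)   # index=False drops the row index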

  • @elitea6070
    @elitea6070 2 months ago

    Thank you sir, my lecturer can't even explain; he just dumps random equations on me. At least this gives me an idea!

  • @anushkarai5564
    @anushkarai5564 2 months ago

    Woww! You explained it so well! Best video to study GMM!

  • @thomasmakrodimos1997
    @thomasmakrodimos1997 2 months ago

    Amazing explanation! The best tutorial for PCA! Thank you for your work...

  • @Kaalokalawaia
    @Kaalokalawaia 2 months ago

    Subbed. Thank you. This makes so much sense.