WHERE FUTURE BEGINS
  • SELF DEEP LEARNING
  • LSE MBA Essentials - The London School of Economics
    • Leading with influence
    • Economics for managers
    • Competitive strategy
    • Corporate strategy
    • Financial accounting
    • Management accounting
    • Analysing financial statements
    • In the mind of the manager
    • Nudging behaviour
    • Organisational culture as a leadership tool
  • Business Foundations Specialization - Wharton Online
    • Introduction to Marketing
      • BRANDING: Marketing Strategy and Brand Positioning
      • Marketing 101: Building Strong Brands Part I
      • Marketing 101: Building Strong Brands Part II
      • Strategic Marketing
      • Segmentation and Targeting
      • Brand Positioning
      • Brand Mantra: The Elevator Speech
      • Experiential Branding
      • CUSTOMER CENTRICITY: The Limits of Product-Centric Thinking
      • Cracks in the Product-Centric Approach
      • Data-Driven Business Models
      • Three Cheers for Direct Marketing
      • Which Firms Are Customer Centric?
      • What is Customer Centricity?
      • Living in a Customer-Centric World
      • More Reflections on Customer Centricity
      • Questions on Customer Centricity
      • GO TO MARKET STRATEGIES: Online-Offline Interaction
      • Online/Offline Competition
      • Friction
      • The Long Tail Theory
      • Preference Isolation
      • How Internet Retailing Startups Grow
      • Customers and Digital Marketing
      • Influence and How Information Spreads
      • Pricing Strategies
      • The 7Ms
      • BRANDING: Effective Brand Communications Strategies and Repositioning Strategies
      • Brand Messaging & Communication
      • Brand Elements: Choosing a Brand Name
      • Brand Elements: Color & Taglines
      • Brand Elements: Packaging
      • Brand Elements: Persuasion
      • Repositioning a Brand
    • Introduction to Financial Accounting
      • 1.1.1: Financial Reporting Overview
      • 1.1.2: Financial Reporting Example
    • Managing Social and Human Capital
      • Professor Cappelli and Professor Useem Introductions
    • Introduction to Corporate Finance
      • Time Value of Money
      • Intuition and Discounting
      • Compounding
      • Useful Shortcuts
      • Taxes
      • Inflation
      • APR and EAR
      • Term Structure
      • Discounted Cash Flow: Decision Making
      • Discounted Cash Flow Analysis
      • Forecast Drivers
      • Forecasting Free Cash Flow
      • Decision Criteria
      • Sensitivity Analysis
      • Return on Investment
    • Introduction to Operations Management
    • Wharton Business Foundations Capstone
  • Artificial Intelligence Career Program - deeplearning.ai
    • Machine Learning
      • Introduction to Machine Learning
      • Supervised Learning
      • Unsupervised Learning
      • Model Representation - Linear Regression
      • Cost Function
      • Gradient Descent
      • Gradient Descent For Linear Regression
      • Linear Algebra
    • Deep Learning
    • Neural Networks and Deep Learning
      • Introduction to Deep Learning
      • What is a neural network?
      • Supervised Learning with Neural Networks
      • Why is Deep Learning taking off?
      • About this Course
      • Binary Classification
      • Logistic Regression
      • Gradient Descent
      • Derivatives
      • Computation graph
      • Derivatives with a Computation Graph
      • Logistic Regression Gradient Descent
      • Vectorization
      • Vectorizing Logistic Regression
      • Vectorizing Logistic Regression's Gradient Output
      • Broadcasting in Python
      • A note on python/numpy vectors
      • Explanation of logistic regression cost function (optional)
      • Neural Networks Overview
      • Neural Network Representation
      • Computing a Neural Network's Output
      • Vectorizing across multiple examples
      • Activation functions
      • Derivatives of activation functions
      • Gradient descent for Neural Networks
      • Backpropagation intuition (optional)
      • Random Initialization
      • Deep L-layer neural network
      • Forward Propagation in a Deep Network
      • Getting your matrix dimensions right
      • Why deep representations?
      • Building blocks of deep neural networks
      • Forward and Backward Propagation
      • Parameters vs Hyperparameters
      • What does this have to do with the brain?
    • Convolutional Neural Networks
      • Computer Vision
      • Edge Detection Example
      • Padding
      • Strided Convolutions
      • Convolutions Over Volume
      • One Layer of a Convolutional Network
      • Simple Convolutional Network Example
      • Pooling Layers
      • CNN Example - Fully Connected Layers
      • Why Convolutions?
    • Neural Network Theory [ETH]
    • Natural Language Processing
    • Computer Vision
  • IBM Data Science Professional Certificate
    • What is Data Science?
    • Open Source tools for Data Science
    • Data Science Methodology
    • Python for Data Science and AI
    • Databases and SQL for Data Science
    • Data Analysis with Python
    • Data Visualization with Python
    • Machine Learning with Python
    • Applied Data Science Capstone
  • Data Analytics
    • Python for Data Analysis
    • Data Structures and Algorithms
  • Programming Language
    • Python
    • R
    • SQL
    • C++
    • C
    • Java
    • HTML
  • Machine Learning Engineer (机器学习工程师)
  • Business Data Analytics (商业数据分析)

Neural Networks Overview

Welcome back. This week you will learn how to implement a neural network. Before we get into the technical details, this video gives a quick overview of what you will learn this week, so don't worry if you don't follow everything; the details are covered in later videos. For now we'll just get a rough sense of how to implement a neural network.

Last week we covered logistic regression and saw how that model corresponds to the following computation graph: from the features x and the parameters w and b we compute z, from z we compute a, and then we use a, or equivalently ŷ, to compute the loss function L. A neural network is formed in exactly this way. As I mentioned before, we can build a neural network by stacking together a number of sigmoid units. Each node corresponds to two computation steps: the first computes the value of z, the second computes the value of a. In this neural network, one stack of nodes corresponds to a z computation and an a computation, and the next node corresponds to another z computation and another a computation.

The notation we will use is as follows. From the input features x and some parameters W and b we compute z[1]. The new notation here is the superscript [1] in square brackets, which marks the parameters of this first stack of nodes, the so-called layer 1 of the neural network; a superscript [2] marks the parameters of the next group of nodes, the so-called layer 2 of the network. Be careful not to confuse these square-bracket superscripts with the round-bracket superscripts we use to index individual training examples: x(i) denotes the i-th training example, whereas [1] and [2] denote different layers of the network, here layers 1 and 2.

Continuing, once we have z[1] we compute a[1], the sigmoid of z[1], just as in logistic regression; then another linear equation gives z[2], followed by a[2]. This a[2] is the final output of the network, which we also write as the output ŷ. There are a lot of details here, but the key point is this: where logistic regression has a single z computation followed by an a computation, this neural network computes a from z and then a new z from that a, several times over, before finally computing the loss.

You should also remember that in logistic regression we use a backward pass to compute the gradients of the parameters, such as da, dz, and so on. Building a neural network, we end up doing the same kind of backward-propagation computation: at the end we compute da[2] and dz[2], which let us compute dW[2] and db[2], and so on, moving right to left along the red arrows.

So that is a quick overview of what a neural network looks like: we take a logistic regression and repeat the process twice. I know today's video introduced a lot of new notation and new detail; don't worry about keeping up, because we will walk through the details slowly over the next few videos. Let's move on to the next video, where we begin to look at how to represent a neural network.