neural-nets-hinton-15ZLo
- 1 - 1 - Why do we need machine learning? [13 min].mp4 (15.05MB)
- 1 - 2 - What are neural networks? [8 min].mp4 (9.76MB)
- 1 - 3 - Some simple models of neurons [8 min].mp4 (9.26MB)
- 1 - 4 - A simple example of learning [6 min].mp4 (6.57MB)
- 1 - 5 - Three types of learning [8 min].mp4 (8.96MB)
- 10 - 1 - Why it helps to combine models [13 min].mp4 (15.12MB)
- 10 - 2 - Mixtures of Experts [13 min].mp4 (14.98MB)
- 10 - 3 - The idea of full Bayesian learning [7 min].mp4 (8.39MB)
- 10 - 4 - Making full Bayesian learning practical [7 min].mp4 (8.13MB)
- 10 - 5 - Dropout [9 min].mp4 (9.69MB)
- 11 - 1 - Hopfield Nets [13 min].mp4 (14.65MB)
- 11 - 2 - Dealing with spurious minima [11 min].mp4 (12.77MB)
- 11 - 3 - Hopfield nets with hidden units [10 min].mp4 (11.31MB)
- 11 - 4 - Using stochastic units to improve search [11 min].mp4 (11.76MB)
- 11 - 5 - How a Boltzmann machine models data [12 min].mp4 (13.28MB)
- 12 - 1 - Boltzmann machine learning [12 min].mp4 (14.03MB)
- 12 - 2 - OPTIONAL VIDEO: More efficient ways to get the statistics [15 min].mp4 (16.93MB)
- 12 - 3 - Restricted Boltzmann Machines [11 min].mp4 (12.68MB)
- 12 - 4 - An example of RBM learning [7 min].mp4 (8.71MB)
- 12 - 5 - RBMs for collaborative filtering [8 min].mp4 (9.53MB)
- 13 - 1 - The ups and downs of back propagation [10 min].mp4 (11.83MB)
- 13 - 2 - Belief Nets [13 min].mp4 (14.86MB)
- 13 - 3 - Learning sigmoid belief nets [12 min].mp4 (13.59MB)
- 13 - 4 - The wake-sleep algorithm [13 min].mp4 (15.68MB)
- 14 - 1 - Learning layers of features by stacking RBMs [17 min].mp4 (20.07MB)
- 14 - 2 - Discriminative learning for DBNs [9 min].mp4 (11.29MB)
- 14 - 3 - What happens during discriminative fine-tuning? [8 min].mp4 (10.17MB)
- 14 - 4 - Modeling real-valued data with an RBM [10 min].mp4 (11.20MB)
- 14 - 5 - OPTIONAL VIDEO: RBMs are infinite sigmoid belief nets [17 min].mp4 (19.44MB)
- 15 - 1 - From PCA to autoencoders [5 min].mp4 (9.68MB)
- 15 - 2 - Deep autoencoders [4 min].mp4 (4.92MB)
- 15 - 3 - Deep autoencoders for document retrieval [8 min].mp4 (10.25MB)
- 15 - 4 - Semantic Hashing [9 min].mp4 (9.99MB)
- 15 - 5 - Learning binary codes for image retrieval [9 min].mp4 (11.51MB)
- 15 - 6 - Shallow autoencoders for pre-training [7 min].mp4 (8.25MB)
- 16 - 1 - OPTIONAL: Learning a joint model of images and captions [10 min].mp4 (13.83MB)
- 16 - 2 - OPTIONAL: Hierarchical Coordinate Frames [10 min].mp4 (11.16MB)
- 16 - 3 - OPTIONAL: Bayesian optimization of hyper-parameters [13 min].mp4 (15.80MB)
- 16 - 4 - OPTIONAL: The fog of progress [3 min].mp4 (2.78MB)
- 2 - 1 - Types of neural network architectures [7 min].mp4 (8.78MB)
- 2 - 2 - Perceptrons: The first generation of neural networks [8 min].mp4 (9.39MB)
- 2 - 3 - A geometrical view of perceptrons [6 min].mp4 (7.32MB)
- 2 - 4 - Why the learning works [5 min].mp4 (5.90MB)
- 2 - 5 - What perceptrons can't do [15 min].mp4 (16.57MB)
- 3 - 1 - Learning the weights of a linear neuron [12 min].mp4 (13.52MB)
- 3 - 2 - The error surface for a linear neuron [5 min].mp4 (5.89MB)
- 3 - 3 - Learning the weights of a logistic output neuron [4 min].mp4 (4.37MB)
- 3 - 4 - The backpropagation algorithm [12 min].mp4 (13.35MB)
- 3 - 5 - Using the derivatives computed by backpropagation [10 min].mp4 (11.15MB)
- 4 - 1 - Learning to predict the next word [13 min].mp4 (14.28MB)
- 4 - 2 - A brief diversion into cognitive science [4 min].mp4 (5.31MB)
- 4 - 3 - Another diversion: The softmax output function [7 min].mp4 (8.03MB)
- 4 - 4 - Neuro-probabilistic language models [8 min].mp4 (8.93MB)
- 4 - 5 - Ways to deal with the large number of possible outputs [15 min].mp4 (14.26MB)
- 5 - 1 - Why object recognition is difficult [5 min].mp4 (5.37MB)
- 5 - 2 - Achieving viewpoint invariance [6 min].mp4 (6.89MB)
- 5 - 3 - Convolutional nets for digit recognition [16 min].mp4 (18.46MB)
- 5 - 4 - Convolutional nets for object recognition [17 min].mp4 (23.03MB)
- 6 - 1 - Overview of mini-batch gradient descent.mp4 (9.60MB)
- 6 - 2 - A bag of tricks for mini-batch gradient descent.mp4 (14.90MB)
- 6 - 3 - The momentum method.mp4 (9.74MB)
- 6 - 4 - Adaptive learning rates for each connection.mp4 (6.63MB)
- 6 - 5 - Rmsprop: Divide the gradient by a running average of its recent magnitude.mp4 (15.12MB)
- 7 - 1 - Modeling sequences: A brief overview.mp4 (20.13MB)
- 7 - 2 - Training RNNs with back propagation.mp4 (7.33MB)
- 7 - 3 - A toy example of training an RNN.mp4 (7.24MB)
- 7 - 4 - Why it is difficult to train an RNN.mp4 (8.89MB)
- 7 - 5 - Long short-term memory.mp4 (10.23MB)
- 8 - 1 - A brief overview of Hessian-free optimization.mp4 (16.24MB)
- 8 - 2 - Modeling character strings with multiplicative connections [14 min].mp4 (16.56MB)
- 8 - 3 - Learning to predict the next character using HF [12 min].mp4 (13.92MB)
- 8 - 4 - Echo State Networks [9 min].mp4 (11.28MB)
- 9 - 1 - Overview of ways to improve generalization [12 min].mp4 (13.57MB)
- 9 - 2 - Limiting the size of the weights [6 min].mp4 (7.36MB)
- 9 - 3 - Using noise as a regularizer [7 min].mp4 (8.48MB)
- 9 - 4 - Introduction to the full Bayesian approach [12 min].mp4 (12.00MB)
- 9 - 5 - The Bayesian interpretation of weight decay [11 min].mp4 (12.27MB)
- 9 - 6 - MacKay's quick and dirty method of setting weight costs [4 min].mp4 (4.37MB)
- CreateTime: 2020-10-30
- UpdateTime: 2020-11-01
- FileTotalCount: 78
- TotalSize: 1.73GB