
TAMU Neural Network 1 Introduction

2017-01-23

models of neurons

Human Brain:

Stimulus → Receptors → Neural net → Effectors → Response

  • \(\sim 10^{-3}\) s per neural operation (vs. \(\sim 10^{-9}\) s per operation for silicon logic gates)
  • \(\sim 10^{10}\) neurons and \(\sim 6 \times 10^{13}\) synaptic connections
  • \(\sim 10^{-16}\) J per operation (vs. \(\sim 10^{-6}\) J per operation for computers)

  • Synapses with associated weights: the signal \(x_j\) at the input of synapse \(j\) connected to neuron \(k\) is multiplied by the weight, denoted \(w_{kj}\)
  • Summing function: \(u_k = \sum_j w_{kj} x_j\)
  • Activation function: \(y_k = \varphi(u_k + b_k)\)
  • Bias: \(b_k\), giving the induced local field \(v_k = u_k + b_k\); or absorb the bias as a weight \(w_{k0} = b_k\) with a fixed input \(x_0 = +1\)
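The neuron model above can be sketched in Python (function names like `neuron_output` are my own, not from the slides); it computes \(y_k = \varphi(\sum_j w_{kj} x_j + b_k)\) with a logistic activation:

```python
import math

def neuron_output(x, w, b):
    """Compute y_k = phi(sum_j w_kj * x_j + b_k) with a logistic activation."""
    v = sum(wj * xj for wj, xj in zip(w, x)) + b  # induced local field v_k
    return 1.0 / (1.0 + math.exp(-v))             # logistic activation phi(v)

def neuron_output_absorbed(x, w, b):
    """Same neuron, with the bias absorbed as weight w_k0 = b_k and fixed input x_0 = +1."""
    return neuron_output([1.0] + list(x), [b] + list(w), 0.0)
```

Both formulations produce identical outputs; the absorbed-bias form is convenient because learning rules then treat the bias as just another weight.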

activation function

  1. threshold unit: \(\varphi(v) = 1\) if \(v \ge 0\), else \(0\)
  2. piece-wise linear: linear on \([-1/2, 1/2]\), saturating at \(0\) and \(1\) outside
  3. sigmoid: logistic, \(\varphi(v) = \frac{1}{1 + e^{-av}}\) with slope parameter \(a\)
  4. signum function: \(+1\), \(0\), \(-1\) for \(v > 0\), \(v = 0\), \(v < 0\)
  5. sign function: \(\pm 1\)
  6. hyperbolic tangent function: \(\varphi(v) = \tanh(v)\)
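The activation functions in the list can be written compactly (a rough sketch; the slope parameter `a` and the function names are my own):

```python
import math

def threshold(v):
    """1. Threshold (Heaviside) unit: 1 for v >= 0, else 0."""
    return 1.0 if v >= 0 else 0.0

def piecewise_linear(v):
    """2. Linear on [-1/2, 1/2], saturating at 0 and 1 outside."""
    return max(0.0, min(1.0, v + 0.5))

def logistic(v, a=1.0):
    """3. Sigmoid (logistic) with slope parameter a."""
    return 1.0 / (1.0 + math.exp(-a * v))

def signum(v):
    """4. Signum: +1, 0, -1 for v > 0, v = 0, v < 0."""
    return (v > 0) - (v < 0)

def tanh_act(v):
    """6. Hyperbolic tangent."""
    return math.tanh(v)
```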

stochastic models

Instead of a deterministic activation, the activation can be stochastic: the neuron fires (switches to state \(+1\)) with a probability of firing \(P(v)\), and stays off (state \(-1\)) with probability \(1 - P(v)\).

A typical choice: \(P(v) = \frac{1}{1 + \exp(-v/T)}\), where \(T\) is a pseudotemperature; as \(T \to 0\), the model reduces to the deterministic threshold unit.

In computer simulation, use the rejection method.
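A minimal sketch of the stochastic unit, assuming the logistic \(P(v)\) above (here sampled directly by comparing a uniform draw with \(P(v)\), a simple stand-in for the rejection method; the names are my own):

```python
import math
import random

def fire_probability(v, T=1.0):
    """P(v) = 1 / (1 + exp(-v / T)); T is the pseudotemperature."""
    return 1.0 / (1.0 + math.exp(-v / T))

def stochastic_state(v, T=1.0, rng=random.random):
    """Return +1 with probability P(v), otherwise -1."""
    return 1 if rng() < fire_probability(v, T) else -1
```

Lowering `T` sharpens \(P(v)\) toward a step function, so the unit behaves increasingly like the deterministic threshold neuron.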

definition of a neural network

  • signals are passed between neurons over connection links
  • each connection link has an associated weight, which typically multiplies the signal transmitted
  • each neuron applies an activation function to its net input to determine its output signal

feedback

(figure: single-loop feedback system, with forward operator A and feedback operator B)

\[y_k(n) = A[x_j'(n)]\]

\[x_j'(n) = x_j(n) + B[y_k(n)]\]

So:

\[y_k(n) = \frac{A}{1-AB}[x_j(n)]\]

\(\frac{A}{1-AB}\) is called the closed-loop operator and \(AB\) the open-loop operator.

Substitute the weight \(w\) for \(A\) and the unit-delay operator \(z^{-1}\) for \(B\).

\[\frac{A}{1-AB}=w(1-w z^{-1})^{-1}=w \sum_{l=0}^{\infty}w^l z^{-l}\]

So the output will be:

\[y_k(n)=w\sum_{l=0}^{\infty}w^l z^{-l}[x_j(n)]=\sum_{l=0}^{\infty}w^{l+1} x_j(n-l)\]

With a fixed \(w\), the output will:

  • \(|w| < 1\): converge (the system is stable)
  • \(|w| = 1\): diverge linearly
  • \(|w| > 1\): diverge exponentially
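The three regimes can be checked numerically with a truncated version of the sum above, using a unit-step input \(x_j(n) = 1\) for \(n \ge 0\) (the function name is mine):

```python
def feedback_output(w, n, x=lambda m: 1.0 if m >= 0 else 0.0):
    """y_k(n) = sum_{l=0}^{n} w^(l+1) x_j(n-l), truncated at l = n for a causal input."""
    return sum(w ** (l + 1) * x(n - l) for l in range(n + 1))

# |w| < 1: converges to w / (1 - w)   e.g. w = 0.5 -> 1.0
# |w| = 1: grows linearly              e.g. w = 1.0, n = 9 -> 10.0
# |w| > 1: grows exponentially         e.g. w = 2.0, n = 3 -> 2 + 4 + 8 + 16 = 30.0
```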

network architectures

  • single-layer feedforward: one input, one layer of computing units (output layer), acyclic connections
  • multilayer feedforward: one input layer, one (or more) hidden layers, and one output layer
  • recurrent: feedback loop exists

Layers can be fully connected or partially connected.

design of a neural network

  • select an architecture, gather input samples, and train the network with a learning algorithm
  • test with data not seen before
  • it’s a data-driven approach, unlike conventional programming

similarity measures

  • reciprocal of the Euclidean distance \(d(x_i, x_j) = \|x_i - x_j\|\): the smaller the distance, the more similar the vectors
  • dot product: \(x_i^T x_j\)

When \(\|x_i\| = \|x_j\| = 1\):
\[d^2(x_i,x_j) = 2-2 x_i^T x_j\]

  • mean vector: \(\mu_i = E[x_i]\)
  • Mahalanobis distance: \(d_{ij}^2 = (x_i - \mu_i)^T \Sigma^{-1} (x_j - \mu_j)\)
  • the covariance matrix is assumed to be the same for both vectors: \(\Sigma = E[(x_i - \mu_i)(x_i - \mu_i)^T] = E[(x_j - \mu_j)(x_j - \mu_j)^T]\)
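For unit-norm vectors, maximizing the dot product is equivalent to minimizing the Euclidean distance, which can be checked numerically (the helper names are my own):

```python
import math

def euclidean_sq(x, y):
    """Squared Euclidean distance d^2(x, y)."""
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y))

def dot(x, y):
    """Dot product x^T y."""
    return sum(xi * yi for xi, yi in zip(x, y))

def normalize(x):
    """Scale x to unit Euclidean norm."""
    n = math.sqrt(dot(x, x))
    return [xi / n for xi in x]

# For unit vectors: d^2(x, y) = 2 - 2 x^T y
x, y = normalize([3.0, 4.0]), normalize([1.0, 1.0])
assert abs(euclidean_sq(x, y) - (2 - 2 * dot(x, y))) < 1e-12
```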

Creative Commons License
Melon blog is created by melonskin. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
© 2016-2019. All rights reserved by melonskin. Powered by Jekyll.