
An adapted implementation of the MATLAB code provided in the paper 'Deep Learning: An Introduction for Applied Mathematicians' by C. F. Higham and D. J. Higham (https://epubs.siam.org/doi/pdf/10.1137/18M1165748)


Basic_DNN

An adapted implementation of the MATLAB code provided in [1]. The original code provided with the paper is available at: https://www.maths.ed.ac.uk/~dhigham/algfiles.html

[1] Higham, C.F. and Higham, D.J., 2019. Deep Learning: An Introduction for Applied Mathematicians. SIAM Review, 61(4), pp.860-891.

Notable Additional Functionality:

  • an arbitrary number of hidden layers, each with an arbitrary number of nodes
  • a choice of activation function: linear, sigmoid, tanh, ReLU, leaky ReLU or ELU
  • mini-batch stochastic gradient descent rather than (single-sample) stochastic gradient descent
  • tunable parameters, such as the number of iterations and the learning rate
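To illustrate the activation functions listed above, here is a minimal NumPy sketch (the function names and signatures are illustrative only; the actual MATLAB code may organise them differently):

```python
import numpy as np

# Hypothetical definitions of the supported activation functions.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small negative slope alpha avoids "dead" units.
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Smooth for x < 0, unlike ReLU.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

# linear and tanh complete the set.
linear = lambda x: x
tanh = np.tanh
```

Each of these acts elementwise, so the same code applies whether the input is a single sample or a whole batch.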

To-Do Items:

  • (potentially) a vectorised version: less readable, but quicker for running experiments
  • an adaptive learning rate
  • an arbitrary number of data points in the training set
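As a rough sketch of what the vectorised version mentioned above could look like (a NumPy illustration under assumed conventions, not the repository's actual code), a forward pass can evaluate a whole batch of points at once by storing one data point per column:

```python
import numpy as np

def forward(weights, biases, X, activate=np.tanh):
    # X holds one data point per column; each layer transforms
    # all columns at once instead of looping over single samples.
    A = X
    for W, b in zip(weights, biases):
        A = activate(W @ A + b)  # b broadcasts across the batch
    return A

# Example: a 2-3-2 network evaluated on a batch of 5 points.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((2, 3))]
biases = [rng.standard_normal((3, 1)), rng.standard_normal((2, 1))]
Y = forward(weights, biases, rng.standard_normal((2, 5)))
```

The same structure maps directly back to MATLAB, where matrix products over batched columns are equally natural.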

Python Version Available:

A Python implementation of this code, written by Alex Hiles, is available at https://github.com/alexhiles/NN.
