## Features

- **Reasonably fast, without GPU:**
    - With TBB threading and SSE/AVX vectorization.
    - 98.8% accuracy on MNIST in 13 minutes of training (@Core i7-3520M).
- **Portable and header-only:**
    - Runs anywhere as long as you have a compiler which supports C++14.
    - Just include tiny_dnn.h and write your model in C++. There is nothing to install.
- **Easy to integrate with real applications:**
    - No output to stdout/stderr.
    - A constant throughput (simple parallelization model, no garbage collection).
    - Works without throwing an exception.
    - Can import Caffe models.
- **Simply implemented:**
    - A good library for learning neural networks.

## Supported networks

### layer-types

- core
    - fully connected
    - dropout
    - linear operation
    - zero padding
    - power
- convolution
    - convolutional
    - average pooling
    - max pooling
    - deconvolutional
    - average unpooling
    - max unpooling
- normalization
    - contrast normalization (only forward pass)
    - batch normalization
- split/merge
    - concat
    - slice
    - elementwise-add

### activation functions

- tanh
- asinh
- sigmoid
- softmax
- softplus
- softsign
- rectified linear (relu)
- leaky relu
- identity
- scaled tanh
- exponential linear units (elu)
- scaled exponential linear units (selu)

### loss functions

- cross-entropy
- mean squared error
- mean absolute error
- mean absolute error with epsilon range

### optimization algorithms

- stochastic gradient descent (with/without L2 normalization)
- momentum and Nesterov momentum
- adagrad
- rmsprop
- adam
- adamax

## Build options

tiny-dnn can optionally be built with:

- double precision computations
- OpenCL library support
- Serialization support
- TBB library support
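As a minimal sketch of the header-only workflow described above: include `tiny_dnn/tiny_dnn.h`, stream layers into a `network<sequential>`, and train with one of the listed optimizers and loss functions. The layer sizes, dataset variables (`images`, `labels`), and hyperparameters below are illustrative placeholders, not values prescribed by the library.

```cpp
#include "tiny_dnn/tiny_dnn.h"

using namespace tiny_dnn;
using namespace tiny_dnn::activation;
using namespace tiny_dnn::layers;

int main() {
  // A small MLP for MNIST-sized input: 28*28 inputs -> 100 hidden -> 10 classes.
  network<sequential> net;
  net << fc(28 * 28, 100) << tanh()   // fully connected layer + tanh activation
      << fc(100, 10)      << softmax();

  // Hypothetical training data; in practice load these from your dataset.
  std::vector<vec_t>   images;  // each entry is a flattened 28*28 image
  std::vector<label_t> labels;  // class index per image

  // Train with adagrad and cross-entropy loss (batch size and epoch count
  // are illustrative).
  adagrad opt;
  net.train<cross_entropy>(opt, images, labels, /*batch_size=*/30, /*epochs=*/10);
}
```

Because the library is header-only, compiling this requires nothing beyond a C++14 compiler and the tiny-dnn headers on the include path (plus TBB only if threading support is enabled).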